Post by Abby Sydnes (Temple) on Feb 17, 2017 19:29:35 GMT -7
Hi Everyone!
At Temple University, we have developed a system that tracks the position of the sun by using OpenCV to find the brightest spot in the sky (the sun), using only the Raspberry Pi camera.
Currently it is frame based, and Montana said they are working with Stream on updating the code. The problem we are encountering is that the Raspberry Pi camera only supports one capture session at a time, so we need to rely on the video as it streams: the scripts/programs have to be interconnected and can't run in parallel. We are wondering if anyone knows whether there is a current working version of the new code, or basic information about how it will function.
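Temple's actual code isn't posted in this thread, but the brightest-spot idea can be sketched in a few lines. This is a minimal numpy-only stand-in; in practice one would typically blur the frame first (e.g. cv2.GaussianBlur) and use cv2.minMaxLoc so a single noisy pixel doesn't win over the sun's disc:

```python
import numpy as np

def brightest_spot(gray_frame):
    """Return the (x, y) pixel coordinates of the maximum intensity."""
    y, x = np.unravel_index(np.argmax(gray_frame), gray_frame.shape)
    return int(x), int(y)

# Synthetic 100x100 "sky" frame with a single bright pixel at (x=70, y=30).
frame = np.zeros((100, 100), dtype=np.uint8)
frame[30, 70] = 255
print(brightest_spot(frame))  # → (70, 30)
```

The returned coordinates would then drive whatever servo or pointing logic the payload uses.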
We would be happy to help with anything that could make this easier!
Thank you!
Post by Skylar MSGC on Feb 24, 2017 11:41:51 GMT -7
You could always look into using the command tee to split your video into a live feed and a file that you read from; check out what it does here: www.computerhope.com/unix/utee.htm
Setting up your code would be something like this: run raspivid and pipe its output through tee, which appends to a file while passing the stream on. You could run this in the background while you run your CV program:
raspivid (your settings) -n -o - | tee -a video.h264 | vlc (settings) &
That ampersand (&) at the end runs the command in the background, but raspivid will still try to output to the terminal. I'm not sure whether raspivid has a quiet mode at this time. The file you would look at is video.h264, which you should be able to step through frame by frame in CV.
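A quick way to convince yourself what tee does to a stream is the little experiment below (POSIX shell only, driven from Python; printf and wc -c are just stand-ins for raspivid's H.264 output and the downstream player):

```python
import os
import subprocess
import tempfile

# tee duplicates its stdin: one copy is appended to a file,
# one copy is passed straight through to the next pipeline stage.
tmp = tempfile.NamedTemporaryFile(suffix=".h264", delete=False)
tmp.close()
result = subprocess.run(
    "printf 'frame-data' | tee -a {} | wc -c".format(tmp.name),
    shell=True, capture_output=True, text=True,
)
bytes_downstream = int(result.stdout.strip())   # bytes the "player" saw
bytes_on_disk = os.path.getsize(tmp.name)       # bytes that landed in the file
print(bytes_downstream, bytes_on_disk)  # → 10 10 (the stream was duplicated)
os.unlink(tmp.name)
```

Both counts match because tee writes every byte to the file and to stdout; that is exactly the property that lets the CV reader and the streamer share one raspivid session.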
Post by asydnes on Mar 1, 2017 18:44:14 GMT -7
Hi Skylar!
That is essentially what we are trying to do, and we have emulated receiving the stream by just creating an instance of the camera. Since the Montana code already writes to a file, we don't really want to create another one (and reading/writing is slower). We mainly want to test with a finished version of the new Stream code so we can split that video without creating another instance of the Raspberry Pi camera, especially since we are a little concerned about processing power (we are at about 40-55% CPU usage with our current code).
Post by Skylar MSGC on Mar 3, 2017 16:29:25 GMT -7
Well I'm not sure that the Stre.am code is finished yet. I'll have to get in contact with the guys on the east coast working on that. We sent them a full build package to look into using ffserver to stream from the pi.
Otherwise, I would recommend a Python script that runs two threads:
1. The picamera library runs the camera and sends its data out through the socket module to a streaming server somewhere. You can split this data with the picamera library, I think, or store values into a numpy array that your OpenCV thread analyzes as it runs.
2. The OpenCV library runs your light-intensity sensing software; it is fed part of the feed through a stream that you create on the picamera side.
Doing this will require a bit of research and time; managing multiple threads, even with how Python handles them, might be a little tricky. Depending on how fast your OpenCV program can parse through 25 fps video, there may be a situation where your program hits an EOF (end of file) before picamera can provide the next frame. I have looked into this a little and I think it is possible, but it will take some time to get everything going well.
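Neither team's code appears in the thread, but the two-thread handoff can be sketched with a bounded queue between the threads (picamera and OpenCV calls replaced by stand-ins here). Blocking on the queue also sidesteps the EOF race described above: the analyzer waits for the next frame instead of running off the end of the data.

```python
import queue
import threading

import numpy as np

# Bounded queue between capture and analysis: if the analyzer lags,
# put() blocks instead of the analyzer ever seeing a premature "EOF".
frames = queue.Queue(maxsize=4)
SENTINEL = None  # marks end of stream

def capture(n_frames):
    # Stand-in for the picamera loop; a real payload would pull frames
    # from camera.capture_continuous(...) here.
    for i in range(n_frames):
        frames.put(np.full((8, 8), i, dtype=np.uint8))
    frames.put(SENTINEL)

def analyze(results):
    # Stand-in for the OpenCV light-intensity analysis.
    while True:
        frame = frames.get()
        if frame is SENTINEL:
            break
        results.append(int(frame.max()))

results = []
producer = threading.Thread(target=capture, args=(5,))
consumer = threading.Thread(target=analyze, args=(results,))
producer.start(); consumer.start()
producer.join(); consumer.join()
print(results)  # → [0, 1, 2, 3, 4]
```

The maxsize on the queue is the design knob: a small value bounds memory and latency, at the cost of the capture side stalling (or, in a real payload, dropping frames) when analysis can't keep up.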
P.S. Multithreading lets the two tasks overlap instead of blocking each other, lowering the load on your main loop (though note that Python's GIL keeps threads on one core at a time; spreading work across cores would take multiprocessing).
Also, do you have a hardware-experienced student working on this? What I just explained in this reply is fairly complex once you dive into it. A professor knowledgeable in computer science or FPGAs could help explain some of this.
Post by leviw on Mar 4, 2017 9:35:02 GMT -7
It would be a LOT easier to just use a second camera and Pi to find and point at the sun. If the cameras were attached to the same servo, they would point in the same direction: one Pi to point, one to stream.
You could probably save yourself dozens (if not more) of hours of work by adding ~100 grams of weight and ~$50 in hardware.
Post by asydnes on Mar 8, 2017 19:11:24 GMT -7
Leviw, the only problem with adding a second camera is that the two would have to be mounted on top of each other, which would shift the vertical direction (also correctable in code, but we would prefer not to do that). We are also graduating in May and are trying to do a full system test on April 15th. The entirety of our mechanical design is done, and the passive stabilization is based on weight and center of gravity, so adding another camera and Pi would require a complete redesign of our video payload's construction, and we wouldn't be able to make that deadline.
Skylar, is there a time that you're available to talk about this (possibly on Friday)? My email address is asydnes@temple.edu
Post by davidaz on Apr 30, 2017 20:54:33 GMT -7
Hi everyone! I will have an RFD ground station set up at my base camp in Douglas, WY for the eclipse. If anyone would like an additional ground station to receive images from your balloon, please let me know. I will need your RFD still-camera settings in order to sync my ground station's still RFD radio with your balloon's RFD radio. Please contact me here or by e-mail at kf7mzy (at) yahoo (dot) com. Here is my eclipse blog if you need more info about my eclipse activities: papasgreatamericaneclipseexpedition.blogspot.com/ Thanks! Prof David Iadevaia.
Post by David MSGC on May 1, 2017 13:15:04 GMT -7
Prof Iadevaia, how are you synchronizing the two ground-station RFD programs, or are you only controlling the payload side with one ground station at a time? The software provided can only send a picture to the ground station that requested it; a "third" ground station would not be able to receive a picture unless you are using different software, in which case I would be curious to see how you are receiving the picture at more than one ground station.
Post by davidaz on May 2, 2017 13:10:13 GMT -7
I will be a second ground station that can send a request, but only one station at a time. If the payload "sees" my request, it should answer it, provided the two RFD radios are in sync, i.e. on the same settings.
Post by David MSGC on May 3, 2017 17:17:01 GMT -7
Yes, but it is important that the 'other' ground station has its radio off or its program stopped before the second ground station's RFD is turned on. Once the second ground RFD has a solid green LED, start the program on the second ground station.
The serial buffer of one ground station can get filled with unexpected data from the payload-side radio, and the ground-station RFD program will not empty the serial buffer unless it reads from it or sends a request to the payload. You can switch between two different ground-station RFDs, but all have to be on the same settings, with only one ground-station radio on at a time, and to be safe I would close the program on the ground side that is not "talking" to the payload. This should flush the buffer when you restart that ground station's RFD program.
Don't worry about losing connection with the payload: the program on the payload side will always be listening to serial for a command, and there are checks in the program to prevent it from ever getting stuck in a loop. You just have to coordinate with the other ground station to have only one on at a time, and always restart the program when you want to be 'seen' by the payload.
Post by davidaz on May 3, 2017 22:45:57 GMT -7
Whew! So if there are any balloon teams launching west of Douglas who would like a second ground station to attempt to collect images, and who have ham radio for communication (or cell phones might work, if the cell service isn't overwhelmed by a huge influx of users), let me know: kf7mzy (at) yahoo (dot) com.