|
Post by leviw on Jan 6, 2017 11:45:00 GMT -7
The LBCC team is experimenting with streaming multiple Raspberry Pi cameras to the ground station, then stitching them together in real-time to create one large composite video.
Depending on the source resolutions, the output could be cropped to a central area to remove the swinging motion of the payload, or simply combined into a super-wide image without fisheye distortion.
The downside is that we'll probably need to reduce the frames per second to fit several streams in the same limited bandwidth. We're hoping that will not be noticeable because of how far away the cameras will be, especially if we crop out any swinging motion before broadcasting the stream.
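For a rough sense of that tradeoff, here's a back-of-the-envelope sketch. The link rate and per-frame size below are placeholder assumptions for illustration, not measured numbers from our payload:

```python
# Back-of-the-envelope: how many fps can each of N streams get on a
# shared downlink? All three constants are assumed values.
LINK_KBPS = 2000   # assumed usable downlink rate, kilobits/s
FRAME_KB = 25      # assumed compressed frame size, kilobytes
NUM_STREAMS = 4

def fps_per_stream(link_kbps=LINK_KBPS, frame_kb=FRAME_KB, n=NUM_STREAMS):
    """Frames per second each stream can send on an evenly shared link."""
    frame_kbits = frame_kb * 8          # kilobytes -> kilobits
    return link_kbps / (n * frame_kbits)

# e.g. 2000 / (4 * 200) = 2.5 fps per camera before any cropping
```

Cropping before broadcast helps directly here, since it shrinks the per-frame size and buys back fps.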
We're still in the early phases but should have some results in the next couple weeks, we'll post back to let you know what we learn.
If anyone else is exploring something like this, we'd like to talk and share ideas.
|
|
|
Post by CofC_HiBal on Jan 26, 2017 12:26:23 GMT -7
This is very interesting, please keep us posted! Currently, the only option offered for a multi-camera payload requires the multiplexers produced by MSU. Would it be efficient to use a single camera that spins quickly enough to be able to stitch a 360 view?
|
|
|
Post by leviw on Jan 26, 2017 22:24:32 GMT -7
I don't know of an existing way to do it with a spinning camera, but it should be possible. The challenges would be setting up your rotation so that each frame lands consistently in the same place, and your stitching software would have to break the video into still shots and find the next matching image in the sequence - that sounds harder than grabbing the most recent frame from a nearby camera.

Our approach will use 4 cameras pointing mostly downwards at slightly overlapping angles based on the FOV of the Pi cameras. The video software (Vahana VR) uses a custom camera angle profile to combine our multiple video feeds into a single wide-angle video. Depending on our feeds, this image could be very wide-angle with no distortion, or very high resolution, but either way it will be very low fps (probably 5-7 fps) due to the limited Ubiquiti bandwidth. We believe we'll be able to do all this video processing in nearly real-time, adding almost no delay to the stream.

We needed time to buy a new video card and 3D print a mount with the right camera angles, but I hope to have an update this Saturday. There are still 99 reasons this might be a bad idea, but if we start seeing promising results I'll do a quick write-up.
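For anyone curious how the mount angles fall out of the FOV, here's a minimal sketch for a 2x2 downward-facing grid. The FOV numbers are the published Pi camera v2 specs; the 10-degree overlap is an assumed value the stitcher would want for feature matching, not a figure from our actual mount:

```python
# Sketch: tilt angles for a 2x2 grid of cameras relative to straight
# down, given each camera's FOV and a desired overlap between
# neighbouring views. Overlap value is an assumption.
FOV_H = 62.2    # horizontal field of view, degrees (Pi camera v2 spec)
FOV_V = 48.8    # vertical field of view, degrees (Pi camera v2 spec)
OVERLAP = 10.0  # assumed overlap between adjacent views, degrees

def mount_angles(fov_h=FOV_H, fov_v=FOV_V, overlap=OVERLAP):
    """(pitch, roll) tilt of each of 4 cameras, degrees from nadir."""
    pitch = (fov_v - overlap) / 2   # fore/aft tilt
    roll = (fov_h - overlap) / 2    # left/right tilt
    return [(sp * pitch, sr * roll) for sp in (-1, 1) for sr in (-1, 1)]

def total_coverage(fov, overlap, n=2):
    """Combined angular coverage of n cameras in a row, degrees."""
    return n * fov - (n - 1) * overlap
```

Under those assumptions each camera tilts about 19.4 degrees fore/aft and 26.1 degrees side-to-side, for a combined view of roughly 114 x 88 degrees - wide, but still mostly downward.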
|
|
|
Post by David MSGC on Jan 31, 2017 12:31:34 GMT -7
Rotating a camera also requires knowing the payload's rate of rotation. Just from my observation of other teams trying to rotate cameras (a little over 3 years now), it seems more difficult than using multiple cameras, from both hardware and software perspectives.
Levi, thanks for all the input to the forum.
|
|