This is a single-lens spherical vision camera capable of streaming video with a 360 × 240 degree field of view (FOV). Its small size and weight make it well suited to drones and robotics.

The viewer can visualize the live or recorded stream either as a 3D geo-registered video bubble placed on the map, or from inside the sphere for a more immersive experience. Additional viewing modes include "virtual pan-tilt", "target lock-on" and "video draping", where the video region of interest (ROI) is rectified in real time. The VR-Eye cam software is also compatible with Android-based VR headsets such as the Gear VR, and with generic VR headsets that work with Android smartphones, such as Google Cardboard. An iOS version is in the works.

The VR-Eye cam software can combine the spherical video with the MAVLink stream to offer a unique "geo-registered" view in which the video sphere is placed at its exact location and orientation on the map.

The video below is a screen recording of a real-time stream as received by the ground station. The immersive video can be viewed live using the VR-Eye cam software or VR headsets, and simultaneously recorded. The recorded video can be replayed for post-mission analysis, where the viewer can select and monitor the camera's surroundings from any desired angle, combined with geolocation information.

For more information about the VR-Eye cam, click here.

The VR-Eye cam is expected to be in production by early September 2016.


Comment by Gary McCray on July 8, 2016 at 11:30am

This seems like a very interesting concept, but 720P @ 25fps and 1080P @ 15fps seems pretty limiting.

You don't mention the resolution of the camera chip itself, but I'm guessing it's large.

It seems you are sampling a small section of the chip to produce the selected image.

Also no mention is made of stabilization that I could find anyway.

With a quadcopter you have continuous movement in roll, pitch and yaw, and these are hard to compensate for computationally in real time; I would imagine even more so with the enormous field of view this camera has.

EIS seems like it would be very difficult to implement, have you done so?

From what you have said, your record mode records the whole semi-sphere and lets you select which part or parts you want to look at afterwards.

This seems like it would make the resolution moot since you are basically choosing what to look at from a much higher resolution main video.

If this is true, why do you say you are limited to 1080p or 720p, since the overall resolution would seem to be the actual data of interest?

If that is so, what is the overall resolution?

Best Regards,

Gary

Comment by MAGnet Systems on July 8, 2016 at 12:45pm

Hello Gary,

There is no image stabilization implemented. The camera image sensor is a 1/1.8" Sony IMX178 CMOS. It can reach up to 5 MP, but for the time being even rectifying the 1080p resolution in real time without using external GPUs is a constant battle between resolution and frame rate.

As we explain in the write-up, we don't claim extreme resolutions at this point, as that was not our main intention. This camera is aimed mostly at live 360° FPV and surveillance purposes. Having said that, we have already identified areas for improvement in both resolution and frame rate, and we are working on them. We are confident that we will soon reach the 3 MP region with a decent frame rate, and we will then release a second (software) version.

Currently there are many initiatives for real-time spherical video streaming (see Facebook and YouTube), and all of them face the same issue: the balance between resolution, bandwidth and frame rate. Please note that we are talking about real-time video rectification, with no post-processing. The VR-Eye cam is a straightforward approach to spherical video streaming along these lines: single camera, low bandwidth, the same result as multi-camera rigs for live streaming. This is a work in progress and there is a lot of room for improvement, so we really appreciate your comments. EIS is the next thing we are looking at. But if you take a look at the video in this thread, it was taken with the camera attached to an IRIS drone with simple vibration dampeners and no EIS.
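The real-time rectification described above can be sketched as a lookup table that is computed once and then applied to every frame, which is what keeps the per-frame cost low without a GPU. The equidistant lens model (r = f·θ) and all dimensions below are illustrative assumptions for this sketch, not the camera's actual calibration or algorithm:

```python
import numpy as np

def build_fisheye_remap(out_w, out_h, src_size, fov_deg=240.0):
    """Map each equirectangular output pixel (azimuth, angle-off-axis)
    back to a fisheye source pixel, assuming an ideal equidistant lens
    (r = f * theta) with the optical axis at the image centre."""
    cx = cy = src_size / 2.0
    theta_max = np.radians(fov_deg / 2.0)          # max angle off axis
    f = (src_size / 2.0) / theta_max               # pixels per radian
    lon = np.linspace(-np.pi, np.pi, out_w)        # azimuth
    colat = np.linspace(0.0, theta_max, out_h)     # angle from axis
    phi, theta = np.meshgrid(lon, colat)
    r = f * theta                                  # equidistant model
    u = np.clip(cx + r * np.cos(phi), 0, src_size - 1)
    v = np.clip(cy + r * np.sin(phi), 0, src_size - 1)
    return u.astype(np.int32), v.astype(np.int32)  # nearest-neighbour

# Precompute once, then apply per frame (the cheap part).
u, v = build_fisheye_remap(out_w=1920, out_h=640, src_size=1080)
frame = np.zeros((1080, 1080), dtype=np.uint8)     # placeholder sensor frame
equirect = frame[v, u]                             # rectified view
```

A real implementation would use bilinear interpolation rather than nearest-neighbour sampling, but the key point is the same: all the trigonometry happens once, and per-frame work is a pure memory gather.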

Comment by Global Innovator on July 8, 2016 at 1:30pm

Dear MAGnetSystems,

Ten years ago I tested a spherical vision camera in Germany while building a 3D car navigation system.

The problem is spherical resolution.

If you play back spherical video, the real 3D dimensions are lost, or the video is too distorted to be fit for controlling your car.

You don't get uniform resolution across the sphere, due to the spherical construction of the lens.

Extracting 3D video from spherical video takes a lot of GPU power if processed in real time.

To get uniform full-HD resolution across the sphere, you should start from a 10 Mpx camera and have a video processor embedded in the camera hardware.

You need two more cameras to extract the Z-depth dimension for use in spherical-to-flat or 3D video conversion.

A lot of math, trigonometry and a powerful computer are required.

Google lost interest in spherical lenses, building its spherical cameras from a large number of flat cameras instead.

Comment by Gary McCray on July 8, 2016 at 1:59pm

After looking more carefully at your video, I can definitely see its potential for aerial and ground vehicle FPV.

Assuming frame rate and latency issues can be alleviated, this could be a superior FPV navigation tool for use with headsets with head-position tracking, and certainly excellent for surveillance, among other things.

I'd love to have one of these on my XMAXX rover hooked up to my Oculus Rift.

Look forward to seeing how you progress with this, please keep us posted.

Definitely let us know when you are ready to start selling them and what it will cost.

@ Global,

I really don't think they were claiming real-time autonomous 3D navigation use, just that they can extract a single geolocation point for position-identification purposes.

Best Regards,

Gary

Comment by Nick Turner on July 8, 2016 at 2:02pm

This looks great, possibly even for a rover build going through confined spaces where a gimbal or a bulkier 360 rig would not be ideal.

Any estimates on price? 

Comment by MAGnet Systems on July 8, 2016 at 2:07pm

@ Global Innovator, I agree with your comments about the effort it takes to produce a viable 3D spherical live video stream. We have two mathematicians who have been working on these advanced algorithms for several years, constantly optimizing the 3D sphere model. But in our case, we don't describe things we are planning to do: the video above shows how we are currently doing it without GPUs or multi-camera rigs. It all comes down to the algorithms used and how fast they can rectify the raw image without stalling the CPU. Thanks for your comments in any case.

Comment by Global Innovator on July 8, 2016 at 2:50pm

@MAGnetSystems,

Details and resolution are lost when the sphere is projected onto a flat camera chip.

There are a number of 360° sphere apps for smartphones (Android: http://www.addictivetips.com/android/install-android-4-2-camera-app...).

Reverse projection from the rectangle back to the sphere requires approximation, or a lower-resolution image is generated,

so my suggestion to start with a 10 Mpx camera matrix (smartphone) is not bad for getting full HD in playback mode.
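That 10 Mpx figure can be sanity-checked with some back-of-envelope arithmetic. Assuming an ideal equidistant fisheye (r = f·θ) with a 240° image circle, and requiring that a 90°-wide extracted viewport render at 1920 px (i.e. full-HD sharpness in playback), the sensor must resolve about 21 px per degree of azimuth at the horizon; the numbers below are this illustrative calculation, not a measured spec:

```python
import math

def required_sensor_mp(view_px=1920, view_fov_deg=90.0, lens_fov_deg=240.0):
    """Rough sensor size needed so a view_fov_deg-wide viewport cut from
    an equidistant fisheye (r = f * theta) renders at view_px width.
    Azimuth sampling is densest at the horizon (theta = 90 deg)."""
    px_per_deg = view_px / view_fov_deg                  # ~21.3 px/deg needed
    theta_h = math.pi / 2                                # horizon angle, rad
    # The horizon circle in the fisheye image has circumference
    # 2*pi*f*theta_h pixels and spans 360 deg of azimuth.
    f = px_per_deg * 360.0 / (2 * math.pi * theta_h)     # pixels per radian
    radius = f * math.radians(lens_fov_deg / 2)          # image-circle radius
    side = 2 * radius                                    # square sensor side
    return side, (side * side) / 1e6                     # pixels, megapixels

side, mp = required_sensor_mp()
print(f"image circle ~{side:.0f} px across, ~{mp:.1f} MP sensor")
```

Under these assumptions the image circle comes out around 3,260 px across, i.e. roughly a 10.6 MP square sensor, which lands right where this comment's 10 Mpx suggestion does.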

A smartphone is really smart, so you can affix a fisheye or similar 360° sphere-view lens as an option.

You need to build a depth map from another camera's view, since in playback mode Z-depth is lost for close and far objects.

What I got from 360° sphere camera video looks like http://cloud.addictivetips.com/wp-content/uploads/2012/11/Android-4...

It is not fit for 3D video car navigation; Z-depth is lost in playback mode.

Smartphones nowadays come with high-resolution cameras (10 Mpx+), sophisticated video-processing algorithms, and a powerful processor + GPU.

Video processing is already embedded in video libraries and optimized, so there is little chance for third-party mathematicians to offer a more efficient solution (a video API is provided).

I am aware of many efforts to embed 360° panorama views into security cameras, which failed due to the lack of a depth map in playback mode.

I wish you luck and prosperity (I graduated in math + IT myself).

Comment by MAGnet Systems on July 8, 2016 at 2:53pm

@ Gary, the Gear VR we use works great. The feeling of immersion we get is really impressive. I guess the Oculus Rift will feel even better, though less mobile?

Let me explain briefly how the video geo-registration works for navigation purposes in our case:

A prerequisite is to align the VR-Eye camera with the drone's cardinal axes (the front of the camera's FOV is aligned with the front of the drone). Then we use the MAVLink feed in conjunction with the video and correlate every pixel of the sphere with actual bearings starting from the drone and extending into space.

Where these bearings intersect the ground (we use a digital elevation model (DEM) of the earth), we extract geographic coordinates, as you very well pointed out. This means you can simply click on an object that you see in the video and get its geographic coordinates.
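The click-to-coordinates step described here can be sketched as a ray march: take the clicked pixel's bearing as a unit vector from the drone's position and step along it until it drops below the terrain surface. The local flat frame, the stand-in terrain function and the step size below are illustrative assumptions, not the actual implementation:

```python
import math

def pixel_to_bearing(az_deg, depress_deg):
    """Unit vector for a sphere pixel, given its azimuth from the drone's
    nose and its depression angle below the horizontal.
    Local frame (an assumption): x = north, y = east, z = up."""
    az, down = math.radians(az_deg), math.radians(depress_deg)
    horiz = math.cos(down)
    return (horiz * math.cos(az), horiz * math.sin(az), -math.sin(down))

def intersect_ground(pos, bearing, terrain_height, step=1.0, max_range=5000.0):
    """March along the bearing until the ray falls below the terrain.
    `terrain_height(x, y)` stands in for a real DEM lookup."""
    x, y, z = pos
    dx, dy, dz = bearing
    t = 0.0
    while t < max_range:
        t += step
        px, py, pz = x + t * dx, y + t * dy, z + t * dz
        if pz <= terrain_height(px, py):
            return (px, py, pz)        # ground hit in local metres
    return None                        # ray never reached the terrain

# Drone hovering 100 m over flat ground; clicking a pixel 45 deg below
# the horizon, straight ahead, should land about 100 m north of it.
hit = intersect_ground((0.0, 0.0, 100.0),
                       pixel_to_bearing(0.0, 45.0),
                       terrain_height=lambda x, y: 0.0)
```

A real system would transform the camera-frame bearing through the drone's MAVLink attitude first and convert the local hit point back to latitude/longitude, but the geometry is the same.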

We are currently developing a Gear VR version that will let you point the headset at an object you see inside the spherical rectified video, e.g. a tree, and have the drone fly to it, since we now have its coordinates.

@ Nick, thanks for your comments. We are targeting a price of less than $250, even $220 in bulk quantities. The analog version will be close to $170. The software for both the PC and VR versions is included in these prices.

Comment by Global Innovator on July 8, 2016 at 3:09pm

"This means that you can simply click on objects that you see on the video and get its geographic coordinates."

- How can you extract Z-depth?

Comment by Gary McCray on July 8, 2016 at 4:12pm

Hi GI, I think I can answer that.

Based on knowing your copter's exact location and extracting the exact vector toward the object of interest, the object's location is the point where that vector intersects the Earth, according to the Digital Elevation Model (DEM) of the Earth contained in the software/firmware.

It is probably not useful for fine navigation, but it is reasonably accurate for coarse position identification, though it certainly doesn't take into account obstacles not included in the DEM.

Basically not very suited for ground navigation, but potentially useful for limited aerial navigation and ground position acquisition.

MAGnet, please correct me if I have failed to describe this correctly.

I would think that either a progressive or a staged zoom, from full sphere to max zoom, would be a key feature for surveillance and for FPV navigation, which seems to be what you are describing on your site.

I understand that a staged zoom is easier and cleaner due to pixel aliasing; a higher-resolution camera would permit a smoother transition with more stages.

Best Regards,

Gary

© 2019   Created by Chris Anderson.