VR-Eye Spherical vision camera


This is a single-lens spherical vision camera capable of streaming video with a 360 x 240 degree field of view (FOV). It is optimized for drones and robotics thanks to its small size and weight.

 

The viewer can visualize the live or recorded stream either as a 3D geo-registered video bubble placed on the map, or enter the sphere for a more immersive experience. Additional viewing modes are “virtual pan-tilt”, “target lock-on” and “video draping”, where the video Region of Interest (ROI) is rectified in real time. The VR-Eye cam software is also compatible with Android-based VR headsets such as the Gear VR, as well as other generic VR headsets that work with Android smartphones, such as Google Cardboard. An iOS version is in the works.

 

The VR-Eye cam software can combine the spherical video with the MAVLink stream and offer a unique “geo-registered” view, where the video sphere is placed at its exact location and orientation on the map.
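The sketch below is not the VR-Eye software, just a minimal illustration of what combining the two streams involves: each video frame is paired with the most recent MAVLink position and attitude so the sphere can be placed and oriented on the map. It assumes a Python ground station with pymavlink and OpenCV; the UDP port and RTSP URL are illustrative assumptions.

```python
# Minimal sketch: pair each incoming video frame with the latest MAVLink
# position/attitude so the frame can be geo-registered on the map.
import cv2
from pymavlink import mavutil

mav = mavutil.mavlink_connection('udpin:0.0.0.0:14550')    # GCS-side MAVLink feed
cap = cv2.VideoCapture('rtsp://192.168.1.10:554/stream')   # hypothetical camera URL

pose = {'lat': None, 'lon': None, 'alt': None, 'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0}

while True:
    # Drain pending telemetry so 'pose' always holds the newest vehicle state.
    while True:
        msg = mav.recv_match(type=['GLOBAL_POSITION_INT', 'ATTITUDE'], blocking=False)
        if msg is None:
            break
        if msg.get_type() == 'GLOBAL_POSITION_INT':
            pose.update(lat=msg.lat / 1e7, lon=msg.lon / 1e7, alt=msg.alt / 1e3)
        else:  # ATTITUDE, angles in radians
            pose.update(roll=msg.roll, pitch=msg.pitch, yaw=msg.yaw)

    ok, frame = cap.read()
    if not ok:
        break
    # 'frame' plus the current 'pose' is one geo-registered sample; a viewer
    # would render the video sphere at (lat, lon, alt) with this orientation.
    print(pose['lat'], pose['lon'], pose['alt'], pose['yaw'])
```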

The video below is a screen recording of a real-time stream as received by the ground station. The immersive video can be viewed live using the VR-Eye cam software or VR headsets and simultaneously recorded. The recording can be replayed for post-mission analysis, where the viewer can select and monitor the camera's surroundings from any desired camera angle, combined with geolocation information.

For more information about the VR-Eye cam click here

The VR-Eye cam is expected to be in production in early September 2016.


Comments

  • You guys are doing some brilliant stuff. I imagine this would pair very well with your Pharos antenna system (with the $600 5.8 GHz modems). Definitely will keep an eye on this.

  • @ Marc, Please see the answers below:

     

    You got it right. Any object that appears in the video can be used as a “video target”. You click on it in the video window and its coordinates are generated and used from that point on as a standard waypoint.
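    (A rough illustration of the idea, not the actual GCS code: once the clicked pixel has been resolved to geographic coordinates, the point can be pushed to the autopilot over MAVLink as a guided-mode position target. The connection string, coordinate frame and example coordinates below are assumptions.)

```python
# Hedged sketch: send coordinates extracted from the video as a position target.
from pymavlink import mavutil

def goto_video_target(mav, lat_deg, lon_deg, alt_m):
    """Send the extracted coordinates as a guided-mode position target."""
    mav.mav.set_position_target_global_int_send(
        0,                                   # time_boot_ms (ignored by ArduPilot)
        mav.target_system, mav.target_component,
        mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT_INT,
        0b0000111111111000,                  # type mask: use position fields only
        int(lat_deg * 1e7), int(lon_deg * 1e7), alt_m,
        0, 0, 0,                             # velocity (unused)
        0, 0, 0,                             # acceleration (unused)
        0, 0)                                # yaw, yaw rate (unused)

mav = mavutil.mavlink_connection('udpin:0.0.0.0:14550')   # assumed GCS link
mav.wait_heartbeat()
goto_video_target(mav, 37.4139, -122.0520, 50)            # example coordinates
```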

     

    Yes, this can be used to track objects over water.

     

    We have actually included a video tracking algorithm in our software capable of tracking moving objects based on pixel correlation (comparing current with previous pixels). It works well when the object is moving at slow speeds and does not change its shape much, but on fast-moving objects it tends to break the lock. We haven’t invested much effort in this direction, as we added it as a proof of concept, but we have it on the “To do” list.
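    (For readers who want to experiment, here is a minimal correlation-style tracker in the same spirit, using plain OpenCV template matching rather than the vendor's algorithm; the video file name, window sizes and the 0.5 lock threshold are arbitrary assumptions, and image borders are not handled.)

```python
# Minimal correlation tracker: match a stored patch against a search window
# around the last known position, re-centering the box each frame.
import cv2

cap = cv2.VideoCapture('flight.mp4')              # hypothetical recorded stream
ok, frame = cap.read()
if not ok:
    raise SystemExit('could not open video')
x, y, w, h = 300, 200, 60, 60                     # initial box around the target
template = frame[y:y + h, x:x + w].copy()

while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Search only near the last known position; a full-frame search is slower
    # and more likely to jump to a look-alike object.
    sx, sy = max(x - 40, 0), max(y - 40, 0)
    search = frame[sy:sy + h + 80, sx:sx + w + 80]
    res = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, score, _, loc = cv2.minMaxLoc(res)
    if score < 0.5:                               # correlation dropped: lock lost
        print('lock broken')
        break
    x, y = sx + loc[0], sy + loc[1]
    template = frame[y:y + h, x:x + w].copy()     # slow template update
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('track', frame)
    if cv2.waitKey(1) == 27:
        break
```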

     

    The latency is close to 150-200 ms, mostly due to the onboard H.264 compression. The VR-Eye cam is an IP camera, so you need an IP-based digital data link. We are currently testing an analog camera version that will allow using analog links, but on the receiving end you will have to digitize the feed in any case. Simple, low-cost analog-to-USB video converters can be used on the receiving end, but the expected resolution will be lower than the IP version, i.e. PAL or NTSC. We just received these cameras and will soon post a review of how they work with our software.
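    (Illustration only: on the ground-station side both versions end up as an OpenCV capture, only the source changes; the RTSP URL and device index below are assumptions.)

```python
# Both link options deliver frames to the same software path on the GCS.
import cv2

ip_feed = cv2.VideoCapture('rtsp://192.168.1.10:554/h264')   # digital IP data link
# analog_feed = cv2.VideoCapture(0)  # analog link digitized by a USB frame grabber

ok, frame = ip_feed.read()
if ok:
    # IP version: full sensor resolution; analog path: PAL/NTSC after digitizing.
    print('frame size:', frame.shape)
```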

     

    Yes, we record the whole flight plus the geo-registered video as a single file, and we can extract coordinates from any desired object that we see inside the sphere. This process allows you not only to go back in time but to go back in time AND space in a unified manner, and to observe events, in both time and exact location, that you might have missed during real-time monitoring.
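    (The product stores everything as a single file; the sketch below is a simplified stand-in that records the video plus a per-frame telemetry sidecar CSV, which is enough to replay and re-extract coordinates later. The camera URL and the static placeholder pose are assumptions.)

```python
# Record video frames together with per-frame telemetry for post-mission replay.
import csv
import time
import cv2

cap = cv2.VideoCapture('rtsp://192.168.1.10:554/stream')   # hypothetical camera URL
ok, frame = cap.read()
if not ok:
    raise SystemExit('no video')
h, w = frame.shape[:2]
out = cv2.VideoWriter('flight.avi', cv2.VideoWriter_fourcc(*'MJPG'), 25, (w, h))

# In a real recorder 'pose' is refreshed from MAVLink (see the sketch near the
# top of the post); a static placeholder keeps this example self-contained.
pose = {'lat': 37.4139, 'lon': -122.0520, 'alt': 100.0, 'roll': 0.0, 'pitch': 0.0, 'yaw': 0.0}

with open('flight_telemetry.csv', 'w', newline='') as f:
    log = csv.writer(f)
    log.writerow(['time', 'frame', 'lat', 'lon', 'alt', 'roll', 'pitch', 'yaw'])
    idx = 0
    while ok:
        out.write(frame)
        log.writerow([time.time(), idx, pose['lat'], pose['lon'], pose['alt'],
                      pose['roll'], pose['pitch'], pose['yaw']])
        ok, frame = cap.read()
        idx += 1
out.release()
```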

     

    We are always open to suggestions for cooperation and we enjoy creating new products with enhanced capabilities.

     

    Thank you for your comments. You actually give us the opportunity to clarify some hard-to-see aspects of this technology. We do not use our own autopilot. We use the 3DR Pixhawk on all our drones and systems (even on some manned systems). But we have developed our own GCS software that expands the capabilities of Mission Planner in some areas to include the 3D visualization and navigation techniques described in this thread.

  • Sorry to spam your thread, but looking at your website you have a full suite of products coming online. Do you have your own autopilot and mission planner GCS?

  • I will be buying one of these to put on a plane when they are released. Seems like if you guys teamed up with Emlid you could come up with something very interesting, as they have very lightweight L1 RTK.

  • You could record an entire flight and extract GPS coordinates from any point of interest after the fact?

  • What is latency like? How is this transmitted back to the GCS? Ideally, do you need a digital link?

  • This is extraordinary actually. With this properly integrated in a plane you select an object of interest and pass the co-ordinates to the autopilot to circle around. The plane would circle around an object and the camera would remain fixed on the object. Or is that too fanciful? Would it work over water? If there was a static body in the water could you get a plane to circle it. I suppose with some extra processing a moving object could be tracked as well.

  • @ Gary, GI, the coordinate extraction we do is actually very accurate and pretty much adequate for drone navigation. Actually, the only limiting factor is the accuracy of the GPS sensor used. With the standard 3DR uBlox sensors we get an accuracy of ~2 meters, which is more than we need for aerial navigation. Adding an RTK GPS will definitely improve that accuracy, but in my opinion it is overkill for any conventional drone use. Anyway, the option is there.

    Let me give you some additional information on the way we do the coordinate extraction so it becomes more obvious:

    First, a sector of the spherical image called the ROI (region of interest) is isolated and rectified to remove the fisheye distortion. We use both equirectangular and rectilinear rectification algorithms. The rectified ROI now looks as if it were taken by a standard camera. This is the most demanding part, depending on the accuracy you wish to achieve: less distortion comes at the price of more processing, and vice versa, but as I said we managed to strike a very good balance. The fact that we only rectify the ROI with increased accuracy (rectilinear) while we keep the rest of the sphere rectified equirectangularly saves a lot of processing.
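    (A hedged sketch of this rectification step, assuming an equidistant fisheye model rather than the vendor's calibration: a small rectilinear ROI is rebuilt by remapping only the pixels it needs. The lens centre, image-circle radius and 240-degree coverage are illustrative assumptions.)

```python
# Build a rectilinear (pinhole-like) ROI view out of an equidistant fisheye image.
import cv2
import numpy as np

def rectify_roi(fisheye_img, pan_deg, tilt_deg, out_w=640, out_h=480,
                hfov_deg=60, fish_fov_deg=240):
    h, w = fisheye_img.shape[:2]
    cx, cy, R = w / 2, h / 2, min(w, h) / 2          # assumed image circle
    f_fish = R / np.radians(fish_fov_deg / 2)        # equidistant model: r = f * theta

    # Pixel rays of the virtual pinhole camera looking down its own +Z axis.
    f = (out_w / 2) / np.tan(np.radians(hfov_deg) / 2)
    u, v = np.meshgrid(np.arange(out_w) - out_w / 2, np.arange(out_h) - out_h / 2)
    rays = np.stack([u / f, v / f, np.ones_like(u)], axis=-1)
    rays /= np.linalg.norm(rays, axis=-1, keepdims=True)

    # Rotate the ROI towards the requested pan (yaw) and tilt (pitch).
    pan, tilt = np.radians(pan_deg), np.radians(tilt_deg)
    Ry = np.array([[np.cos(pan), 0, np.sin(pan)],
                   [0, 1, 0],
                   [-np.sin(pan), 0, np.cos(pan)]])
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(tilt), -np.sin(tilt)],
                   [0, np.sin(tilt), np.cos(tilt)]])
    d = rays @ (Ry @ Rx).T

    # Project each ray back into the fisheye image (equidistant projection).
    theta = np.arccos(np.clip(d[..., 2], -1, 1))     # angle from the optical axis
    phi = np.arctan2(d[..., 1], d[..., 0])           # azimuth around the axis
    r = f_fish * theta
    map_x = (cx + r * np.cos(phi)).astype(np.float32)
    map_y = (cy + r * np.sin(phi)).astype(np.float32)
    return cv2.remap(fisheye_img, map_x, map_y, cv2.INTER_LINEAR)
```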

    Then the MAVLink data are injected into the rectified ROI. Since we know the exact location of the drone and its yaw, roll and pitch angles, we assign a vector (bearing line) to each individual pixel, assuming that the camera sensor pixels are evenly distributed over the sensor (which is obviously the case).
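    (A minimal sketch of this bearing-line step: a unit ray in the camera frame is rotated by the vehicle's roll, pitch and yaw into a local North-East-Down frame. The camera-to-body alignment and the example numbers are assumptions, not the vendor's values.)

```python
# Rotate a camera-frame pixel ray into NED using the vehicle attitude.
import numpy as np

def body_to_ned(roll, pitch, yaw):
    """Standard aerospace Z-Y-X (yaw, pitch, roll) rotation matrix, angles in radians."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def pixel_bearing_ned(ray_cam, roll, pitch, yaw):
    """Assumes camera axes coincide with body axes (forward-right-down)."""
    ray_ned = body_to_ned(roll, pitch, yaw) @ ray_cam
    azimuth = np.degrees(np.arctan2(ray_ned[1], ray_ned[0])) % 360  # bearing from North
    elevation = np.degrees(np.arcsin(-ray_ned[2]))                  # above/below horizon
    return ray_ned, azimuth, elevation

# Example: a pixel looking 30 degrees below the nose, level flight, heading East.
ray = np.array([np.cos(np.radians(30)), 0.0, np.sin(np.radians(30))])
print(pixel_bearing_ned(ray, 0.0, 0.0, np.radians(90)))
```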

    Additional algorithms are then used to generate the exact geographic coordinates at the points where these bearing lines cross the earth model we use (which is very precise, as it originates from NASA).
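    (A simplified stand-in for this step: march along the pixel's bearing line from the drone position until it drops below the terrain elevation returned by a DEM lookup. The flat-earth offset conversion and the placeholder dem_elevation() function are assumptions; a real system samples an actual elevation model.)

```python
# Intersect a NED bearing line with terrain by stepping along the ray.
import numpy as np

EARTH_R = 6378137.0  # metres, WGS-84 equatorial radius

def dem_elevation(lat, lon):
    """Placeholder terrain lookup; a real system samples an SRTM/DEM tile here."""
    return 0.0

def intersect_terrain(lat0, lon0, alt0, ray_ned, step=5.0, max_range=5000.0):
    """Step along a unit NED ray from (lat0, lon0, alt0) until it hits terrain."""
    for s in np.arange(step, max_range, step):
        n, e, d = ray_ned * s
        lat = lat0 + np.degrees(n / EARTH_R)
        lon = lon0 + np.degrees(e / (EARTH_R * np.cos(np.radians(lat0))))
        alt = alt0 - d
        if alt <= dem_elevation(lat, lon):
            return lat, lon, alt           # geographic coordinates of the pixel
    return None                            # ray never reached the ground

# Example: 100 m above flat terrain, looking 30 degrees down towards the North.
ray = np.array([np.cos(np.radians(30)), 0.0, np.sin(np.radians(30))])
print(intersect_terrain(37.4139, -122.0520, 100.0, ray))
```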

    This is the theory behind it, but let’s take a look at the facts:

    Look at the video on our VR-Eye product page called “video draping”. This video shows how the rectified video layer is draped on top of the mapping layer. At approx. 00:25 of the video you will see a small building (a shed) appear on the map, and then the video layer covers it with the real-time video feed. You will see a spatial correlation between the mapping layer and the video layer, and this shed can be used as a comparison point. This makes it easy to realize that clicking on the map to get coordinates is now the same thing as clicking on the video pixels to get GEOGRAPHIC coordinates. The video that we produce is like a metadata-enriched image (e.g. TIFF), so every pixel is geotagged, and this happens for every pixel of every frame. We have invested a lot of effort to make this as lightweight as possible and, of course, transparent to the user. The user simply clicks on the video to get coordinates, navigate, etc. And there are more features under the hood based on this technique that we’ll soon release.

  • Hi GI,

    That is actually pretty much what I was trying to say: usually DEMs do not have obstacles (like trees) in them.

    So you approximate the location as though the obstacle were not there, because the vector intersection is computed against the information in the DEM.

    I do not think that this is supposed to be a way to build a 3D depth map or 3D point cloud.

    That is much better handled by a stereoscopic two-camera system or, better, a scanning laser rangefinder, or even better, both.

    Best,

    Gary

  • @Gary,

    Real 3D objects like trees are not represented by the DEM model, so all you can get exactly is a vector.

    To get a depth map you need to know at what distance this vector intersects that tree.

    Building a depth map from 360 spherical video is not an easy job, as I pointed out earlier (360 panoramas, 360 sphere video, image examples from a search engine).

    OK, modern TV sets come with a 2D-to-3D live conversion chip embedded.

    Do you offer 360 sphere to 3D 360 sphere VR technology?
