VR-Eye Spherical Vision Camera


This is a single-lens spherical vision camera capable of streaming video with a 360 x 240 degree field of view (FOV). Its small size and weight make it well suited to drones and robotics.

 

The viewer can visualize the live or recorded stream as a 3D geo-registered video bubble placed on the map, or enter the sphere for a more immersive experience. Additional viewing modes are "virtual pan-tilt", "target lock-on" and "video draping", where the video Region of Interest (ROI) is rectified in real time. The VR-Eye cam software is also compatible with Android-based VR headsets, e.g. the Gear VR, or other generic VR headsets that work with Android smartphones, e.g. Google Cardboard. An iOS version is in the works.
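To give a feel for the "virtual pan-tilt" mode, here is a minimal sketch of extracting a rectified rectilinear view from an equirectangular frame. It assumes a full 360 x 180 equirectangular input, and the function name and parameters are illustrative; this is not the actual VR-Eye cam implementation.

```python
# Sketch: render a pinhole-camera view looking (pan, tilt) into a
# 360x180 equirectangular image. Illustrative only.
import cv2
import numpy as np

def virtual_pan_tilt(equi, pan_deg, tilt_deg, fov_deg=90.0, out_size=(640, 480)):
    """Extract a rectified ROI from equirectangular frame `equi` (H x W x 3)."""
    w_out, h_out = out_size
    h, w = equi.shape[:2]
    f = 0.5 * w_out / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels

    # Pixel grid of the virtual camera, centred on the optical axis.
    x = np.arange(w_out) - w_out / 2
    y = np.arange(h_out) - h_out / 2
    xx, yy = np.meshgrid(x, y)
    zz = np.full_like(xx, f, dtype=np.float64)

    # Rotate the view rays by tilt (about x), then pan (about y).
    t, p = np.radians(tilt_deg), np.radians(pan_deg)
    Rx = np.array([[1, 0, 0], [0, np.cos(t), -np.sin(t)], [0, np.sin(t), np.cos(t)]])
    Ry = np.array([[np.cos(p), 0, np.sin(p)], [0, 1, 0], [-np.sin(p), 0, np.cos(p)]])
    rays = np.stack([xx, yy, zz], axis=-1) @ (Ry @ Rx).T

    # Convert rays to spherical angles, then to equirectangular pixel coords.
    lon = np.arctan2(rays[..., 0], rays[..., 2])                    # -pi..pi
    lat = np.arcsin(rays[..., 1] / np.linalg.norm(rays, axis=-1))   # -pi/2..pi/2
    map_x = ((lon / np.pi + 1) * 0.5 * w).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1) * 0.5 * h).astype(np.float32)
    return cv2.remap(equi, map_x, map_y, cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_WRAP)
```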

 

The VR-Eye cam software can combine the spherical video with the MAVLink stream to offer a unique "geo-registered" view, where the video sphere is placed at its exact location and orientation on the map.
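As a rough sketch of that combination, the vehicle pose needed to place and orient the video sphere can be pulled from the MAVLink stream with pymavlink. The connection string and the hand-off at the end are illustrative assumptions, not the VR-Eye code.

```python
# Sketch: read vehicle position and attitude from MAVLink (pymavlink).
from pymavlink import mavutil

conn = mavutil.mavlink_connection('udp:0.0.0.0:14550')  # assumed GCS port
conn.wait_heartbeat()

pos = conn.recv_match(type='GLOBAL_POSITION_INT', blocking=True)
att = conn.recv_match(type='ATTITUDE', blocking=True)

# GLOBAL_POSITION_INT scales lat/lon by 1e7; altitudes are in millimetres.
sphere_pose = {
    'lat': pos.lat / 1e7,
    'lon': pos.lon / 1e7,
    'alt_m': pos.relative_alt / 1000.0,
    'roll': att.roll, 'pitch': att.pitch, 'yaw': att.yaw,   # radians
}
# place_video_sphere(sphere_pose)   # hypothetical hand-off to the map renderer
```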

The video below is a screen recording of a real-time stream as received by the ground station. The immersive video can be viewed live using the VR-Eye cam software or VR headsets, and simultaneously recorded. The recorded video can be replayed for post-mission analysis, where the viewer can select and monitor the camera surroundings from any desired camera angle, combined with geolocation information.

For more information about the VR-Eye cam, click here.

The VR-Eye cam is expected to enter production in early September 2016.


Comments

  • @Ravakith <slow hand clap> well done for linking to a non-existent web page, you idiot spammer </slow hand clap>

  • @ Fnoop, Thanks for your nice comments. About new technologies, it is really amazing what can come out when you assign real coordinates to video pixels. A drone is a full-blown system of systems that combines all the required sensors with a video camera to generate geo-registered video. What is really interesting is how this technology applies to smartphones, which already have all the above sensors (including a camera) in a miniaturized and compact device. The video below shows a comparable outcome when you assign geographic coordinates and bearing vectors to video pixels, but this time a smartphone is used. The way it's done is similar to the VR-Eye cam: we stream the video and location metadata over the internet (using 3G/4G) into the remote viewing application, where all the processing takes place. The outcome is a "geo-registered" video projection that requires no specialized hardware, just a plain smartphone.

    On the lower right of the video you can see the physical movement of the smartphone (camera). On the main screen you see how the video stream is combined with the smartphone's GPS/IMU data to generate a real-time geo-registered video, projected on the map at its exact location and orientation (a rough sketch of this projection step appears at the end of this comment). We call this app "Flashlight" (hence the flashlight icon).

     

    We developed Flashlight for body-worn camera applications, where the video is enhanced with geo metadata to improve the remote viewer's situational awareness. But we think there is potential for drone applications, especially smartphone-powered drones (yes, there are a few out there).

    This is what comes next!  

    [Embedded YouTube video]
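    For anyone who wants to experiment, here is a rough sketch of the projection step described above: intersecting the phone's look vector (from GPS + IMU) with flat ground to find where the video footprint lands on the map. The function name, the flat-earth approximation and the example values are illustrative assumptions, not the Flashlight code.

    ```python
    # Sketch: project a camera look vector onto flat ground (illustrative).
    import math

    def ground_intersection(lat, lon, alt_m, heading_deg, pitch_deg):
        """Return (lat, lon) where a ray from the camera hits the ground.
        pitch_deg is negative when looking below the horizon."""
        if pitch_deg >= 0:
            raise ValueError("ray never reaches the ground")
        # Horizontal distance from the camera to the ground intersection.
        dist = alt_m / math.tan(math.radians(-pitch_deg))
        # Offset along the heading, converted to degrees (small-distance approx.)
        dlat = dist * math.cos(math.radians(heading_deg)) / 111_320.0
        dlon = dist * math.sin(math.radians(heading_deg)) / (
            111_320.0 * math.cos(math.radians(lat)))
        return lat + dlat, lon + dlon

    # e.g. a phone held 1.7 m up, heading 45 deg, looking 30 deg below horizon:
    print(ground_intersection(51.5, -0.12, 1.7, 45.0, -30.0))
    # -> a point roughly 3 m north-east of the phone's position
    ```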
  • If you get near your projected price point you are going to be busy.
  • This is excellent,

    the product you are developing matches all my requirements. Keep us updated, and feel free to PM me if you have additional information or if you are looking for beta testers.

    Hope you have great success with it!!

  • @ Patrick, We have already developed two different methods of referencing the VR-Eye cam video when VR headsets are used:

    One is referenced to True North. As the VR headset has an integrated compass, when you point your head north you will be monitoring the camera sector that faces north. The same goes, of course, for any true bearing, e.g. point your head south and you'll be monitoring the south-facing camera regions. As explained in previous messages, the VR-Eye cam video sphere is a geo-oriented object, so it is referenced to true bearings.

    The second method is referenced to the drone frame. In other words, the front camera view is the front of the drone fuselage, regardless of the drone's true (or magnetic) heading.

    Both options are there and can be interchanged at the press of a button. Keep in mind that on the map we plot a 3D model representing the drone, which is also geo-oriented, i.e. you see the orientation of the drone and the orientation of the camera "look vector" on the map.

    What we can also do is add an indicator letter to the VR headset view, so when you point your head to, e.g., the front of the drone, the indicator "F" will appear. Likewise, "R" means right wing, "B" means back and "L" means left.

    It would also be easy to add four "preset" front, right, back and left views. This is very close to what you refer to as "resetting" the view vector.

    Note that as the VR-Eye cam's vertical field of view is 240 degrees, it can see well above the horizon; see the thread video, where the IRIS drone motors are visible (the camera is mounted on the IRIS belly). Another easy way to keep a visual reference to the drone's cardinal axes is to put a colored sticker on one of the arms. This would be another physical "spatial awareness" enhancement.
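    As a minimal sketch of how those two reference modes and the F/R/B/L indicator could be mapped from headset yaw (assumed logic for illustration, not the shipped VR-Eye software):

    ```python
    # Sketch: map headset yaw to the monitored camera bearing. Illustrative.
    def camera_bearing(headset_yaw_deg, drone_yaw_deg, mode):
        """Bearing of the monitored camera sector for a given headset yaw."""
        if mode == 'NORTH':    # geo-referenced: look north, see the north sector
            return headset_yaw_deg % 360.0
        if mode == 'FRAME':    # frame-referenced: look "ahead", see the nose
            return (drone_yaw_deg + headset_yaw_deg) % 360.0
        raise ValueError(mode)

    def sector_label(headset_yaw_deg):
        """F/R/B/L indicator letter relative to the drone frame."""
        return 'FRBL'[int(((headset_yaw_deg % 360.0) + 45.0) // 90.0) % 4]

    # e.g. head turned 100 deg right while the drone flies heading 250:
    print(camera_bearing(100.0, 250.0, 'NORTH'))   # 100.0 (true-bearing mode)
    print(camera_bearing(100.0, 250.0, 'FRAME'))   # 350.0 (relative to fuselage)
    print(sector_label(100.0))                     # 'R' (right wing)
    ```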

  • @MAGnet systems 

    Your description is really compatible with what I am planning to integrate; please take a look at this architecture: http://diydrones.com/group/companion_computers/forum/topics/proposa...

    The BBBmini - http://diydrones.com/group/bbbmini - is a BeagleBone-based companion computer that also acts as the autopilot with the MMM Mini Cape. I am using an 802.11a (OFDM) downlink on 5.8 GHz (about 30 Mbps effective throughput) down to the ground station. For the GCS, depending on the computational requirements for "flattening out" the ROI, the "multilink gateway" can be a laptop or an embedded system; I can adjust. Ideally it would be a small Android tablet, but once again this is not the deal-breaker on this project.

    What is really needed is a method of aligning the IMU/magnetometer mounted on the VR headset so we can keep a reference to the flight path. This could be a vector (an arrow) with the tail "attached" to the center of the drone and the head pointing to the ROI. A simple analogy is to use this system as an inverted celestial navigation system:

    [Diagram: inverted celestial-navigation analogy]

    Basically, the standard OSD is represented by the equator, so this half dome could move on three axes (pitch, roll and yaw) according to the IMU, and the ROI (the star in this representation) could move accordingly, or could be fixed if the image stabilization system is activated, just like with a gimbal system.

    Keeping the ROI steady is easier to implement with a joystick because the axes are fixed, but with your head, turning left/right/up/down is OK while centering and returning to zero (aligned with the nose of the aircraft) can be pretty awkward. On the EdTracker there is a homing button that you can press to reset the axes, so the ROI is 0-0-0 (roll, pitch and yaw are zeroed). I guess that would be an interesting feature, so you are "getting back to the cockpit" in a second (see the sketch after this comment).

    Keep up the good work!!
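    A minimal sketch of the homing-button idea above, assuming the tracker simply stores the current pose as an offset (illustrative class, not EdTracker's actual firmware):

    ```python
    # Sketch: recenter head tracking so the current pose reads as 0-0-0.
    class HeadTracker:
        def __init__(self):
            self.offset = (0.0, 0.0, 0.0)   # roll, pitch, yaw in degrees

        def recenter(self, imu_rpy):
            """Press of the homing button: current pose becomes 0-0-0."""
            self.offset = imu_rpy

        def view_rpy(self, imu_rpy):
            """Headset pose relative to the recentred reference, wrapped to +/-180."""
            return tuple((a - b + 180.0) % 360.0 - 180.0
                         for a, b in zip(imu_rpy, self.offset))

    tracker = HeadTracker()
    tracker.recenter((2.0, -5.0, 93.0))          # looking along the nose
    print(tracker.view_rpy((2.0, -5.0, 123.0)))  # -> (0.0, 0.0, 30.0): 30 deg right
    ```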

     

  • @MAGnet systems - great idea, this is potentially even cooler than your antennae :)  Love the innovation, looking forward to seeing what you come up with next.

  • @Jason K., @Sgt Ric

    Darius Jack was a notorious negative troll here a while back. He would come onto most blog posts or topics where people were posting something cool or new and trash it, usually obsessing over some random technical issue he had googled relating to the topic, usually claiming to be an expert or to have been involved in some major project or company about 10 years ago, usually with random googled links, and usually with long, often nonsensical posts with odd spacing and grammar. The comments from Global Innovator on this thread, and a look back at previous posts on other threads, show a distinct similarity...

    I enjoy reading the blogs and posts that come up here as they are often interesting (sometimes harebrained!). Healthy discussion is one thing, but I can't understand why some people come on here to bash new ideas and make everything about themselves. If you don't like or agree with some new idea, move on!

  • @ Patrick, Here is how the VR-Eye cam architecture works with our GCS and VR headsets: the video is transmitted to the GCS over the corresponding datalink (in our case digital, but we are also testing analog links for the analog VR-Eye cam version). The spherical video, or a sector of it (the ROI), can be viewed on the GCS laptop, but it is also relayed from the GCS laptop to the VR headsets through the integrated WiFi. The VR headsets use their own sensors (3D compass, accelerometers, etc.) to determine which part of the spherical view is monitored (the whole spherical video is relayed to the headsets).

    In any case, the MAVLink data received at the GCS can also be relayed to the VR headsets along with the video, so that wouldn't be an issue. Also, a separate OSD board is not required, as we extract OSD data directly from the MAVLink stream arriving at the GCS.

    What OSD information would you be interested in projecting on the VR headset?
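    As a sketch of the OSD-from-MAVLink step described above, using pymavlink; the message and field choices are typical examples, not necessarily what the VR-Eye GCS extracts:

    ```python
    # Sketch: pull common OSD fields straight from the MAVLink stream at the GCS.
    from pymavlink import mavutil

    gcs = mavutil.mavlink_connection('udp:0.0.0.0:14550')  # assumed GCS port
    gcs.wait_heartbeat()

    hud = gcs.recv_match(type='VFR_HUD', blocking=True)
    status = gcs.recv_match(type='SYS_STATUS', blocking=True)

    osd = {
        'alt_m': hud.alt,                    # AMSL altitude in metres
        'groundspeed_ms': hud.groundspeed,
        'heading_deg': hud.heading,
        'battery_v': status.voltage_battery / 1000.0,   # reported in millivolts
        'battery_pct': status.battery_remaining,
    }
    # relay `osd` to the headsets over the GCS WiFi link, e.g. as JSON
    ```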

  • @ Patrick,

    OSD integration is feasible on the VR headsets. As long as the OSD data arrive at the GCS, they can be relayed to the VR headsets. We will look into that.
