The new Viooa smart camera combines 360-degree optics with an on-board Nvidia processor to add some impressive new functionality to Solo.
TECHNICAL SPECIFICATIONS
The Inevitable Smart Camera Evolution
Size and Weight:
Dimensions
- Camera: 133 mm × 118 mm × 50 mm / 5.2" × 4.6" × 2"
- Adapter: 85 mm × 75 mm × 27 mm / 3.3" × 3" × 1"
Weight
- Camera: 176 g
- Adapter: 110 g
- Optional impact- and scratch-resistant cover: 12 g
Temperature
- -20°C to 50°C / -4°F to 122°F
Recording:
Video Modes
- UHD 4K: 3864 × 2160 @ 30 fps
- 2K: 2048 × 1080 @ 30 fps
- HD 1080p: 1920 × 1080 @ 60 fps
- HD 720p: 1280 × 720 @ 60 fps
- SVGA: 800 × 600 @ 60 fps
Video File Formats
- MP4 / H.264
Photo Mode
- 8.5 megapixel, 3864 × 2202
Photo File Formats
- JPG / RAW
Audio Format
- Mono, 48 kHz, 64 kbps AAC
Viewers:
Computer
- Mac OS X and Windows players
Online
- YouTube and Facebook 360° video players
Smartphone and Tablet
- Windows, Android, and iOS
Virtual Reality (VR)
- Oculus Rift, Samsung Gear, FatShark, Zeiss, and more
Supported Platforms:
Multirotors
- Phantom 1, Phantom 2, Phantom 3, Inspire 1, S900, S1000+, Matrice 100, Solo, Iris, X8, Yuneec, A2, WooKong-M, Naza-M, Pixhawk, APM, Walkera, custom-built, and more
Fixed-wing
- QuestUAV, Lehmann, UAV Factory, X8 platform, custom-built, and more
Optics/Lens:
Field of View
- 360° × 180°
Number of Cameras
- 3
Sensor Technology
- Fully synchronized Global Release technology with constant exposure, white balance, and image enhancement
Features:
Stitching
- Real-time, onboard (a rough sketch of this kind of spherical projection follows this Features list)
GPS
- High-precision GNSS (<1 m), 56 channels, GPS/QZSS and GLONASS
Image Enhancement
- Gyroscopic/accelerometer sensors and image-processing algorithms for camera orientation and stabilization
RC Receiver / Autopilot
- S.BUS and CAN bus inputs
USB
- Micro-USB 3.0 for updates and image/footage downloads
Bluetooth
- Bluetooth 4.0 interface for camera settings
Microphone
- Built-in microphone
Storage
- MicroSD card up to 256 GB for full 360° or snapshot recording
HDMI Video Output
- Lightbridge, Connex, Teradek, and more
Analog Video Output
- FatShark, HobbyKing, ImmersionRC, Boscam, and more
RC Receivers
- Supports Futaba, Spektrum, JETI, and more
Autopilots
- Supports DJI, 3DR, Airware, MicroPilot, Paparazzi, and more
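The spec sheet says only that stitching is real-time and onboard, so the actual pipeline is not public. For readers curious what a 360° × 180° stitch from three lenses involves, below is a minimal Python sketch of the standard lookup-table approach: every pixel of the equirectangular panorama is mapped to a ray on the sphere, rotated into each camera's frame, and projected with a simple fisheye model. The lens layout (three cameras yawed 120° apart), field of view, resolutions, and equidistant projection here are illustrative assumptions, not Viooa's actual design.

```python
# Illustrative only: a simple 3-camera equirectangular stitch (NOT Viooa's actual pipeline).
# Assumes three fisheye lenses with ~190 deg FOV, yawed 0/120/240 degrees apart.
import numpy as np

OUT_W, OUT_H = 1024, 512          # equirectangular output (360 x 180 degrees)
CAM_W, CAM_H = 800, 600           # per-camera sensor resolution (illustrative)
FOV = np.radians(190.0)           # per-lens field of view (illustrative)
CAM_YAWS = np.radians([0.0, 120.0, 240.0])

def equirect_rays(w, h):
    """Unit 3D ray for every output pixel (x=forward, y=left, z=up)."""
    lon = (np.arange(w) / w - 0.5) * 2 * np.pi           # -pi .. +pi
    lat = (0.5 - np.arange(h) / h) * np.pi                # +pi/2 .. -pi/2
    lon, lat = np.meshgrid(lon, lat)
    return np.stack([np.cos(lat) * np.cos(lon),
                     np.cos(lat) * np.sin(lon),
                     np.sin(lat)], axis=-1)

def project_equidistant(rays_cam):
    """Equidistant fisheye model: pixel radius proportional to angle off the optical axis."""
    theta = np.arccos(np.clip(rays_cam[..., 0], -1, 1))   # angle from +x (optical axis)
    phi = np.arctan2(rays_cam[..., 2], rays_cam[..., 1])  # azimuth around the axis
    r = theta / (FOV / 2) * (min(CAM_W, CAM_H) / 2)       # pixels from image centre
    u = CAM_W / 2 + r * np.cos(phi)
    v = CAM_H / 2 - r * np.sin(phi)
    valid = theta < FOV / 2
    return u, v, valid

def build_lookup():
    """For each panorama pixel: which camera to sample and at what (u, v)."""
    rays = equirect_rays(OUT_W, OUT_H)
    best_cam = np.full((OUT_H, OUT_W), -1, dtype=np.int8)
    best_uv = np.zeros((OUT_H, OUT_W, 2), dtype=np.float32)
    best_theta = np.full((OUT_H, OUT_W), np.inf)
    for i, yaw in enumerate(CAM_YAWS):
        c, s = np.cos(-yaw), np.sin(-yaw)
        rot = np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])  # world -> camera (undo the yaw)
        rays_cam = rays @ rot.T
        u, v, valid = project_equidistant(rays_cam)
        theta = np.arccos(np.clip(rays_cam[..., 0], -1, 1))
        better = valid & (theta < best_theta)                # prefer the most central camera
        best_cam[better] = i
        best_uv[better] = np.stack([u[better], v[better]], axis=-1)
        best_theta[better] = theta[better]
    return best_cam, best_uv

if __name__ == "__main__":
    cam_index, uv = build_lookup()
    print("panorama pixels per camera:",
          np.bincount(cam_index.ravel()[cam_index.ravel() >= 0]))
```

In a lookup-table design like this, the table would be built once from lens calibration and then applied to every frame; the per-frame work is only a remap plus blending at the seams, which is what makes real-time onboard stitching plausible.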
Power:
Power Source
- LiPo 3S-6S input (11.1 V-22.2 V)
Modes:
Live
- Real-time point-of-interest (POI) output while recording 360° footage
First Person View (FPV)
- Real-time point-of-interest (POI) output while recording 360° footage; image orientation is disabled
Mosaic
- Leveled, geotagged snapshots triggered by time/distance or by the autopilot (a sketch of this trigger logic follows the spec list)
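The Mosaic entry above only says that snapshots are leveled, geotagged, and triggered by time, distance, or the autopilot. As a rough illustration of the trigger side only, here is a minimal Python sketch; get_gps_fix and capture_geotagged_snapshot are hypothetical placeholders, and the haversine distance check is simply a standard way to implement a distance-based trigger, not a description of Viooa's firmware.

```python
# Illustrative only: time/distance snapshot triggering as described for Mosaic mode.
# get_gps_fix() and capture_geotagged_snapshot() are hypothetical placeholders.
import math
import time

EARTH_RADIUS_M = 6371000.0

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 positions."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def mosaic_loop(get_gps_fix, capture_geotagged_snapshot,
                interval_s=2.0, spacing_m=10.0):
    """Trigger a snapshot whenever the time interval or the distance spacing elapses."""
    last_time = 0.0
    last_fix = None
    while True:
        fix = get_gps_fix()                        # hypothetical: returns (lat, lon) or None
        now = time.monotonic()
        moved = (last_fix is not None and fix is not None and
                 haversine_m(*last_fix, *fix) >= spacing_m)
        if now - last_time >= interval_s or moved:
            capture_geotagged_snapshot(fix)        # hypothetical: writes a leveled, geotagged JPG
            last_time, last_fix = now, fix
        time.sleep(0.1)
```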
Comments
+1 to that, Patrick!
As they say... where's the beef?!
Show me more than specs: show me pictures, diagrams, code. Convince me that this thing really works and that it is more than just another Kickstarter video...
I can do optical flow and deep learning on my laptop no problem, but doing sense-and-avoid in real time on a flying vehicle is another ball game. There are many contenders in this field, but very few can actually achieve this goal. I am pretty convinced it will be mainstream in two years, but if you claim you can do it now, show me how.
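(To make that point concrete: the laptop half really is only a few lines. A minimal dense optical flow sketch, assuming the opencv-python package and a webcam, is below; running this, plus a deep network, at frame rate on an airborne, power- and weight-constrained computer is the part that still needs to be demonstrated.)

```python
# A few lines of dense optical flow on a laptop webcam (opencv-python assumed).
import cv2

cap = cv2.VideoCapture(0)                      # default webcam
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while ok:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farneback dense optical flow between consecutive frames
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    print("mean flow magnitude (px/frame): %.2f" % mag.mean())
    prev_gray = gray

cap.release()
```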
Additional comments:
I watched the video carefully to find the optical flow and deep learning. The red and blue arrows show the takeoff and landing operations, and the moving car outlined with a box shows moving-object tracking and recognition; the road is also marked, and the roadside forest is isolated in a different color. How powerful the on-board processing unit must be! I could not imagine providing so many functions through real-time processing algorithms, but I believe this is real for us DIYers. Thanks!
Well, it would be better to reveal more details about this product to win over customers, rather than just piling up terminology.
Well, the idea is pretty solid. I believe there is a market for VR on drones; eager to see more!
Never mind, I just saw Nvidia in your post. Let me ask then: is it a TK1 or TX1 board?
Hi @Chris, do they use Nvidia's Tegra series as the computational platform? We are doing similar things for a thesis (indoor navigation and online mapping of the environment).
Hopefully we will get real specs and operational details... There is no information on the advanced capabilities, like the ground-facing optical flow and real-time deep learning, just superficial information and the proverbial "patent pending". So, until someone provides us with serious information, I will consider this to be a 3 × 120° camera arrangement with stitching.
One question, the usual question: when will it be available?