Well, not exactly the wild; more like the most mapped place on the planet: the Berkeley Marina.
(Disclaimer: I work for Parrot. I did this on my own time, and for my own knowledge and confidence in the product. I was excited by the results, and thought this group would be as well.)
I know many of you here are like me, exploring the capabilities of sUAS vehicles, sensors and processing workflows. It is a quickly changing and dramatic landscape. The business side of drones is comparable to the “Wild West,” with dramatic storylines and pricing shoot-outs happening in the streets. Similarly, the technology side of drones is comparable to the steam engines of that era: modern engineers are quickly iterating on and surpassing “legacy” technology whose lifespan is measured in months, not years.
Enabled by this rapid pace of sensor innovation, Sequoia is a multi-spectral sensor with four discrete bands, a separate RGB sensor, and a fully integrated light incidence (irradiance) sensor with GPS and IMU, packaged in a common physical form factor. When Parrot announced the Sequoia sensor at the World Ag Expo a few weeks ago, I still had many questions. “The consumer-product company, Parrot, really made this sensor?” was quickly followed by more useful questions like, “Does it really do what I think it does? What’s the processing workflow? How do I explain this to people?”
As part of my job at Parrot, I have early access to these sensors. While I’m generally a very optimistic person, I’m also a skeptic at heart. I know how difficult it is to engineer, manufacture, test and ship hardware/software products at volume. When a Sequoia came across my desk last week, I couldn’t pass on the opportunity to put it in the air and perform my own, fairly unscientific, end-to-end validation test. I had very little to do with bringing this product to life, but I am excited by the quality and capabilities. Next time you see a Parrot engineer, give them a high-five.
Sequoia was designed to be extremely easy to integrate into existing hardware configurations. The only connection the system needs is 5 V at 3 A. I added an appropriate BEC to my 3DR Y6, spliced in a female USB connector, and the sensor was fully functional. I took an FR4 plate, cut it to fit around the lens array, and the camera fit perfectly into the gimbal. The USB cables that the sensor comes with are a little bulky and stiff for this kind of installation, but with some cable routing the package was quickly ready to be put in the air.
Sequoia has an IMU in both the irradiance sensor and the camera itself. What does that mean? The compass dance. Fortunately, it’s painless and very similar to a BeBop2 in duration and complexity. It took about 20 seconds to get through yaw, pitch and roll for both sensors.
There is an SD Card expansion slot in the irradiance sensor. I put a 16GB card in there to make the image transfer super easy. With the amount of data this thing is collecting, it seems that the fastest SD cards you can buy would be a good choice.
In addition to USB PTP control, there is also a WiFi connection available in Sequoia. This gives you access to a browser-based camera configuration tool where, among many other things, you can set time- or distance-based triggering, or telnet into a root prompt on the camera. Since you can also do this with many other Parrot products, I’m assuming it will also be available on the production version of the sensor. I’m going to keep checking back on http://developer.parrot.com/ to see when they post more information on the API.
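I haven’t scripted against the PTP interface myself yet, but for anyone who wants to experiment, a minimal sketch of the direction I’d try first is below. It leans on the off-the-shelf gphoto2 CLI, and whether libgphoto2 recognizes Sequoia as a generic PTP device out of the box is purely my assumption.

```python
# Untested sketch: drive the USB PTP interface with the stock gphoto2 CLI.
# Assumes libgphoto2 enumerates Sequoia as a generic PTP camera, which I
# have NOT verified against real hardware.
import subprocess

def list_cameras() -> str:
    """Return gphoto2's view of attached PTP cameras."""
    return subprocess.run(
        ["gphoto2", "--auto-detect"],
        check=True, capture_output=True, text=True,
    ).stdout

def trigger_capture() -> None:
    """Fire the shutter once; images stay on the camera's SD card."""
    subprocess.run(["gphoto2", "--trigger-capture"], check=True)

if __name__ == "__main__":
    print(list_cameras())
    trigger_capture()
```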
Once powered up in the field, a green light on both the camera and the irradiance sensor means that it’s ready to produce fully geo-tagged images. Below is a screenshot of what that means. I did zero configuration on this camera. After initialization, I pressed the shutter button twice to start time-based capture at one-second intervals (more on this later). As you can see, every time the sensor is triggered, it takes five pictures that are all geo-referenced.
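If you would rather check the tags in bulk than click through image properties, here is a minimal sketch using the exifread package. The mount path is a placeholder, and I’m assuming the single-band TIFFs expose their GPS tags as standard EXIF.

```python
# Quick sanity check that every capture on the SD card carries GPS tags.
# Requires the exifread package (pip install exifread). The DCIM path is a
# placeholder for wherever the card mounts on your machine.
from pathlib import Path
import exifread

def has_geotag(path: Path) -> bool:
    with open(path, "rb") as f:
        tags = exifread.process_file(f, details=False)
    return "GPS GPSLatitude" in tags and "GPS GPSLongitude" in tags

image_dir = Path("/Volumes/SEQUOIA/DCIM")
images = [p for p in image_dir.rglob("*") if p.suffix.upper() in (".JPG", ".TIF")]
missing = [p.name for p in images if not has_geotag(p)]
print(f"{len(images) - len(missing)} / {len(images)} images are geo-tagged")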
One of the unique things about this sensor is the discrete bands that it captures simultaneously. The advantage is that the sensor can measure specific wavelengths more precisely than standard RGB sensors. When processing these wavelengths on the back end, users can combine the discrete bands into different indices that best match their specific requirements.
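As a concrete example of that band math, NDVI is just (NIR - Red) / (NIR + Red). The sketch below runs it on the raw single-band TIFFs linked further down; keep in mind the raw frames are uncalibrated digital numbers and the bands are not pixel-aligned out of the camera, so this only illustrates the arithmetic and is no substitute for the calibrated reflectance maps Pix4D produces.

```python
# Band math on two single-band frames: NDVI = (NIR - Red) / (NIR + Red).
# Raw TIFFs are digital numbers, not reflectance, and the lenses are offset,
# so treat this purely as an illustration of combining discrete bands.
import numpy as np
import rasterio

with rasterio.open("IMG_160227_224926_0238_RED.TIF") as r, \
     rasterio.open("IMG_160227_224926_0238_NIR.TIF") as n:
    red = r.read(1).astype("float64")
    nir = n.read(1).astype("float64")

with np.errstate(divide="ignore", invalid="ignore"):
    ndvi = (nir - red) / (nir + red)

print("NDVI range:", np.nanmin(ndvi), "to", np.nanmax(ndvi))
```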
After validating that the sensor was working properly on the ground, I started up Mission Planner. I wasn’t sure of the sensor’s field of view, so I took a guess and used the S110 camera model in the grid planning tool. At 50 m planned altitude, this reasonable flight plan is what was generated. After preflight checks and starting the sensor, I sent the Y6 off on its mission.
The flight went smoothly, and the data was easy to validate in the field. I pulled the SD card from the irradiance sensor, put it in my laptop and made sure there were enough geo-tagged images. I was pleasantly surprised at the quality of the RGB images. Multi-rotors are an inherently bad place to be if you are trying to be precise. I was a little worried about rolling shutter or noise, but those worries were unfounded.
RGB: https://www.dropbox.com/s/pb16jzu5dbltz3y/IMG_160227_224926_0238_RGB.JPG?dl=0
Green: https://www.dropbox.com/s/lxbwa6xm9wz0tsv/IMG_160227_224926_0238_GRE.TIF?dl=0
Red Edge: https://www.dropbox.com/s/nd2rcuyshx924ip/IMG_160227_224926_0238_REG.TIF?dl=0
Red: https://www.dropbox.com/s/tpdcaxc9jxcapjl/IMG_160227_224926_0238_RED.TIF?dl=0
NIR: https://www.dropbox.com/s/17zx8k93srs7ujv/IMG_160227_224926_0238_NIR.TIF?dl=0
Speaking of enough images, a one-second interval is clearly too frequent. On this flight, with 5 images taken every second, there are 1,791 images for Pix4D to process. 90%+ overlap is unnecessary for this type of processing.
The next flight will use distance-based triggering to test proper overlap, but for now I manually culled two out of every three picture sets to get to a more reasonable 771 pictures. Keep in mind that this is total pictures, including all discrete bands and RGB.
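To put rough numbers on overlap versus trigger spacing, here is a back-of-the-envelope sketch. The focal length and pixel pitch are the user-guide figures quoted in the comments below, and the cruise speed is an assumed placeholder.

```python
# Back-of-the-envelope trigger spacing for a target forward overlap.
# Focal length / pixel pitch / resolution come from the user-guide figures
# quoted in the comments; flight speed is an assumed placeholder.
ALT_M        = 50.0   # planned altitude
FOCAL_MM     = 3.98   # monochrome lens (nominal)
PIXEL_UM     = 3.75
RES_ALONG_PX = 960    # assumes the short sensor axis points along track

footprint_along_m = ALT_M * (RES_ALONG_PX * PIXEL_UM * 1e-3) / FOCAL_MM
for target_overlap in (0.75, 0.80, 0.85):
    spacing = footprint_along_m * (1 - target_overlap)
    print(f"{target_overlap:.0%} overlap -> trigger every {spacing:.1f} m")

# For comparison: a 1 s interval at an assumed ~5 m/s gives ~5 m spacing,
# i.e. roughly 89% forward overlap at that speed, in line with the 90%+
# overlap from this flight.
speed = 5.0
print(f"1 s @ {speed} m/s -> {1 - speed / footprint_along_m:.0%} overlap")
```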
It’s still not pretty spacing, but we will see what Pix4D thinks of it. Using a beta Pix4D version 2.1.34, the Sequoia images are seamlessly identified and put into an appropriate Camera Rig. https://goo.gl/2Y3Epz
I am not a Pix4D expert, and I wanted to see how the default configurations worked, so I selected the 3D Maps template and “Start Processing Now!”. Again, I’m more interested in the workflow and time required than in fine-tuning the knobs available in Pix4D.
First things first, the quality report. https://db.tt/4kZm0mYA
Well, not the best-looking quality report I’ve ever seen. I assume that’s primarily due to the “shotgun” style triggering and image culling. I like to see the first four checks come up green here. The GSD is 5 cm at 50 m, which seems pretty good to me. Full initial processing took 5 hours, not too bad on an 8 GB RAM machine with an Nvidia 970 GPU. I think we should see closer to ten thousand keypoints, and I’m not sure why there are three image blocks when we want one. It seems to me we’re fairly close on all of these checks, though. How did the results come out?
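That 5 cm figure holds up against a quick sanity check using the nominal monochrome specs quoted in the comments below (3.75 µm pixels behind a 3.98 mm lens):

```python
# Sanity check: GSD = altitude * pixel_pitch / focal_length,
# using the nominal monochrome specs from the user guide.
alt_m, pixel_m, focal_m = 50.0, 3.75e-6, 3.98e-3
gsd_cm = alt_m * pixel_m / focal_m * 100
print(f"Predicted monochrome GSD at {alt_m:.0f} m: {gsd_cm:.1f} cm")  # ~4.7 cm
```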
Coverage area is OK; the top right-hand point is not accurate, but sidelap was apparently sufficient. Not a bad result for poor image-collection technique and nearly zero input into the processing! I’m sure someone with more experience in Pix4D could clean this up nicely.
Zoom quality of the ortho also looks pretty good. Nice straight lines, good level of detail, not much ghosting. Here’s a link to the full ortho. https://www.dropbox.com/s/zx1ax2mc30b13fl/second_marina_geotag_transparent_mosaic_group1.tif?dl=0
Let’s do a quick sanity check on those aggregate piles.
The larger pile is around six cubic meters and the smaller around five. Smaller piles like these are a little easier to estimate by eye, and by my expert eye measurements, six and five seem pretty close!
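If you are wondering why those numbers are plausible, a simple cone approximation gets you into the right ballpark. The diameter and height below are illustrative placeholders, not measurements pulled from the model.

```python
# Rough cone approximation for a pile volume: V = pi * r^2 * h / 3.
# The diameter and height are illustrative placeholders only.
import math

def cone_volume(diameter_m: float, height_m: float) -> float:
    r = diameter_m / 2
    return math.pi * r**2 * height_m / 3

print(f"~4 m wide, ~1.4 m tall pile: {cone_volume(4.0, 1.4):.1f} m^3")  # ~5.9 m^3
```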
Finally, it’s time to check out what Sequoia was designed to do, produce accurate multi-spectral imagery. Pix4D has done a great job on their Index Calculator, and the changes for Sequoia in this latest beta really help simplify the process of producing vegetative indices.
After generating the reflectance map, you define the region to analyze, and then the index you want to process; in this case, NDVI. I think the data in the top right corner (red shading) is questionable and should ideally be removed from the analysis. The region tool is helpful for excluding information that doesn’t need to be processed. There are many of you here who understand this better than I do, so I will not pretend to be able to interpret this map.
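To show what excluding a region does to the summary numbers, here is a rough sketch that masks out the suspect corner of an exported index GeoTIFF. The file name is a placeholder, and the rectangular mask is a crude stand-in for the polygon you would draw with Pix4D’s region tool.

```python
# Illustration of excluding a suspect region before summarizing an index map.
# "ndvi.tif" is a placeholder name for an index GeoTIFF exported from Pix4D.
import numpy as np
import rasterio

with rasterio.open("ndvi.tif") as src:
    ndvi = src.read(1).astype("float64")
    if src.nodata is not None:
        ndvi[ndvi == src.nodata] = np.nan

rows, cols = ndvi.shape
masked = ndvi.copy()
masked[: rows // 5, cols - cols // 5 :] = np.nan   # crude top-right exclusion

print(f"Mean NDVI, full raster:       {np.nanmean(ndvi):.3f}")
print(f"Mean NDVI, corner masked out: {np.nanmean(masked):.3f}")
```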
Finally, you can generate and export variable-rate prescription shapefiles to import into your precision farming applicator of choice, for example Ag Leader.
http://www.agleader.com/blog/loading-prescription-files-into-ag-leader-integra-and-versa-displays/
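Under the hood, a prescription shapefile is just polygons with a rate attribute. The sketch below shows the general shape of one using geopandas; the zone geometries, rates and column names are made up for illustration, and the attribute naming your display expects may differ (see the Ag Leader article above).

```python
# Hedged sketch of a variable-rate prescription file: polygons plus a rate
# attribute. Zone geometries, rates and column names are invented here; in
# practice they come from classifying the index map, and column conventions
# vary by display/controller.
import geopandas as gpd
from shapely.geometry import box

zones = gpd.GeoDataFrame(
    {
        "zone": [1, 2],
        "rate": [120.0, 180.0],  # e.g. product rate per acre
        "geometry": [
            box(-122.3200, 37.8650, -122.3195, 37.8655),
            box(-122.3195, 37.8650, -122.3190, 37.8655),
        ],
    },
    crs="EPSG:4326",
)
zones.to_file("prescription.shp")
```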
An agronomist or other expert eye is still required to make sense of these vegetative indices in relation to individual crops, but there are companies like Airinov (http://airinov.fr) working to automate the analysis based on crop-specific phenology.
We’ve come a long way in the past few years. While much of this work has been done for decades using satellites and manned aircraft, we are just now able to begin discussing viable end-to-end workflows for drones in the agricultural space.
Multi-spectral and aerial technology are merging in a way that can quickly and accurately produce vast amounts of data for one of the largest industries on the planet. Sequoia takes a big step toward commoditizing complex data acquisition for agriculture. Gathering cost-effective, timely, high-resolution data is no longer the challenge it once was.
The challenge we must now overcome is listening to farmer and agronomist requirements on ease of use, interoperability and data life-cycle, so that we as a drone industry provide products and services that build trust at scale in an industry that is far more experienced, confident in its roots, and wary of newcomers.
Comments
Thanks for the non-sales pitch(y) post, Jon. We are currently buzzing around with a RE3, but I'm thinking about getting a sequoia for the Solo as well. The IMU and GPS in the irradiance sensor + the 16mp RGB camera make this pretty damn appealing for anything that doesn't need the discrete blue channel.
Just to be sure, the RGB photos are tagged with the orientation information from the IMU as well, correct?
Got to get one of these :-D Thanks for the info.
Thanks John for the user guide. The PTP preview command doesn't seem to be implemented, so no live video from the RGB camera.
Great, thanks, this helped me quite a bit, but no, I am definitely not an expert in this. It looks, however, as if my guess was right: correcting the lens focal length of the monochrome sensor to 3.02 mm instead of 3.98 mm.
BTW, for the interested ones: horizontal angle of view is then 76.9°, vertical AOV 61.6°.
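In case anyone wants to reproduce those numbers, the arithmetic is simply angle of view = 2 * atan(sensor dimension / (2 * focal length)), with the focal plane sized from the pixel pitch and resolution:

```python
# Angle of view = 2 * atan(sensor_dimension / (2 * focal_length)),
# with the focal plane sized from 1280 x 960 pixels at 3.75 um.
import math

def aov_deg(sensor_mm: float, focal_mm: float) -> float:
    return math.degrees(2 * math.atan(sensor_mm / (2 * focal_mm)))

w, h = 1280 * 0.00375, 960 * 0.00375   # 4.8 mm x 3.6 mm focal plane
diag = math.hypot(w, h)                # 6.0 mm
f = 3.02                               # corrected focal length in mm
print(f"HFOV {aov_deg(w, f):.1f} deg, VFOV {aov_deg(h, f):.1f} deg, "
      f"DFOV {aov_deg(diag, f):.1f} deg")
# -> 76.9, 61.6 and 89.6 degrees; the diagonal matches the monochrome
#    D-FOV quoted below.
```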
Thanks again for your help!
@Pascal - Not a problem at all! Great questions. I did just receive a follow up note with this. You obviously know more about this than I do, so please let me know if this helps the calculations.
Actually, the D-FOV (diagonal) values are:
Monochrome: 89.6°
RGB: 73.5°
@John,
Thanks for the feedback. However, it seems as if there's a mistake somewhere in the data: if you take the resolution of 1280x960 and the pixel size of 0.00375 mm, you get a focal plane size of 4.8 x 3.615 mm for the sensor. With a focal length of 3.98 mm (btw, why is the FOV exactly equal to the focal length?), you get an image GSD of 9.4 cm at an altitude of 100 m. The user guide, however, states a ground resolution of 12.4 cm at a height of 100 m, which would indicate a focal length of 3.02 mm. For the RGB camera, the data so far seems coherent.
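For reference, the calculation behind this is simply GSD = altitude * pixel pitch / focal length, evaluated for both candidate focal lengths:

```python
# GSD = altitude * pixel_pitch / focal_length, for both candidate lenses.
alt_m, pixel_m = 100.0, 3.75e-6

for focal_mm in (3.98, 3.02):
    gsd_cm = alt_m * pixel_m / (focal_mm * 1e-3) * 100
    print(f"f = {focal_mm:.2f} mm -> GSD {gsd_cm:.1f} cm at {alt_m:.0f} m")
# 3.98 mm gives 9.4 cm; 3.02 mm gives the 12.4 cm stated in the user guide.
```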
Resolution of the raw RGB pics you provided kindly to us is 4608 x 3456 and 1280 x 964 for the monochrome sensor, respectively.
Sorry for bothering you with my issue.
Best, Pascal
@Daryl - The PTP API is published in the documentation. I'm not aware of a video output stream from the sensor. I'd be interested to know if you learn something different. Here's a link to the User Guide.
https://www.dropbox.com/s/cs5xwntwujsp5rl/Sequoia_userguide_vdef.pd...
@Pascal - Here is the FOV information. You'll see additional information on the overlap at different altitudes in the user guide. A big THANK YOU to the quick response from Parrot Engineers.
- Monochrome Camera (Global Shutter)
  - pixel size: 3.75 μm
  - focal length: 3.98 mm
  - resolution: 1280x960
  - FOV: 3.98 mm ±5%
- RGB Camera (Rolling Shutter)
  - pixel size: 1.34 μm
  - focal length: 4.88 mm
  - resolution: 4608x3456
  - FOV: 4.88 mm ±5%
Hey John (and Nick), thanks for sharing your impressions of that great device.
Can you give us some more details about the sensors, i.e. focal lengths and/or FOV? I asked Parrot's hotline, but they obviously don't know more than I do already, telling me something like "comparable to my smartphone cam", which doesn't really help. It would be great to get more info so we can calculate capture efficiency, etc.
Thanks for the post, John. I'm interested in attaching one to my Solo. Has Parrot given any hints about the API? I'm keen to know whether I can get a video stream from the RGB camera through Sequoia's USB, and also whether the Sequoia can be programmatically put into mass storage device mode so I can retrieve still images from each of the cameras.
I'm not sure what the minimum focal distance is but, yes, you can definitely use this in your hand, on a ground based vehicle, or on a pole!
It should start shipping end of March from what I've heard to date.