Well, not exactly the wild; more like the most mapped place on the planet: the Berkeley Marina.

(Disclaimer: I work for Parrot. I did this on my own time, and for my own knowledge and confidence in the product. I was excited by the results, and thought this group would be as well.)

I know many of you here are like me, exploring the capabilities of sUAS vehicles, sensors and processing workflows. It is a quickly changing and dramatic landscape. The business side of drones is akin to the “Wild West,” with dramatic story lines and pricing shoot-outs happening in the streets. Similarly, the technology side of drones resembles the steam engines of that same era: modern engineers are quickly iterating on and surpassing “legacy” technology whose life span is measured in months, not years.

Enabled by this rapid pace of sensor innovation, Sequoia is a multi-spectral sensor with four discrete bands, a separate RGB sensor, and a fully integrated light incidence sensor including GPS and IMU, all packaged in a common physical form factor. When Parrot announced the Sequoia sensor at the World Ag Expo a few weeks ago, I still had many questions. “The consumer-product company, Parrot, really made this sensor?” was quickly followed by more useful questions like, “Does it really do what I think it does? What’s the processing workflow? How do I explain this to people?”

As part of my job at Parrot, I have early access to these sensors. While I’m generally a very optimistic person, I’m also a skeptic at heart. I know how difficult it is to engineer, manufacture, test and ship hardware/software products at volume. When a Sequoia came across my desk last week, I couldn’t pass on the opportunity to put it in the air and perform my own, fairly un-scientific, end-to-end validation test. I had very little to do with bringing this product to life, but I am excited by the quality and capabilities. Next time you see a Parrot engineer, give them a high-five.

Sequoia was designed to be extremely easy to integrate into existing hardware configurations. The only connection the system needs is 5 V at 3 A. I added an appropriate BEC to my 3DR Y6, spliced in a female USB connector, and the sensor was fully functional. I took an FR4 plate and cut it to fit around the lens array, and the camera fit perfectly into the gimbal. The USB cables that the sensor comes with are a little bulky and stiff for this kind of installation, but with some cable routing the package was quickly ready to be put in the air.

Sequoia has an IMU in both the irradiance sensor and the camera itself. What does that mean? The compass dance. Fortunately, it’s painless and very similar to a Bebop 2 in duration and complexity. It took about 20 seconds to get through yaw, pitch and roll for both sensors.

There is an SD card expansion slot in the irradiance sensor. I put a 16 GB card in there to make the image transfer super easy. With the amount of data this thing collects, the fastest SD cards you can buy seem like a good choice.

In addition to USB PTP control, there is also a WiFi connection available in Sequoia. This gives you access to a browser-based camera configuration tool where, among many other things, you can set time- or distance-based triggering, or telnet into a root prompt on the camera. Since you can also do this with many other Parrot products, I’m assuming it will also be available on the production version of the sensor. I’m going to keep checking back on http://developer.parrot.com/ to see when they post more information on the API.

Once powered up in the field, a green light on both the camera and irradiance sensor means it’s ready to produce fully geo-tagged images. Below is a screenshot of what that means. I did zero configuration on this camera. After initialization, I pressed the shutter button twice to start time-based capture at one-second intervals (more on this later). As you can see, every time the sensor is triggered, it takes five pictures that are all geo-referenced.

One of the unique things about this sensor is the discrete bands that it captures simultaneously. The advantage is that the sensor can more precisely measure specific wavelengths than standard RGB sensors. When processing these wavelengths on the back end, users can combine the discrete bands into different indices that best match their specific requirements.
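As a concrete illustration of what “combining the discrete bands” means, here is a minimal Python sketch computing two common indices, NDVI and NDRE, from per-pixel reflectance values. The reflectance numbers are made up for illustration, not real Sequoia data.

```python
# Sketch: combining discrete bands into vegetation indices.
# Reflectance values are synthetic (0..1), not real Sequoia data.

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - RED) / (NIR + RED)."""
    return (nir - red) / (nir + red) if (nir + red) else 0.0

def ndre(nir, red_edge):
    """Normalized Difference Red Edge index; often used for denser canopies."""
    return (nir - red_edge) / (nir + red_edge) if (nir + red_edge) else 0.0

# Healthy vegetation reflects strongly in NIR and absorbs red light,
# while bare soil reflects both bands at similar levels.
healthy = {"red": 0.05, "red_edge": 0.30, "nir": 0.50}
soil    = {"red": 0.25, "red_edge": 0.28, "nir": 0.30}

print(round(ndvi(healthy["nir"], healthy["red"]), 2))  # ~0.82, high for vegetation
print(round(ndvi(soil["nir"], soil["red"]), 2))        # ~0.09, low for bare soil
```

Having Red Edge as its own discrete band is what makes indices like NDRE possible at all; an RGB camera’s broad, overlapping filters can’t isolate it.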

After validating that the sensor was working properly on the ground, I started up Mission Planner. I wasn’t sure of the sensor’s field of view, so I took a guess and used the S110 camera model in the grid planning tool. At a 50 m planned altitude, this reasonable flight plan is what was generated. After preflight checks and starting the sensor, I sent the Y6 off on its mission.

The flight went smoothly, and the data was easy to validate in the field. I pulled the SD card from the irradiance sensor, put it in my laptop and made sure there were enough geo-tagged images. I was pleasantly surprised at the quality of the RGB images. Multi-rotors are an inherently bad place to be if you are trying to be precise. I was a little worried about rolling shutter or noise, but those worries were unfounded.

RGB: https://www.dropbox.com/s/pb16jzu5dbltz3y/IMG_160227_224926_0238_RGB.JPG?dl=0

Green: https://www.dropbox.com/s/lxbwa6xm9wz0tsv/IMG_160227_224926_0238_GRE.TIF?dl=0

Red Edge: https://www.dropbox.com/s/nd2rcuyshx924ip/IMG_160227_224926_0238_REG.TIF?dl=0

Red: https://www.dropbox.com/s/tpdcaxc9jxcapjl/IMG_160227_224926_0238_RED.TIF?dl=0

NIR: https://www.dropbox.com/s/17zx8k93srs7ujv/IMG_160227_224926_0238_NIR.TIF?dl=0
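As an aside, the five images per trigger follow a consistent naming pattern, visible in the links above, which makes it easy to group a capture’s bands in a script. Here is a small Python sketch, assuming the `IMG_<date>_<time>_<seq>_<band>` scheme seen in these samples holds across the dataset:

```python
# Sketch: grouping Sequoia output files by capture. The naming scheme
# is an assumption inferred from the sample filenames above.
from collections import defaultdict

files = [
    "IMG_160227_224926_0238_RGB.JPG",
    "IMG_160227_224926_0238_GRE.TIF",
    "IMG_160227_224926_0238_REG.TIF",
    "IMG_160227_224926_0238_RED.TIF",
    "IMG_160227_224926_0238_NIR.TIF",
    "IMG_160227_224927_0239_RGB.JPG",  # a second, incomplete capture
]

captures = defaultdict(dict)
for name in files:
    stem, ext = name.rsplit(".", 1)
    prefix, band = stem.rsplit("_", 1)  # e.g. "IMG_160227_224926_0238", "NIR"
    captures[prefix][band] = name

# A complete trigger yields all five images: four bands plus RGB.
complete = {k: v for k, v in captures.items()
            if {"RGB", "GRE", "REG", "RED", "NIR"} <= set(v)}
print(len(captures), len(complete))  # 2 captures found, 1 complete
```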

Speaking of enough images, a one-second interval is clearly too frequent. On this flight, with five images taken every second, there are 1791 images for Pix4D to process. 90%+ overlap is unnecessary for this type of processing.
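To put rough numbers on that overlap, here is a back-of-the-envelope Python sketch. The ~48 m along-track footprint (5 cm GSD times 960 px) and the 5 m/s survey speed are assumptions for illustration, not measured values from this flight:

```python
def forward_overlap(footprint_m, speed_ms, interval_s):
    """Fraction of the along-track footprint shared by consecutive frames."""
    spacing = speed_ms * interval_s
    return max(0.0, 1.0 - spacing / footprint_m)

def interval_for_overlap(footprint_m, speed_ms, overlap):
    """Trigger interval (seconds) that yields the desired forward overlap."""
    return footprint_m * (1.0 - overlap) / speed_ms

# Assumed: ~48 m along-track footprint at 50 m altitude, 5 m/s groundspeed.
footprint = 48.0
print(round(forward_overlap(footprint, 5.0, 1.0) * 100))      # ~90% at 1 s triggers
print(round(interval_for_overlap(footprint, 5.0, 0.75), 1))   # ~2.4 s for 75%
```

Under these assumptions a one-second interval lands right at that unnecessary 90%+ figure, and roughly doubling the spacing would still leave healthy overlap.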

The next flight will use distance-based triggering to test proper overlap, but for now I manually culled two out of three picture sets to get to a more reasonable 771 pictures. Keep in mind that this is the total picture count, including all discrete bands and RGB.

It’s still not pretty spacing, but we will see what Pix4D thinks of it. Using a beta Pix4D version 2.1.34, the Sequoia images are seamlessly identified and put into an appropriate Camera Rig. https://goo.gl/2Y3Epz

I am not a Pix4D expert, and I wanted to see how the default configurations worked, so I selected the 3D Maps template and “Start Processing Now!”. Again, I’m more interested in the workflow and time required than fine tuning the knobs available in Pix4D.

First things first, the quality report. https://db.tt/4kZm0mYA

Well, not the best-looking quality report I’ve ever seen. I assume that’s primarily due to the “shotgun” style triggering and image culling. I like to see the first four checks be green here. The GSD is 5 cm at 50 m, which seems pretty good to me. Full initial processing took 5 hours, not too bad on an 8 GB RAM machine with an Nvidia 970 GPU. I think we should see closer to ten thousand keypoints, and I’m not sure why there are three image blocks when we want one. It seems to me we’re fairly close on all of these checks, though. How did the results come out?
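For the curious, that GSD figure is easy to sanity-check with the standard formula. The pixel pitch and focal length below are assumed values for illustration, not official Sequoia specs:

```python
def gsd_m(altitude_m, pixel_pitch_m, focal_length_m):
    """Ground sample distance: altitude * pixel_pitch / focal_length."""
    return altitude_m * pixel_pitch_m / focal_length_m

# Assumed optics: 3.75 um pixels behind a ~4 mm lens, flown at 50 m.
gsd = gsd_m(50.0, 3.75e-6, 4.0e-3)
print(round(gsd * 100, 1))  # ~4.7 cm, in line with the report's 5 cm
```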

Coverage area is OK; the top-right point is not accurate, but sidelap was apparently sufficient. Not a bad result for poor image collection technique and nearly zero input into the processing! I’m sure someone with more experience in Pix4D could clean this up nicely.

Zoom quality of the ortho also looks pretty good. Nice straight lines, good level of detail, not much ghosting. Here’s a link to the full ortho. https://www.dropbox.com/s/zx1ax2mc30b13fl/second_marina_geotag_transparent_mosaic_group1.tif?dl=0

Let’s do a quick sanity check on those aggregate piles.

The larger pile is around six cubic meters and the smaller around five. These smaller piles are a little easier to estimate by eye, and by my expert eye measurements, six and five seem pretty close!
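For anyone wanting to repeat the sanity check, approximating a pile as a cone gets you in the right ballpark. The pile dimensions below are hypothetical, just to show the arithmetic:

```python
import math

def cone_volume(diameter_m, height_m):
    """Approximate an aggregate pile as a cone: V = (1/3) * pi * r^2 * h."""
    r = diameter_m / 2.0
    return math.pi * r * r * height_m / 3.0

# Hypothetical dimensions: a pile ~3.4 m across and ~2 m high
# comes out near six cubic meters.
print(round(cone_volume(3.4, 2.0), 1))  # ~6.1
```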

Finally, it’s time to check out what Sequoia was designed to do, produce accurate multi-spectral imagery. Pix4D has done a great job on their Index Calculator, and the changes for Sequoia in this latest beta really help simplify the process of producing vegetative indices.

After generating the reflectance map, you define the region to analyze and then the index you want to compute. In this case, we have the NDVI index. I think the data in the top-right corner (red shading) is questionable and should ideally be removed from the analysis. The region tool is helpful for excluding information that doesn’t need to be processed. There are many of you here who understand this better than I do, so I will not pretend to be able to interpret this map.

Finally, you can generate and export variable-rate prescription shapefiles to import into your precision farming applicator of choice. For example, Ag Leader.

http://www.agleader.com/blog/loading-prescription-files-into-ag-leader-integra-and-versa-displays/
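To make the prescription idea concrete, here is a toy Python sketch that buckets index values into rate zones. The thresholds and rates are invented for illustration; a real prescription needs an agronomist and crop-specific knowledge:

```python
# Sketch: mapping vegetation index values to variable-rate zones.
# Thresholds and rates are hypothetical, not agronomic recommendations.

def rate_for(ndvi_value):
    """Map an NDVI value to a (zone label, application rate) pair."""
    if ndvi_value < 0.3:
        return ("low vigor", 150)     # kg/ha, hypothetical rate
    if ndvi_value < 0.6:
        return ("medium vigor", 100)
    return ("high vigor", 60)

# One rate per grid cell of the index map.
cells = [0.15, 0.45, 0.72, 0.55]
prescription = [rate_for(v) for v in cells]
print(prescription)
```

In practice the zoned rates would be written out as shapefile polygon attributes that a display like the Ag Leader Integra can consume.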

An agronomist or other expert eye is still required to make sense of these vegetative indices in relation to individual crops, but companies like Airinov http://airinov.fr are working to automate the analysis based on crop-specific phenology.

We’ve come a long way in the past few years. While much of this work has been done for decades using satellites and manned aircraft, we are just now able to begin discussing viable end-to-end workflows for drones in the agricultural space.

Multi-spectral sensing and aerial platforms are merging in a way that can quickly and accurately produce vast amounts of data for one of the largest industries on the planet. Sequoia takes a great step toward commoditizing complex data acquisition for agriculture. Gathering cost-effective, timely, high-resolution data is no longer the challenge it once was.

The challenge we must now overcome is listening to farmer and agronomist requirements on ease of use, interoperability and data life cycle, so that we as a drone industry can provide products and services that build trust at scale in an industry that is far more experienced, confident in its roots, and wary of newcomers.


Comment by Matt™ on March 1, 2016 at 7:32am

@Darius Jack...where are you? :)

Comment by jender lee on March 1, 2016 at 7:45am

Really appreciate the detailed post...

I have been thinking and hesitating if I should just get a Sequoia and try it. After reading the post, maybe I can try it myself. 

 

Comment by Rob_Lefebvre on March 1, 2016 at 8:00am

Do I assume correctly that the sensor head is designed to fit on a standard GoPro mount?


Comment by Stephen Zidek on March 1, 2016 at 8:22am

Neat!

Comment by John C. on March 1, 2016 at 10:35am

@Rob - That's correct. The main imaging sensor is that exact size.

Comment by Dries Raymaekers on March 1, 2016 at 1:02pm

Thanks for this review! I really want to get my hands on one of these sensors to try them out! I have been looking at the images you provided, and it seems that the images of the individual bands are clear shots without too much blur, which is a good sign. When stacking the images together, however, they are not aligned, a problem which other sensors like Tetracam also have. Is this alignment/co-registration done in the PIX4D software? Could you share data/screenshots of the final multispectral mosaic to see the result? 

Comment by Dries Raymaekers on March 1, 2016 at 1:08pm

Oh yes, one other question. Can you also share the output of the irradiance sensor? Or do you need PIX4D software to open it? One small piece of advice on that sensor: try to put it as high as possible on your drone. You don't want any shade falling on that sensor, as otherwise your calculations of incoming light, and thus reflectance, NDVI, etc., will be off (by a bit ;)) Cheers!

Comment by Rob_Lefebvre on March 1, 2016 at 1:36pm

@Rob - That's correct. The main imaging sensor is that exact size.

How about balance?  Is the CG in the same place, or the gimbal would require some balance weights?

It's a very good idea to match that form factor, as it allows other gimbals to work.

How about the wire between the two components?  (I assume there's a wire) How stiff is it?  

Comment by John C. on March 1, 2016 at 4:28pm

@Dries - Yes, I believe that offset is common in any multi-sensor solution. There's some kind of physical offset that you have to account for in the image alignment. Pix4D does handle that correction in the new "Camera Rig" feature from my understanding. I'm honestly not an expert in Pix4D. I can tell you the new beta version has a lot of new ease of use features and support of this sensor. Do you know where I would look in Pix4D for that IR Sensor Data? (not the last screenshot?) I'm sure some Pix4D gurus will surface shortly.

@Rob - The wires are a fairly standard gauge USB. Kind of stiff. I'd assume it's best practice to re-balance the gimbal. I got these wires to where the lenses naturally pointed down, and just ran with it. I think there are a ton of ways and places to mount these things.

Comment by Nick Sargeant on March 1, 2016 at 11:38pm

Thank you for the detailed post John. I have been waiting for Sensefly or Pix4D to publish a full raw dataset, so I'm very glad I came across this. I have been a long-time user of the RedEdge (beta tested the RE2 and was Micasense's first Australian customer). Very excited to hear about the Sequoia and to see some real-world datasets; it looks like a great evolution. Is it possible for you to share a complete (or culled) dataset so I could have a go processing it? Cheers!
