(Image of Megalopta genalis by Michael Pfaff, linked from the Nautilus article)

How would you like your drone to use vision to hover, see obstacles, and otherwise navigate, but do so at night in the presence of very little light? Research on nocturnal insects will (in my opinion) give us ideas on how to make this possible.

A recent article in Nautilus describes research by Lund University Professor Eric Warrant on Megalopta genalis, a bee that lives in the Central American rainforest and does its foraging after sunset and before sunrise, when light levels are low enough to keep most other insects grounded but still just barely adequate for Megalopta to perform all requisite bee navigation tasks. These include hovering, avoiding collisions with obstacles, visually recognizing its nest, and navigating out from and back to its nest by recognizing openings in the branches above. Deep in the rainforest, light levels are much lower than in the open; Megalopta seems able to perform these tasks at light levels as low as two or three photons per ommatidium (compound-eye element) per second!

Professor Warrant and his group theorize that Megalopta's vision system uses "pooling" neurons that sum the photons acquired by groups of ommatidia to obtain the benefit of higher photon rates, a trick similar to how some camera systems extend their ability to operate at low light levels. In fact, I believe even the PX4flow does this to some extent when indoors. The "math" behind this trick is sound; what is missing is hard neurophysiological evidence for it in Megalopta, which Prof. Warrant and his colleagues are trying to obtain. As the article mentions, this work is sponsored in part by the US Air Force.
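To make the pooling idea concrete, here is a minimal sketch in Python (my own illustration, not Prof. Warrant's model and not PX4flow code; the array size and rates are assumptions) showing how summing Poisson-distributed photon counts over blocks of ommatidia trades spatial resolution for signal-to-noise ratio:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated photon counts for a 100 x 100 patch of "ommatidia" over a
# one-second integration period. At these rates counts are
# Poisson-distributed, and the signal-to-noise ratio (SNR) of a Poisson
# count with mean m is sqrt(m), so a single element at ~2.5 photons/s
# has an SNR of only ~1.6.
mean_rate = 2.5  # photons per ommatidium per second (from the article)
counts = rng.poisson(mean_rate, size=(100, 100))

def pool(counts, k):
    """Sum counts over k x k blocks (spatial pooling).

    Summing N independent Poisson counts yields a Poisson count with N
    times the mean, so SNR improves by sqrt(N) at the cost of spatial
    resolution.
    """
    h, w = counts.shape
    c = counts[: h - h % k, : w - w % k]  # trim so blocks divide evenly
    return c.reshape(h // k, k, -1, k).sum(axis=(1, 3))

for k in (1, 4, 8):
    pooled = pool(counts, k)
    print(f"{k}x{k} pooling: mean count = {pooled.mean():6.1f}, "
          f"theoretical SNR ~ {np.sqrt(mean_rate * k * k):.1f}")
```

Pooling 8x8 blocks takes the per-element SNR from about 1.6 to about 12.6, which is the same square-root-of-N improvement the pooling-neuron hypothesis relies on.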

You have to consider the sheer difference between the environment of Megalopta and the daytime environments in which we normally fly. On a sunny day, the PX4flow sensor probably acquires around 1 trillion photons per second. Indoors, that probably drops to about 10 billion photons per second. Now, Megalopta has just under 10,000 ommatidia, so at 2 to 3 photons per ommatidium per second it experiences around 30,000 photons per second. That is a difference of more than seven orders of magnitude, which is even more dramatic when you consider that Megalopta's 30k photons are acquired omnidirectionally, and not just over a narrow field of view looking down.
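If you want to check the arithmetic, here it is spelled out (the photon rates are the rough order-of-magnitude estimates from the paragraph above, not measurements):

```python
import math

# Rough photon-rate estimates from the text above (order of magnitude only).
px4flow_sunny = 1e12    # photons/s, PX4flow on a sunny day (estimate)
px4flow_indoor = 1e10   # photons/s, PX4flow indoors (estimate)
megalopta = 10_000 * 3  # ~10,000 ommatidia x ~3 photons/ommatidium/s = 30,000 photons/s

ratio = px4flow_sunny / megalopta
print(f"ratio: {ratio:.2e}, orders of magnitude: {math.log10(ratio):.1f}")
# -> ratio: 3.33e+07, orders of magnitude: 7.5
```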


Comments

  • @Gary- Sorry about the delayed response to this. I agree with what you are saying. I think the anthropomorphism issue you describe is even more complex: there is a difference between how insects do it and how humans do it, but there is probably an even bigger difference between how we humans do it with our own wet-ware (brains) and how we engineer artificial systems to do it. One thing you find in a lot of human-engineered systems, whether a quadrotor's IMU or a ground robot's SLAM system, is the extensive use of Kalman filters (see the sketch at the end of this thread). On the other hand, neither the human brain nor the flying insect's brain uses Kalman filters.

  • Great research work! Let us wait till June and see what comes up.

  • Very interesting information, Geoffrey,

    One of our biggest problems as people is that we always break down vision-related considerations in terms of how close they are to the way humans do it.

    And that anthropomorphism often (some pun intended) blinds us to many superior and more economical methods.

    I think machine vision, path finding, navigation and object recognition are just getting started and a lot of what will finally shake out will be very different from what we currently think.

    I also think this is one of, if not the, most important endeavors of the next ten or twenty years.

    Best Regards,

    Gary
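Referring back to the Kalman-filter mention in my reply above: for readers unfamiliar with the term, here is a minimal, purely illustrative 1-D sketch in Python; it is not any particular autopilot's implementation, and real IMU fusion filters are multivariate.

```python
def kalman_1d(z_measurements, q=1e-3, r=0.1):
    """Minimal 1-D Kalman filter: estimate a slowly varying scalar
    (e.g., one attitude angle) from noisy measurements.

    q: process noise variance, r: measurement noise variance.
    """
    x, p = 0.0, 1.0  # initial state estimate and its variance
    estimates = []
    for z in z_measurements:
        p += q            # predict: uncertainty grows over time
        k = p / (p + r)   # Kalman gain: trust in the new measurement
        x += k * (z - x)  # update: blend prediction and measurement
        p *= (1 - k)      # shrink uncertainty after the update
        estimates.append(x)
    return estimates

print(kalman_1d([1.2, 0.9, 1.1, 1.0, 1.05])[-1])  # converges toward ~1.0
```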
