Hi All,

This is Jan from http://www.diy-streetview.org.
I do streetview as a hobby.

Recently I noticed diydrones.com while searching for a solution to a leveling challenge. My 5-camera rig is attached to a backpack. I take images while walking, without stopping. Therefore the rig is never level, and the resulting panoramas are not straight.

Illustration:
http://www.diy-streetview.org/data/development/20110126a/Screenshot...

Now, if I just knew the roll and pitch at the exact moment each image was taken, I could automatically level the panoramas.

Your thoughts please?
How would you go about this?

Thanks,

Jan
janmartin AT diy-streetview DOT org

Replies to This Discussion

Interesting question. Any IMU/AHRS can do this, but the question is what sort of datalogging you want. If you want to hack something together, an ArduIMU plus a SparkFun serial logger would do it, but you'd need to modify the code to output the fields you want.
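To give a feel for the scale of that modification, here is a minimal Arduino-style sketch of the idea; the two getter functions are stand-ins for wherever the firmware keeps its attitude estimate, not the real ArduIMU variables:

    // Stream roll/pitch as CSV over serial so a logger (e.g. OpenLog)
    // can capture it to SD. The stubs below stand in for the DCM outputs.
    float getRollDeg()  { return 0.0f; }   // replace with the DCM roll estimate
    float getPitchDeg() { return 0.0f; }   // replace with the DCM pitch estimate

    void setup() {
      Serial.begin(57600);                 // match the logger's baud rate
    }

    void loop() {
      Serial.print(millis());              // timestamp for later alignment
      Serial.print(",");
      Serial.print(getRollDeg(), 1);
      Serial.print(",");
      Serial.println(getPitchDeg(), 1);
      delay(20);                           // ~50 Hz, plenty for a walking rig
    }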

Or use ArduPilotMega with the IMU shield and the onboard datalogger (a config file allows you to choose what data fields to log). Onboard memory is probably only enough for a day, however.

Ultimately, I suspect the best solution is a phone app. An iPhone 4 or Nexus S has all the sensors you need and tons of datalogging memory. If you google around a bit you may already find an app that does what you need. Just remember to strap your phone to your backpack, not keep it in your pocket ;-)

Hi Chris,

 

The challenge is that I need a low-tech way to get the logging data out of the device, without using a netbook or a mobile phone network.

Logging to an SD card sounds just right.

The ArduIMU and the Logomatic from SparkFun have been suggested to me before.


Given that the backpack, and therefore the rig, swings quite a bit while walking, I wonder how often the ArduIMU's Kalman filter delivers the roll and pitch data.

Hz?

If possible, I would like to log the GPS data too while I'm at it.

 

What do you all think?

 

Jan

I think the ArduIMU runs its AHRS loop (DCM, not Kalman) at 100 Hz.

The trick with just strapping a logger of any type onto the system is synchronization: how are you going to determine what point in the logger's history corresponds to when an image was taken?

Ideally you'd have a cable from the logger to the camera control (is there a camera control?) that records the attitude when each image is taken, but that could be difficult (and would have some latency that would have to be accounted for).

Another option is to periodically map the difference in orientation between two images to when the same change is seen in the logger history, but that could be mathematically difficult. And if you can do that, why not just run that algorithm over all the images to determine their change in orientation and correct for it? All the data necessary to fix the problem is available in the images (with enough overlap); it's just a tough problem to solve.

Hi Tom,

 

Regarding synchronization:

Best would be to have the logger/ArduIMU release the cameras. At the moment I use a programmable intervalometer to do so. One could wire it to the logger/ArduIMU, have the logger/ArduIMU detect the intervalometer's "closed contact", and then release the cameras and log at the same moment?

 

I don't get your second idea. Somehow I think that errors will accumulate over time.

 

So what needs to be done is:

- detect a "closed contact", then

- close another contact to release the cameras AND

- log the data the same moment

 

How complicated is this to do?

Maybe there will have to be a fixed delay between "releasing the cameras" and "logging data", e.g. 600 ms.

 

Thanks,

Jan

 

Yeah, if you've got the infrastructure to wire it in, that is definitely the way to go. In theory (who knows about in practice) the latency between the command to take the picture and the actual capture should be approximately fixed. You could just try various numbers (like 600 ms; sounds good), or you could set up a test by swinging it back and forth or something and try to find a fixed number that matches the pictures up with what you're seeing in the IMU data.
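As an untested sketch of that wiring idea (pin numbers, logic levels, and the 600 ms figure are all assumptions to be calibrated):

    // Watch the intervalometer's contact closure, fire the camera release,
    // then log one attitude line after a fixed shutter latency.
    const int INTERVALOMETER_PIN = 2;              // intervalometer contact input
    const int CAMERA_PIN         = 3;              // drives the camera release circuit
    const unsigned long SHUTTER_LATENCY_MS = 600;  // calibrate experimentally

    unsigned long frame = 0;                   // same handle for log line and image set

    float getRollDeg()  { return 0.0f; }       // stand-ins for the IMU attitude output
    float getPitchDeg() { return 0.0f; }

    void setup() {
      pinMode(INTERVALOMETER_PIN, INPUT_PULLUP);  // contact closure pulls the pin low
      pinMode(CAMERA_PIN, OUTPUT);
      Serial.begin(57600);                        // feeds the SD logger
    }

    void loop() {
      if (digitalRead(INTERVALOMETER_PIN) == LOW) {   // intervalometer fired
        digitalWrite(CAMERA_PIN, HIGH);               // release the cameras
        delay(50);                                    // hold the "button" briefly
        digitalWrite(CAMERA_PIN, LOW);
        delay(SHUTTER_LATENCY_MS - 50);               // wait out the shutter latency
        frame++;
        Serial.print(frame);            Serial.print(",");
        Serial.print(millis());         Serial.print(",");
        Serial.print(getRollDeg(), 1);  Serial.print(",");
        Serial.println(getPitchDeg(), 1);
        while (digitalRead(INTERVALOMETER_PIN) == LOW) {}  // wait for contact to open
        delay(20);                                         // crude debounce
      }
    }

The frame counter also gives each log line and image set a shared handle, which makes the matching afterwards trivial.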

Yeah, the second idea is complicated and lots of math, but if you're generally moving across flat ground (or even have a simple once-a-minute GPS log of altitude) you could use the fact that images are supposed to be next to each other to keep error from building up. I think what you proposed would be much easier, though.

A UAV DevBoard running MatrixPilot, set to provide SERIAL_OUTPUT_ARDUSTATION, will also log the roll, pitch, and GPS position of the backpack to the OpenLog from SparkFun, onto a removable SD card.

 

It might be possible to use the DEADRECKONING information, and/or the acceleration data, to time pictures so they are taken at moments of slower movement (or when the user is still). I'm wondering whether, for good clear pictures, the cameras should be still for a moment.

 

Have you decided how accurate you need the pitch and roll measurements to be?

 

The synchronization of the pitch/roll log with the pictures is the main issue. Not just timing, but ensuring that picture and log entry both have the same unique handle (usually a number, e.g. 212344).

Hi Pete,

Good hint; somehow the UAV DevBoard had slipped my attention.

 

Ready-to-work DEAD RECKONING is what I am looking for!

So far I found dead reckoning in MatrixPilot only. Is this what you meant?

Links please!

 

Would this work for this scenario:

There is a street with supermarkets on the right and left.

I am walking down the street and into each supermarket, then down all the aisles and on to the next supermarket.

Is the info logged from dead reckoning sufficient to map the aisles inside the supermarkets, where there is no GPS signal available?

 

Regarding cameras:

Tests have shown that one can release them outdoors at any time and get sharp images. Indoors, one needs to stop walking for a moment due to the cameras' known bad low-light performance. Not a problem: stopping indoors actually helps with getting images at the right spots. It's a bit different from outdoors because things are much closer together.

 

Roll and pitch accuracy:

So far I have corrected images manually using whole degrees only, like +7 or -8 or +11 degrees. It's sufficient. The range is something like +/-10 degrees pitch and +/-5 degrees roll when walking on flat ground; more when hiking up and down mountains.

What accuracy can one get?

 

Synchronization indeed is a challenge. The cameras have wires soldered to their release buttons; when the wires are connected, the cameras take an image.

The idea is to have the DevBoard trigger the cameras AND log, e.g. every 10 seconds.

Then later simply assume that

log_1 = image set_1

log_2 = image set_2

...

Also, the images have an EXIF timestamp (DateTimeOriginal) that can be used for geotagging them with the logged GPS (or dead-reckoning) data. I could create a track from the logging data (including the roll and pitch) that would then match the images perfectly, without any interpolation. One could do a "reverse geotagging" later: match the images by lat/lon to the roll and pitch data, then use that for stitching leveled panoramas.
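A sketch of how the timestamp matching could look on the PC side (plain C++, untested; the struct and field names are made up for illustration):

    // For each image's EXIF DateTimeOriginal (converted to Unix time),
    // pick the nearest log record and take its roll/pitch for leveling.
    #include <cstdio>
    #include <cstdlib>
    #include <vector>

    struct LogRecord {
      long   t;       // Unix time of the log entry (from GPS time)
      double roll;    // degrees
      double pitch;   // degrees
    };

    // A linear scan is fine for a day's worth of records.
    const LogRecord* nearest(const std::vector<LogRecord>& log, long imageTime) {
      const LogRecord* best = nullptr;
      long bestDiff = 0;
      for (const LogRecord& r : log) {
        long diff = std::labs(r.t - imageTime);
        if (!best || diff < bestDiff) { best = &r; bestDiff = diff; }
      }
      return best;
    }

    int main() {
      std::vector<LogRecord> log = {{1296142270, 6.5, -2.1},
                                    {1296142280, -3.0, 1.4}};
      long imageTime = 1296142279;  // from EXIF DateTimeOriginal
      if (const LogRecord* r = nearest(log, imageTime))
        std::printf("roll %.1f, pitch %.1f\n", r->roll, r->pitch);
      return 0;
    }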

 

Assuming that the first log entry is for the first image set, etc., seems a lot easier, though.

 

I need more info on dead reckoning with the DevBoard please.

 

Jan

 

Jan,

 

Until now, DEADRECKONING in MatrixPilot has really been High Bandwidth Dead Reckoning (HBDR), delivering a position to the autopilot 40 times a second when the GPS might be providing one position per second. HBDR also provides positioning exactly on time, while the GPS position is always the position we were at 1.25 to 3 seconds ago (the uBlox and EM406A GPS units will always provide a position from a second or so ago). The improvements of HBDR allow the plane to fly more accurately around waypoints at high speed, and to maintain altitude much more smoothly and accurately.
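Conceptually (this is just the idea, not the MatrixPilot code), the dead-reckoning update between GPS fixes is a double integration at the IMU rate:

    // Integrate earth-frame acceleration (gravity removed) twice at 40 Hz
    // to propagate position between the 1 Hz GPS fixes.
    #include <cstdio>

    struct Vec3 { double x, y, z; };
    const double DT = 1.0 / 40.0;   // 40 updates per second

    void drStep(Vec3& pos, Vec3& vel, const Vec3& aEarth) {
      vel.x += aEarth.x * DT;  vel.y += aEarth.y * DT;  vel.z += aEarth.z * DT;
      pos.x += vel.x * DT;     pos.y += vel.y * DT;     pos.z += vel.z * DT;
    }

    int main() {
      Vec3 pos{0, 0, 0}, vel{0, 0, 0};
      for (int i = 0; i < 40; ++i)            // one second at 0.1 m/s^2
        drStep(pos, vel, {0.1, 0.0, 0.0});
      std::printf("x after 1 s: %.3f m\n", pos.x);  // ~0.05 m (= 0.5*a*t^2)
      return 0;
    }

When a GPS fix finally arrives, the integrated position is re-anchored to it, which is what keeps the error bounded.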

 

You are now discussing more traditional dead reckoning (TDR). (I am a sailor and used to use that for navigation purposes.) The fundamental problem with TDR is that we are integrating the accelerometers twice, and we also need accurate orientation information from the gyros, so the errors grow rapidly with time: after 15 seconds you might be out by 10 meters, and after a minute you might be out by 500 meters. Your application would require very accurate accelerometer and gyro information.
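To put a rough number on that: a constant uncorrected accelerometer bias b produces a position error of about 0.5 * b * t^2. With, say, b = 0.05 m/s^2, that is 0.5 * 0.05 * 60^2 = 90 meters after one minute, and gyro drift tilting the gravity vector makes it grow faster still (illustrative figures only, not measurements from the board).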

 

With the accelerometers, it might be better to re-code the firmware so that it detects the acceleration patterns of walking and knows when each step is made, and then makes an assumption about how long each of the walker's steps is. This could greatly reduce the accumulating errors from the accelerometers.
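The step detection itself need not be exotic. A toy version of the idea in plain C++ (thresholds and stride length invented, untested):

    // Walking shows up as roughly periodic peaks in acceleration magnitude,
    // so count upward threshold crossings and add one stride per step.
    #include <cmath>
    #include <cstdio>

    const double STEP_THRESHOLD = 11.5;  // m/s^2, just above 1 g; tune on real data
    const double STRIDE_M       = 0.75;  // assumed stride; calibrate outdoors vs GPS

    bool   above    = false;
    double distance = 0.0;

    void onAccelSample(double ax, double ay, double az) {
      double mag = std::sqrt(ax * ax + ay * ay + az * az);
      if (!above && mag > STEP_THRESHOLD) {              // rising edge = one step
        above = true;
        distance += STRIDE_M;
      } else if (above && mag < STEP_THRESHOLD - 1.0) {  // hysteresis to re-arm
        above = false;
      }
    }

    int main() {
      // Two fake "steps": quiet, spike, quiet, spike, quiet
      double samples[][3] = {{0,0,9.8},{2,0,12},{0,0,9.8},{1,0,12.5},{0,0,9.8}};
      for (auto& s : samples) onAccelSample(s[0], s[1], s[2]);
      std::printf("walked ~%.2f m\n", distance);  // 1.50 m for two detected steps
      return 0;
    }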

 

The good news, of course, is that we just improved our sampling scheme for both accelerometers and gyros (see my post of a day ago). Yaw gyro drift has just dropped very significantly. So we do potentially have a new, improved TDR capability, but we have not tried it out yet. I did check our sub-1-meter TDR with a quick program (the application would be quad hover-and-hold), and I'm afraid that, for some reason, it has not improved yet. We have not spent much time on that yet.

 

To take this discussion forward, I think we need a clearer mathematical expression of the accuracy that you need. For example: "In the supermarket you want to be able to stay inside for, say, 15 minutes before returning outside for a GPS fix, and you want a grid accuracy of about +/- 0.5 meter. And presumably you are taking a 3D picture about every 10 meters along the paths. We need to assume that the magnetometer is only roughly accurate, and that it may be subject to stray magnetic fields." Perhaps you could alter that statement to reflect your own requirements?

 

If I have time today, I shall graph the TDR error of a UAV DevBoard with the new low gyro drift while it is sitting still on my desk. That should provide a baseline for the best possible performance with the firmware as it is to date. Then we will be working with better facts.

 

For your background interest, I enclose a KMZ plot of a plane flying. The plot includes both the HBDR and the GPS positions. The plot that includes the small model planes is the dead reckoning plot.

 

Best wishes, Pete


Hi Pete,

 

Amazing things you do.

I will answer paragraph by paragraph:

As there is no need for navigation in my application, we can simply grant ourselves a time frame of, e.g., 5 seconds to integrate the sensor data with the GPS data, and then, when logging, just deduct the 5 seconds from the GPS time.

 

I wasn't aware of "HBDR". I got my first sailing (and motorboat) license more than 25 years ago, when I was just 15 years old. So yes, we are talking about TDR.

 

Detecting step length and using that for distance is the way to go. One could calculate step length automatically using GPS while outside. Is this realistic at all? Or is the amount of work needed to detect steps beyond what's doable?
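The outdoor calibration half of this looks simple at least; a sketch (numbers invented):

    // While GPS is available, divide GPS-measured ground distance by the
    // number of steps detected over it to get the average stride length.
    #include <cstdio>

    double calibrateStride(double gpsDistanceM, unsigned steps) {
      return steps ? gpsDistanceM / steps : 0.0;
    }

    int main() {
      std::printf("stride ~%.2f m\n", calibrateStride(100.0, 133));  // ~0.75 m
      return 0;
    }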

 

It would be very helpful if you could pack the sensors in a backpack and walk around the block while logging all the sensor data, then make the data set available for development. One could work on TDR and compare it against the GPS data to see how well it does this way.

 

In the supermarket I need to be able to stay inside for, e.g., 15 minutes before returning outside for a GPS fix, and need a grid accuracy of about +/- 0.5 meter. I am stopping every 5 meters to take an image.

Is this realistic at all?

 

Cool.

 

Can't wait for it.

 

Jan

@Jan, I ran some tests late last night on Traditional Dead Reckoning, and as expected the results are not good for TDR. The IMU is not currently designed for that.

After 90 seconds the IMU has an error in the X axis (East) of 400 meters.

 

So one would have to try to use the accelerometers to gauge the average step size of the walker when outside (and GPS is available), and then detect and use the same step pattern when the user is inside the supermarket. I'm not sure yet whether that is possible.

 

The accelerometer function on the IMU is primarily to find the gravity vector and use that to calibrate the roll and pitch of the gyros. The accelerometers will have to be more sensitive in order to spot the walking pattern of the user. It would require more testing.

 

The good news is that the yaw gyro in the above test drifted linearly by about 15 degrees over the first 60 seconds, and then stayed at 15 degrees for the next 3 minutes.

 

So all we can say at the moment is that there is possibly a solution for the supermarket diy-street-view, but it would all require more investigation.

 

Best wishes, Pete

 

Pete,

I understand it needs serious development.

So for now I will have to use a ready-made, off-the-shelf product when indoor navigation is needed.

 

What about this scenario:

For outdoor usage, I need to log the GPS and roll & pitch data to an SD card.

Do I understand correctly that this would work right now?

 

Specific functionality needed is:

1) Press a button.

2) Board releases cameras by "closing a contact".

3) 400 ms later log the GPS and roll & pitch data.

Repeat.

 

This is needed while walking; the camera rig swings back and forth and left to right on top of a backpack, so the real question is:

How often is the roll & pitch data updated? Hz?

I am OK with GPS being behind real position by 1-2 seconds.

 

Once I get the hardware and assemble it, how much coding is needed (by someone who has done it before) to get this to work?


Thanks,

Jan
