Object tracking

Ok, so for people who haven't seen the discussion "getting started newbie," this introduction is for you.

My partner and I plan to build a UAV for the science fair that can track vehicles, give their speed, and give a relative GPS location. We also hope to have it do crop imaging. We have done a fair amount of research and planning so far, and we have decided to start the project by solving for the vehicle's velocity and tracking the vehicle without a UAV platform. We decided to do this because we don't want to spend money building the UAV and then find out we are unable to do all the image programming.

So my question is: does anyone know of a website or book that deals with image programming, or just with getting started on it? I have done some research and I am having a hard time finding info on the subject, but that may be because I am searching for the wrong thing (right now I'm just calling it "image programming" because I don't know the actual name). Haha. Currently I am using the Arduino programming environment and Visual Studio, so I'm pretty familiar with code, but I don't know if either of these environments (mostly VB) is suitable for this kind of program. Also, all the processing and calculations are going to be done on a ground computer (station), which will be receiving all the sensor input and the live video feed, so the program will need to run on that computer.

Thanks,

Daniel

 


Replies

  • I know nothing about the image processing, but when it comes to the math of finding velocity and GPS position, I know a little bit. You need a way to estimate the linear distance to the object you are tracking; you can use that, along with the angle of declination from the plane, to find the distance between your target and the spot on the ground below your plane.

    To better visualize what I'm saying, you might want to draw the following:
    A point labeled "A" which will be the target
    A point 315 degrees from A (directly down and to the right) labeled B which is the point on the ground directly below your plane
    A line between A and B labeled ground distance.
    A line running horizontally from A to the right
    A line running vertically from B going upwards.
    A point at the intersection of those lines labeled C

    Now imagine your plane is directly over point B on your paper; call the plane's position D. This creates an oblique triangular pyramid. You need to know the altitude of your plane (line BD), the distance from the plane to your target (line AD), and the angle of declination to the target (angle ADB). This lets you solve for one face of the pyramid. Next, you need to know the compass heading of your plane and the angle of the camera relative to the nose of the plane; you can use these values to find angle ABC, and then more trig solves triangle ABC. Finally, use the haversine formula (not the spherical law of cosines, which loses precision over short distances) to get GPS coordinates for all of these points. Then you do it again once things have moved, and compare the changes.
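The steps above can be sketched in code. This is a minimal flat-ground, spherical-Earth sketch, not anyone's actual implementation: the function names, the mean-Earth-radius constant, and the 100 m / 45-degree example values are all my own assumptions. It computes ground distance from altitude and declination, takes a forward great-circle step along the bearing to get the target's coordinates, and uses the haversine formula to turn two fixes into a speed.

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres (assumed constant)

def ground_distance(altitude_m, declination_deg):
    # Triangle ABD from the drawing: with declination measured down
    # from the horizontal at the plane, AB = BD / tan(declination).
    return altitude_m / math.tan(math.radians(declination_deg))

def destination(lat_deg, lon_deg, bearing_deg, dist_m):
    # Forward great-circle step: move dist_m along bearing_deg from
    # (lat, lon) on a spherical Earth, returning the new fix.
    lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
    brg, d = math.radians(bearing_deg), dist_m / EARTH_R
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

def haversine(lat1_deg, lon1_deg, lat2_deg, lon2_deg):
    # Haversine distance in metres between two GPS fixes.
    lat1, lon1, lat2, lon2 = map(math.radians,
                                 (lat1_deg, lon1_deg, lat2_deg, lon2_deg))
    a = (math.sin((lat2 - lat1) / 2) ** 2 +
         math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_R * math.asin(math.sqrt(a))

# Example: plane at the origin, 100 m up, target 45 degrees below the
# horizontal, camera pointing due east (bearing 90 degrees).
d = ground_distance(100.0, 45.0)          # 100 m along the ground
tgt = destination(0.0, 0.0, 90.0, d)      # target's first GPS fix
# A second fix, here 100 m further east 10 s later, gives a speed:
tgt2 = destination(0.0, 0.0, 90.0, d + 100.0)
speed = haversine(tgt[0], tgt[1], tgt2[0], tgt2[1]) / 10.0  # ~10 m/s
```

Comparing two such fixes with their timestamps is exactly the "do it again, when things have moved" step: distance between fixes divided by elapsed time gives the target's speed, and the bearing between them gives its heading.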

    If you have any questions or want to know more, just pm me. I'm also working on formulas for aiming cameras and antennas at planes from ground stations, and at targets from planes using GPS. If you want to know about that I'd be happy to chat about it.
  • T3
    @Daniel Nugent:

    One of the best ways to learn about something new is to experiment with an existing system. Roborealm is "Robotic Machine Vision Software" that I've used previously to build some UGVs with simple image-processing capabilities, aided by a base station. The software has a simple GUI, so you really don't need to know the first thing about processing images at the programming level. It might also help you understand the basic concepts of image processing; all you need is a web camera.

    Sami F.
  • Moderator
    I know the Kestrel UAS works with object tracking... They state: "while tracking the target, our Localization algorithms provide the user with the moving target’s heading, velocity, and estimated GPS location. Human-size targets can be tracked as well."

    I know the military has target GPS and velocity calculations, but then again, they have everything!

    Here's a video from the Procerus UAV site.
  • @Daniel
    "Do you personally have any experiance with image processing?" - yes. Not for UAV (yet) but different robotics platforms ( moving and stationary) I used bunch of different tools including openCV, matlab, LabVIEW IMAQ, etc.

    "Do you know if most colleges(like the u of m) have classes on that kind of programming." - :)) I don't know. But someone in CS dept., or Engineering ( ME, aerospace) should have some knowledge. Just ask around.

    Image processing is fun but not trivial....
  • Hi Daniel,

    Try http://code.google.com/p/aforge/ . It's C# and Microsoft-friendly, good for prototyping and developing algorithms, and it allows for a more interactive development process.

    Look at PIL for Python. It does not do everything AForge does (Hough lines, etc.) but it is multiplatform.

    ImageMagick might be a starting point too.

    Image processing consists of going through the image pixel by pixel and doing something based on the location and properties of each pixel. Take blob tracking: take a picture (image1), take another picture (image2), and create a new blank image3. Transform both pictures to grayscale, then subtract the images: take pixel (x, y) in image1, compare it to pixel (x, y) in image2, and determine whether it is the same or a similar shade. If it is, leave image3's pixel (x, y) blank; if not, set image3's pixel (x, y) to some value like (255, 255, 255). At the end you have a new picture with the difference between the two. Now do that 30 times per second and you have the motion.
    Now you need to write something to identify a shape, a corner, etc., and track that. It's a different ball game if the camera is moving too.
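That frame-differencing recipe (grayscale both frames, compare pixel by pixel, keep what changed) can be sketched in a few lines. This is a toy pure-Python version on hand-made 4x4 grayscale frames; the threshold value and the function name are my own assumptions, and a real system would use AForge or OpenCV on real camera frames instead of nested loops.

```python
THRESHOLD = 25  # "same or similar shade" tolerance (assumed value)

def difference_image(frame1, frame2, threshold=THRESHOLD):
    # Frames are lists of rows of 0-255 intensities (already grayscale).
    h, w = len(frame1), len(frame1[0])
    out = [[0] * w for _ in range(h)]          # image3, initially blank
    for y in range(h):
        for x in range(w):
            # Compare pixel (x, y) in both frames; mark it if it changed.
            if abs(frame1[y][x] - frame2[y][x]) > threshold:
                out[y][x] = 255
    return out

# A 4x4 scene where one bright "blob" moves one pixel to the right:
frame1 = [[0,   0,   0, 0],
          [0, 200,   0, 0],
          [0,   0,   0, 0],
          [0,   0,   0, 0]]
frame2 = [[0,   0,   0, 0],
          [0,   0, 200, 0],
          [0,   0,   0, 0],
          [0,   0,   0, 0]]
motion = difference_image(frame1, frame2)
# motion[1][1] and motion[1][2] are 255 (blob left one spot, entered
# the other); every other pixel stays 0.
```

Running this at frame rate and then looking for clusters of marked pixels is the blob-tracking step described above; handling a moving camera would require registering the frames first.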

    AForge is a very good tool for learning how this stuff works.

    Look at http://letsmakerobots.com/node/13354 as the starting point for image processing with micros.

    http://www.davidchatting.com/arduinoeyeshield for use of video with arduino.

    The thing with the Arduino is that with only one serial port it is hard to interface with a video source and still see the output. The new Mega will make a big difference, but likely an STM32 or similar would be needed (the AVRCam used a Mega).

    Good Luck

    Diarmuid
  • Try OpenCV: http://opencv.willowgarage.com/wiki/

    AVRcam

    Look for optical flow, blob tracking, that sort of thing. Automatically tracking an object the size of a car at a distance is non-trivial unless you tell the system it's a car first. Otherwise, use optical flow to see which points of interest are moving over the background.

    Cheers

    Diarmuid
  • Hi Daniel,
    Image processing is a tough but interesting discipline in itself. To get you started, look at some of the books on the subject:

    image processing books on Amazon

    Also take a look at OpenCV. It's an open-source image processing library - take a look at the examples they provide. A good companion book for OpenCV can be found here
