A place for West Australian UAV / UAS enthusiasts or businesses to discuss topics, arrange meets or share experiences.

43 Members


Comments are closed.

Comments

  • I can see that I am way out of my depth in the imaging, so perhaps part of Sunday could be spent explaining the imaging problems and running through the optical calculations.

    I suggest we organise around the different aspects of the mission. Perhaps we could have four groups: Logistics, Airframe and Power, Imaging, and Guidance and Comms. This way people could be in groups that fit their expertise.

    Logistics would be food, water, shirts, ground station consoles, etc and overall co-ordination of the groups.

    Imaging would be the camera and search

    Airframe and Power would be the aircraft, recovery system, and clean power

    Guidance and Comms would be the autopilot and communications

    Obviously there would be overlap between the groups, however some of the work could be done in parallel.

    This way the possible 4-man flight team would be one comms operator with a console, one imager with a console for PTZ, the pilot with the TX and mission planner, and a flight co-ordinator to liaise with the organisers.

    We do have to remember, in our enthusiasm, that the winner will be a team that gets a working camera over the search area in winds up to 20 knots and has the ability to drop accurately in winds of 20 knots. If either of these conditions is not met it will not matter what MP the camera is or how fast the link is.

    As the UAV pilot will need an FPV camera anyway, this can be the backup if everything else goes wrong. We will need LRS radio control as well, as I would always like to be able to fly the plane in real manual mode with an FPV camera no matter where it is.

  • Just to add some more to the optics calculations below:

    If the Samsung camera is at 400m altitude, on full wide (25mm) it will capture a 400m wide path; at full telephoto (480mm) it will capture a 21m wide path, or roughly a 5mm per pixel resolution.
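
    To make those numbers easy to re-check at other altitudes or focal lengths, here is a minimal sketch of the footprint arithmetic. The sensor assumptions are mine (35mm-equivalent focal lengths, a 24mm across-track frame dimension and 3456 pixels across it); swap in the real specs once we settle on a camera.

    # Rough nadir footprint / resolution calculator (pinhole approximation).
    # Assumed: focal lengths are 35mm equivalents, 24mm across-track frame
    # dimension, 3456 pixels across it (16MP 4:3 image) - not confirmed specs.
    def footprint(alt_m, focal_mm_equiv, frame_mm=24.0, pixels=3456):
        """Return (swath width in m, ground resolution in m/pixel) at nadir."""
        swath_m = alt_m * frame_mm / focal_mm_equiv
        return swath_m, swath_m / pixels

    for focal in (25, 100, 480):
        swath, gsd = footprint(400, focal)
        print(f"{focal:>3}mm: {swath:6.1f}m swath, {gsd*1000:6.1f}mm per pixel at 400m")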

    I could imagine doing multiple focal length shots in the same pass to achieve faster results:

    1st round of passes: fly roughly 6 rows at ALT 450m and do 500m wide passes at 25mm and 10cm per pixel res, and in between those shots, at the same altitude, take 200m wide shots at around 4cm per pixel res.

    For target confirmation at ALT 450m, the 480mm lens on a pan/tilt will give us 5mm per pixel res at 450m range, and still around 1cm per pixel at 1km range!

    Higher altitude also has the benefit of providing better comms and downlink, provided the weather at the time doesn't have any low cloud cover. If we had 2-3 of these cameras on separate PTZs we might be able to do it all in just 2-3 passes... and if we can keep a stable downlink at range, we could more than double our pass velocity. Does anybody think it's possible to find Joe in under 5 minutes? We might just make it if we can get the downlink speed high enough!
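
    As a quick sanity check on the under-5-minutes idea, here is the arithmetic with some assumed numbers (3 cameras, ~500m effective swath each, ~30m/s pass speed, a ~2nm x 2nm box, turns and transit ignored):

    # Quick check of the "find Joe in under 5 minutes" idea.
    # Assumed: 3 cameras, ~500m effective swath each, ~30m/s pass speed,
    # ~3.7km x 3.7km (2nm x 2nm) search box, turn-arounds and transit ignored.
    area_m2 = 3.7e3 * 3.7e3
    coverage_rate_m2_s = 3 * 500 * 30
    print(f"{area_m2 / coverage_rate_m2_s / 60:.1f} minutes to sweep the box")
    # ~5.1 minutes - tight, and that's before turns, transit and the drop.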

  • Not sure where the best places to fly around Malaga are. Perhaps Hai can recommend a park or open area that would be suitable?

  • James, I agree with segregating the individual ideas and solutions. We need stringent testing to achieve repeatable results. At a minimum we would need bimonthly meetings and at least monthly flight testing, etc., and then 1-2 weeks to put in some flying hours, 2 months or so before the OBC event, to work out all the last-minute kinks.

    But priority-wise I think we first need to specify the camera and downlink, and see if we can implement search or recognition code. Then we'll at least have the minimum requirements for an airframe, and be able to start on that part as well.

    On Sunday afternoon, where can we go for that? I'll just bring my gear up anyway.

    BTW the UAV pilot will need his own dedicated FPV camera setup with OSD and downlink.

  • James said: "Higher MP cameras will make automated detection more time consuming in an exponential way."

    This is relative I think. The pixel resolution per sqm is not determined just by the MP of the camera, but rather by the MP at a given altitude. If we can stitch accurately (which we need to do in the direction of the pass anyway) without too much under/overlap, then the number of pixels per sqm, and the amount of data to be crunched, will be the same regardless of the camera MP used. So a high-altitude, high-MP solution will always achieve more without necessarily increasing processor load, especially if the requirements on the airframe, i.e. airspeed and range, are taken into account. The less stitching required, the less overlap as well, so higher MP is more effective.

    On path-direction stitching: a 16MP photo only needs to be sent every 40 seconds, as at 10m/s it takes this long for the UAV to pass over the previous shot. That's a meagre 0.071MB/s or 71kB/s continuous. We could even do that with 3G! Even if we miss a few frames via the downlink whilst the UAV is on a pass, the aircraft would still have enough time to go to a higher altitude to get in range of the GS and dump the data. At a full 300Mbps or 37MB/s you could downlink about 5km of path, or about 2.4sqkm at 10cm resolution, in just one second!
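
    A back-of-envelope check of that rate, assuming a ~3MB JPEG per 16MP frame, 10m/s ground speed and a ~400m along-track footprint per frame:

    # Sustained downlink rate needed for along-track stills (rough check).
    # Assumed: ~3MB JPEG per 16MP frame, 10m/s ground speed, ~400m
    # along-track footprint per frame.
    jpeg_mb = 3.0
    ground_speed_m_s = 10.0
    footprint_m = 400.0

    interval_s = footprint_m / ground_speed_m_s      # one new frame every N seconds
    rate_kb_s = jpeg_mb * 1024 / interval_s          # sustained rate needed
    print(f"one frame every {interval_s:.0f}s -> about {rate_kb_s:.0f}kB/s sustained")
    # ~77kB/s; at 37MB/s the link could dump roughly a dozen frames
    # (several km of path) in a single second.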

    So for analysing purposes the quad-core 1.4GHz camera processor would only have to scan through a 3MB picture every 40 seconds. I don't think that is too much to ask. We could even use the same camera to PTZ the positives whilst it's processing the last path photo, and send us low-res but optically zoomed potential targets on the fly for visual confirmation from the ground.

    You'd also only need to program in Android/Linux, with nearly all the peripherals included onboard... it would be invaluable for commercial purposes as well, with the software being able to take advantage of the latest hardware that comes out on that platform. Provided we can keep the software "light" enough, even a prepaid mobile phone could be used for DIY versions. I think I might have to buy one for some trials... ;)

  • Sunday arvo test run sounds good assuming it's not too windy.

    When we catch up it would be good to break this project down into key parts eg. Search method testing, detection algorithms and approach, camera selection, airframe, software requirements, etc, etc. If we keep the parts independent enough, they can be worked upon in parallel without impacting the other parts too much.

     

    We also don't need to solve it all up front; there should probably be stages to prove up the various ideas.

  • Thanks JB. The camera option and pass width/altitude are the key to all of this. Higher MP cameras will make automated detection more time consuming in an exponential way. Most object detection algorithms work on scaled-down images. Still, I think we need both humans and machines looking at the images, so it's better to start with high res and go down from there than start with low res and have nowhere to go.

    It may be that we can only process every nth frame if they are big. It probably won't be possible to do 30fps, but that's still ok.

     

    Having an analogue backup camera may be a good idea. They are light and cheap. Alternatively, I think we should at least go for a pan/tilt in addition to the nadir-oriented camera(s), as that can help confirm the many false positives we will get by looking straight down for a 2-pixel object :-)
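
    On the detection side, here is a minimal sketch of the kind of pre-filter that could run on scaled-down frames and hand candidates to the pan/tilt or a human. It assumes the target wears a high-visibility vest, as mentioned elsewhere in the thread, and that it's orange; it also assumes OpenCV 4.x. The colour range and scale factor are guesses to be tuned on real test imagery, not a worked-out OBC algorithm.

    # Colour-blob pre-filter for nadir frames (sketch, not the OBC algorithm).
    # Assumed: target wears an orange hi-vis vest; OpenCV 4.x return signatures.
    import cv2
    import numpy as np

    def candidate_hits(frame_bgr, scale=0.25):
        small = cv2.resize(frame_bgr, None, fx=scale, fy=scale)   # work on a scaled-down copy
        hsv = cv2.cvtColor(small, cv2.COLOR_BGR2HSV)
        # Rough orange range - needs tuning against real imagery and lighting.
        mask = cv2.inRange(hsv, np.array([5, 120, 120]), np.array([20, 255, 255]))
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        hits = []
        for c in contours:
            m = cv2.moments(c)
            if m["m00"] > 0:
                # Map the blob centre back to full-resolution pixel coordinates.
                hits.append((int(m["m10"] / m["m00"] / scale),
                             int(m["m01"] / m["m00"] / scale)))
        return hits

    # hits = candidate_hits(cv2.imread("pass_0001.jpg"))  # hand to PTZ / human check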

  • Further to the video discussion below (who only writes 4000 character posts anyway? dumb limitation! ;)

    I cannot see any reason to have live video apart from target confirmation and UAV control. Joe is not moving, so video will only offer a change-in-perspective "advantage" over still photos whilst the plane does a pass. But stills taken at 1-2 sec intervals will be far superior in quality and resolution, and the angular perspective advantage of video becomes worthless in comparison. If Joe was on the run then live video would probably be essential, but not while he is stationary.

    The other advantage with stills is that they can be sent in bursts without loss, plus a digital data link can make multiple attempts to transfer the same picture without concern for frame dropouts. Plus stills will require a fraction of the bandwidth of video. We simply don't need frame rates that allow moving pictures to find Joe. Those two factors alone make the digital still pathway the only one I can envisage working for the OBC with any certainty.

    I was thinking there's also a possibility to "ping" Joe with a scanning laser calibrated to the camera; seeing that he is wearing a reflective vest, it should work.

    Also on the search areas: is it possible, or even most likely, that Joe will be "on display"? At the last event he was actually lying next to a ute... So will he only be in wide open spaces, and can we avoid passing over any dense bush completely? Also, can we count on Joe being too heavy to lug far from a car, so he'd be somewhere close to a track anyway? I doubt even Joe is dumb enough to hide under a bush, away from a road, if he wanted to be saved... ;)

    Maybe as a first step we should try some automated flights with a video feed and still shots to find our own Joe, then extrapolate the time differences between each method to see who can cover more terrain in a shorter time and with greater accuracy. 

    What is everyone up to afterwards on Sunday afternoon? I'm available. Maybe a dry run would be in order to get a feel for the whole OBC experience? Most of us already have FPV airframes with the APM, so we might as well give it a go. Those who don't have the gear yet can always play Joe... ;)

  • Yeah Hai, thanks for offering your offices to meet up. I'll be there at 10am as well.

    @ Stephan

    Sorry, I thought you were suggesting we buy a Silvertone! I agree it's way overpriced, but the ideas are for "free". ;)

    I also agree on the ruggedness factor to a certain degree. We won't need to (or be able to afford to) go MIL-spec though. As always the air is soft but the landings are not. By then our mission will be over anyway. Value-for-money-wise you typically can't beat mass-produced appliances.

    On video TX: the COFDM modules look the part but the price is way out for digital. We could set up our own DVB-T transmitter with SDR for $700 in comparison. Doing live view is one thing, but skimping on the resolution I think is a bad idea. I don't think analog will cut it either. The human eye just doesn't pick up well on a few pixels in a relatively fast-moving scene, especially when under pressure.

    Take COD BO2 on PS3 running at 1080p for example. There is simply so much going on that you barely notice a sniper rifle pop through a window and blow your head off! Sure, a bird's-eye view might not be so "active", but on the day of the event I'm sure we'll experience the same sort of rush, especially if we are aiming for around 30 minutes to win. From experience most errors are not mechanical or system ones, they're human ones.

    Cybertech writes this about the Kestrel tracking system:

    "Flying well above the action, the Kestrel computer vision system is capable of detecting and tracking objects smaller than 2 pixels in size, even those that are too small, moving too slowly or that are too camouflaged for human operators to detect"

    Accordingly I can't imagine that a single, or even multi-channel, analog camera system can physically resolve the required level of detail to even produce a 2-pixel Joe over that area in the given time.

    As I calculated before: if the SAR area is 2nm x 2nm and 5nm out, that means we need about 5 min including takeoff just to get there at 120km/h VNO. Then allow 3 min for the drop, which is about 3-5 decent passes. I'm not sure if we need to make it back to base within 60 minutes, but I hope not. So that leaves us with about 52 min of SAR without the flight home.

    Just to get an idea of what needs to be achieved, I've worked out the scenarios with different camera options for comparison below; a rough calculation sketch follows the list.

    So 52 min for a 13sqkm SAR operation equals:

    With a 16MP camera: 530m pass width, 11cm resolution, a 40km/h SAR flight speed, and 53.4km range

    With a 5MP camera: 300m pass width, 11cm resolution, a 61km/h SAR flight speed, and 64km range

    With a 0.5MP (PAL analog) camera: 90m pass width, 11cm resolution, a 205km/h SAR flight speed, and 165km range (!!)
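
    For reference, here is roughly how those scenario figures fall out. The assumptions are mine (a ~3.7km x 3.7km search box, 52 min on station, a 2 x 5nm transit, 11cm target resolution, assumed horizontal pixel counts per camera, and ~20% extra track for pass overlap and turn-arounds), so the exact numbers will shift a little with the real sensor specs:

    # Rough pass-planning arithmetic behind the camera comparison above.
    # Assumed: ~3.7km x 3.7km (2nm x 2nm) box, 52 min on station, 2 x 5nm
    # transit, 11cm resolution, ~20% extra track for overlap and turns.
    area_km2 = 3.7 * 3.7
    on_station_h = 52 / 60
    transit_km = 2 * 5 * 1.852
    gsd_m = 0.11
    overhead = 1.2

    for name, h_pixels in [("16MP", 4608), ("5MP", 2592), ("PAL 0.5MP", 720)]:
        pass_width_km = h_pixels * gsd_m / 1000
        track_km = area_km2 / pass_width_km * overhead
        speed_kmh = track_km / on_station_h
        print(f"{name:>9}: {pass_width_km*1000:4.0f}m passes, "
              f"{speed_kmh:4.0f}km/h, {track_km + transit_km:4.0f}km total")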

    I don't think live video is a viable option anyway; we need to take photos that can be analysed after the pass as well, and we simply don't have the time to look through video recordings again and again.

    There's also the possibility of using the Samsung Galaxy Camera, which runs Android. At 16MP we'd have the resolution to do wide passes without stitching, plus it has a quad-core running at 1.4GHz built in for image processing on the fly. Together with some tweaking of the HDR mode we should be able to "liven up" the shots to the extent that anything out of the background colour range stands out.

    We could also use the 21x zoom to optimise path width and avoid excessive overlap/underlap to increase pass efficiency, and a secondary camera on a pan/tilt could be used for live view with zoomed confirmation of potential hits without disrupting the pass progress. It can also run Linux and even has built-in 3G, GPS, ACC/Gyro/Mag etc. as a backup system, all for $400. You could even use it to receive waypoints via Androplane etc.

  • Thanks Hai for offering your offices for this meetup. I'll see you all at 10am Sunday.

    @JB, the security cam I am using is a 600TVL model which gives around 0.4MP resolution after capture (720x576). It is also interlaced, so you can basically halve the vertical resolution. A proper IP camera would be much better than this. I think the idea of having multiple cameras, like the Gorgon Stare used on military drones, is a good one and the only real way to cover the ground in the time.

    I haven't developed much in the way of object search or recognition code yet; I'm just starting to learn more in that space for a hobby project of mine called seascan. The idea of that was to fly and locate sharks/dolphins/seals etc. and then upload locations to an iPhone app so people can see where these animals are. Not sure what the uses of it would be, it was just a fun project, and one that I will put on hold now for the OBC.


Hexicopter

Hi All, Purchased a FlyPro X600 and I am trying to sort out what would be the most suitable camera to use for FPV and some video/stills photography. The supplier has wired it up for a GoPro 3. That model is a little out of date but a good camera to start with, although if I am going to purchase a camera I would like to only spend on one that will last for some time. I have had a look at the specs of the Sony FDR-X1000V for what I will be flying it for. The software has stabilisation and some…

Read more…
0 Replies

UAV Long Range Video

I don't know if you have all seen the write up by CanberraUAV, however Andrew replied to a question with this: "Comment by Andrew Tridgell 11 hours ago: @Stephen, We ran the Ubiquity radios in normal AirMax mode. We used it to send UDP packets encapsulating a protocol we invented for the event that we call block_xmit. That is a reliable block sending protocol that is particularly good in high packet loss environments. We got about 25% packet loss during the flight, so sending images and data using…

Read more…
4 Replies
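
For anyone curious how a reliable block-sending idea like the block_xmit described above might look in outline, here is a minimal sketch. The packet format, block size and address are my own assumptions for illustration, not CanberraUAV's implementation: number the blocks, send them over UDP, and keep resending whatever the ground station hasn't acknowledged.

# Minimal "resend until acknowledged" block sender over UDP (illustration only,
# not CanberraUAV's block_xmit). A matching receiver would ACK each 4-byte
# sequence number it gets; anything lost simply gets sent again next round.
import socket
import struct

BLOCK = 1024  # payload bytes per packet (assumed)

def send_image(data: bytes, addr=("192.168.1.10", 14650)):  # hypothetical GS address
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(0.5)
    blocks = {i: data[i * BLOCK:(i + 1) * BLOCK]
              for i in range((len(data) + BLOCK - 1) // BLOCK)}
    while blocks:                                   # loop until every block is ACKed
        for seq, chunk in list(blocks.items()):
            sock.sendto(struct.pack("!I", seq) + chunk, addr)
        try:
            while True:                             # drain whatever ACKs came back
                ack, _ = sock.recvfrom(4)
                blocks.pop(struct.unpack("!I", ack)[0], None)
        except socket.timeout:
            pass                                    # lost packets get resent next pass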

APM2 board: not able to get past radio calibration step. please help.

Would anybody be able to help me with Mission Planner comms to an APM2 board? I've hit a hump that is troubling me. I'm not able to get past the radio calibration step on a new board and gear (Turnigy 9ch). I've got considerable IT experience and some APM1 successes, but this prob is a B. Could just be a faulty dataflash card or PC security jamming actions. Be great to hear your thoughts: 9453 3580. Problem described at http://diydrones.com/forum/topics/no-bars-to-calibrate-radio-signal... Cheers, Brett

Read more…
2 Replies