RoboNurse

Hello, my name is Carlos Orozco, and I am a high school student. My partner and I will be competing at the Greater San Diego Science Fair this March, and we are trying to develop a robot that can assist senior citizens and other medicine users by having the administration of their medicine controlled by an autonomous robot. The robot is not finished yet, and I am asking for any kind of technical help.

Our first idea was to use a Raspberry Pi to control an Arduino UNO. The Arduino UNO is in charge of driving the motors so the robot can move around the house. We also tried to connect a webcam to the Raspberry Pi so that, with a computer vision program, the Raspberry Pi could identify the medicine user. None of these ideas have worked so far. I am thinking of buying an ArduPilot, but I am not very sure that is what my robot needs.
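
For anyone wondering how the Pi-to-Arduino link could work, here is a minimal sketch of the Raspberry Pi side, assuming the Arduino listens on USB serial for single-character drive commands ('F', 'L', 'R', 'S'). The port name and the command protocol are assumptions for illustration, not our final design.

    import serial   # pyserial: pip install pyserial
    import time

    # Assumed port name; on a Pi the UNO usually enumerates as /dev/ttyACM0
    arduino = serial.Serial("/dev/ttyACM0", 9600, timeout=1)
    time.sleep(2)   # the UNO resets when the port opens, so give it a moment

    def drive(command):
        """Send a one-byte drive command the Arduino sketch is assumed to parse."""
        arduino.write(command.encode("ascii"))

    drive("F")      # forward
    time.sleep(1.0)
    drive("S")      # stop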

I am still trying to figure out the mechanism to deliver the medicine at fixed hours.
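
The timing half of that problem, at least, can live in software on the Pi. Here is a minimal scheduling sketch using only the Python standard library; dispense() is a placeholder for whatever release mechanism we end up building, and the times are made up:

    from datetime import datetime
    import time

    DOSE_TIMES = {"08:00", "14:00", "20:00"}   # assumed schedule (HH:MM)

    def dispense():
        print("Release one dose here (placeholder for the real mechanism).")

    given_today = set()
    while True:
        now = datetime.now().strftime("%H:%M")
        if now in DOSE_TIMES and now not in given_today:
            dispense()
            given_today.add(now)
        if now == "00:00":
            given_today.clear()   # reset the log for the next day
        time.sleep(30)            # poll twice a minute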

This is the list of parts we have so far:

Chassis

Wheels

Web Cam

Arduino UNO

Arduino Motor Shield

3 GearMotors

Raspberry Pi

*I spent all my Christmas money on this robot; any donation or technical support will be appreciated.

*Special thanks to SparkFun Electronics for sponsoring us with the aluminum bars and the wheels.


Comments

  • Identifying a user with machine vision is probably going to take too long. I'd suggest scaling back and identifying a QR code as a proof of concept (like scanning a wristband, which is how hospitals seem to do it sometimes); see the sketch at the end of this comment.

    You need to start breaking this down into smaller tasks.

    • Navigation
    • Scheduling
    • Dispensing
    • Recognizing

    Navigation is a massive problem on its own. Why not just use remote control (a cheap FlySky transmitter) and spend your time on the other three subsystems? You will have your hands massively full just dealing with the dispensing, scheduling, and recognizing.
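
    A minimal proof of concept of the QR idea, assuming OpenCV is installed on the Pi and the webcam you already have is camera 0; the payload (a patient ID string) is an assumption for illustration:

        import cv2                      # OpenCV: pip install opencv-python

        camera = cv2.VideoCapture(0)    # the webcam already on the robot
        detector = cv2.QRCodeDetector()

        while True:
            ok, frame = camera.read()
            if not ok:
                continue
            # detectAndDecode returns the decoded string ("" if no code is seen)
            payload, points, _ = detector.detectAndDecode(frame)
            if payload:
                print("Identified wristband:", payload)   # e.g. "patient-042"
                break

        camera.release()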

  • Thanks for all your comments. I will figure out which ideas are viable for the project and then post on this blog how I used them.

  • Hi Carlos, I find your project incredibly interesting, and I see it has a lot of ways to help the community. Congrats. Now, I would like to recommend some ideas, just to get your mind moving: not really telling you what to do, but giving you options so you yourself can create something amazing and entirely yours.

    I would recommend the use of a bracelet or necklace for two things: so the robot can identify the subject, and so the robot has a sense of direction toward where the subject is located. It's easier than face or voice recognition.

    I would also recommend placing a big yellow flag that can be seen easily, so that the subject or anyone else won't kick the robot or trip over it.

    Delivering the medicine can be done in various ways. It could be done by placing the pills in a tube with a spring at the bottom pushing the pills upward, plus a mechanism that releases 'X' number of pills at a set time, something like a PEZ candy dispenser. Or it could be something like this: goo.gl/0U19eb, just that instead of marbles it moves a little box with 'X' pills in it. Either can give you an idea of how to do it. This way you can control three variables: the quantity of each dose, the time at which it has to be taken, and how much medicine is left, so you can alert the subject when it needs to be refilled (see the sketch at the end of this comment).

    I hope this helps you in some way. Good luck with your science fair. It's a great project!
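
    Setting the mechanics aside, those three variables (dose size, dose times, remaining count) are easy to track in software. A minimal bookkeeping sketch; the class name and the numbers are made up for illustration:

        from datetime import datetime

        class PillDispenser:
            """Tracks dose size, dose times, and the remaining pill count."""

            def __init__(self, pills_loaded, pills_per_dose, dose_times):
                self.remaining = pills_loaded
                self.pills_per_dose = pills_per_dose
                self.dose_times = dose_times      # e.g. {"08:00", "20:00"}

            def due_now(self):
                return datetime.now().strftime("%H:%M") in self.dose_times

            def dispense(self):
                if self.remaining < self.pills_per_dose:
                    print("Refill needed!")       # alert the subject
                    return 0
                self.remaining -= self.pills_per_dose
                # Here the servo / PEZ-style pusher would actually release them.
                return self.pills_per_dose

        dispenser = PillDispenser(pills_loaded=30, pills_per_dose=2,
                                  dose_times={"08:00", "20:00"})
        if dispenser.due_now():
            print(dispenser.dispense(), "pills out;", dispenser.remaining, "left")
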
  • So if the goal is just to have a demonstration at a science fair, and not an actual hospital-ready robot, then your job is much easier. One easy way to navigate for a demonstration is line following, where the line is just tape stuck to the floor; a sketch of the control loop is below. I believe SparkFun has sensors for line following, and I am sure a Google search would give good results. For a demonstration you could also use the infrared and ultrasonic sensors that Gary mentioned to stop the robot if somebody or something gets in the way; they are easy to work with and can be found online pretty cheap. I hope this helps, and good luck.
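
    A minimal line-following sketch, assuming two digital reflectance sensors wired to Pi GPIO pins 17 and 27 and a motors() helper that forwards commands to the Arduino motor shield (the pins and the helper are assumptions; real wiring will vary):

        import RPi.GPIO as GPIO   # preinstalled on Raspberry Pi OS
        import time

        LEFT_SENSOR, RIGHT_SENSOR = 17, 27   # assumed BCM pin numbers

        GPIO.setmode(GPIO.BCM)
        GPIO.setup([LEFT_SENSOR, RIGHT_SENSOR], GPIO.IN)

        def motors(left_on, right_on):
            """Placeholder: forward the wheel command to the Arduino."""
            pass

        try:
            while True:
                on_left = GPIO.input(LEFT_SENSOR)    # 1 = sensor over the tape
                on_right = GPIO.input(RIGHT_SENSOR)
                if on_left and not on_right:
                    motors(False, True)    # tape under left sensor: steer left
                elif on_right and not on_left:
                    motors(True, False)    # tape under right sensor: steer right
                else:
                    motors(True, True)     # centered: drive straight
                time.sleep(0.02)
        finally:
            GPIO.cleanup()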

  • Thanks Gary, I appreciate the info you are giving me and the time you spent writing it. 

  • Hi Carlos,

    I'm going to take a quick stab at this.

    You have taken on an enormous task in trying to get machine vision from a camera to identify a user; certainly not impossible, but from what you have said I would guess it is way outside your budget.

    You seem to have several requirements:

    First, the robot has to navigate around in general, probably able to follow, or better yet identify, paths and obstacles.

    If the robot needs to respond to environmental cues to identify a path in an obstacle-laden environment, it is a computationally and sensor-resource intensive application.

    As far as identifying individuals, the simplest system would be to have them interact with your robot in some way: push a button, or if a valid ID is needed, issue them an RFID tag that could be used to corroborate their identity.
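
    To illustrate the RFID option: a minimal sketch assuming an MFRC522 reader wired to the Pi's SPI pins and the community mfrc522 Python package (the reader, the package, and the tag table are all assumptions, not items on Carlos's parts list):

        from mfrc522 import SimpleMFRC522   # pip install mfrc522
        import RPi.GPIO as GPIO

        KNOWN_TAGS = {812345678901: "Carlos"}   # assumed tag ID -> user

        reader = SimpleMFRC522()
        try:
            print("Hold a tag near the reader...")
            tag_id, _text = reader.read()        # blocks until a tag appears
            user = KNOWN_TAGS.get(tag_id)
            print("Identified:", user if user else "unknown tag")
        finally:
            GPIO.cleanup()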

    Alternatively, you could also use one of the inexpensive fingerprint readers now available.

    And if you need a non-contact method, speech recognition is far easier than facial recognition.
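
    For example, a minimal speech check with the SpeechRecognition package and the offline PocketSphinx engine, assuming a USB microphone (the pass-phrase is made up):

        import speech_recognition as sr   # pip install SpeechRecognition pocketsphinx

        recognizer = sr.Recognizer()
        with sr.Microphone() as source:
            print("Say the pass-phrase...")
            audio = recognizer.listen(source)

        try:
            phrase = recognizer.recognize_sphinx(audio)   # offline decoding
            if "medicine please" in phrase.lower():       # assumed pass-phrase
                print("User confirmed")
        except sr.UnknownValueError:
            print("Could not understand the audio")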

    The APM, or better the Pixhawk, is really optimized for outdoor navigation using absolute coordinates provided by GPS, and its firmware is oriented toward that use for a forward-only moving "rover" rather than an environment-interactive robot.

    You will probably want lots of computing power for your robot, and at least the ability to run the ROS robotics framework.

    The Pixhawk does provide compatibility with ROS, but you will need to write quite a bit of the robot code yourself.

    Extracting "visual" information is very difficult, and it is most difficult from a single camera. To a computer a camera only provides light and dark and assorted colors, while what your computer really wants to know about is objects and surfaces.

    At the high end, the desired data format these days is the 3D point cloud, such as that provided by the structured-light camera of the original Microsoft Kinect or the time-of-flight camera of the Kinect "One"; these are coincidentally by far the cheapest devices for supplying this kind of information.
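
    To make "point cloud" concrete: each depth pixel back-projects to a 3D point through the pinhole camera model. A minimal numpy sketch; the intrinsics (fx, fy, cx, cy) are made-up stand-ins for the values a real depth camera's calibration would provide:

        import numpy as np

        # Assumed intrinsics for a Kinect-like 640x480 depth camera
        fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5

        def depth_to_point_cloud(depth):
            """depth: (480, 640) array of Z distances in meters -> (N, 3) points."""
            v, u = np.indices(depth.shape)   # pixel row and column grids
            z = depth
            x = (u - cx) * z / fx            # pinhole back-projection
            y = (v - cy) * z / fy
            points = np.dstack((x, y, z)).reshape(-1, 3)
            return points[points[:, 2] > 0]  # drop pixels with no depth reading

        fake_depth = np.full((480, 640), 2.0)   # a flat wall 2 m away, for testing
        print(depth_to_point_cloud(fake_depth).shape)   # (307200, 3)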

    At the low end, sonar and IR proximity detectors are often used for simple object avoidance, but they are really not very good at it; there are many circumstances where they cannot be depended on and will yield incorrect information.
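
    That said, they are cheap and easy to read. A sketch for an HC-SR04 sonar module, assuming its trigger is on GPIO 23 and its echo on GPIO 24 (the pins are assumptions, and the 5 V echo line needs a voltage divider down to 3.3 V for the Pi):

        import RPi.GPIO as GPIO
        import time

        TRIG, ECHO = 23, 24             # assumed BCM pin numbers

        GPIO.setmode(GPIO.BCM)
        GPIO.setup(TRIG, GPIO.OUT)
        GPIO.setup(ECHO, GPIO.IN)

        def distance_cm():
            GPIO.output(TRIG, True)     # 10 microsecond trigger pulse
            time.sleep(0.00001)
            GPIO.output(TRIG, False)
            start = end = time.time()
            while GPIO.input(ECHO) == 0:   # wait for the echo pulse to begin
                start = time.time()
            while GPIO.input(ECHO) == 1:   # ...and to end
                end = time.time()
            return (end - start) * 34300 / 2   # speed of sound, out and back

        try:
            print("Obstacle at %.1f cm" % distance_cm())
        finally:
            GPIO.cleanup()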

    Laser ranging devices, and better yet laser scanners, can provide a nice one- or two-dimensional representation of obstacle and impediment distance, and if appropriately scanned can also provide a 3D point cloud.

    There are some interesting and not too expensive TOF laser ranging devices that, if mounted on an XY servo-scanned gimbal, could be used to produce 3D information about the area in front of you.
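
    The math for turning each (pan, tilt, range) sample from such a gimbal into a 3D point is a plain spherical-to-Cartesian conversion. A small sketch; the angle conventions are an assumption and would depend on how the gimbal is mounted:

        import math

        def sample_to_point(pan_deg, tilt_deg, range_m):
            """One gimbal sample -> (x, y, z) in meters.
            pan: left/right about the vertical axis; tilt: up from horizontal."""
            pan, tilt = math.radians(pan_deg), math.radians(tilt_deg)
            x = range_m * math.cos(tilt) * math.sin(pan)   # right of the robot
            y = range_m * math.cos(tilt) * math.cos(pan)   # ahead of the robot
            z = range_m * math.sin(tilt)                   # above the sensor
            return (x, y, z)

        print(sample_to_point(0, 0, 2.0))   # straight ahead: (0.0, 2.0, 0.0)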

    Conventional 2D laser scanners could be swept with a servo in just one dimension to provide similar data, but these are certainly outside your budget.

    You also need serious computer power to handle the requirements of a 3D vision system.

    On my robot this will be handled by actually having a small laptop onboard.

    The (free) Microsoft Robotics Developer Studio assumes this approach and does have drivers for the Kinect.

    This is a simple diagram of my hanging-pendulum robot, which uses two electric bicycle wheels and is under construction.

    [diagram of the hanging-pendulum robot; image not reproduced]

    Best of Luck,

    Gary McCray

  • Yeah, that is what I have been thinking. I am still trying to figure out how to arrange the trays so that they can be moved every day. And I never thought about the safety argument; that is a good one.

    Thanks for the comment.
