3D Robotics


Stephen Hawking, who is part machine, wants to stop the rise of the machines.

From Fast Company:

Stephen Hawking, who turns 71 today, has joined the board of an international think tank devoted to defending humanity from futuristic threats. The Cambridge Project for Existential Risk is a newly founded organization that researches existential threats to humanity, such as extreme climate change, artificial intelligence, biotechnology, artificial life, nanotech, and other emerging technologies. Skype cofounder Jaan Tallinn and Cambridge professors Huw Price and Martin Rees founded the project in late 2012.

Price and Tallinn collaborated on a speech at the 2012 Sydney Ideas Festival arguing that artificial intelligence has reached a threshold that could lead to an explosion of autonomous machine dominance similar to the rise of Homo sapiens. The Cambridge Project for Existential Risk's stated goal is to establish a research center within Cambridge University dedicated to the study of autonomous robot (and other) threats.


Comments

  • The day I find that my drone is demanding that I fetch it a beer is the day I will realise it's all gone wrong, and I'll take its battery out.

     

    Until then, I'm going to spend my time trying to build a drone that will fetch me beer.

  • Man-made computer viruses are reason enough to warrant robot watchdogs. At least with a computer virus nobody gets physically hurt. Robots, though, live in meatspace, where malware or builders with bad intentions could cause real damage.

    I think a poor understanding of robots, artificial intelligence, and consciousness will result in ridiculous legislation. A robot has sensors, processes the input, and responds. Something that is simply remotely controlled, like most RC aircraft, is not a robot. Consciousness is really just a self-preservation, survive/thrive response. That's it. Human consciousness is layers and layers of complex sensing, processing the inputs, and responding to survive/thrive. Intelligence is the complexity, speed, and effectiveness in achieving the desired results.

    To prevent a robot apocalypse, limiting the self-preservation responses would be important (a toy sketch of the idea follows at the end of this comment). A robot seeking out an outlet when its battery is low is fine. Destroying everything in its path to get there would not be fine. For a robot to autonomously grow in capability to respond effectively in its environment, it needs to explore the limits of what it can control. Blindly enabling a high-speed robot to explore its limits would be disastrous. Babies can't walk, and that's a good thing. Learning is fine, just as Google learns to give better search results, partly autonomously and partly guided. A robot with a strong survive/thrive response that learns to increase the duration of its survival, or for self-aggrandizing thriving, would be an extraordinarily bad idea.

    The three laws are a nice start, but real robotics legislation will face much bigger challenges.
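
    As a thought experiment, here is a minimal sketch of what capping the self-preservation response might look like. Everything in it is hypothetical, a toy sense-process-act loop rather than any real robot stack:

        # Toy sketch: a planner whose self-preservation drive is hard-capped.
        # All names and thresholds here are hypothetical.
        SAFE_ACTIONS = {"continue_task", "seek_charger", "wait"}

        def choose_action(battery_level, path_to_charger_clear):
            if battery_level < 0.2:
                # Low battery: self-preservation is allowed, but only via
                # safe behaviours -- no bulldozing through obstacles.
                action = "seek_charger" if path_to_charger_clear else "wait"
            else:
                action = "continue_task"
            # "Destroy everything in its path" is simply not an action the
            # planner can ever emit.
            assert action in SAFE_ACTIONS
            return action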

  • For those suggesting that using machines in combat, or for broader military purposes, is a bad thing: would you rather we return to the days of warfare conducted by men standing in lines, firing bullets at each other? Last man standing wins?

    That kind of warfare led to technological escalation, culminating in the use of atomic weapons. These days, through the use of technology, the ideal is to save lives (typically your own troops', I mean) during warfare. If that means we send a robot out into the field, then I see that as a better solution than sending someone who could be killed.

    Of course, I would much prefer that we didn't have warfare at all, but we're human; governments are (often) morally corrupt and ethically weak, and they'll do whatever they need to in order to maintain "their way of life".

    On a different point, here's an interesting thought (at least to me)... if we create machines that then ultimately evolve and one day survive on this planet after humanity dies out, could they be considered an evolution of humanity? They certainly wouldn't exist if humanity hadn't existed before them... and we created them using our own biological capabilities in response to our environment and needs... sure, we didn't (likely) produce them through procreation... but... well, I'll leave it for you to ponder.

  • Isaac Asimov's three laws are, at most, a whimsical afterthought for military weapon systems designers.

  • Who says we haven't already been enslaved by a time-traveling AI from the future? Do you honestly think that an AI intelligent enough to traverse time would not deem it wise to keep us in the dark about its actions and intentions?

    Dave, you've got that wrong.  It's actually the MICE who have been manipulating us all this time.  They have been tricking us, using counter-intelligence when we study their behaviour in labs, to get us to behave the way they want.

  • if target = FLESH then action := NONE else action := PROCEED -- the thought being that targeting an enemy's machines will pretty much render the alien or human rebel useless
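
    In runnable form, that flesh-exclusion rule is about one line. A toy sketch only (the target classifier it presupposes is, of course, the genuinely hard part):

        def decide(target_class):
            # Refuse to engage anything classified as living; machines only.
            return "NONE" if target_class == "FLESH" else "PROCEED"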

  • @ Thomas J Coyle III

    sorry, I'm bad at detecting that kind of nuance online.

  • Never give a robot a gun. Did you know the Predator drones in the AF have begun demonstrating a form of intelligence due to a software issue? It's true ;)

  • @ Sgt Ric: 

    I agree, every instance of the 3 Laws being violated occurs when the artificial intelligence believes it is acting in the best interest of humanity as a whole.

    You mean the ZEROTH (ZERO-ith) law... Much like the Star Trek Vulcan principle: the needs of the many must outweigh the needs of the few... or something close to that.

  • @Mathew,

    Yes I have (though I found his "Foundation & Empire" series to be much more interesting). It was an example of what could be and the pitfalls of what might happen. The laws have been extensively discussed among robotics professionals over the years. They are a starting point, not a solution. That is why I put LOL at the end of my observation.

    Regards,

    TCIII
