Monday, July 9, 2012

4:16 AM

How to build robots that do humans no harm

With the relentless march of technological progress, robots and other automated systems are getting ever smarter. At the same time they are being given greater responsibilities: driving cars, helping with childcare, carrying weapons, and perhaps soon even pulling the trigger.

But should they be trusted to take on such tasks, and how can we be sure that they never take a decision that could cause unintended harm?

So are there things we can do to minimise the risks? In Moral Machines, ethicists Wendell Wallach and Colin Allen take a look at six strategies that could reduce the danger from our own high-tech creations.

Keep them in low-risk situations

Make sure that computers and robots never have to make a decision whose consequences cannot be predicted in advance.

Likelihood of success: Extremely low. Engineers are already building computers and robotic systems whose actions they cannot always predict.

I, Robot (image: 20th Century Fox)
Do not give them weapons

Likelihood of success: Too late. Semi-autonomous robotic weapons systems, including cruise missiles and Predator drones, already exist. A few machine-gun-toting robots were sent to Iraq and photographed on a battlefield, though apparently they were never deployed in combat.

However, military planners are very interested in the development of robotic soldiers, and see them as a means of reducing deaths of human soldiers during warfare.

Give them rules like Asimov's 'Three Laws of Robotics'

Likelihood of success: Moderate. Isaac Asimov's famous rules are arranged hierarchically: most importantly, robots must not harm humans or, through inaction, allow humans to come to harm; of secondary importance is that they obey humans; robotic self-preservation is the lowest priority.

However, Asimov was writing fiction, not building robots. In story after story he illustrates problems that would arise with even these simple rules, such as what the robot should do when orders from two people conflict.
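The hierarchy could be sketched as a series of prioritized checks, each able to veto an action before lower-priority laws are consulted. This is a toy illustration only; the predicates and the dictionary encoding of an action are invented for the sketch, and it deliberately does nothing about the conflicting-orders problem Asimov's stories raise:

```python
# Toy sketch of Asimov's Three Laws as prioritized checks.
# The action encoding and predicate names are invented for illustration.

def harms_human(action):
    """Would this action injure a human? (Assumed to be known in advance.)"""
    return action.get("harms_human", False)

def ordered_by_human(action):
    """Was this action ordered by a human?"""
    return action.get("ordered", False)

def endangers_self(action):
    """Would this action put the robot itself at risk?"""
    return action.get("self_risk", False)

def permitted(action):
    """Apply the Three Laws in priority order."""
    if harms_human(action):       # First Law: never harm a human,
        return False              # regardless of orders or self-interest
    if ordered_by_human(action):  # Second Law: obey human orders
        return True               # (already filtered by the First Law)
    if endangers_self(action):    # Third Law: protect own existence
        return False
    return True
```

Even this tiny version shows why the laws alone are not enough: `permitted` has no answer when two humans issue contradictory orders, the exact difficulty Asimov dramatised.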

Sometimes identifying the best option under a given rule can be extremely difficult. For example, determining which course of action leads to the greatest good would require a tremendous amount of knowledge, and an understanding of the effects of actions in the world. Making such calculations would require time and a great deal of computing power.
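The "greatest good" calculation described above amounts to scoring each candidate action by its expected welfare and picking the maximum. A minimal sketch, in which the actions, probabilities and welfare numbers are all invented for illustration:

```python
# Sketch of a "greatest good" calculation: score each candidate action
# by the welfare of its predicted outcomes, weighted by probability.
# All actions and numbers below are invented for illustration.

def expected_good(action, outcomes):
    """outcomes: list of (probability, welfare) pairs predicted for action."""
    return sum(p * welfare for p, welfare in outcomes)

predictions = {
    "swerve": [(0.5, +10), (0.5, -4)],  # risky but high upside
    "brake":  [(1.0, +2)],              # safe, modest benefit
}

best = max(predictions, key=lambda a: expected_good(a, predictions[a]))
```

The hard part, as the text notes, is not this arithmetic but filling in `predictions`: enumerating the outcomes of every action and estimating their probabilities and welfare is where the enormous knowledge and computing demands lie.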

Educate robots like children

Machines that learn as they "grow up" could develop sensitivity to the actions that people consider to be right and wrong.

Likelihood of success: Promising, although this strategy requires a few technological breakthroughs. While researchers have created robots able to learn in ways similar to humans, the tools presently available are very limited.
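One way to picture this kind of "upbringing" is a machine that keeps a running approval score for each action and updates it from right/wrong labels given by a human. The class, names and numbers below are hypothetical and far simpler than any real learning system:

```python
# Hypothetical sketch of a machine "educated" by right/wrong feedback:
# it accumulates an approval score per action and prefers the action
# it has been taught to approve of. Not a real learning system.

from collections import defaultdict

class MoralLearner:
    def __init__(self, step=1.0):
        self.score = defaultdict(float)  # learned approval per action
        self.step = step                 # how much each label shifts the score

    def feedback(self, action, approved):
        """A human labels the action right (True) or wrong (False)."""
        self.score[action] += self.step if approved else -self.step

    def prefers(self, a, b):
        """After training, which of two actions does the machine favour?"""
        return a if self.score[a] >= self.score[b] else b

# "Raising" the learner with a handful of labels:
learner = MoralLearner()
for _ in range(3):
    learner.feedback("share toy", True)
learner.feedback("grab toy", False)
```

The gap between this sketch and a robot that genuinely generalises human notions of right and wrong is exactly the breakthrough the text says is still needed.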

Make machines master emotion

Human-like faculties such as empathy, emotions, and the capacity to read non-verbal social cues should give robots much greater ability to interact with humans. Work has already started on equipping domestic robots with such faculties.

Likelihood of success: Developing emotionally sensitive robots would certainly help implement the previous three solutions discussed. Most of the information we use to make choices and cooperate with others derives from our emotions, as well as our capacity to read gestures and intentions and imagine things from another person's point of view.
