"Murder is a new trick for a robot. Congratulations. Respond." [Will Smith]
The US Department of Defense is hoping to outfit robots with yet another human capacity -- the ability to tell right from wrong and to understand the consequences of their actions.
Now, in addition to figuring out parts, programs, and circuits, a team of robotics researchers, with the help of several philosophers, will be grappling with the question of morality.
The goal is to equip mechanized stand-ins with the ethical reasoning skills required to do the right thing in tough situations.
Their primary use is expected to be in search-and-rescue missions, where they would weigh the relative costs, benefits, and repercussions of their actions.
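To make that kind of weighing concrete, here is a minimal, purely hypothetical sketch of a cost-benefit scorer for a rescue robot choosing among actions. Nothing here reflects the actual DoD project or its methods; every name, number, and weight is invented for illustration.

```python
# Hypothetical illustration only: a toy expected-utility scorer for a
# rescue robot. All values are invented and do not reflect any real system.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    benefit: float    # expected lives aided (arbitrary units)
    cost: float       # expected resource/time cost
    harm_risk: float  # probability of causing collateral harm

def score(action: Action, harm_weight: float = 10.0) -> float:
    """Net utility: benefit minus cost, heavily penalized by risk of harm."""
    return action.benefit - action.cost - harm_weight * action.harm_risk

candidates = [
    Action("carry injured hiker to clearing", benefit=8.0, cost=2.0, harm_risk=0.05),
    Action("wait for human medics", benefit=5.0, cost=1.0, harm_risk=0.01),
]

# Pick the action with the highest net utility.
best = max(candidates, key=score)
print(f"Chosen action: {best.name} (utility {score(best):.2f})")
```

The large penalty on harm_risk is one way to encode a "first, do no harm" bias, though, as critics note below, scoring rules like this are still a long way from genuine moral reasoning.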
Deploying them as battlefield soldiers isn't on the agenda for now, but it hasn't been ruled out as a future possibility.
Noel Sharkey, an AI expert and a critic of the initiative, said: "I do not think that they will end up with a moral or ethical robot. For that we need to have moral agency. For that we need to understand others and know what it means to suffer. The robot may be installed with some rules of ethics but it won't really care."