"Science fiction writer Isaac Asimov's 'Three Laws of Robotics' are a plot device invented in the 1940s. They are:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Amazingly, there are still people who take these laws seriously and suppose they would work as a viable strategy for creating AI. Asimov's Laws are far too simplistic, too open to varying interpretations, too anthropocentrically biased, and too adversarial to qualify as a serious engineering strategy for AI morality. They are also predicated on three false assumptions: 1) that AIs cannot become independent moral agents, let alone possess kinder-than-human morality; 2) that AIs will always be mechanical, in appearance and in thought; 3) that creating a human-friendly AI is much like coercing a human into being human-friendly. Asimov's Laws also present themselves as semantic primitives, because that is what they are to humans, and thereby neglect the vast underlying complexity that would be required even to approximate these laws in a real AI. The modern-day field of AI Friendliness is an attempt to go beyond Asimov's Laws and similar ideas, creating workable strategies for safe AI goal systems. See also design-contingent philosophy, Friendliness, Friendship architecture, goal systems."
"Three Laws of Robotics", from Creating Friendly AI