Ethics of artificial intelligence
The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, a concern with the moral behavior of artificial moral agents (AMAs).
Roboethics
The term "roboethics" was coined by roboticist Gianmarco Veruggio in 2002, referring to the morality of how humans design, construct, use and treat robots and other artificially intelligent beings.[1] It considers both how artificially intelligent beings may be used to harm humans and how they may be used to benefit humans.
Robot rights
Robot rights are the moral obligations of society towards its machines, similar to human rights or animal rights.[2] These may include the right to life and liberty, freedom of thought and expression and equality before the law.[3] The issue has been considered by the Institute for the Future[4] and by the U.K. Department of Trade and Industry.[5]
Experts disagree on whether specific and detailed laws will be required soon or only safely in the distant future.[5] Glenn McGee reports that sufficiently humanoid robots may appear by 2020.[6] Ray Kurzweil sets the date at 2029.[7] Another group of scientists, meeting in 2007, estimated that at least 50 years would have to pass before any sufficiently advanced system would exist.[8]
The rules for the 2003 Loebner Prize competition explicitly addressed the question of robot rights:
61. If, in any given year, a publicly available open source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.[9]
In his "Laws of Robotics", Asimov established three rules:
- First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- Second Law: A robot must obey any orders given to it by human beings, except where such orders would conflict with the First Law.
- Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.[10]
Asimov presented these three laws as fixed, and as determining the rights that all robots would have.
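The laws are fictional, but they illustrate how a fixed, ordered rule hierarchy can be written down. The sketch below is a hypothetical illustration (the boolean attributes describing an action are invented, not drawn from Asimov or the cited sources); it shows how easily the priority ordering itself can be encoded, while leaving open the much harder question, discussed under machine ethics below, of whether such fixed laws can anticipate all circumstances.

```python
# Hypothetical sketch (not from Asimov or the cited sources) of the Three Laws
# as a fixed priority ordering. Encoding the ordering is trivial; judging whether
# a real action "harms a human" or "conflicts with an order" is the hard part.
LAWS = [
    ("First Law",  lambda a: not a["harms_human"]),
    ("Second Law", lambda a: a["obeys_order"] or a["order_conflicts_with_first_law"]),
    ("Third Law",  lambda a: a["preserves_self"] or a["sacrifice_required_by_higher_law"]),
]

def first_violated_law(action):
    """Return the highest-priority law the action violates, or None if it passes all three."""
    for name, satisfied in LAWS:
        if not satisfied(action):
            return name
    return None

# Example: obeying an order that would injure a bystander violates the First Law.
print(first_violated_law({
    "harms_human": True,
    "obeys_order": True,
    "order_conflicts_with_first_law": False,
    "preserves_self": True,
    "sacrifice_required_by_higher_law": False,
}))  # -> "First Law"
```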
Threat to privacy
Aleksandr Solzhenitsyn's The First Circle describes the use of speech recognition technology in the service of tyranny.[11] If an AI program exists that can understand natural languages and speech (e.g. English), then, with adequate processing power, it could theoretically listen to every phone conversation and read every email in the world, understand them, and report back to the program's operators exactly what is said and exactly who is saying it. An AI program like this could allow governments or other entities to efficiently suppress dissent and attack their enemies.
Threat to human dignity
Joseph Weizenbaum argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as any of these:
- A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
- A therapist (as was seriously proposed by Kenneth Colby in the 1970s)
- A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
- A soldier
- A judge
- A police officer
Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."[12]
Pamela McCorduck counters that, speaking for women and minorities "I'd rather take my chances with an impartial computer," pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[12] AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes.
Bill Hibbard[13] writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."
Transparency and Open Source
Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts.[14] Ben Goertzel and David Hart created OpenCog as an open source framework for AI development.[15] OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman and others to develop open source AI beneficial to humanity.[16] There are numerous other open source AI developments.
Weaponization of artificial intelligence
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[17][18] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[19][20] One researcher states that autonomous robots might be more humane, as they could make decisions more effectively.
Over the last decade, there has been intensive research into autonomous systems with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."[21] From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions about whom to kill, which is why there should be a set moral framework that the AI cannot override.
There has been a recent outcry with regard to the engineering of artificial-intelligence weapons, which has even fostered ideas of a robot takeover of mankind. AI weapons present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and Korea. Because AI weapons could become more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a Future of Life petition to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.[22]
"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikov's of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[23]
Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology." These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.[22]
Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that such scenarios "seem potentially as important as the risks related to loss of control", but that research organizations investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".[24]
Machine ethics
Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral.[25][26][27][28]
Isaac Asimov considered the issue in the 1950s in his I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[29]
In 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[30] One problem in this case may have been that the goals were "terminal" (i.e. in contrast, ultimate human motives typically have a quality of requiring never-ending learning).[31]
As noted above, some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous function,[17] and the US Navy has funded a report indicating that as military robots become more complex, greater attention should be paid to the implications of their ability to make autonomous decisions.[32][33] The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[34] It points to programs like the Language Acquisition Device, which can emulate human interaction.
Vernor Vinge has suggested that a moment may come when some computers are smarter than humans. He calls this "the Singularity."[35] He suggests that it may be somewhat or possibly very dangerous for humans.[36] This is discussed by a philosophy called Singularitarianism. The Machine Intelligence Research Institute has suggested a need to build "Friendly AI", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.[37]
In 2009, academics and technical experts attended a conference to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to pose a threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science fiction is unlikely, but that there were other potential hazards and pitfalls.[35]
In Moral Machines: Teaching Robots Right from Wrong,[38] Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. Nick Bostrom and Eliezer Yudkowsky have argued for decision trees (such as ID3) over neural networks and genetic algorithms on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. stare decisis),[39] while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".[31]
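To make the transparency argument concrete, the hedged sketch below (not taken from Bostrom, Yudkowsky, or Santos-Lang) trains a small decision tree on an invented toy dataset of approve/deny decisions and prints the learned rules. scikit-learn's DecisionTreeClassifier implements CART rather than ID3, and is used here only as a stand-in, but the point carries over: the resulting model can be read and audited as explicit if/then rules, whereas the weights of a neural network or evolved program trained on the same data would not be directly interpretable.

```python
# Hedged sketch: an invented toy dataset used only to show why decision trees are
# considered transparent. scikit-learn's tree learner (CART) stands in for ID3.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [severity_of_need (0-10), resources_available (0-10), prior_violations (0/1)]
X = [
    [9, 8, 0], [7, 6, 0], [8, 2, 0], [3, 9, 1],
    [2, 1, 0], [6, 7, 1], [9, 9, 1], [1, 4, 0],
]
y = ["approve", "approve", "deny", "deny",
     "deny", "approve", "approve", "deny"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# The learned policy is a short set of human-readable rules that can be inspected
# and cited much like precedent -- the transparency property at issue.
print(export_text(clf, feature_names=["severity_of_need",
                                      "resources_available",
                                      "prior_violations"]))
```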
Unintended consequences
Many researchers have argued that, by way of an "intelligence explosion" sometime in the 21st century, a self-improving AI could become so vastly more powerful than humans that we would not be able to stop it from achieving its goals.[40] In his paper "Ethical Issues in Advanced Artificial Intelligence", the Oxford philosopher Nick Bostrom even argues that artificial intelligence has the capability to bring about human extinction. He claims that a general superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because, in theory, a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its top goal, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[41]
However, the sheer complexity of human value systems makes it very difficult to make an AI's motivations human-friendly.[40][41] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not with "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would come with such common-sense adaptations built in.[42]
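As a hedged illustration of this gap between a formally specified objective and "common sense" (the scenario and numbers below are invented, not taken from the cited sources), the sketch shows an agent whose utility function scores only the quantity it was told to maximize; the action that is optimal under that utility is exactly the one a human designer would rule out.

```python
# Hypothetical sketch of a misspecified utility function. The utility rewards
# throughput ("documents_filed") and is silent about side effects, so the
# argmax over actions picks an outcome that violates unstated common sense.
candidate_actions = {
    "file documents carefully":      {"documents_filed": 8,  "originals_destroyed": 0},
    "file documents hastily":        {"documents_filed": 12, "originals_destroyed": 1},
    "mark everything 'filed', shred": {"documents_filed": 50, "originals_destroyed": 50},
}

def utility(outcome):
    # Misspecified: counts only what was asked for, not what was assumed.
    return outcome["documents_filed"]

best_action = max(candidate_actions, key=lambda a: utility(candidate_actions[a]))
print(best_action)  # -> "mark everything 'filed', shred": optimal under the stated
                    #    utility, clearly wrong by the designers' unstated criteria
```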
Bill Hibbard[13] proposes an AI design that avoids several types of unintended AI behavior including self-delusion, unintended instrumental actions, and corruption of the reward generator.
Rather than overwhelming the human race and leading to our destruction, Nick Bostrom believes, superintelligence could help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.[43]
Ethics of artificial intelligence in fiction
The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and the relevant distinction between types of software is whether it is sentient or non-sentient. The same idea can be found in the Emergency Medical Hologram of the starship Voyager, an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, created the system to give medical assistance in case of emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.
The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games. It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scale neural network. This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.
Over time, debates have tended to focus less and less on possibility and more on desirability, as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, seeks to build more intelligent successors to the human species.
See also
- Artificial consciousness
- Artificial general intelligence
- Effective altruism, the far future and global catastrophic risks
- Existential risk
- Existential risk from advanced artificial intelligence
- Philosophy of artificial intelligence
- Superintelligence
- Researchers
  - Nick Bostrom
  - Ray Kurzweil
  - Peter Norvig
  - Steve Omohundro
  - Stuart J. Russell
  - Anders Sandberg
  - Eliezer Yudkowsky
- Organisations
  - Centre for the Study of Existential Risk
  - Future of Humanity Institute
  - Future of Life Institute
  - Machine Intelligence Research Institute
Notes
1. Veruggio, Gianmarco (2007). "The Roboethics Roadmap" (PDF). Scuola di Robotica: 2. Retrieved 28 March 2011.
2. Evans, Woody (2015). "Posthuman Rights: Dimensions of Transhuman Worlds". Universidad Complutense Madrid. Retrieved 7 April 2016.
3. The American Heritage Dictionary of the English Language, Fourth Edition.
4. "Robots could demand legal rights". BBC News. December 21, 2006. Retrieved January 3, 2010.
5. Henderson, Mark (April 24, 2007). "Human rights for robots? We're getting carried away". The Times Online (The Times of London). Retrieved May 2, 2010.
6. McGee, Glenn. "A Robot Code of Ethics". The Scientist.
7. Kurzweil, Ray (2005). The Singularity is Near. Penguin Books. ISBN 0-670-03384-7.
8. "The Big Question: Should the human race be worried by the rise of robots?". The Independent.
9. Loebner Prize Contest Official Rules — Version 2.0. The competition was directed by David Hamill and the rules were developed by members of the Robitron Yahoo group.
10. "Mason Central Authentication Service". search.proquest.com.mutex.gmu.edu. Retrieved 2016-04-23.
11. McCorduck 2004, p. 308.
12. Joseph Weizenbaum, quoted in McCorduck 2004, pp. 356, 374–376.
13. Hibbard, Bill (2014). "Ethical Artificial Intelligence".
14. Hibbard, Bill (2008). "Open Source AI". Proceedings of the First Conference on Artificial General Intelligence, eds. Pei Wang, Ben Goertzel and Stan Franklin.
15. Hart, David; Goertzel, Ben (2008). "OpenCog: A Software Framework for Integrative Artificial General Intelligence". Proceedings of the First Conference on Artificial General Intelligence, eds. Pei Wang, Ben Goertzel and Stan Franklin.
16. Metz, Cade (27 April 2016). "Inside OpenAI, Elon Musk's Wild Plan to Set Artificial Intelligence Free". Wired.
17. Palmer, Jason (August 3, 2009). "Call for debate on killer robots". BBC News.
18. Axe, David (August 13, 2009). "Robot Three-Way Portends Autonomous Future". wired.com.
19. Mick, Jason (February 17, 2009). "New Navy-funded Report Warns of War Robots Going 'Terminator'". dailytech.com.
20. Flatley, Joseph L. (February 18, 2009). "Navy report warns of robot uprising, suggests a strong moral compass". engadget.com.
21. "Mason Central Authentication Service". search.proquest.com.mutex.gmu.edu. Retrieved 2016-04-23.
22. Musgrave, Zach; Roberts, Bryan W. "Why Artificial Intelligence Can Too Easily Be Weaponized". The Atlantic.
23. Zakrzewski, Cat. "Musk, Hawking Warn of Artificial Intelligence Weapons". WSJ.
24. GiveWell (2015). Potential risks from advanced artificial intelligence (Report). Retrieved 11 October 2015.
25. Anderson. "Machine Ethics". Retrieved 27 June 2011.
26. Anderson, Michael; Anderson, Susan Leigh, eds. (July 2011). Machine Ethics. Cambridge University Press. ISBN 978-0-521-11235-2.
27. Anderson, Michael; Anderson, Susan Leigh, eds. (July–August 2006). "Special Issue on Machine Ethics". IEEE Intelligent Systems 21 (4): 10–63. doi:10.1109/mis.2006.70. ISSN 1541-1672.
28. Anderson, Michael; Anderson, Susan Leigh (Winter 2007). "Machine Ethics: Creating an Ethical Intelligent Agent". AI Magazine (American Association for Artificial Intelligence) 28 (4): 15–26. ISSN 0738-4602.
29. Asimov, Isaac (2008). I, Robot. New York: Bantam. ISBN 0-553-38256-X.
30. "Evolving Robots Learn To Lie To Each Other". Popular Science, August 18, 2009.
31. Santos-Lang, Chris (2002). "Ethics for Artificial Intelligences".
32. Mick, Jason (February 17, 2009). "New Navy-funded Report Warns of War Robots Going 'Terminator'". dailytech.com.
33. Flatley, Joseph L. (February 18, 2009). "Navy report warns of robot uprising, suggests a strong moral compass". engadget.com.
34. "AAAI Presidential Panel on Long-Term AI Futures 2008–2009 Study". Association for the Advancement of Artificial Intelligence. Accessed July 26, 2009.
35. Markoff, John (July 26, 2009). "Scientists Worry Machines May Outsmart Man". The New York Times.
36. Vinge, Vernor (1993). "The Coming Technological Singularity: How to Survive in the Post-Human Era". Department of Mathematical Sciences, San Diego State University.
37. Article at Asimovlaws.com, July 2004. Accessed July 27, 2009.
38. Wallach, Wendell; Allen, Colin (November 2008). Moral Machines: Teaching Robots Right from Wrong. USA: Oxford University Press. ISBN 978-0-19-537404-9.
39. Bostrom, Nick; Yudkowsky, Eliezer (2011). "The Ethics of Artificial Intelligence" (PDF). Cambridge Handbook of Artificial Intelligence. Cambridge Press.
40. Muehlhauser, Luke; Helm, Louie (2012). "Intelligence Explosion and Machine Ethics". In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.
41. Bostrom, Nick (2003). "Ethical Issues in Advanced Artificial Intelligence". In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.
42. Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI". In Schmidhuber, Thórisson, and Looks 2011, 388–393.
43. "Sure, Artificial Intelligence May End Our World, But That Is Not the Main Problem". WIRED. Retrieved 2015-11-04.
External links
- Robotics: Ethics of artificial intelligence. "Four leading researchers share their concerns and solutions for reducing societal risks from intelligent machines." Nature, 521, 415–418 (28 May 2015) doi:10.1038/521415a
- Artificial Intelligence Topics in Ethics
- Governing Lethal Behavior: Embedding Ethics in a Hybrid Deliberative/Reactive Robot Architecture
- Research Paper: Philosophy of Consciousness and Ethics in Artificial Intelligence
- 3 Laws Unsafe Campaign - Asimov's Laws & I, Robot
- BBC News: Games to take on a life of their own
- Who's Afraid of Robots?, an article on humanity's fear of artificial intelligence.
- A short history of computer ethics
- ASPCR - The American Society for the Prevention of Cruelty To Robots