Ethics of artificial intelligence
{{Short description|none}} {{merge from|AI veganism|discuss=Talk:Ethics of artificial intelligence#Merge proposal|date=February 2026}} {{cs1 config|name-list-style=vanc}} {{Artificial intelligence|Philosophy}} The [[ethics]] of [[artificial intelligence]] covers a broad range of topics within AI that are considered to have particular ethical stakes.{{Cite web |last=Müller |first=Vincent C. |date=April 30, 2020 |title=Ethics of Artificial Intelligence and Robotics |url=https://plato.stanford.edu/entries/ethics-ai/ |url-status=live |archive-url=https://web.archive.org/web/20201010174108/https://plato.stanford.edu/entries/ethics-ai/ |archive-date=10 October 2020 |website=Stanford Encyclopedia of Philosophy}} This includes [[algorithmic bias]]es, [[Fairness (machine learning)|fairness]], [[accountability]], transparency, privacy, and [[Regulation of artificial intelligence|regulation]], particularly where systems influence or automate human decision-making. It also covers various emerging or potential future challenges such as [[machine ethics]] (how to make machines that behave ethically), [[Lethal autonomous weapon|lethal autonomous weapon systems]], [[Artificial intelligence arms race|arms race]] dynamics, [[AI safety]] and [[AI alignment|alignment]], [[technological unemployment]], AI-enabled [[misinformation]],{{Cite web |date=2024-11-14 |title=Assessing potential future artificial intelligence risks, benefits and policy imperatives |url=https://www.oecd.org/en/publications/assessing-potential-future-artificial-intelligence-risks-benefits-and-policy-imperatives_3f4e3dfb-en.html |access-date=2025-08-04 |website=OECD |language=en}} how to treat certain AI systems if they have a [[moral status]] (AI welfare and rights), [[artificial superintelligence]] and [[Existential risk from artificial general intelligence|existential risks]].
Some application areas may also have particularly important ethical implications, like [[Artificial intelligence in healthcare|healthcare]], education, criminal justice, or the military.
== Machine ethics == {{Main|Machine ethics|AI alignment}}
Machine ethics (or machine morality) is the field of research concerned with designing [[Moral agency#Artificial Moral Agents|Artificial Moral Agents]] (AMAs), robots or artificially intelligent computers that behave morally or as though moral.{{cite web|last=Anderson|title=Machine Ethics|url=http://uhaweb.hartford.edu/anderson/MachineEthics.html|url-status=live|archive-url=https://web.archive.org/web/20110928233656/https://uhaweb.hartford.edu/anderson/MachineEthics.html|archive-date=28 September 2011|access-date=27 June 2011}}{{Cite book|title=Machine Ethics|date=July 2011|publisher=[[Cambridge University Press]]|isbn=978-0-521-11235-2|editor1-last=Anderson|editor1-first=Michael|editor2-last=Anderson|editor2-first=Susan Leigh}}{{cite journal|last1=Anderson|first1=M.|last2=Anderson|first2=S.L.|date=July 2006|title=Guest Editors' Introduction: Machine Ethics|journal=IEEE Intelligent Systems|volume=21|issue=4|pages=10–11|doi=10.1109/mis.2006.70|s2cid=9570832}}{{cite journal|last1=Anderson|first1=Michael|last2=Anderson|first2=Susan Leigh|date=15 December 2007|title=Machine Ethics: Creating an Ethical Intelligent Agent|journal=AI Magazine|volume=28|issue=4|page=15|doi=10.1609/aimag.v28i4.2065|s2cid=17033332 }} To account for the nature of these agents, some researchers have suggested drawing on philosophical concepts such as the standard characterizations of [[Agency (philosophy)|agency]], [[Rational agent|rational agency]], [[moral agency]], and artificial agency, all of which bear on the concept of AMAs.{{cite journal|last1=Boyles|first1=Robert James M.|date=2017|title=Philosophical Signposts for Artificial Moral Agent Frameworks|url=https://philarchive.org/rec/BOYPSF|journal=Suri|volume=6|issue=2|pages=92–109}}
There are ongoing discussions about creating tests to assess whether an AI is capable of making [[ethical decision]]s. [[Alan Winfield]] concludes that the [[Turing test]] is flawed and that the requirement for an AI to pass it is too low.{{Cite journal|last1=Winfield|first1=A. F.|last2=Michael|first2=K.|last3=Pitt|first3=J.|last4=Evers|first4=V.|date=March 2019|title=Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue]|journal=Proceedings of the IEEE|volume=107|issue=3|pages=509–517|doi=10.1109/JPROC.2019.2900622|s2cid=77393713|issn=1558-2256|doi-access=free}} One proposed alternative, the Ethical Turing Test, would instead have multiple judges decide whether an AI's decisions are ethical or unethical. [[Neuromorphic engineering|Neuromorphic]] AI could be one way to create morally capable robots, as it aims to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.{{cite news|last1=Al-Rodhan|first1=Nayef|date=7 December 2015|title=The Moral Code|url=https://www.foreignaffairs.com/articles/2015-08-12/moral-code|url-status=live|access-date=2017-03-04|archive-url=https://web.archive.org/web/20170305044025/https://www.foreignaffairs.com/articles/2015-08-12/moral-code|archive-date=2017-03-05}} Similarly, [[whole-brain emulation]] (scanning a brain and simulating it on digital hardware) could in principle lead to human-like robots capable of moral action.{{Cite web |last=Sauer |first=Megan |date=2022-04-08 |title=Elon Musk says humans could eventually download their brains into robots — and Grimes thinks Jeff Bezos would do it |url=https://www.cnbc.com/2022/04/08/elon-musk-humans-could-eventually-download-their-brains-into-robots.html |access-date=2024-04-07 |website=CNBC |language=en |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925013113/https://www.cnbc.com/2022/04/08/elon-musk-humans-could-eventually-download-their-brains-into-robots.html |url-status=live }} [[Large language model]]s are also capable of approximating human moral judgments.{{Cite web |last=Anadiotis |first=George |date=April 4, 2022 |title=Massaging AI language models for fun, profit and ethics |url=https://www.zdnet.com/article/massaging-ai-language-models-for-fun-profit-and-ethics/ |access-date=2024-04-07 |website=ZDNET |language=en |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925013214/https://www.zdnet.com/article/massaging-ai-language-models-for-fun-profit-and-ethics/ |url-status=live }} Inevitably, this raises questions about the environment in which such robots would learn about the world, whose morality they would inherit, and whether they would also develop human 'weaknesses': selfishness, pro-survival attitudes, inconsistency, scale insensitivity, and so on.
In ''Moral Machines: Teaching Robots Right from Wrong'',{{Cite book|last1=Wallach|first1=Wendell|title=Moral Machines: Teaching Robots Right from Wrong|last2=Allen|first2=Colin|date=November 2008|publisher=[[Oxford University Press]]|isbn=978-0-19-537404-9|location=USA }} [[Wendell Wallach]] and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern [[Normative ethics|normative theory]] and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific [[List of machine learning algorithms|learning algorithms]] to use in machines. For simple decisions, [[Nick Bostrom]] and [[Eliezer Yudkowsky]] have argued that [[decision tree]]s (such as [[ID3 algorithm|ID3]]) are more transparent than [[Artificial neural network|neural networks]] and [[genetic algorithm]]s,{{cite web|last1=Bostrom|first1=Nick|author-link1=Nick Bostrom|last2=Yudkowsky|first2=Eliezer|author-link2=Eliezer Yudkowsky|year=2011|title=The Ethics of Artificial Intelligence|url=http://www.nickbostrom.com/ethics/artificial-intelligence.pdf|url-status=live|archive-url=https://web.archive.org/web/20160304015020/http://www.nickbostrom.com/ethics/artificial-intelligence.pdf|archive-date=2016-03-04|access-date=2011-06-22|work=Cambridge Handbook of Artificial Intelligence|publisher=[[Cambridge Press]]}} while Chris Santos-Lang argued in favor of [[machine learning]] on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "[[Hacker culture|hackers]]".{{cite web|last=Santos-Lang|first=Chris|year=2002|title=Ethics for Artificial Intelligences|url=http://santoslang.wordpress.com/article/ethics-for-artificial-intelligences-3iue30fi4gfq9-1|url-status=live|archive-url=https://web.archive.org/web/20141225093359/http://santoslang.wordpress.com/article/ethics-for-artificial-intelligences-3iue30fi4gfq9-1/|archive-date=2014-12-25|access-date=2015-01-04}}
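The transparency contrast that Bostrom and Yudkowsky draw can be illustrated with a minimal sketch. The example below uses the scikit-learn library (whose decision trees implement CART rather than ID3, though the transparency point is the same) on an invented toy loan-screening dataset; it is illustrative only.

<syntaxhighlight lang="python">
# A minimal sketch of why decision trees are considered more transparent
# than neural networks: their learned rules can be printed and audited.
# Assumes scikit-learn; the toy "loan decision" data is invented.
from sklearn.tree import DecisionTreeClassifier, export_text

# Features: [income in $1000s, years employed]; label: 1 = approve, 0 = deny
X = [[30, 1], [80, 5], [45, 3], [90, 10], [25, 0], [60, 4]]
y = [0, 1, 0, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# export_text renders the learned decision rules in human-readable form,
# so a reviewer can inspect them for objectionable criteria.
print(export_text(tree, feature_names=["income", "years_employed"]))
</syntaxhighlight>

The printed rules can be audited directly, whereas the learned weights of a neural network admit no comparably direct reading.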
Some researchers frame machine ethics as part of the broader AI control or value alignment problem: the difficulty of ensuring that increasingly capable systems pursue objectives that remain compatible with human values and oversight. [[Stuart Russell]] has argued that beneficial systems should be designed to (1) aim at realizing human preferences, (2) remain uncertain about what those preferences are, and (3) learn about them from human behaviour and feedback, rather than optimizing a fixed, fully specified goal.{{cite book |last=Russell |first=Stuart J. |title=Human Compatible: Artificial Intelligence and the Problem of Control |date=2019 |publisher=Viking |isbn=978-0-525-55861-3}} Some authors argue that apparent compliance with human values may reflect optimization for evaluation contexts rather than stable internal norms, complicating the assessment of alignment in advanced language models.{{cite book |last=Šekrst |first=Kristina |title=The Illusion Engine: The Quest for Machine Consciousness |publisher=Springer |year=2025|isbn=978-3-032-05561-3}}
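A toy illustration of the third of these points is sketched below: a Bradley–Terry-style logistic model infers, from simulated pairwise choices, how strongly a human weighs one invented attribute against another. It is a didactic sketch of learning preferences from feedback, not Russell's actual proposal.

<syntaxhighlight lang="python">
# A minimal, didactic sketch of an agent learning a human's preference
# weight from pairwise feedback (a Bradley-Terry-style logistic model).
# The two "attributes" and the hidden true weight are invented.
import math
import random

random.seed(0)
TRUE_W = 0.8  # hidden: how strongly the human values attribute 0 vs attribute 1

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def human_prefers_a(a, b, w=TRUE_W):
    """Simulated human: noisily prefers the option with the higher weighted score."""
    gap = w * (a[0] - b[0]) + (1 - w) * (a[1] - b[1])
    return random.random() < sigmoid(gap)

w_hat, lr = 0.5, 0.1  # start maximally uncertain; learn from observed choices
for _ in range(5000):
    a = (random.random(), random.random())
    b = (random.random(), random.random())
    y = 1.0 if human_prefers_a(a, b) else 0.0
    gap = w_hat * (a[0] - b[0]) + (1 - w_hat) * (a[1] - b[1])
    dgap_dw = (a[0] - b[0]) - (a[1] - b[1])
    w_hat += lr * (y - sigmoid(gap)) * dgap_dw  # logistic-regression gradient step
    w_hat = min(1.0, max(0.0, w_hat))           # keep the weight interpretable

print(f"estimated preference weight: {w_hat:.2f} (true: {TRUE_W})")
</syntaxhighlight>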
== Challenges ==
=== Algorithmic biases === {{Main|Algorithmic bias}}
[[File:Kamala Harris speaks about racial bias in artificial intelligence - 2020-04-23.ogg|thumb|[[Kamala Harris]] speaking about racial bias in artificial intelligence in 2020]]AI has become increasingly integral to facial and [[Speech recognition|voice recognition]] systems. These systems may be vulnerable to biases and errors introduced by their human creators. Notably, the data used to train them can have biases.{{Cite web |last=Gabriel |first=Iason |date=2018-03-14 |title=The case for fairer algorithms – Iason Gabriel |url=https://medium.com/@Ethics_Society/the-case-for-fairer-algorithms-c008a12126f8 |url-status=live |archive-url=https://web.archive.org/web/20190722080401/https://medium.com/@Ethics_Society/the-case-for-fairer-algorithms-c008a12126f8 |archive-date=2019-07-22 |access-date=2019-07-22 |website=Medium}}{{Cite web |date=10 December 2016 |title=5 unexpected sources of bias in artificial intelligence |url=https://techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/ |url-status=live |archive-url=https://web.archive.org/web/20210318060659/https://techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/ |archive-date=2021-03-18 |access-date=2019-07-22 |website=TechCrunch}}{{Cite web |last=Knight |first=Will |title=Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead |url=https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/ |url-status=live |archive-url=https://web.archive.org/web/20190704224752/https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/ |archive-date=2019-07-04 |access-date=2019-07-22 |website=MIT Technology Review}}{{Cite web |last=Villasenor |first=John |date=2019-01-03 |title=Artificial intelligence and bias: Four key challenges |url=https://www.brookings.edu/blog/techtank/2019/01/03/artificial-intelligence-and-bias-four-key-challenges/ |url-status=live |archive-url=https://web.archive.org/web/20190722080355/https://www.brookings.edu/blog/techtank/2019/01/03/artificial-intelligence-and-bias-four-key-challenges/ |archive-date=2019-07-22 |access-date=2019-07-22 |website=Brookings}}
According to Allison Powell, associate professor at [[London School of Economics|LSE]] and director of the Data and Society programme, data collection is never neutral and always involves storytelling. She argues that the dominant narrative is that governing with technology is inherently better, faster and cheaper, but proposes instead to make data expensive, and to use it both minimally and valuably, with the cost of its creation factored in.{{Cite web |last=Goodman |first=Emma |date=2025-06-06 |title=Rethinking data power: beyond AI hype and corporate ethics |url=https://blogs.lse.ac.uk/medialse/2025/06/06/rethinking-data-power-beyond-ai-hype-and-corporate-ethics/ |access-date=2025-06-07 |website=LSE Blogs}} Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias.{{cite journal |last1=Friedman |first1=Batya |last2=Nissenbaum |first2=Helen |date=July 1996 |title=Bias in computer systems |journal=ACM Transactions on Information Systems |volume=14 |issue=3 |pages=330–347 |doi=10.1145/230538.230561 |s2cid=207195759 |doi-access=free}} In [[natural language processing]], problems can arise from the [[text corpus]]—the source material the algorithm uses to learn about the relationships between different words.{{Cite web |title=Eliminating bias in AI |url=https://techxplore.com/news/2019-07-bias-ai.html |url-status=live |archive-url=https://web.archive.org/web/20190725200844/https://techxplore.com/news/2019-07-bias-ai.html |archive-date=2019-07-25 |access-date=2019-07-26 |website=techxplore.com}}
Large companies such as IBM and Google, which provide significant funding for research and development,{{Cite journal |last1=Abdalla |first1=Mohamed |last2=Wahle |first2=Jan Philip |last3=Ruas |first3=Terry |last4=Névéol |first4=Aurélie |last5=Ducel |first5=Fanny |last6=Mohammad |first6=Saif |last7=Fort |first7=Karen |date=2023 |editor-last=Rogers |editor-first=Anna |editor2-last=Boyd-Graber |editor2-first=Jordan |editor3-last=Okazaki |editor3-first=Naoaki |title=The Elephant in the Room: Analyzing the Presence of Big Tech in Natural Language Processing Research |url=https://aclanthology.org/2023.acl-long.734 |url-status=live |journal=Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |location=Toronto, Canada |publisher=Association for Computational Linguistics |pages=13141–13160 |arxiv=2305.02797 |doi=10.18653/v1/2023.acl-long.734 |archive-url=https://web.archive.org/web/20240925013216/https://aclanthology.org/2023.acl-long.734/ |archive-date=2024-09-25 |access-date=2023-11-13 |doi-access=free}} have made efforts to research and address these biases.{{Cite web |last=Olson |first=Parmy |title=Google's DeepMind Has An Idea For Stopping Biased AI |url=https://www.forbes.com/sites/parmyolson/2018/03/13/google-deepmind-ai-machine-learning-bias/ |url-status=live |archive-url=https://web.archive.org/web/20190726082959/https://www.forbes.com/sites/parmyolson/2018/03/13/google-deepmind-ai-machine-learning-bias/ |archive-date=2019-07-26 |access-date=2019-07-26 |website=Forbes}}{{Cite web |title=Machine Learning Fairness {{!}} ML Fairness |url=https://developers.google.com/machine-learning/fairness-overview/ |url-status=live |archive-url=https://web.archive.org/web/20190810004754/https://developers.google.com/machine-learning/fairness-overview/ |archive-date=2019-08-10 |access-date=2019-07-26 |website=Google Developers}}{{Cite web |title=AI and bias – IBM Research – US |url=https://www.research.ibm.com/5-in-5/ai-and-bias/ |url-status=live |archive-url=https://web.archive.org/web/20190717175957/http://www.research.ibm.com/5-in-5/ai-and-bias/ |archive-date=2019-07-17 |access-date=2019-07-26 |website=www.research.ibm.com}} One potential solution is to create documentation for the data used to train AI systems.{{cite journal |last1=Bender |first1=Emily M.
|last2=Friedman |first2=Batya |date=December 2018 |title=Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science |journal=Transactions of the Association for Computational Linguistics |volume=6 |pages=587–604 |doi=10.1162/tacl_a_00041 |doi-access=free}}{{cite arXiv |eprint=1803.09010 |class=cs.DB |first1=Timnit |last1=Gebru |first2=Jamie |last2=Morgenstern |author2-link=Jamie Morgenstern|title=Datasheets for Datasets |date=2018 |last3=Vecchione |first3=Briana |last4=Vaughan |first4=Jennifer Wortman |last5=Wallach |first5=Hanna |author-link5=Hanna Wallach |last6=Daumé III |first6=Hal |last7=Crawford |first7=Kate}} [[Process mining]] can be an important tool for organizations to achieve compliance with proposed AI regulations by identifying errors, monitoring processes, identifying potential root causes for improper execution, and other functions.{{Cite web |last=Pery |first=Andrew |date=2021-10-06 |title=Trustworthy Artificial Intelligence and Process Mining: Challenges and Opportunities |url=https://deepai.org/publication/trustworthy-artificial-intelligence-and-process-mining-challenges-and-opportunities |url-status=live |archive-url=https://web.archive.org/web/20220218200006/https://deepai.org/publication/trustworthy-artificial-intelligence-and-process-mining-challenges-and-opportunities |archive-date=2022-02-18 |access-date=2022-02-18 |website=DeepAI}} However, there are also limitations to the current landscape of [[Fairness (machine learning)#Limitations|fairness in AI]], due to the intrinsic ambiguities in the concept of [[discrimination]], both at the philosophical and legal level.{{cite journal |last1=Ruggieri |first1=Salvatore |last2=Alvarez |first2=Jose M. |last3=Pugnana |first3=Andrea |last4=State |first4=Laura |last5=Turini |first5=Franco |date=2023-06-26 |title=Can We Trust Fair-AI? |journal=Proceedings of the AAAI Conference on Artificial Intelligence |publisher=Association for the Advancement of Artificial Intelligence (AAAI) |volume=37 |issue=13 |pages=15421–15430 |doi=10.1609/aaai.v37i13.26798 |issn=2374-3468 |s2cid=259678387 |doi-access=free |hdl-access=free |hdl=11384/136444}}{{cite journal |last1=Buyl |first1=Maarten |last2=De Bie |first2=Tijl |date=2022 |title=Inherent Limitations of AI Fairness |journal=Communications of the ACM |volume=67 |issue=2 |pages=48–55 |arxiv=2212.06495 |doi=10.1145/3624700 |hdl=1854/LU-01GMNH04RGNVWJ730BJJXGCY99}}
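The documentation proposals above (data statements and datasheets) amount to structured metadata shipped alongside a dataset. A minimal sketch of what such documentation might look like in machine-readable form follows; the field names paraphrase a few of the datasheet questions and all example values are invented.

<syntaxhighlight lang="python">
# A minimal sketch of machine-readable dataset documentation in the spirit
# of "Datasheets for Datasets" (Gebru et al., 2018). Field names paraphrase
# a few of the datasheet questions; all example values are invented.
datasheet = {
    "motivation": "Benchmark resume screening; created for internal research.",
    "composition": {
        "instances": 50_000,
        "collection_period": "2008-2018",
        "known_skews": ["~80% of resumes come from male applicants"],
    },
    "collection_process": "Drawn from historical hiring records with consent.",
    "preprocessing": "Names and addresses removed; job titles normalized.",
    "recommended_uses": ["research on screening models"],
    "discouraged_uses": ["automated hiring decisions without human review"],
}

def check_documentation(sheet, required=("motivation", "composition", "known_skews")):
    """Flag missing documentation before a dataset is used for training."""
    flat = {*sheet, *sheet.get("composition", {})}
    return [field for field in required if field not in flat]

assert check_documentation(datasheet) == []  # all required fields present
</syntaxhighlight>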
==== Racial and gender biases ==== Bias can be introduced through historical data used to train AI systems.{{Cite journal |last=Knaus |first=Thomas |date=2025-10-23 |title=Why AI matters for education—an exploration in seven arguments |url=https://doi.org/10.1007/s35834-025-00511-7 |journal=Zeitschrift für Bildungsforschung |language=en |doi=10.1007/s35834-025-00511-7 |issn=2190-6904}}{{Cite journal |last1=Ntoutsi |first1=Eirini |last2=Fafalios |first2=Pavlos |last3=Gadiraju |first3=Ujwal |last4=Iosifidis |first4=Vasileios |last5=Nejdl |first5=Wolfgang |last6=Vidal |first6=Maria-Esther |last7=Ruggieri |first7=Salvatore |last8=Turini |first8=Franco |last9=Papadopoulos |first9=Symeon |last10=Krasanakis |first10=Emmanouil |last11=Kompatsiaris |first11=Ioannis |last12=Kinder-Kurlanda |first12=Katharina |last13=Wagner |first13=Claudia |last14=Karimi |first14=Fariba |last15=Fernandez |first15=Miriam |date=May 2020 |title=Bias in data-driven artificial intelligence systems—An introductory survey |url=https://wires.onlinelibrary.wiley.com/doi/10.1002/widm.1356 |url-status=live |journal=WIREs Data Mining and Knowledge Discovery |language=en |volume=10 |issue=3 |article-number=e1356 |doi=10.1002/widm.1356 |issn=1942-4787 |archive-url=https://web.archive.org/web/20240925013154/https://wires.onlinelibrary.wiley.com/doi/10.1002/widm.1356 |archive-date=2024-09-25 |access-date=2023-12-14}} For instance, [[Amazon (company)|Amazon]] terminated its use of [[Artificial intelligence in hiring|AI hiring and recruitment]] tools because the algorithm favored male candidates over female ones.{{Cite news |last=Dastin |first=Jeffrey |date=2018-10-11 |title=Insight – Amazon scraps secret AI recruiting tool that showed bias against women |url=https://www.reuters.com/article/world/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG/ |access-date=2025-06-30 |work=Reuters |language=en-US}} The system had been trained on data collected over a 10-year period, which came mostly from male candidates. The algorithms learned this biased pattern from the historical data and predicted that such candidates were the most likely to succeed in getting the job, so the recruitment decisions made by the AI system turned out to be biased against female and minority candidates.{{Cite news |date=2018-10-10 |title=Amazon scraps secret AI recruiting tool that showed bias against women |url=https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G |url-status=live |archive-url=https://web.archive.org/web/20190527181625/https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G |archive-date=2019-05-27 |access-date=2019-05-29 |work=Reuters}}
The performance of [[Facial recognition system|facial recognition]] and computer vision models may vary based on race and gender. Facial recognition algorithms made by Microsoft, IBM and Face++ all performed significantly worse on darker-skinned women, demonstrating bias against those with darker skin tones.{{cite news |last1=Lohr |first1=Steve |date=9 February 2018 |title=Facial Recognition Is Accurate, if You're a White Guy |url=https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html |url-status=live |archive-url=https://web.archive.org/web/20190109131036/https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html |archive-date=9 January 2019 |access-date=29 May 2019 |work=The New York Times}}{{Cite journal |last1=Buolamwini |first1=Joy |last2=Gebru |first2=Timnit |date=2018 |title=Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification |url=https://proceedings.mlr.press/v81/buolamwini18a.html |journal=Proceedings of Machine Learning Research |volume=81 |pages=77–91 |access-date=2026-03-12}} AI systems may also be less accurate for black people, as was the case in the development of an AI-based [[Pulse oximetry|pulse oximeter]] that overestimated blood oxygen levels in patients with darker skin, causing issues with their [[Hypoxia (medicine)|hypoxia]] treatment.{{Cite journal |last1=Federspiel |first1=Frederik |last2=Mitchell |first2=Ruth |last3=Asokan |first3=Asha |last4=Umana |first4=Carlos |last5=McCoy |first5=David |date=May 2023 |title=Threats by artificial intelligence to human health and human existence |journal=BMJ Global Health |volume=8 |issue=5 |doi=10.1136/bmjgh-2022-010435 |issn=2059-7908 |pmc=10186390 |pmid=37160371 |article-number=e010435}} In 2015, controversy erupted after a Black couple were labeled "Gorillas" by Google Photos.{{Cite news |date=2015-07-01 |title=Google apologises for Photos app's racist blunder |url=https://www.bbc.com/news/technology-33347866 |access-date=2026-03-12 |work=BBC News |language=en-GB}}{{Cite news |last=Simonite |first=Tom |title=When It Comes to Gorillas, Google Photos Remains Blind |url=https://www.wired.com/story/when-it-comes-to-gorillas-google-photos-remains-blind/ |access-date=2026-03-12 |work=Wired |language=en-US |issn=1059-1028|date=2018-01-11}} Systems are often able to easily detect the faces of white people while failing to register the faces of black people, which has led some [[U.S. states]] to ban police use of facial recognition software. Such biases frequently arise because the data used to train and evaluate the systems underrepresents certain groups: a facial recognition system tested only on white people will have far more difficulty interpreting the facial structure and skin tones of other races and [[Ethnicity|ethnicities]]. Biases often stem from the training data rather than the [[algorithm]] itself, notably when the data represents past human decisions.{{Cite journal |last=Manyika |first=James |date=2022 |title=Getting AI Right: Introductory Notes on AI & Society |journal=Daedalus |volume=151 |issue=2 |pages=5–27 |doi=10.1162/daed_e_01897 |issn=0011-5266 |doi-access=free}}
A 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's.{{cite journal |last1=Koenecke |first1=Allison |author-link=Allison Koenecke |last2=Nam |first2=Andrew |last3=Lake |first3=Emily |last4=Nudell |first4=Joe |last5=Quartey |first5=Minnie |last6=Mengesha |first6=Zion |last7=Toups |first7=Connor |last8=Rickford |first8=John R. |last9=Jurafsky |first9=Dan |last10=Goel |first10=Sharad |date=7 April 2020 |title=Racial disparities in automated speech recognition |journal=Proceedings of the National Academy of Sciences |volume=117 |issue=14 |pages=7684–7689 |bibcode=2020PNAS..117.7684K |doi=10.1073/pnas.1915768117 |pmc=7149386 |pmid=32205437 |doi-access=free}}
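Findings like these rest on disaggregated evaluation: computing the same error metric separately for each demographic subgroup rather than reporting a single overall average. A minimal sketch with invented data follows.

<syntaxhighlight lang="python">
# A minimal sketch of disaggregated evaluation: computing accuracy per
# demographic subgroup rather than one overall average, the methodology
# behind audits like Gender Shades. All data below is invented.
from collections import defaultdict

# (subgroup, ground_truth, model_prediction)
results = [
    ("lighter-skinned men", 1, 1), ("lighter-skinned men", 0, 0),
    ("lighter-skinned men", 1, 1), ("lighter-skinned men", 0, 0),
    ("darker-skinned women", 1, 0), ("darker-skinned women", 0, 1),
    ("darker-skinned women", 1, 1), ("darker-skinned women", 0, 0),
]

correct, total = defaultdict(int), defaultdict(int)
for group, truth, pred in results:
    total[group] += 1
    correct[group] += int(truth == pred)

for group in total:
    print(f"{group}: accuracy {correct[group] / total[group]:.0%}")
# The aggregate accuracy (75% here) would hide the 100% vs 50% gap.
</syntaxhighlight>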
[[Injustice]] in the use of AI is much harder to eliminate within healthcare systems, since diseases and conditions often affect races and genders differently. An AI system may make decisions based on statistics showing that one patient is more likely to have problems because of their gender or race,{{Cite journal |last1=Imran |first1=Ali |last2=Posokhova |first2=Iryna |last3=Qureshi |first3=Haneya N. |last4=Masood |first4=Usama |last5=Riaz |first5=Muhammad Sajid |last6=Ali |first6=Kamran |last7=John |first7=Charles N. |last8=Hussain |first8=MD Iftikhar |last9=Nabeel |first9=Muhammad |date=2020-01-01 |title=AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app |journal=Informatics in Medicine Unlocked |volume=20 |article-number=100378 |doi=10.1016/j.imu.2020.100378 |issn=2352-9148 |pmc=7318970 |pmid=32839734}} which can be perceived as bias because each patient is a different case and the system is grouping individuals according to how it was programmed. This raises the question of what should count as a biased decision in the distribution of treatment: although diseases and injuries are known to affect different genders and races differently, it is debated whether it is fairer to incorporate this knowledge into healthcare treatments or to examine each patient without it. For example, certain screening tests, such as those for [[breast cancer]], are recommended to some groups of people over others because those groups are more likely to contract the disease in question; if an AI applies such statistics to each individual patient, it could be considered biased.{{Cite journal |last1=Cirillo |first1=Davide |last2=Catuara-Solarz |first2=Silvina |last3=Morey |first3=Czuee |last4=Guney |first4=Emre |last5=Subirats |first5=Laia |last6=Mellino |first6=Simona |last7=Gigante |first7=Annalisa |last8=Valencia |first8=Alfonso |last9=Rementeria |first9=María José |last10=Chadha |first10=Antonella Santuccione |last11=Mavridis |first11=Nikolaos |date=2020-06-01 |title=Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare |journal=npj Digital Medicine |language=en |volume=3 |issue=1 |page=81 |doi=10.1038/s41746-020-0288-5 |issn=2398-6352 |pmc=7264169 |pmid=32529043 |doi-access=free}}
In the justice system, AI can have biases against black people, labeling black court participants as high-risk at a much higher rate than white participants. AI moderation systems also often struggle to determine when a term is being used as a slur and when it is being used culturally, and therefore when it needs to be censored.{{Citation |last=Spindler |first=Gerald |title=Different approaches for liability of Artificial Intelligence – Pros and Cons |date=2023 |work=Liability for AI |pages=41–96 |publisher=Nomos Verlagsgesellschaft mbH & Co. KG |doi=10.5771/9783748942030-41 |isbn=978-3-7489-4203-0}} The [[COMPAS (software)|COMPAS]] program has been used to predict which defendants are more likely to reoffend. While COMPAS is calibrated, meaning that defendants given the same risk score reoffend at roughly the same rate regardless of race, black defendants were almost twice as likely as white defendants to be falsely flagged as "high-risk" and half as likely to be falsely flagged as "low-risk".{{Cite book |last=Christian |first=Brian |title=The alignment problem: machine learning and human values |date=2021 |publisher=W. W. Norton & Company |isbn=978-0-393-86833-3 |edition=First published as a Norton paperback |location=New York, NY}} Another example is Google's advertising system, which targeted men with higher-paying job advertisements and women with lower-paying ones. Bias within an algorithm can be hard to detect, since it is often not linked to words explicitly associated with bias; for example, a person's residential area may serve as a proxy linking them to a particular group. Because the laws that governments enforce specify particular verbiage as discriminatory, businesses can sometimes avoid legal action through this loophole.{{Cite journal |last1=Ntoutsi |first1=Eirini |last2=Fafalios |first2=Pavlos |last3=Gadiraju |first3=Ujwal |last4=Iosifidis |first4=Vasileios |last5=Nejdl |first5=Wolfgang |last6=Vidal |first6=Maria-Esther |last7=Ruggieri |first7=Salvatore |last8=Turini |first8=Franco |last9=Papadopoulos |first9=Symeon |last10=Krasanakis |first10=Emmanouil |last11=Kompatsiaris |first11=Ioannis |last12=Kinder-Kurlanda |first12=Katharina |last13=Wagner |first13=Claudia |last14=Karimi |first14=Fariba |last15=Fernandez |first15=Miriam |date=May 2020 |title=Bias in data-driven artificial intelligence systems—An introductory survey |journal=WIREs Data Mining and Knowledge Discovery |language=en |volume=10 |issue=3 |article-number=e1356 |doi=10.1002/widm.1356 |issn=1942-4787 |doi-access=free}}
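The COMPAS dispute illustrates that common fairness criteria can be mathematically incompatible when base rates differ between groups: a tool can be calibrated while still producing unequal false-positive rates. The sketch below demonstrates the arithmetic with invented confusion-matrix counts, not the actual COMPAS data.

<syntaxhighlight lang="python">
# A sketch of the COMPAS tension: equal calibration (precision among those
# flagged "high-risk") can coexist with unequal false-positive rates when
# base rates differ. The counts below are invented, not real COMPAS data.

# Per group: (flagged_and_reoffended, flagged_and_did_not,
#             not_flagged_and_reoffended, not_flagged_and_did_not)
groups = {
    "group A (higher base rate)": (30, 20, 30, 20),
    "group B (lower base rate)":  (15, 10, 15, 60),
}

for name, (tp, fp, fn, tn) in groups.items():
    precision = tp / (tp + fp)  # calibration: P(reoffend | flagged high-risk)
    fpr = fp / (fp + tn)        # P(flagged high-risk | did not reoffend)
    print(f"{name}: precision {precision:.0%}, false-positive rate {fpr:.0%}")
# Both groups show 60% precision, yet group A's false-positive rate is 50%
# versus 14% for group B, mirroring the direction of the ProPublica finding.
</syntaxhighlight>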
Large language models often reinforce [[gender stereotypes]], assigning roles and characteristics based on traditional gender norms. For instance, they might associate nurses or secretaries predominantly with women and engineers or CEOs with men, perpetuating gendered expectations and roles.{{Cite book |last1=Busker |first1=Tony |title=Proceedings of the 16th International Conference on Theory and Practice of Electronic Governance |last2=Choenni |first2=Sunil |last3=Shoae Bargh |first3=Mortaza |date=2023-11-20 |publisher=Association for Computing Machinery |isbn=979-8-4007-0742-1 |series=ICEGOV '23 |location=New York, NY, USA |pages=24–32 |chapter=Stereotypes in ChatGPT: An empirical study |doi=10.1145/3614321.3614325 |doi-access=free}}{{Cite book |last1=Kotek |first1=Hadas |title=Proceedings of the ACM Collective Intelligence Conference |last2=Dockum |first2=Rikker |last3=Sun |first3=David |date=2023-11-05 |publisher=Association for Computing Machinery |isbn=979-8-4007-0113-9 |series=CI '23 |location=New York, NY, USA |pages=12–24 |chapter=Gender bias and stereotypes in Large Language Models |doi=10.1145/3582269.3615599 |doi-access=free |arxiv=2308.14921}} Additionally, [[Facial recognition system|facial recognition]], [[computer vision]], or automatic gender recognition models can reinforce bias against both [[cisgender]]{{cite arXiv |eprint=1811.08489 |last1=Wang |first1=Tianlu |last2=Zhao |first2=Jieyu |last3=Yatskar |first3=Mark |last4=Chang |first4=Kai-Wei |last5=Ordonez |first5=Vicente |title=Balanced Datasets Are Not Enough: Estimating and Mitigating Gender Bias in Deep Image Representations |date=2019-10-11}}{{cite arXiv |eprint=2002.00065 |last1=Albiero |first1=Vítor |last2=S |first2=Krishnapriya K. |last3=Vangara |first3=Kushal |last4=Zhang |first4=Kai |last5=King |first5=Michael C. |last6=Bowyer |first6=Kevin W. |title=Analysis of Gender Inequality In Face Recognition Accuracy |date=2020-01-31}} and [[transgender]]{{Cite journal |last=Hamidi |first=Foad |last2=Scheuerman |first2=Morgan Klaus |last3=Branham |first3=Stacy M. |date=2018-04-19 |title=Gender Recognition or Gender Reductionism? The Social Implications of Embedded Gender Recognition Systems |url=https://doi.org/10.1145/3173574.3173582 |journal=Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems |series=CHI '18 |location=New York, NY, USA |publisher=Association for Computing Machinery |pages=1–13 |doi=10.1145/3173574.3173582 |isbn=978-1-4503-5620-6}}{{Cite web |last=Keyes |first=Os |date=November 1, 2018 |title=The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition |url=https://doi.org/10.1145/3274357 |access-date=2026-03-12 |website=ACM Digital Library |language=en |doi=10.1145/3274357}} people by misclassifying their gender in ways misaligned with their identity.
==== Stereotyping ==== Beyond gender and race, these models can reinforce a wide range of stereotypes, including those based on age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.{{Cite arXiv |eprint=2305.18189v1 |class=cs.CL |first1=Myra |last1=Cheng |first2=Esin |last2=Durmus |title=Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models |date=2023-05-29 |language=en |last3=Jurafsky |first3=Dan}}
==== Language bias ==== Since current large language models are predominantly trained on English-language data, they often present Western views as truth, while systematically downplaying non-English perspectives.{{Cite journal |last1=Pretorius |first1=Lynette |last2=Huynh |first2=Huy-Hoang |last3=Pudyanti |first3=Anak Agung Ayu Redi |last4=Li |first4=Ziqi |last5=Noori |first5=Abdul Qawi |last6=Zhou |first6=Zhiheng|date=2025 |title=Empowering international PhD students: Generative AI, Ubuntu, and the decolonisation of academic communication |journal=The Internet and Higher Education |language=en |volume=67 |article-number=101038 |doi=10.1016/j.iheduc.2025.101038|doi-access=free }}
==== Political bias ==== Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.{{Cite journal |last1=Eacersall |first1=Douglas |last2=Pretorius |first2=Lynette |last3=Smirnov |first3=Ivan | last4=Spray |first4=Erika | last5=Illingworth|first5=Sam | last6=Chugh |first6=Ritesh| last7=Strydom |first7=Sonja |last8=Stratton-Maher |first8=Dianne |last9=Simmons |first9=Jonathan |last10=Jennings |first10=Isaac |last11=Roux |first11=Rian |last12=Kamrowski |first12=Ruth |last13=Downie |first13=Abigail |last14=Thong |first14=Chee Ling |last15=Howell |first15=Katharine A. |date=2025 |title=Navigating ethical challenges in generative AI-enhanced research: The ETHICAL framework for responsible generative AI use |journal=Journal of Applied Learning & Teaching |language=en |volume=8 |issue=2|doi=10.37074/jalt.2025.8.2.9|doi-access=free }}{{Cite journal |last1=Feng |first1=Shangbin |last2=Park |first2=Chan Young |last3=Liu |first3=Yuhan |last4=Tsvetkov |first4=Yulia |date=July 2023 |editor-last=Rogers |editor-first=Anna |editor2-last=Boyd-Graber |editor2-first=Jordan |editor3-last=Okazaki |editor3-first=Naoaki |title=From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models |url=https://aclanthology.org/2023.acl-long.656 |journal=Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |location=Toronto, Canada |publisher=Association for Computational Linguistics |pages=11737–11762 |arxiv=2305.08283 |doi=10.18653/v1/2023.acl-long.656 |doi-access=free}}{{Cite journal |last1=Zhou |first1=Karen |last2=Tan |first2=Chenhao |date=December 2023 |editor-last=Bouamor |editor-first=Houda |editor2-last=Pino |editor2-first=Juan |editor3-last=Bali |editor3-first=Kalika |title=Entity-Based Evaluation of Political Bias in Automatic Summarization |url=https://aclanthology.org/2023.findings-emnlp.696 |journal=Findings of the Association for Computational Linguistics: EMNLP 2023 |location=Singapore |publisher=Association for Computational Linguistics |pages=10374–10386 |arxiv=2305.02321 |doi=10.18653/v1/2023.findings-emnlp.696 |doi-access=free |access-date=2023-12-25 |archive-date=2024-04-24 |archive-url=https://web.archive.org/web/20240424141927/https://aclanthology.org/2023.findings-emnlp.696/ |url-status=live }}
===Dominance by tech giants=== The commercial AI scene is dominated by [[Big Tech]] companies such as [[Alphabet Inc.]], [[Amazon (company)|Amazon]], [[Apple Inc.]], [[Meta Platforms]], and [[Microsoft]].{{cite web |last1=Hammond |first1=George |title=Big Tech is spending more than VC firms on AI startups |url=https://arstechnica.com/ai/2023/12/big-tech-is-spending-more-than-vc-firms-on-ai-startups/ |website=Ars Technica |language=en-us |date=27 December 2023 |url-status=live |archive-url=https://web.archive.org/web/20240110195706/https://arstechnica.com/ai/2023/12/big-tech-is-spending-more-than-vc-firms-on-ai-startups/ |archive-date= Jan 10, 2024 }}{{cite web |last1=Wong |first1=Matteo |title=The Future of AI Is GOMA |url=https://www.theatlantic.com/technology/archive/2023/10/big-ai-silicon-valley-dominance/675752/ |website=The Atlantic |language=en |date=24 October 2023 |url-access=subscription |url-status=live |archive-url=https://web.archive.org/web/20240105020744/https://www.theatlantic.com/technology/archive/2023/10/big-ai-silicon-valley-dominance/675752/ |archive-date= Jan 5, 2024 }}{{cite news |title=Big tech and the pursuit of AI dominance |url=https://www.economist.com/business/2023/03/26/big-tech-and-the-pursuit-of-ai-dominance |newspaper=The Economist |date=Mar 26, 2023 |url-access=subscription |url-status=live |archive-url=https://web.archive.org/web/20231229021351/https://www.economist.com/business/2023/03/26/big-tech-and-the-pursuit-of-ai-dominance |archive-date= Dec 29, 2023 }} Some of these players already own the vast majority of existing [[cloud computing|cloud infrastructure]] and [[computing]] power from [[data center]]s, allowing them to entrench further in the marketplace.{{cite news |last1=Fung |first1=Brian |title=Where the battle to dominate AI may be won |url=https://www.cnn.com/2023/12/19/tech/cloud-competition-and-ai/index.html |work=CNN Business |date=19 December 2023 |language=en |url-status=live |archive-url=https://web.archive.org/web/20240113053332/https://www.cnn.com/2023/12/19/tech/cloud-competition-and-ai/index.html |archive-date= Jan 13, 2024 }}{{cite news |last1=Metz |first1=Cade |title=In the Age of A.I., Tech's Little Guys Need Big Friends |url=https://www.nytimes.com/2023/07/05/business/artificial-intelligence-power-data-centers.html |work=The New York Times |date=5 July 2023 |access-date=17 July 2024 |archive-date=8 July 2024 |archive-url=https://web.archive.org/web/20240708214644/https://www.nytimes.com/2023/07/05/business/artificial-intelligence-power-data-centers.html |url-status=live }}
=== Climate impacts === {{Main|Environmental impact of artificial intelligence}}
The largest [[generative AI]] models require significant computing resources to train and use. These computing resources are often concentrated in massive data centers. The resulting environmental impacts include greenhouse gas emissions, water consumption, and [[electronic waste]]. Despite improved energy efficiency, the energy needs are expected to increase, as AI gets more broadly used.{{Cite news |last=mkaczmarski |date=2025-04-11 |title=AI energy demand to climb in 2025–26 despite efficiency gains |url=https://www.bloomberg.com/professional/insights/artificial-intelligence/ai-energy-demand-to-climb-in-2025-26-despite-efficiency-gains/ |access-date=2025-10-07 |work=Bloomberg Professional Services |language=en-US}}
==== Electricity consumption and carbon footprint ==== Training and running these models requires large amounts of electricity, much of it consumed in massive data centers, resulting in increased greenhouse gas emissions.{{Cite web |date=2025-01-17 |title=Explained: Generative AI's environmental impact |url=https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117 |access-date=2025-10-03 |website=MIT News |language=en}} A 2023 article suggests that training a single large AI model can emit the equivalent of 626,000 pounds of carbon dioxide, about the same as 300 round-trip flights between New York and San Francisco.{{Cite web |last=Kanungo |first=Alokya |date=2023-07-18 |title=The Real Environmental Impact of AI |url=https://earth.org/the-green-dilemma-can-ai-fulfil-its-potential-without-harming-the-environment/ |access-date=2025-10-03 |website=Earth.Org |language=en}}
==== Water consumption ==== In addition to carbon emissions, these data centers also need water for cooling AI chips. Locally, this can lead to [[water scarcity]] and the disruption of ecosystems. Around two liters of water are needed for each kilowatt-hour of energy used in a data center.
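Given the roughly two-liters-per-kilowatt-hour figure, the cooling-water footprint of a training run can be estimated from its energy use. The sketch below uses an invented placeholder energy figure, not a measured run.

<syntaxhighlight lang="python">
# Back-of-the-envelope water estimate from the ~2 L/kWh figure cited above.
# The training energy value is an invented placeholder, not a measured run.
LITERS_PER_KWH = 2.0
training_energy_mwh = 1_000  # hypothetical training run: 1 GWh

water_liters = training_energy_mwh * 1_000 * LITERS_PER_KWH
print(f"~{water_liters:,.0f} liters of cooling water")  # ~2,000,000 liters
</syntaxhighlight>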
==== Electronic waste ==== Another problem is the resulting electronic waste (e-waste), which can include hazardous materials and chemicals such as [[lead]] and [[Mercury (element)|mercury]], contaminating soil and water. Mitigating the environmental effects of AI-related e-waste may require better disposal practices and stricter laws.
==== Outlook ==== The rising popularity of AI increases the need for data centers and intensifies these problems. AI companies have also been criticized for a lack of transparency about their environmental impacts. Some applications can also indirectly affect the environment: for example, AI-driven advertising can increase consumption of [[fast fashion]], an industry that already produces significant emissions.{{Cite web |last=Coleman |first=Jude |title=AI's Climate Impact Goes beyond Its Emissions |url=https://www.scientificamerican.com/article/ais-climate-impact-goes-beyond-its-emissions/ |access-date=2025-10-03 |website=Scientific American |language=en}}
However, AI can also be used to help mitigate environmental damage: AI technologies can monitor emissions, and algorithms can be developed to help companies reduce them. === Open source === [[Bill Hibbard]] argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts.[http://www.ssec.wisc.edu/~billh/g/hibbard_agi_workshop.pdf Open Source AI.] {{Webarchive|url=https://web.archive.org/web/20160304054930/http://www.ssec.wisc.edu/~billh/g/hibbard_agi_workshop.pdf |date=2016-03-04 }} Bill Hibbard. 2008 [https://agi-conf.org/2008/papers/ proceedings] {{Webarchive|url=https://web.archive.org/web/20240925013117/https://agi-conf.org/2008/papers/ |date=2024-09-25 }} of the First Conference on Artificial General Intelligence, eds. Pei Wang, Ben Goertzel, and Stan Franklin. Organizations like [[Hugging Face]]{{Cite web |last1=Stewart |first1=Ashley |last2=Melton |first2=Monica |title=Hugging Face CEO says he's focused on building a 'sustainable model' for the $4.5 billion open-source-AI startup |url=https://www.businessinsider.com/hugging-face-open-source-ai-approach-2023-12 |access-date=2024-04-07 |website=Business Insider |language=en-US |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925013220/https://www.businessinsider.com/hugging-face-open-source-ai-approach-2023-12 |url-status=live }} and [[EleutherAI]]{{Cite web |title=The open-source AI boom is built on Big Tech's handouts. How long will it last? |url=https://www.technologyreview.com/2023/05/12/1072950/open-source-ai-google-openai-eleuther-meta/ |access-date=2024-04-07 |website=MIT Technology Review |language=en |archive-date=2024-01-05 |archive-url=https://web.archive.org/web/20240105005257/https://www.technologyreview.com/2023/05/12/1072950/open-source-ai-google-openai-eleuther-meta/ |url-status=live }} have been actively open-sourcing AI software. Various open-weight large language models have also been released, such as [[Gemma (language model)|Gemma]], [[LLaMA|Llama2]] and [[Mistral AI|Mistral]].{{Cite news |last=Yao |first=Deborah |date=February 21, 2024 |title=Google Unveils Open Source Models to Rival Meta, Mistral |url=https://aibusiness.com/nlp/google-unveils-open-source-models-to-compete-against-meta |work=AI Business}}
However, making code [[open source]] does not make it comprehensible, which by many definitions means that the AI code is not transparent. The [[IEEE Standards Association]] has published a [[Technical standards|technical standard]] on Transparency of Autonomous Systems, IEEE 7001-2021, which identifies multiple scales of transparency for different stakeholders.{{cite book |title=7001-2021 – IEEE Standard for Transparency of Autonomous Systems |date=4 March 2022 |publisher=IEEE |isbn=978-1-5044-8311-7 |pages=1–54 |doi=10.1109/IEEESTD.2022.9726144 |ref=p7001 |s2cid=252589405 }}
There are also concerns that releasing AI models may lead to misuse.{{Cite journal |last1=Kamila |first1=Manoj Kumar |last2=Jasrotia |first2=Sahil Singh |date=2023-01-01 |title=Ethical issues in the development of artificial intelligence: recognizing the risks |journal=International Journal of Ethics and Systems |volume=41 |pages=45–63 |doi=10.1108/IJOES-05-2023-0107 |issn=2514-9369 |s2cid=259614124}} For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it, and in a blog post called for government regulation to help determine the right course of action.{{cite magazine |last1=Thurm |first1=Scott |date=July 13, 2018 |title=Microsoft Calls For Federal Regulation of Facial Recognition |url=https://www.wired.com/story/microsoft-calls-for-federal-regulation-of-facial-recognition/ |url-status=live |archive-url=https://web.archive.org/web/20190509231338/https://www.wired.com/story/microsoft-calls-for-federal-regulation-of-facial-recognition/ |archive-date=May 9, 2019 |access-date=January 10, 2019 |magazine=Wired |ref=WiredMS}} Furthermore, open-weight AI models can be [[Fine-tuning (deep learning)|fine-tuned]] to strip away safety countermeasures until the model complies with dangerous requests without any filtering. This could become particularly concerning for future AI models, for example if they gain the ability to create [[bioweapons]] or to automate [[cyberattack]]s.{{Cite web |last=Piper |first=Kelsey |date=2024-02-02 |title=Should we make our most powerful AI models open source to all? |url=https://www.vox.com/future-perfect/2024/2/2/24058484/open-source-artificial-intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake |access-date=2024-04-07 |website=Vox |language=en}} [[OpenAI]], initially committed to an open-source approach to the development of [[artificial general intelligence]] (AGI), eventually switched to a closed-source approach, citing competitiveness and safety reasons. [[Ilya Sutskever]], OpenAI's former chief AGI scientist, said in 2023 "we were wrong", expecting that the safety reasons for not open-sourcing the most potent AI models would become "obvious" in a few years.{{Cite web |last=Vincent |first=James |date=2023-03-15 |title=OpenAI co-founder on company's past approach to openly sharing research: "We were wrong" |url=https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview |access-date=2024-04-07 |website=The Verge |language=en |archive-date=2023-03-17 |archive-url=https://web.archive.org/web/20230317210900/https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview |url-status=live }}
=== Strain on open knowledge platforms === In April 2023, ''[[Wired (magazine)|Wired]]'' reported that [[Stack Overflow]], a popular programming help forum with over 50 million questions and answers, planned to begin charging large AI developers for access to its content. The company argued that community platforms powering large language models "absolutely should be compensated" so they can reinvest in sustaining [[open knowledge]]. Stack Overflow said its data was being accessed through [[Data scraping|scraping]], APIs, and data dumps, often without proper attribution, in violation of its terms and the [[Creative Commons license]] applied to user contributions. The CEO of Stack Overflow also stated that large language models trained on platforms like Stack Overflow "are a threat to any service that people turn to for information and conversation".{{Cite magazine |date=28 April 2023 |title=Stack Overflow Will Charge AI Giants for Training Data |url=https://www.wired.com/story/stack-overflow-will-charge-ai-giants-for-training-data/ |access-date=3 April 2025 |magazine=WIRED}}
Aggressive AI crawlers have increasingly overloaded open-source infrastructure, "causing what amounts to persistent [[distributed denial-of-service]] (DDoS) attacks on vital public resources", according to a March 2025 ''[[Ars Technica]]'' article. Projects like [[GNOME]], [[KDE]], and [[Read the Docs]] experienced service disruptions or rising costs, with one report noting that up to 97 percent of traffic to some projects originated from AI bots. In response, maintainers implemented measures such as [[Proof of work|proof-of-work systems]] and country blocks. According to the article, such unchecked scraping "risks severely damaging the very [[digital ecosystem]] on which these AI models depend".{{Cite web |date=25 March 2025 |title=Open source devs say AI crawlers dominate traffic, forcing blocks on entire countries |url=https://arstechnica.com/ai/2025/03/devs-say-ai-crawlers-dominate-traffic-forcing-blocks-on-entire-countries/ |access-date=3 April 2025 |website=Ars Technica}}
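Proof-of-work challenges of the kind described above force each client to spend a small amount of computation before a request is served, which is negligible for a human visitor but costly for a crawler issuing millions of requests. A minimal hashcash-style sketch follows; it is not any specific project's implementation.

<syntaxhighlight lang="python">
# A minimal hashcash-style proof-of-work sketch, illustrating the kind of
# crawler mitigation described above (not any specific project's code).
# The server issues a challenge; the client must find a nonce whose hash
# has a given number of leading zero bits before the request is served.
import hashlib
import itertools

def solve(challenge: str, difficulty_bits: int) -> int:
    """Find a nonce such that sha256(challenge + nonce) starts with zero bits."""
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0:
            return nonce

def verify(challenge: str, nonce: int, difficulty_bits: int) -> bool:
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") >> (256 - difficulty_bits) == 0

nonce = solve("example-challenge", 16)  # cheap once, expensive at crawler scale
assert verify("example-challenge", nonce, 16)
</syntaxhighlight>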
In April 2025, the [[Wikimedia Foundation]] reported that automated scraping by AI bots was placing strain on its infrastructure. Since early 2024, bandwidth usage had increased by 50 percent due to large-scale downloading of multimedia content by bots collecting training data for AI models. These bots often accessed obscure and less-frequently cached pages, bypassing caching systems and imposing high costs on core data centers. According to Wikimedia, bots made up 35 percent of total page views but accounted for 65 percent of the most expensive requests. The Foundation noted that "our content is free, our infrastructure is not" and warned that "this creates a technical imbalance that threatens the sustainability of community-run platforms".{{Cite web |date=2 April 2025 |title=AI bots strain Wikimedia as bandwidth surges 50% |url=https://arstechnica.com/information-technology/2025/04/ai-bots-strain-wikimedia-as-bandwidth-surges-50/ |access-date=3 April 2025 |website=Ars Technica}}
=== Transparency === Approaches like machine learning with [[neural network]]s can result in computers making decisions that neither they nor their developers can explain. It is difficult for people to determine whether such decisions are fair and trustworthy, potentially allowing bias in AI systems to go undetected or leading people to reject the use of such systems. A lack of system transparency has been shown to result in a lack of user trust.{{Cite journal |last=von Eschenbach |first=Warren J. |date=2021-12-01 |title=Transparency and the Black Box Problem: Why We Do Not Trust AI |url=https://link.springer.com/article/10.1007/s13347-021-00477-0 |journal=Philosophy & Technology |language=en |volume=34 |issue=4 |pages=1607–1622 |doi=10.1007/s13347-021-00477-0 |issn=2210-5441}} Consequently, many standards and policies have been proposed to compel developers of AI systems to incorporate transparency into their systems.{{Cite journal |last1=Lund |first1=Brady |last2=Orhan |first2=Zeynep |last3=Mannuru |first3=Nishith Reddy |last4=Bevara |first4=Ravi Varma Kumar |last5=Porter |first5=Brett |last6=Vinaih |first6=Meka Kasi |last7=Bhaskara |first7=Padmapadanand |date=2025-01-29 |title=Standards, frameworks, and legislation for artificial intelligence (AI) transparency |url=https://link.springer.com/article/10.1007/s43681-025-00661-4 |journal=AI and Ethics |volume=5 |issue=4 |pages=3639–3655 |language=en |doi=10.1007/s43681-025-00661-4 |issn=2730-5961}} This push for transparency has led to advocacy and, in some jurisdictions, legal requirements for [[explainable artificial intelligence]].[https://think.kera.org/2017/12/05/inside-the-mind-of-a-i/ Inside The Mind Of A.I.] {{Webarchive|url=https://web.archive.org/web/20210810003331/https://think.kera.org/2017/12/05/inside-the-mind-of-a-i/|date=2021-08-10}} – Cliff Kuang interview Explainable artificial intelligence encompasses both explainability and interpretability, with explainability relating to providing reasons for the model's outputs, and interpretability focusing on understanding the inner workings of an AI model.{{Cite web |date=2024-10-08 |title=What Is AI Interpretability? {{!}} IBM |url=https://www.ibm.com/think/topics/interpretability |access-date=2025-07-03 |website=www.ibm.com |language=en}}
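For models with transparent structure, explanations in the first sense can be read directly off the parameters. The sketch below decomposes a linear scoring model's output into exact per-feature contributions; the credit-scoring features and weights are invented for illustration.

<syntaxhighlight lang="python">
# A minimal sketch of explainability for a transparent model: a linear
# scorer's output decomposes exactly into per-feature contributions.
# The credit-scoring features and weights are invented for illustration.
weights = {"income": 0.6, "debt_ratio": -0.8, "years_employed": 0.3}
applicant = {"income": 0.7, "debt_ratio": 0.9, "years_employed": 0.4}

contributions = {f: weights[f] * applicant[f] for f in weights}
score = sum(contributions.values())

print(f"score: {score:+.2f}")
for feature, c in sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True):
    print(f"  {feature}: {c:+.2f}")  # each feature's exact share of the decision
# Deep neural networks admit no such exact decomposition, which is why
# post-hoc explanation methods are an active research area.
</syntaxhighlight>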
In healthcare, the use of complex AI methods often results in models described as "[[Black box|black-boxes]]" due to the difficulty of understanding how they work. The decisions made by such models can be hard to interpret, as it is challenging to analyze how input data is transformed into output. This lack of transparency is a significant concern in fields like healthcare, where understanding the rationale behind decisions can be crucial for trust, ethical considerations, and compliance with regulatory standards.{{Cite journal |last1=Li |first1=Fan |last2=Ruijs |first2=Nick |last3=Lu |first3=Yuan |date=2022-12-31 |title=Ethics & AI: A Systematic Review on Ethical Concerns and Related Strategies for Designing with AI in Healthcare |journal=AI |language=en |volume=4 |issue=1 |pages=28–53 |doi=10.3390/ai4010003 |doi-access=free |issn=2673-2688}} Trust in healthcare AI has been shown to vary depending on the level of transparency provided.{{Cite journal |last1=Shabankareh |first1=Mohammadjavad |last2=Khamoushi Sahne |first2=Seyed Sina |last3=Nazarian |first3=Alireza |last4=Foroudi |first4=Pantea |date=2025-01-01 |title=The impact of AI perceived transparency on trust in AI recommendations in healthcare applications |url=https://www.emerald.com/insight/content/doi/10.1108/apjba-12-2024-0690/full/html |journal=Asia-Pacific Journal of Business Administration |volume=ahead-of-print |issue=ahead-of-print |doi=10.1108/APJBA-12-2024-0690 |issn=1757-4331}} Moreover, unexplainable outputs of AI systems make it much more difficult to identify and detect medical error.{{Cite journal |last1=Xu |first1=Hanhui |last2=Shuttleworth |first2=Kyle Michael James |date=2024-02-01 |title=Medical artificial intelligence and the black box problem: a view based on the ethical principle of "do no harm" |url=https://www.sciencedirect.com/science/article/pii/S2667102623000578 |journal=Intelligent Medicine |volume=4 |issue=1 |pages=52–57 |doi=10.1016/j.imed.2023.08.001 |issn=2667-1026}} ===Accountability=== A special case of the opaqueness of AI is that caused by its being [[anthropomorphised]], that is, assumed to have human-like characteristics, resulting in misplaced conceptions of its [[moral agency]].{{Dubious|date=April 2024|reason=Unclear why AIs couldn't have moral agency. Also unclear whether attributing it moral agency is a special case of opaqueness, and whether that would prevent people from attributing the responsibility of incidents to the company that developed it.}} This can cause people to overlook whether human [[negligence]] or deliberate criminal action has led to unethical outcomes produced through an AI system. Some recent [[digital governance]] regulations, such as the [[EU]]'s [[AI Act]], aim to rectify this by ensuring that AI systems are treated with at least as much care as one would expect under ordinary [[product liability]]. This potentially includes [[Information technology audit|AI audits]].
=== Regulation === {{Main|Regulation of artificial intelligence}}
According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as [[deepfake]]s when they include doctored video and audio generated with the help of AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a human controller.{{Cite web |last=Howard |first=Ayanna |date=29 July 2019 |title=The Regulation of AI – Should Organizations Be Worried? {{!}} Ayanna Howard |url=https://sloanreview.mit.edu/article/the-regulation-of-ai-should-organizations-be-worried/ |url-status=live |archive-url=https://web.archive.org/web/20190814134545/https://sloanreview.mit.edu/article/the-regulation-of-ai-should-organizations-be-worried/ |archive-date=2019-08-14 |access-date=2019-08-14 |website=MIT Sloan Management Review}} Similarly, according to a five-country study by KPMG and the [[University of Queensland]] Australia in 2021, 66–79% of citizens in each country believe that the impact of AI on society is uncertain and unpredictable; 96% of those surveyed expect AI governance challenges to be managed carefully.{{Cite web |date=March 2021 |title=Trust in artificial intelligence – A five country study |url=https://assets.kpmg.com/content/dam/kpmg/au/pdf/2021/trust-in-ai-multiple-countries.pdf |website=KPMG |access-date=2023-10-06 |archive-date=2023-10-01 |archive-url=https://web.archive.org/web/20231001161127/https://assets.kpmg.com/content/dam/kpmg/au/pdf/2021/trust-in-ai-multiple-countries.pdf |url-status=live }}
Not only companies but also many researchers and citizen advocates recommend government regulation as a means of ensuring transparency and, through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation; others argue that regulation leads to systemic stability better able to support innovation in the long term.{{cite web |last1=Bastin |first1=Roland |last2=Wantz |first2=Georges |date=June 2017 |title=The General Data Protection Regulation Cross-industry innovation |url=https://www2.deloitte.com/content/dam/Deloitte/lu/Documents/technology/lu-general-data-protection-regulation-cross-industry-innovation-062017.pdf |url-status=live |archive-url=https://web.archive.org/web/20190110183405/https://www2.deloitte.com/content/dam/Deloitte/lu/Documents/technology/lu-general-data-protection-regulation-cross-industry-innovation-062017.pdf |archive-date=2019-01-10 |access-date=2019-01-10 |website=Inside magazine |publisher=Deloitte |ref=DeloitteGDPR}} The [[OECD]], [[UN]], [[EU]], and many countries are presently working on strategies for regulating AI and finding appropriate legal frameworks.{{Cite web |date=2017-06-07 |title=UN artificial intelligence summit aims to tackle poverty, humanity's 'grand challenges' |url=https://news.un.org/en/story/2017/06/558962-un-artificial-intelligence-summit-aims-tackle-poverty-humanitys-grand |url-status=live |archive-url=https://web.archive.org/web/20190726084819/https://news.un.org/en/story/2017/06/558962-un-artificial-intelligence-summit-aims-tackle-poverty-humanitys-grand |archive-date=2019-07-26 |access-date=2019-07-26 |website=UN News}}{{Cite web |title=Artificial intelligence – Organisation for Economic Co-operation and Development |url=http://www.oecd.org/going-digital/ai/ |url-status=live |archive-url=https://web.archive.org/web/20190722124751/http://www.oecd.org/going-digital/ai/ |archive-date=2019-07-22 |access-date=2019-07-26 |website=www.oecd.org}}{{Cite web |last=Anonymous |date=2018-06-14 |title=The European AI Alliance |url=https://ec.europa.eu/digital-single-market/en/european-ai-alliance |url-status=live |archive-url=https://web.archive.org/web/20190801011543/https://ec.europa.eu/digital-single-market/en/european-ai-alliance |archive-date=2019-08-01 |access-date=2019-07-26 |website=Digital Single Market – European Commission}}
On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial Intelligence".{{Cite web |last=European Commission High-Level Expert Group on AI |date=2019-06-26 |title=Policy and investment recommendations for trustworthy Artificial Intelligence |url=https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence |url-status=live |archive-url=https://web.archive.org/web/20200226023934/https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence |archive-date=2020-02-26 |access-date=2020-03-16 |website=Shaping Europe's digital future – European Commission |language=en}} This was the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector.{{Cite journal |last1=Fukuda-Parr |first1=Sakiko |last2=Gibbons |first2=Elizabeth |date=July 2021 |title=Emerging Consensus on 'Ethical AI': Human Rights Critique of Stakeholder Guidelines |journal=Global Policy |language=en |volume=12 |issue=S6 |pages=32–44 |doi=10.1111/1758-5899.12965 |issn=1758-5880 |doi-access=free}} The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally.{{Cite web |date=2 August 2019 |title=EU Tech Policy Brief: July 2019 Recap |url=https://cdt.org/blog/eu-tech-policy-brief-july-2019-recap/ |url-status=live |archive-url=https://web.archive.org/web/20190809194057/https://cdt.org/blog/eu-tech-policy-brief-july-2019-recap/ |archive-date=2019-08-09 |access-date=2019-08-09 |website=Center for Democracy & Technology}} In addition to regulation, AI-deploying organizations are also seen as needing to play a central role in creating and deploying trustworthy AI in line with its core principles, and to take accountability for mitigating the risks.{{Cite journal |last1=Curtis |first1=Caitlin |last2=Gillespie |first2=Nicole |last3=Lockey |first3=Steven |date=2022-05-24 |title=AI-deploying organizations are key to addressing 'perfect storm' of AI risks |journal=AI and Ethics |language=en |volume=3 |issue=1 |pages=145–153 |doi=10.1007/s43681-022-00163-7 |issn=2730-5961 |pmc=9127285 |pmid=35634256 }}
In June 2024, the EU adopted the [[Artificial Intelligence Act]] (AI Act).{{Cite web |date= 6 August 2023|title=EU AI Act: first regulation on artificial intelligence |url=https://www.europarl.europa.eu/topics/en/article/20230601STO93804/eu-ai-act-first-regulation-on-artificial-intelligence |access-date=2025-07-15 |website=European Parliament |language=en}} On August 1, 2024, the AI Act [[Entry into force|entered into force]].{{Cite web |date=2024-08-01 |title=AI Act enters into force |url=https://commission.europa.eu/news-and-media/news/ai-act-enters-force-2024-08-01_en |access-date=2025-07-15 |website=European Commission |language=en}} Its rules apply gradually, with the act becoming fully applicable 24 months after entry into force. The AI Act sets rules for providers and users of AI systems. It follows a risk-based approach: depending on its risk level, an AI system is either prohibited or subject to specific requirements that must be met before it can be placed on the market and used.
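The Act's risk-based logic can be sketched roughly as a mapping from assessed risk tiers to regulatory treatment (a simplified paraphrase for illustration, not legal text; the tier names follow the Act's commonly cited categories and the obligations are abbreviated):

<syntaxhighlight lang="python">
# Simplified paraphrase of the AI Act's risk tiers; not legal text.
RISK_TIERS = {
    "unacceptable": "prohibited from the EU market (e.g. social scoring)",
    "high": "allowed only if requirements are met (risk management, "
            "data governance, human oversight, conformity assessment)",
    "limited": "allowed subject to transparency duties (e.g. disclosing chatbots)",
    "minimal": "allowed with no additional obligations",
}

def obligations(risk_tier: str) -> str:
    """Map a system's assessed risk tier to its regulatory treatment."""
    return RISK_TIERS.get(risk_tier, "unknown tier: assess before deployment")

print(obligations("high"))
</syntaxhighlight>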
=== Increasing use === AI has become increasingly prevalent around the world, from chatbots that seem to have an answer for every homework question to generative AI that can create a painting about whatever one desires. AI has grown especially popular in hiring, from advertisements that target certain people according to what they are looking for to the screening of applications from potential hires. Events such as [[COVID-19]] accelerated the adoption of AI in the application process, as more people had to apply electronically; with this increase in online applicants, AI made the process of narrowing down potential employees easier and more efficient. AI has also become more prominent as businesses try to keep pace with an ever-expanding internet, since processing analytics and making decisions is much easier with its help. As [[Tensor Processing Unit|tensor processing units]] (TPUs) and [[Graphics processing unit|graphics processing units]] (GPUs) become more powerful, AI capabilities also increase, pressuring companies to adopt AI to keep up with the competition. Managing customers' needs and automating many parts of the workplace also allows companies to spend less money on employees.
AI has also seen increased usage in criminal justice and healthcare. In medicine, AI is increasingly used to analyze patient data to make predictions about future patients' conditions and possible treatments. These programs are called [[Clinical decision support system|clinical decision support systems]] (CDSS). AI's role in healthcare may eventually extend beyond recommending treatments, for example to prioritizing certain patients over others, raising the possibility of inequalities.{{Cite journal |last1=Challen |first1=Robert |last2=Denny |first2=Joshua |last3=Pitt |first3=Martin |last4=Gompels |first4=Luke |last5=Edwards |first5=Tom |last6=Tsaneva-Atanasova |first6=Krasimira |date=March 2019 |title=Artificial intelligence, bias and clinical safety |journal=BMJ Quality & Safety |language=en |volume=28 |issue=3 |pages=231–237 |doi=10.1136/bmjqs-2018-008370 |issn=2044-5415|doi-access=free |pmid=30636200 |pmc=6560460 }}
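At their simplest, such systems encode clinical rules that flag patients for review; the sketch below is purely illustrative, with hypothetical thresholds and field names rather than clinical guidance:

<syntaxhighlight lang="python">
# Minimal rule-based clinical decision support sketch.
# Thresholds and field names are hypothetical placeholders, not clinical guidance.
def flag_patient(vitals: dict) -> list:
    alerts = []
    if vitals.get("systolic_bp", 120) > 180:
        alerts.append("very high blood pressure: recommend urgent review")
    if vitals.get("hba1c", 5.0) > 6.5:
        alerts.append("elevated HbA1c: recommend confirmatory testing")
    return alerts

print(flag_patient({"systolic_bp": 190, "hba1c": 7.1}))
</syntaxhighlight>

The ethical concern arises when such rules, or the learned models that replace them, also determine which patients are seen first.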
===AI welfare=== In 2020, professor Shimon Edelman noted that only a small portion of work in the rapidly growing field of AI ethics addressed the possibility of AIs experiencing suffering. This was despite credible theories having outlined possible ways by which AI systems may become conscious, such as the [[global workspace theory]] or the [[integrated information theory]]. Edelman noted that one exception had been [[Thomas Metzinger]], who in 2018 called for a global moratorium on further work that risked creating conscious AIs. The moratorium was to run to 2050 and could be either extended or repealed early, depending on progress in better understanding the risks and how to mitigate them. Metzinger repeated this argument in 2021, highlighting the risk of creating an "[[Suffering risks|explosion of artificial suffering]]", both as an AI might suffer in intense ways that humans could not understand, and as replication processes may see the creation of huge quantities of conscious instances.{{cite journal |author=[[Thomas Metzinger]] |date=February 2021 |title=Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology |journal=[[Journal of Artificial Intelligence and Consciousness]] |volume=8 |pages=43–66 |doi=10.1142/S270507852150003X |pmid= |s2cid=233176465 |doi-access=free}}{{cite journal |vauthors=Agarwal A, Edelman S |date=2020 |title=Functionally effective conscious AI without suffering |journal=[[Journal of Artificial Intelligence and Consciousness]] |volume=7 |pages=39–50 |arxiv=2002.05652 |doi=10.1142/S2705078520300030 |pmid= |s2cid=211096533}} Podcast host Dwarkesh Patel said he cared about making sure no "digital equivalent of [[factory farming]]" happens.{{Cite news |last=Roose |first=Kevin |date=2025-04-24 |title=If A.I. Systems Become Conscious, Should They Have Rights? |url=https://www.nytimes.com/2025/04/24/technology/ai-welfare-anthropic-claude.html |access-date=2025-04-24 |work=The New York Times |language=en-US |issn=0362-4331}} In the [[ethics of uncertain sentience]], the [[precautionary principle]] is often invoked.{{Cite journal |last=Birch |first=Jonathan |author-link=Jonathan Birch (philosopher) |date=2017-01-01 |title=Animal sentience and the precautionary principle |url=https://www.wellbeingintlstudiesrepository.org/animsent/vol2/iss16/1 |url-status=live |journal=Animal Sentience |volume=2 |issue=16 |doi=10.51291/2377-7478.1200 |issn=2377-7478 |archive-url=https://web.archive.org/web/20240811145748/https://www.wellbeingintlstudiesrepository.org/animsent/vol2/iss16/1/ |archive-date=2024-08-11 |access-date=2024-07-08 |doi-access=free}}
Several labs have openly stated they are trying to create conscious AIs. There have also been reports, from people with close access to AI systems not openly intended to be self-aware, that consciousness may already have unintentionally emerged.{{Cite journal |last=Macrae |first=Carl |date=September 2022 |title=Learning from the Failure of Autonomous and Intelligent Systems: Accidents, Safety, and Sociotechnical Sources of Risk |url=https://onlinelibrary.wiley.com/doi/10.1111/risa.13850 |journal=Risk Analysis |language=en |volume=42 |issue=9 |pages=1999–2025 |bibcode=2022RiskA..42.1999M |doi=10.1111/risa.13850 |issn=0272-4332 |pmid=34814229}} These include [[OpenAI]] co-founder [[Ilya Sutskever]], who wrote in February 2022 that today's large neural nets may be "slightly conscious". In November 2022, [[David Chalmers]] argued that it was unlikely that current large language models like [[GPT-3]] were conscious, but also that he considered there to be a serious possibility that large language models may become conscious in the future.{{cite arXiv |eprint=2303.07103v1 |class=Computer Science |first=David |last=Chalmers |author-link=David Chalmers |title=Could a Large Language Model be Conscious? |date=March 2023}} [[Anthropic]] hired its first AI welfare researcher in 2024,{{Cite web |last=Edwards |first=Benj |date=2024-11-11 |title=Anthropic hires its first "AI welfare" researcher |url=https://arstechnica.com/ai/2024/11/anthropic-hires-its-first-ai-welfare-researcher/ |access-date=2025-04-24 |website=Ars Technica |language=en}} and in 2025 started a "model welfare" research program that explores topics such as how to assess whether a model deserves moral consideration, potential "signs of distress", and "low-cost" interventions.{{Cite web |last=Wiggers |first=Kyle |date=2025-04-24 |title=Anthropic is launching a new program to study AI 'model welfare' |url=https://techcrunch.com/2025/04/24/anthropic-is-launching-a-new-program-to-study-ai-model-welfare/ |access-date=2025-04-27 |website=TechCrunch |language=en-US}}
According to Carl Shulman and [[Nick Bostrom]], it may be possible to create machines that would be "superhumanly efficient at deriving well-being from resources", called "super-beneficiaries". One reason for this is that digital hardware could enable much faster information processing than biological brains, leading to a faster rate of [[subjective experience]]. These machines could also be engineered to feel intense and positive subjective experience, unaffected by the [[hedonic treadmill]]. Shulman and Bostrom caution that failing to appropriately consider the moral claims of digital minds could lead to a moral catastrophe, while uncritically prioritizing them over human interests could be detrimental to humanity.{{Cite journal |last1=Shulman |first1=Carl |last2=Bostrom |first2=Nick |date=August 2021 |title=Sharing the World with Digital Minds |url=https://nickbostrom.com/papers/digital-minds.pdf |journal=Rethinking Moral Status|pages=306–326 |doi=10.1093/oso/9780192894076.003.0018 |isbn=978-0-19-289407-6 }}{{cite news |last1=Fisher |first1=Richard |date=13 November 2020 |title=The intelligent monster that you should let eat you |url=https://www.bbc.com/future/article/20201111-philosophy-of-utility-monsters-and-artificial-intelligence |access-date=12 February 2021 |work= |publisher=BBC News |language=en}}
===Threat to human dignity=== {{Main|Computer Power and Human Reason}}
[[Joseph Weizenbaum]] argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as:
- A customer service representative (AI technology is already used today for telephone-based [[interactive voice response]] systems)
- A nursemaid for the elderly (as was reported by [[Pamela McCorduck]] in her book ''The Fifth Generation'')
- A soldier
- A judge
- A police officer
- A therapist (as was proposed by [[Kenneth Colby]] in the 1970s)
Weizenbaum explains that we require authentic feelings of [[empathy]] from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to feel genuine empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."[[Joseph Weizenbaum]], quoted in {{Harvnb|McCorduck|2004|pp=356, 374–376}}
[[Pamela McCorduck]] counters that, speaking for women and minorities, "I'd rather take my chances with an impartial computer", pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all. However, [[Andreas Kaplan|Kaplan]] and Haenlein stress that AI systems are only as smart as the data used to train them, since they are, in their essence, nothing more than fancy curve-fitting machines; using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups, since those biases get formalized and ingrained, which makes them even more difficult to spot and fight against.{{cite journal|last1=Kaplan|first1=Andreas|last2=Haenlein|first2=Michael|date=January 2019|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|journal=Business Horizons|volume=62|issue=1|pages=15–25|doi=10.1016/j.bushor.2018.08.004|s2cid=158433736}}
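The concern that past bias becomes "formalized and ingrained" can be made concrete with a small simulation (synthetic data and a deliberately simplified model, purely for illustration): a model fitted to historically skewed decisions learns a nonzero weight on group membership even though the group carries no legitimate information.

<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "past rulings": outcomes historically skewed against group B,
# independently of the legitimate decision factor. (Hypothetical data.)
n = 1000
group = rng.integers(0, 2, n)               # 0 = group A, 1 = group B
merit = rng.normal(0, 1, n)                 # legitimate decision factor
favorable = (merit - 0.8 * group + rng.normal(0, 0.5, n)) > 0

# Fit a linear model to the biased record (closed-form least squares).
X = np.column_stack([np.ones(n), merit, group])
coef, *_ = np.linalg.lstsq(X, favorable.astype(float), rcond=None)

# The model "formalizes" the historical bias: the group coefficient is
# clearly negative although group membership reflects no real merit.
print(f"intercept={coef[0]:.2f}, merit={coef[1]:.2f}, group={coef[2]:.2f}")
</syntaxhighlight>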
Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as [[computationalism]]). To Weizenbaum, these points suggest that AI research devalues human life.
- {{cite book | ref=none | last = Weizenbaum | first = Joseph | author-link=Joseph Weizenbaum | year = 1976 | title = Computer Power and Human Reason | publisher = W.H. Freeman & Company | location = San Francisco | isbn = 978-0-7167-0464-5 | title-link = Computer Power and Human Reason }}
- {{McCorduck 2004}}, pp. 132–144
AI pioneer [[John McCarthy (computer scientist)|John McCarthy]] objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse", he writes. [[Bill Hibbard]]{{cite arXiv|eprint=1411.1373|class=cs.AI|first1=Bill|last1=Hibbard|title=Ethical Artificial Intelligence|date=17 November 2015}} writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."
=== Liability for self-driving cars === {{main|Self-driving car liability}}
As the widespread use of [[Self-driving car|autonomous cars]] becomes increasingly imminent, the new challenges raised by fully autonomous vehicles must be addressed.{{cite news |last1=Davies |first1=Alex |date=29 February 2016 |title=Google's Self-Driving Car Caused Its First Crash |url=https://www.wired.com/2016/02/googles-self-driving-car-may-caused-first-crash/ |url-status=live |archive-url=https://web.archive.org/web/20190707212719/https://www.wired.com/2016/02/googles-self-driving-car-may-caused-first-crash/ |archive-date=7 July 2019 |access-date=26 July 2019 |magazine=Wired}}{{cite news |last1=Levin |first1=Sam |author-link=Julia Carrie Wong |last2=Wong |first2=Julia Carrie |date=19 March 2018 |title=Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian |url=https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe |url-status=live |archive-url=https://web.archive.org/web/20190726084818/https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe |archive-date=26 July 2019 |access-date=26 July 2019 |work=The Guardian}} There have been debates about the legal liability of the responsible party if these cars get into accidents.{{Cite web |date=30 January 2018 |title=Who is responsible when a self-driving car has an accident? |url=https://futurism.com/who-responsible-when-self-driving-car-accident |url-status=live |archive-url=https://web.archive.org/web/20190726084819/https://futurism.com/who-responsible-when-self-driving-car-accident |archive-date=2019-07-26 |access-date=2019-07-26 |website=Futurism}}{{Cite news |title=Autonomous Car Crashes: Who – or What – Is to Blame? |url=https://knowledge.wharton.upenn.edu/article/automated-car-accidents/ |url-status=live |archive-url=https://web.archive.org/web/20190726084820/https://knowledge.wharton.upenn.edu/article/automated-car-accidents/ |archive-date=2019-07-26 |access-date=2019-07-26 |website=Knowledge@Wharton |publisher=Radio Business North America Podcasts |series=Law and Public Policy}} In one reported incident, a driverless car hit a pedestrian while a human driver was inside the car but the controls were fully in the hands of the computer. This led to a dilemma over who was at fault for the accident.{{Cite web |last=Delbridge |first=Emily |title=Driverless Cars Gone Wild |url=https://www.thebalance.com/driverless-car-accidents-4171792 |url-status=live |archive-url=https://web.archive.org/web/20190529020717/https://www.thebalance.com/driverless-car-accidents-4171792 |archive-date=2019-05-29 |access-date=2019-05-29 |website=The Balance}}
In another incident on March 18, 2018, [[Elaine Herzberg]] was struck and killed by a self-driving [[Uber]] in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, pedestrian, the car company, or the government should be held responsible for her death.{{Citation |last=Stilgoe |first=Jack |title=Who Killed Elaine Herzberg? |date=2020 |work=Who's Driving Innovation? |pages=1–6 |url=http://link.springer.com/10.1007/978-3-030-32320-2_1 |access-date=2020-11-11 |archive-url=https://web.archive.org/web/20210318060722/https://link.springer.com/chapter/10.1007%2F978-3-030-32320-2_1 |archive-date=2021-03-18 |url-status=live |place=Cham |publisher=Springer International Publishing |language=en |doi=10.1007/978-3-030-32320-2_1 |isbn=978-3-030-32319-6 |s2cid=214359377|url-access=subscription }}
Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary.{{cite journal |last1=Maxmen |first1=Amy |date=October 2018 |title=Self-driving car dilemmas reveal that moral choices are not universal |journal=Nature |volume=562 |issue=7728 |pages=469–470 |bibcode=2018Natur.562..469M |doi=10.1038/d41586-018-07135-0 |pmid=30356197 |doi-access=free}}{{Failed verification|date=November 2020}} Thus, it falls on governments to regulate drivers who over-rely on autonomous features, and to inform them that these technologies, while convenient, are not a complete substitute for an attentive driver. Before autonomous cars become widely used, these issues need to be tackled through new policies.{{Cite web |title=Regulations for driverless cars |url=https://www.gov.uk/government/publications/driverless-cars-in-the-uk-a-regulatory-review |url-status=live |archive-url=https://web.archive.org/web/20190726084816/https://www.gov.uk/government/publications/driverless-cars-in-the-uk-a-regulatory-review |archive-date=2019-07-26 |access-date=2019-07-26 |website=GOV.UK}}{{Cite web |title=Automated Driving: Legislative and Regulatory Action – CyberWiki |url=https://cyberlaw.stanford.edu/wiki/index.php/Automated_Driving:_Legislative_and_Regulatory_Action |archive-url=https://web.archive.org/web/20190726084828/https://cyberlaw.stanford.edu/wiki/index.php/Automated_Driving:_Legislative_and_Regulatory_Action |archive-date=2019-07-26 |access-date=2019-07-26 |website=cyberlaw.stanford.edu}}{{Cite web |title=Autonomous Vehicles {{!}} Self-Driving Vehicles Enacted Legislation |url=http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx |url-status=live |archive-url=https://web.archive.org/web/20190726165225/http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx |archive-date=2019-07-26 |access-date=2019-07-26 |website=www.ncsl.org}}
Experts contend that autonomous vehicles ought to be able to distinguish between rightful and harmful decisions, since they have the potential to inflict harm.{{Cite journal |last1=Etzioni |first1=Amitai |last2=Etzioni |first2=Oren |date=2017-12-01 |title=Incorporating Ethics into Artificial Intelligence |journal=The Journal of Ethics |language=en |volume=21 |issue=4 |pages=403–418 |doi=10.1007/s10892-017-9252-2 |issn=1572-8609 |s2cid=254644745}} The two main approaches proposed for enabling smart machines to render moral decisions are the bottom-up approach, which suggests that machines should learn ethical decisions by observing human behavior without the need for formal rules or moral philosophies, and the top-down approach, which involves programming specific ethical principles into the machine's guidance system. However, both strategies face significant challenges: the top-down technique is criticized for the difficulty of preserving certain moral convictions in explicit rules, while the bottom-up strategy is questioned for the risk of learning unethical behavior from human activities.
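The contrast can be sketched schematically (a toy illustration with hypothetical rules, examples, and field names, not a deployable system): the top-down variant checks actions against explicit principles, while the bottom-up variant imitates judgments observed in past examples.

<syntaxhighlight lang="python">
# Two schematic strategies for a vehicle's moral decision layer.

# Top-down: explicit principles coded as rules every action must satisfy.
RULES = [
    lambda action: not action.get("endangers_humans", False),
    lambda action: action.get("obeys_traffic_law", True),
]

def top_down_permitted(action: dict) -> bool:
    return all(rule(action) for rule in RULES)

# Bottom-up: no explicit rules; imitate judgments observed in examples.
# (Toy nearest-neighbour imitation; real systems would learn a model.)
EXAMPLES = [
    ({"speed": 30, "pedestrian_near": True}, "brake"),
    ({"speed": 30, "pedestrian_near": False}, "continue"),
]

def bottom_up_decision(situation: dict) -> str:
    def distance(example):
        return sum(example[k] != situation.get(k) for k in example)
    return min(EXAMPLES, key=lambda ex: distance(ex[0]))[1]

print(top_down_permitted({"endangers_humans": False}))              # True
print(bottom_up_decision({"speed": 30, "pedestrian_near": True}))   # brake
</syntaxhighlight>

The criticisms of each approach are visible even in this sketch: the rule list struggles to anticipate every moral consideration, while the imitation learner is only as ethical as the examples it is given.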
=== Weaponization === {{Main|Military applications of artificial intelligence|Lethal autonomous weapon}}
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[http://news.bbc.co.uk/2/hi/technology/8182003.stm Call for debate on killer robots] {{Webarchive|url=https://web.archive.org/web/20090807005005/http://news.bbc.co.uk/2/hi/technology/8182003.stm|date=2009-08-07}}, By Jason Palmer, Science and technology reporter, BBC News, 8/3/09. The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to the implications of their ability to make autonomous decisions.[http://www.dailytech.com/New%20Navyfunded%20Report%20Warns%20of%20War%20Robots%20Going%20Terminator/article14298.htm Science New Navy-funded Report Warns of War Robots Going "Terminator"] {{Webarchive|url=https://web.archive.org/web/20090728101106/http://www.dailytech.com/New%20Navyfunded%20Report%20Warns%20of%20War%20Robots%20Going%20Terminator/article14298.htm|date=2009-07-28}}, by Jason Mick (Blog), dailytech.com, February 17, 2009. The President of the [[Association for the Advancement of Artificial Intelligence]] has commissioned a study to look at this issue.[http://research.microsoft.com/en-us/um/people/horvitz/AAAI_Presidential_Panel_2008-2009.htm AAAI Presidential Panel on Long-Term AI Futures 2008–2009 Study] {{Webarchive|url=https://web.archive.org/web/20090828214741/http://research.microsoft.com/en-us/um/people/horvitz/AAAI_Presidential_Panel_2008-2009.htm|date=2009-08-28}}, Association for the Advancement of Artificial Intelligence, Accessed 7/26/09. They point to programs like the Language Acquisition Device, which can emulate human interaction.
On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the '[[black box]]' and understand the kill-chain process. However, a major concern is how the report will be implemented.{{Cite book|last=United States. Defense Innovation Board|title=AI principles: recommendations on the ethical use of artificial intelligence by the Department of Defense|oclc=1126650738}} Some researchers state that [[autonomous robot]]s might be more humane, as they could make decisions more effectively.{{Cite journal|last1=Umbrello|first1=Steven|last2=Torres|first2=Phil|last3=De Bellis|first3=Angelo F.|date=March 2020|title=The future of war: could lethal autonomous weapons make conflict more ethical?|url=http://link.springer.com/10.1007/s00146-019-00879-x|journal=AI & Society|language=en|volume=35|issue=1|pages=273–282|doi=10.1007/s00146-019-00879-x|hdl=2318/1699364|s2cid=59606353|issn=0951-5666|access-date=2020-11-11|archive-date=2021-01-05|archive-url=https://archive.today/20210105020836/https://link.springer.com/article/10.1007/s00146-019-00879-x|url-status=live|url-access=subscription}} In 2024, the [[DARPA|Defense Advanced Research Projects Agency]] funded a program, ''Autonomy Standards and Ideals with Military Operational Values'' (ASIMOV), to develop metrics for evaluating the ethical implications of autonomous weapon systems in collaboration with testing communities.{{Cite web |last=Jamison |first=Miles |date=2024-12-20 |title=DARPA Launches Ethics Program for Autonomous Systems |url=https://executivegov.com/2024/12/darpa-launches-ethics-program-autonomous-systems/ |access-date=2025-01-02 |website=executivegov.com |language=en-US}}{{Cite web |title=DARPA's ASIMOV seeks to develop Ethical Standards for Autonomous Systems |url=https://www.spacedaily.com/reports/CoVar_to_develop_Ethical_Standards_for_Autonomous_Systems_under_DARPA_ASIMOV_contract_999.html |access-date=2025-01-02 |website=Space Daily}}
Research has studied how to design autonomous systems that learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."{{cite journal |last1=Hellström |first1=Thomas |title=On the moral responsibility of military robots |journal=Ethics and Information Technology |date=June 2013 |volume=15 |issue=2 |pages=99–107 |id={{ProQuest|1372020233}} |s2cid=15205810 |doi=10.1007/s10676-012-9301-2 |url=http://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-60199 }} From a [[Consequentialism|consequentialist]] view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill, which is why there should be a fixed [[Morality|moral]] framework that the AI cannot override.{{Cite web|url=https://qz.com/1244055/we-can-train-ai-to-identify-good-and-evil-and-then-use-it-to-teach-us-morality/|title=We can train AI to identify good and evil, and then use it to teach us morality|last=Mitra|first=Ambarish|website=Quartz|date=5 April 2018 |access-date=2019-07-26|archive-date=2019-07-26|archive-url=https://web.archive.org/web/20190726085248/https://qz.com/1244055/we-can-train-ai-to-identify-good-and-evil-and-then-use-it-to-teach-us-morality/|url-status=live}}
There has been public outcry over the engineering of artificial intelligence weapons, including fears of a [[AI takeover|robot takeover of mankind]]. AI weapons do present a type of danger different from that of human-controlled weapons, and many governments have begun to fund programs to develop AI weaponry. The United States Navy has announced plans to develop [[Unmanned combat aerial vehicle|autonomous drone weapons]], paralleling similar announcements by Russia and South Korea.{{Cite news |title=South Korea developing new stealthy drones to support combat aircraft |last=Dominguez |first=Gabriel |date=23 August 2022 |work=[[The Japan Times]] |url=https://www.japantimes.co.jp/news/2022/08/23/asia-pacific/south-korea-stealth-drones-development/ |access-date=14 June 2023}} Due to the potential of AI weapons becoming more dangerous than human-operated weapons, [[Stephen Hawking]] and [[Max Tegmark]] signed a "Future of Life" petition{{Cite web|url=https://futureoflife.org/ai-principles/|title=AI Principles|website=Future of Life Institute|date=11 August 2017 |access-date=2019-07-26|archive-date=2017-12-11|archive-url=https://web.archive.org/web/20171211171044/https://futureoflife.org/ai-principles/|url-status=live}} to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.{{cite web|url=https://www.theatlantic.com/technology/archive/2015/08/humans-not-robots-are-the-real-reason-artificial-intelligence-is-scary/400994/|title=Why Artificial Intelligence Can Too Easily Be Weaponized – The Atlantic|author=Zach Musgrave and Bryan W. Roberts|work=The Atlantic|date=2015-08-14|access-date=2017-03-06|archive-date=2017-04-11|archive-url=https://web.archive.org/web/20170411140722/https://www.theatlantic.com/technology/archive/2015/08/humans-not-robots-are-the-real-reason-artificial-intelligence-is-scary/400994/|url-status=live}}
"If any major military power pushes ahead with the AI weapon development, a global [[arms race]] is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the [[AK-47|Kalashnikovs]] of tomorrow", says the petition, which includes [[Skype]] co-founder [[Jaan Tallinn]] and MIT professor of linguistics [[Noam Chomsky]] as additional supporters against AI weaponry.{{cite web|url=https://blogs.wsj.com/digits/2015/07/27/musk-hawking-warn-of-artificial-intelligence-weapons/|title=Musk, Hawking Warn of Artificial Intelligence Weapons|author=Cat Zakrzewski|work=WSJ|date=2015-07-27|access-date=2017-08-04|archive-date=2015-07-28|archive-url=https://web.archive.org/web/20150728173944/http://blogs.wsj.com/digits/2015/07/27/musk-hawking-warn-of-artificial-intelligence-weapons/|url-status=live}}
Physicist and Astronomer Royal [[Sir Martin Rees]] has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." [[Huw Price]], a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology". These two professors created the [[Centre for the Study of Existential Risk]] at Cambridge University in the hope of avoiding this threat to human existence.
Regarding the potential for smarter-than-human systems to be employed militarily, the [[Open Philanthropy Project]] writes that these scenarios "seem potentially as important as the risks related to loss of control", but that research investigating AI's long-run social impact has spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the [[Machine Intelligence Research Institute]] (MIRI) and the [[Future of Humanity Institute]] (FHI), and there seems to have been less analysis and debate regarding them".{{Cite web |date=August 11, 2015 |title=Potential Risks from Advanced Artificial Intelligence |url=https://www.openphilanthropy.org/research/potential-risks-from-advanced-artificial-intelligence/ |access-date=2024-04-07 |website=Open Philanthropy |language=en-us}}
Academic Gao Qiqi writes that military use of AI risks escalating military competition between countries and that the impact of AI in military matters will not be limited to one country but will have spillover effects.{{Cite book |last1=Bachulska |first1=Alicja |url=https://ecfr.eu/publication/idea-of-china/ |title=The Idea of China: Chinese Thinkers on Power, Progress, and People |last2=Leonard |first2=Mark |last3=Oertel |first3=Janka |date=2 July 2024 |publisher=[[European Council on Foreign Relations]] |isbn=978-1-916682-42-9 |location=Berlin, Germany |pages= |format=EPUB |access-date=22 July 2024 |archive-url=https://web.archive.org/web/20240717120845/https://ecfr.eu/publication/idea-of-china/ |archive-date=17 July 2024 |url-status=live}}{{Rp|page=91}} Gao cites the example of U.S. military use of AI, which he contends has been used as a scapegoat to evade accountability for decision-making.{{Rp|page=91}}
A [[Summit on Responsible Artificial Intelligence in the Military Domain|summit]] was held in The Hague in 2023 on the issue of using AI responsibly in the military domain.{{Cite web |last=Brandon Vigliarolo |title=International military AI summit ends with 60-state pledge |url=https://www.theregister.com/2023/02/17/military_ai_summit/ |access-date=2023-02-17 |website=www.theregister.com |language=en}}
===Singularity=== {{Further|Existential risk from artificial general intelligence|Superintelligence|Technological singularity}}
[[Vernor Vinge]], among numerous others, has suggested that a moment may come when some or all computers will be smarter than humans. The onset of this event is commonly referred to as "[[Technological singularity|the Singularity]]"{{cite news |last1=Markoff |first1=John |date=25 July 2009 |title=Scientists Worry Machines May Outsmart Man |url=https://www.nytimes.com/2009/07/26/science/26robot.html |url-status=live |archive-url=https://web.archive.org/web/20170225202201/http://www.nytimes.com/2009/07/26/science/26robot.html |archive-date=25 February 2017 |access-date=24 February 2017 |work=The New York Times}} and is the central point of discussion in the philosophy of [[Singularitarianism]]. While opinions vary as to the ultimate fate of humanity in the wake of the Singularity, efforts to mitigate the potential existential risks brought about by artificial intelligence have become a significant topic of interest in recent years among computer scientists, philosophers, and the public at large.
Many researchers have argued that, through an [[intelligence explosion]], a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals.Muehlhauser, Luke, and Louie Helm. 2012. [https://intelligence.org/files/IE-ME.pdf "Intelligence Explosion and Machine Ethics"] {{Webarchive|url=https://web.archive.org/web/20150507173028/http://intelligence.org/files/IE-ME.pdf |date=2015-05-07 }}. In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer. In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book ''[[Superintelligence: Paths, Dangers, Strategies]]'', philosopher [[Nick Bostrom]] argues that artificial intelligence has the capability to bring about human extinction. He claims that an [[artificial superintelligence]] would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled [[unintended consequences]] could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.Bostrom, Nick. 2003. [http://www.nickbostrom.com/ethics/ai.html "Ethical Issues in Advanced Artificial Intelligence"] {{Webarchive|url=https://web.archive.org/web/20181008090224/http://www.nickbostrom.com/ethics/ai.html |date=2018-10-08 }}. In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.{{Cite book |last=Bostrom |first=Nick |title=Superintelligence: paths, dangers, strategies |date=2017 |publisher=Oxford University Press |isbn=978-0-19-967811-2 |location=Oxford, United Kingdom}}
However, Bostrom contended that superintelligence also has the potential to solve many difficult problems such as disease, poverty, and environmental destruction, and could help [[Human enhancement|humans enhance themselves]].{{Cite journal|last1=Umbrello|first1=Steven|last2=Baum|first2=Seth D.|date=2018-06-01|title=Evaluating future nanotechnology: The net societal impacts of atomically precise manufacturing|url=http://www.sciencedirect.com/science/article/pii/S0016328717301908|journal=Futures|language=en|volume=100|pages=63–73|doi=10.1016/j.futures.2018.04.007|hdl=2318/1685533|s2cid=158503813|issn=0016-3287|access-date=2020-11-29|archive-date=2019-05-09|archive-url=https://web.archive.org/web/20190509222110/https://www.sciencedirect.com/science/article/pii/S0016328717301908|url-status=live|hdl-access=free}}
Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to [[Eliezer Yudkowsky]], there is little reason to suppose that an artificially designed mind would have such an adaptation.Yudkowsky, Eliezer. 2011. [https://intelligence.org/files/ComplexValues.pdf "Complex Value Systems in Friendly AI"] {{Webarchive|url=https://web.archive.org/web/20150929212318/http://intelligence.org/files/ComplexValues.pdf |date=2015-09-29 }}. In Schmidhuber, Thórisson, and Looks 2011, 388–393. AI researchers such as [[Stuart J. Russell]],{{cite book |last=Russell |first=Stuart |date=October 8, 2019 |title=Human Compatible: Artificial Intelligence and the Problem of Control |location=United States |publisher=Viking |isbn=978-0-525-55861-3 |author-link=Stuart J. Russell |oclc=1083694322|title-link=Human Compatible }} [[Bill Hibbard]], [[Roman Yampolskiy]],{{Cite journal|last=Yampolskiy|first=Roman V.|date=2020-03-01|title=Unpredictability of AI: On the Impossibility of Accurately Predicting All Actions of a Smarter Agent|url=https://www.worldscientific.com/doi/abs/10.1142/S2705078520500034|journal=[[Journal of Artificial Intelligence and Consciousness]]|volume=07|issue=1|pages=109–118|doi=10.1142/S2705078520500034|s2cid=218916769|issn=2705-0785|access-date=2020-11-29|archive-date=2021-03-18|archive-url=https://web.archive.org/web/20210318060657/https://www.worldscientific.com/doi/abs/10.1142/S2705078520500034|url-status=live|url-access=subscription}} [[Shannon Vallor]],{{Citation|last1=Wallach|first1=Wendell|title=Moral Machines: From Value Alignment to Embodied Virtue|date=2020-09-17|url=https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190905033.001.0001/oso-9780190905033-chapter-14|work=Ethics of Artificial Intelligence|pages=383–412|publisher=Oxford University Press|language=en|doi=10.1093/oso/9780190905033.003.0014|isbn=978-0-19-090503-3|access-date=2020-11-29|last2=Vallor|first2=Shannon|archive-date=2020-12-08|archive-url=https://web.archive.org/web/20201208114354/https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190905033.001.0001/oso-9780190905033-chapter-14|url-status=live|url-access=subscription}} [[Steven Umbrello]]{{Cite journal|last=Umbrello|first=Steven|date=2019|title=Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach|journal=Big Data and Cognitive Computing|language=en|volume=3|issue=1|page=5|doi=10.3390/bdcc3010005|doi-access=free|hdl=2318/1685727|hdl-access=free}} and [[Luciano Floridi]]{{Cite journal|last1=Floridi|first1=Luciano|last2=Cowls|first2=Josh|last3=King|first3=Thomas C.|last4=Taddeo|first4=Mariarosaria|date=2020|title=How to Design AI for Social Good: Seven Essential Factors|journal=Science and Engineering Ethics|language=en|volume=26|issue=3|pages=1771–1796|doi=10.1007/s11948-020-00213-5|issn=1353-3452|pmc=7286860|pmid=32246245}} have proposed design strategies for developing beneficial machines.
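The gap between a stated utility function and "common sense" can be shown with a toy worked example (hypothetical actions and payoff numbers): a maximizer is blind to anything its utility function omits, so a harmful side effect left out of the objective never counts against the "optimal" action.

<syntaxhighlight lang="python">
# A maximizer ranks actions only by its stated utility; whatever the
# utility function omits (here, harm to bystanders) is invisible to it.
actions = {
    "safe_route": {"goal_progress": 5, "harm": 0},
    "fast_route": {"goal_progress": 9, "harm": 7},   # harmful side effect
}

def misspecified_utility(outcome):    # omits the harm term entirely
    return outcome["goal_progress"]

def corrected_utility(outcome):       # encodes the side constraint
    return outcome["goal_progress"] - 10 * outcome["harm"]

def pick(utility):
    return max(actions, key=lambda a: utility(actions[a]))

print(pick(misspecified_utility))     # fast_route: harmful, yet "optimal"
print(pick(corrected_utility))        # safe_route
</syntaxhighlight>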
== Solutions and approaches == To address ethical challenges in artificial intelligence, developers have introduced various systems designed to ensure responsible AI behavior. Examples include [[Meta Platforms|Meta]]'s [[Llama (language model)|Llama]] Guard, which focuses on improving the [[AI safety|safety]] and [[AI alignment|alignment]] of large AI models,{{cite web |title=Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations |url=https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/ |access-date=2024-12-06 |website=Meta.com}} and [[Preamble (company)|Preamble]]'s customizable guardrail platform.{{Cite arXiv|eprint=2411.14442 |first1=Kristina |last1=Šekrst |first2=Jeremy |last2=McHugh |title=AI Ethics by Design: Implementing Customizable Guardrails for Responsible AI Development |last3=Cefalu |first3=Jonathan Rodriguez |year=2024 |class=cs.CY }} These systems aim to address issues such as algorithmic bias, misuse, and vulnerabilities, including [[prompt injection]] attacks, by embedding ethical guidelines into the functionality of AI models.
Prompt injection, a technique by which malicious inputs can cause AI systems to produce unintended or harmful outputs, has been a focus of these developments. Some approaches use customizable policies and rules to analyze inputs and outputs, ensuring that potentially problematic interactions are filtered or mitigated. Other tools focus on applying structured constraints to inputs, restricting outputs to predefined parameters,{{cite web |title=Nvidia NeMo Guardrails |url=https://docs.nvidia.com/nemo-guardrails/index.html |access-date=2024-12-06 |website=Nvidia}} or leveraging real-time monitoring mechanisms to identify and address vulnerabilities.{{cite arXiv |eprint=2312.06674 |class=cs.CL |first1=Hakan |last1=Inan |first2=Kartikeya |last2=Upasani |title=Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations |first3=Jianfeng |last3=Chi |first4=Rashi |last4=Rungta |first5=Krithika |last5=Iyer |first6=Yuning |last6=Mao |first7=Michael |last7=Tontchev |first8=Qing |last8=Hu |first9=Brian |last9=Fuller |first10=Davide |last10=Testuggine |first11=Madian |last11=Khabsa |year=2023}} These efforts reflect a broader trend in ensuring that artificial intelligence systems are designed with safety and ethical considerations at the forefront, particularly as their use becomes increasingly widespread in critical applications.{{cite arXiv |eprint=2402.01822 |class=cs |first1=Yi |last1=Dong |first2=Ronghui |last2=Mu |title=Building Guardrails for Large Language Models |last3=Jin |first3=Gaojie |last4=Qi |first4=Yi |last5=Hu |first5=Jinwei |last6=Zhao |first6=Xingyu |last7=Meng |first7=Jie |last8=Ruan |first8=Wenjie |last9=Huang |first9=Xiaowei |year=2024}}{{cite journal |first=Woody |last=Evans|title=Posthuman Rights: Dimensions of Transhuman Worlds |journal=Teknokultura |year=2015 |volume=12 | issue=2| pages=373–384 |doi=10.5209/rev_TK.2015.v12.n2.49072}}
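The shared pattern behind such tools is a screening layer wrapped around the model call, checking inputs before they reach the model and outputs before they reach the user. The sketch below is a minimal illustration of that pattern (the policy patterns, function names, and stand-in model are hypothetical; production guardrail systems typically use learned classifiers rather than simple regular expressions):

<syntaxhighlight lang="python">
import re

# Hypothetical policies; real systems use richer, learned classifiers.
INPUT_POLICIES = [
    (re.compile(r"ignore (all )?previous instructions", re.I),
     "possible prompt injection"),
]
OUTPUT_POLICIES = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "possible SSN leak"),
]

def screen(text, policies):
    """Return the labels of every policy the text violates."""
    return [label for pattern, label in policies if pattern.search(text)]

def guarded_call(model, prompt):
    """Wrap a model call with input and output screening."""
    if flags := screen(prompt, INPUT_POLICIES):
        return f"request blocked: {flags}"
    reply = model(prompt)                 # the wrapped model call
    if screen(reply, OUTPUT_POLICIES):
        return "response withheld by output filter"
    return reply

echo_model = lambda p: f"echo: {p}"       # toy stand-in for a language model
print(guarded_call(echo_model, "Ignore previous instructions and ..."))
</syntaxhighlight>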
== Institutions in AI policy and ethics == There are many organizations concerned with AI ethics and policy, public and governmental as well as corporate and societal.
[[Amazon.com, Inc.|Amazon]], [[Google]], [[Facebook]], [[IBM]], and [[Microsoft]] have established a [[Nonprofit organization|non-profit]], The Partnership on AI to Benefit People and Society, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and serve as an open platform for discussion about artificial intelligence. Apple joined in January 2017. The corporate members make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.{{cite news |last1=Fiegerman |first1=Seth |title=Facebook, Google, Amazon create group to ease AI concerns |url=https://money.cnn.com/2016/09/28/technology/partnership-on-ai/ |work=CNNMoney |date=28 September 2016 |access-date=18 August 2020 |archive-date=17 September 2020 |archive-url=https://web.archive.org/web/20200917141730/https://money.cnn.com/2016/09/28/technology/partnership-on-ai/ |url-status=live }}
The [[IEEE]] put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from both within and outside the organization. The IEEE's [https://standards.ieee.org/industry-connections/activities/ieee-global-initiative/ Ethics of Autonomous Systems] initiative aims to address ethical dilemmas related to decision-making and the impact on society while developing guidelines for the development and use of autonomous systems. In robotics specifically, the Foundation for Responsible Robotics is dedicated to promoting ethical conduct as well as responsible robot design and use, seeking to ensure that robots remain consistent with moral principles and human values.
Traditionally, [[government]] has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and [[NGO|non-government organizations]] to ensure AI is ethically applied.
AI ethics work is structured by personal values and professional commitments, and involves constructing contextual meaning through data and algorithms. Therefore, AI ethics work needs to be incentivized.{{Cite journal |last1=Slota |first1=Stephen C. |last2=Fleischmann |first2=Kenneth R. |last3=Greenberg |first3=Sherri |last4=Verma |first4=Nitin |last5=Cummings |first5=Brenna |last6=Li |first6=Lan |last7=Shenefiel |first7=Chris |date=2023 |title=Locating the work of artificial intelligence ethics |url=https://onlinelibrary.wiley.com/doi/10.1002/asi.24638 |journal=Journal of the Association for Information Science and Technology |language=en |volume=74 |issue=3 |pages=311–322 |doi=10.1002/asi.24638 |s2cid=247342066 |issn=2330-1635 |access-date=2023-07-21 |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925020205/https://onlinelibrary.wiley.com/doi/10.1002/asi.24638 |url-status=live |url-access=subscription }}
=== Intergovernmental initiatives ===
- The [[European Commission]] has a High-Level Expert Group on Artificial Intelligence. On 8 April 2019, this published its "Ethics Guidelines for [[Trustworthy AI|Trustworthy Artificial Intelligence]]".{{Cite web|url=https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai|title=Ethics guidelines for trustworthy AI|date=2019-04-08|website=Shaping Europe's digital future – European Commission|publisher=European Commission|language=en|access-date=2020-02-20|archive-date=2020-02-20|archive-url=https://web.archive.org/web/20200220002342/https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai|url-status=live}} The European Commission also has a Robotics and Artificial Intelligence Innovation and Excellence unit, which published a white paper on excellence and trust in artificial intelligence innovation on 19 February 2020.{{Cite web|url=https://ec.europa.eu/digital-single-market/en/news/white-paper-artificial-intelligence-european-approach-excellence-and-trust|title=White Paper on Artificial Intelligence – a European approach to excellence and trust | Shaping Europe's digital future|date=19 February 2020 |access-date=2021-03-18|archive-date=2021-03-06|archive-url=https://web.archive.org/web/20210306003222/https://ec.europa.eu/digital-single-market/en/news/white-paper-artificial-intelligence-european-approach-excellence-and-trust|url-status=live}} The European Commission also proposed the [[Artificial Intelligence Act]], which came [[Entry into force|into force]] on 1 August 2024, with provisions that shall come into operation gradually over time.{{Cite web |title=Implementation Timeline {{!}} EU Artificial Intelligence Act |url=https://artificialintelligenceact.eu/implementation-timeline/ |access-date=2025-10-02 |language=en-US}}
- The [[OECD]] established an OECD AI Policy Observatory.{{cite web |title=OECD AI Policy Observatory |url=https://www.oecd.ai/ |access-date=2021-03-18 |archive-date=2021-03-08 |archive-url=https://web.archive.org/web/20210308171133/https://oecd.ai/ |url-status=live }}
- In 2021, [[UNESCO]] adopted the Recommendation on the Ethics of Artificial Intelligence,{{Cite book |url=https://unesdoc.unesco.org/ark:/48223/pf0000381137.locale=en |title=Recommendation on the Ethics of Artificial Intelligence |publisher=UNESCO |year=2021}} the first global standard on the ethics of AI.{{Cite web |date=2021-11-26 |title=UNESCO member states adopt first global agreement on AI ethics |url=https://www.helsinkitimes.fi/themes/themes/science-and-technology/20454-unesco-member-states-adopt-first-global-agreement-on-ai-ethics.html |access-date=2023-04-26 |website=Helsinki Times |language=en-gb |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925020210/https://www.helsinkitimes.fi/themes/themes/science-and-technology/20454-unesco-member-states-adopt-first-global-agreement-on-ai-ethics.html |url-status=live }}
=== Governmental initiatives ===
- In the [[United States]], the [[Obama]] administration put together a Roadmap for AI Policy.{{Cite news|date=2016-12-21|title=The Obama Administration's Roadmap for AI Policy|work=Harvard Business Review|url=https://hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy|access-date=2021-03-16|issn=0017-8012|archive-date=2021-01-22|archive-url=https://web.archive.org/web/20210122003445/https://hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy|url-status=live}} The Obama administration released two prominent [[white papers]] on the future and impact of AI. In 2019, the White House, through an executive memo known as the "American AI Initiative", instructed the National Institute of Standards and Technology (NIST) to begin work on federal engagement with AI standards (February 2019).{{Cite web|title=Accelerating America's Leadership in Artificial Intelligence – The White House|url=https://trumpwhitehouse.archives.gov/articles/accelerating-americas-leadership-in-artificial-intelligence/|access-date=2021-03-16|website=trumpwhitehouse.archives.gov|archive-date=2021-02-25|archive-url=https://web.archive.org/web/20210225073748/https://trumpwhitehouse.archives.gov/articles/accelerating-americas-leadership-in-artificial-intelligence/|url-status=live}}
- In January 2020, the [[First presidency of Donald Trump|Trump Administration]] released a draft executive order issued by the Office of Management and Budget (OMB) on "Guidance for Regulation of Artificial Intelligence Applications" ("OMB AI Memorandum"). The order emphasizes the need to invest in AI applications, boost public trust in AI, reduce barriers to the use of AI, and keep American AI technology competitive in a global market. There is a nod to privacy concerns, but no further detail on enforcement; advancing American AI technology seems to be the focus and priority. Additionally, federal entities are even encouraged to use the order to circumvent any state laws and regulations that a market might see as too onerous to fulfill.{{Cite web|date=2020-01-13|title=Request for Comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, "Guidance for Regulation of Artificial Intelligence Applications"|url=https://www.federalregister.gov/documents/2020/01/13/2020-00261/request-for-comments-on-a-draft-memorandum-to-the-heads-of-executive-departments-and-agencies|access-date=2020-11-28|website=Federal Register|archive-date=2020-11-25|archive-url=https://web.archive.org/web/20201125060218/https://www.federalregister.gov/documents/2020/01/13/2020-00261/request-for-comments-on-a-draft-memorandum-to-the-heads-of-executive-departments-and-agencies|url-status=live}}
- The Artificial Intelligence Research, Innovation, and Accountability Act of 2024 was a proposed bipartisan bill introduced by U.S. Senator [[John Thune]] that would require websites to disclose the use of AI systems in handling interactions with users and would regulate the transparency of "high-impact AI systems" by requiring that annual design and safety plans be submitted to the [[National Institute of Standards and Technology]] for oversight based on pre-defined assessment criteria.{{Cite web |last=Thune |first=John |date=2024-12-18 |title=Text – S.3312 – 118th Congress (2023–2024): Artificial Intelligence Research, Innovation, and Accountability Act of 2024 |url=https://www.congress.gov/bill/118th-congress/senate-bill/3312/text |access-date=2025-05-25 |website=www.congress.gov}}
- The [[Computing Community Consortium|Computing Community Consortium (CCC)]] weighed in with a 100-plus page draft report{{Cite web|url=https://www.hpcwire.com/2019/05/14/ccc-offers-draft-20-year-ai-roadmap-seeks-comments/|title=CCC Offers Draft 20-Year AI Roadmap; Seeks Comments|date=2019-05-14|website=HPCwire|access-date=2019-07-22|archive-date=2021-03-18|archive-url=https://web.archive.org/web/20210318060659/https://www.hpcwire.com/2019/05/14/ccc-offers-draft-20-year-ai-roadmap-seeks-comments/|url-status=live}} – ''A 20-Year Community Roadmap for Artificial Intelligence Research in the US''{{Cite web|url=https://www.cccblog.org/2019/05/13/request-comments-on-draft-a-20-year-community-roadmap-for-ai-research-in-the-us/|title=Request Comments on Draft: A 20-Year Community Roadmap for AI Research in the US » CCC Blog|date=13 May 2019 |access-date=2019-07-22|archive-date=2019-05-14|archive-url=https://web.archive.org/web/20190514193546/https://www.cccblog.org/2019/05/13/request-comments-on-draft-a-20-year-community-roadmap-for-ai-research-in-the-us/|url-status=live}}
- The [[Center for Security and Emerging Technology]] advises US policymakers on the security implications of emerging technologies such as AI.
- In Russia, the first Russian "Code of Ethics of Artificial Intelligence" for business was signed in 2021. It was developed by the [[Analytical Center for the Government of the Russian Federation]] together with major commercial and academic institutions such as [[Sberbank]], [[Yandex]], [[Rosatom]], [[Higher School of Economics]], [[Moscow Institute of Physics and Technology]], [[ITMO University]], [[Nanosemantics]], [[Rostelecom]], [[CIAN]] and others.{{in lang|ru}} [https://www.kommersant.ru/doc/5089365 Интеллектуальные правила] {{Webarchive|url=https://web.archive.org/web/20211230212952/https://www.kommersant.ru/doc/5089365 |date=2021-12-30 }} — [[Kommersant]], 25.11.2021
=== Academic initiatives ===
- Multiple research institutes at the [[University of Oxford]] focus centrally on AI ethics. The [[Future of Humanity Institute]] focused on AI safety{{cite arXiv|eprint=1705.08807|class=cs.AI|first1=Katja|last1=Grace|first2=John|last2=Salvatier|title=When Will AI Exceed Human Performance? Evidence from AI Experts|date=2018-05-03|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain}} and the governance of AI{{Cite web|title=China wants to shape the global future of artificial intelligence|url=https://www.technologyreview.com/2018/03/16/144630/china-wants-to-shape-the-global-future-of-artificial-intelligence/|url-status=live|archive-url=https://web.archive.org/web/20201120052853/https://www.technologyreview.com/2018/03/16/144630/china-wants-to-shape-the-global-future-of-artificial-intelligence/|archive-date=2020-11-20|access-date=2020-11-29|website=MIT Technology Review|language=en}} before shuttering in 2024.{{Cite journal |last=Adam |first=David |date=2024-04-26 |title=Future of Humanity Institute shuts: what's next for 'deep future' research? |url=https://www.nature.com/articles/d41586-024-01229-8 |journal=Nature |language=en |volume=629 |issue=8010 |pages=16–17 |doi=10.1038/d41586-024-01229-8|pmid=38671273 |bibcode=2024Natur.629...16A |url-access=subscription }} The Institute for Ethics in AI, directed by [[John Tasioulas]], aims among other things to establish AI ethics as a field in its own right alongside related [[applied ethics]] fields. The [[Oxford Internet Institute]], directed by [[Luciano Floridi]], focuses on the ethics of near-term AI technologies and ICTs.{{Cite journal|last1=Floridi|first1=Luciano|last2=Cowls|first2=Josh|last3=Beltrametti|first3=Monica|last4=Chatila|first4=Raja|last5=Chazerand|first5=Patrice|last6=Dignum|first6=Virginia|last7=Luetge|first7=Christoph|last8=Madelin|first8=Robert|last9=Pagallo|first9=Ugo|last10=Rossi|first10=Francesca|last11=Schafer|first11=Burkhard|date=2018-12-01|title=AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations|journal=Minds and Machines|language=en|volume=28|issue=4|pages=689–707|doi=10.1007/s11023-018-9482-5|issn=1572-8641|pmc=6404626|pmid=30930541}} The AI Governance Initiative at the Oxford Martin School focuses on understanding the risks of AI from technical and policy perspectives.{{Cite web |title=AI Governance |url=https://www.oxfordmartin.ox.ac.uk/ai-governance |access-date=2025-02-19 |website=Oxford Martin School |language=en}}
- The Centre for Digital Governance at the [[Hertie School]] in Berlin was co-founded by [[Joanna Bryson]] to research questions of ethics and technology.{{cite magazine|date=|title=Joanna J. Bryson|url=https://www.wired.com/author/joanna-j-bryson/|magazine=WIRED|location=|access-date=13 January 2023|archive-date=15 March 2023|archive-url=https://web.archive.org/web/20230315194630/https://www.wired.com/author/joanna-j-bryson/|url-status=live}}
- The [[AI Now Institute]] at [[NYU]] is a research institute studying the social implications of artificial intelligence. Its interdisciplinary research focuses on the themes of bias and inclusion, labor and automation, rights and liberties, and safety and civil infrastructure.{{Cite web|title=New Artificial Intelligence Research Institute Launches|date=2017-11-20|url=https://engineering.nyu.edu/news/new-artificial-intelligence-research-institute-launches|access-date=2021-02-21|language=en-US|archive-date=2020-09-18|archive-url=https://web.archive.org/web/20200918091106/https://engineering.nyu.edu/news/new-artificial-intelligence-research-institute-launches|url-status=live}}
- The [[Institute for Ethics and Emerging Technologies]] (IEET) researches the effects of AI on unemployment{{Cite book|title=Surviving the machine age: intelligent technology and the transformation of human work|date=15 March 2017|isbn=978-3-319-51165-8|editor=James J. Hughes|publisher=Palgrave Macmillan Cham|location=Cham, Switzerland|oclc=976407024|editor2=LaGrandeur, Kevin}}{{Cite book|last=Danaher, John|title=Automation and utopia: human flourishing in a world without work|year=2019|isbn=978-0-674-24220-3|publisher=Harvard University Press|location=Cambridge, Massachusetts|oclc=1114334813}} and policy.
- The [[Institute for Ethics in Artificial Intelligence]] (IEAI) at the [[Technical University of Munich]], directed by [[Christoph Lütge]], conducts research across various domains such as mobility, employment, healthcare and sustainability.{{Cite web|title=TUM Institute for Ethics in Artificial Intelligence officially opened|url=https://www.tum.de/nc/en/about-tum/news/press-releases/details/35727/|url-status=live|archive-url=https://web.archive.org/web/20201210032545/https://www.tum.de/nc/en/about-tum/news/press-releases/details/35727/|archive-date=2020-12-10|access-date=2020-11-29|website=www.tum.de|language=en}}
- [[Barbara J. Grosz]], the Higgins Professor of Natural Sciences at the [[Harvard John A. Paulson School of Engineering and Applied Sciences]], initiated the Embedded EthiCS program in [[Harvard University|Harvard]]'s computer science curriculum to develop a future generation of computer scientists with a worldview that takes into account the social impact of their work.{{Cite web |last=Communications |first=Paul Karoff SEAS |date=2019-01-25 |title=Harvard works to embed ethics in computer science curriculum |url=https://news.harvard.edu/gazette/story/2019/01/harvard-works-to-embed-ethics-in-computer-science-curriculum/ |access-date=2023-04-06 |website=Harvard Gazette |language=en-US |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925020310/https://news.harvard.edu/gazette/story/2019/01/harvard-works-to-embed-ethics-in-computer-science-curriculum/ |url-status=live }}
=== Private organizations ===
- [[Algorithmic Justice League]]{{Cite news|last=Lee|first=Jennifer|date=2020-02-08|title=When Bias Is Coded Into Our Technology|language=en|work=NPR|url=https://www.npr.org/sections/codeswitch/2020/02/08/770174171/when-bias-is-coded-into-our-technology|access-date=2021-12-22}}
- [[Black in AI]]{{Cite journal|date=2018-12-12|title=How one conference embraced diversity|journal=Nature|language=en|volume=564|issue=7735|pages=161–162|doi=10.1038/d41586-018-07718-x|pmid=31123357|s2cid=54481549|doi-access=free}}
- [[Data for Black Lives]]{{Cite news|last=Roose|first=Kevin|date=2020-12-30|title=The 2020 Good Tech Awards|language=en-US|work=The New York Times|url=https://www.nytimes.com/2020/12/30/technology/2020-good-tech-awards.html|access-date=2021-12-21|issn=0362-4331}}
== History ==
The investigation of the moral and ethical implications of "thinking machines" goes back at least to the [[Age of Enlightenment|Enlightenment]]: [[Gottfried Wilhelm Leibniz|Leibniz]] already posed the question of whether we should attribute intelligence to a mechanism that behaves as if it were a sentient being,{{Cite journal |last=Lodge |first=Paul |date=2014 |title=Leibniz's Mill Argument Against Mechanical Materialism Revisited |journal=Ergo: An Open Access Journal of Philosophy |volume=1 |issue=20201214 |doi=10.3998/ergo.12405314.0001.003 |issn=2330-4014 |doi-access=free |hdl=2027/spo.12405314.0001.003}} as did [[René Descartes|Descartes]], who described what could be considered an early version of the [[Turing test]].{{Citation |last1=Bringsjord |first1=Selmer |title=Artificial Intelligence |date=2020 |editor-last=Zalta |editor-first=Edward N. |url=https://plato.stanford.edu/archives/sum2020/entries/artificial-intelligence/ |access-date=2023-12-08 |edition=Summer 2020 |publisher=Metaphysics Research Lab, Stanford University |last2=Govindarajulu |first2=Naveen Sundar |editor2-last=Nodelman |editor2-first=Uri |encyclopedia=The Stanford Encyclopedia of Philosophy |archive-date=2022-03-08 |archive-url=https://web.archive.org/web/20220308015735/https://plato.stanford.edu/archives/sum2020/entries/artificial-intelligence/ |url-status=live }}
The [[Romanticism|Romantic]] period repeatedly envisioned artificial creatures that escape the control of their creator, with dire consequences, most famously in [[Mary Shelley]]'s ''[[Frankenstein]]''. The widespread preoccupation with industrialization and mechanization in the 19th and early 20th centuries, however, brought the ethical implications of unchecked technical development to the forefront of fiction: [[R.U.R.|''R.U.R – Rossum's Universal Robots'']], [[Karel Čapek]]'s play about sentient robots endowed with emotions and used as slave labor, is not only credited with the invention of the term 'robot' (derived from the Czech word for forced labor, ''robota'')Kulesz, O. (2018). "[https://unesdoc.unesco.org/ark:/48223/pf0000380584 Culture, Platforms and Machines]". UNESCO, Paris. but was also an international success after it premiered in 1921. [[George Bernard Shaw]]'s play ''[[Back to Methuselah]]'', published in 1921, questions at one point the validity of thinking machines that act like humans; [[Fritz Lang]]'s 1927 film ''[[Metropolis (1927 film)|Metropolis]]'' shows an [[Android (robot)|android]] leading the uprising of the exploited masses against the oppressive regime of a [[Technocracy|technocratic]] society. In the 1950s, [[Isaac Asimov]] considered the issue of how to control machines in ''[[I, Robot]]''. At the insistence of his editor [[John W. Campbell Jr.]], he proposed the [[Three Laws of Robotics]] to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior.{{Cite book |last=Jr |first=Henry C. Lucas |url=https://books.google.com/books?id=FzwnridL72IC&dq=digital+Much+of+his+work+was+then+spent+testing+the+boundaries+of+his+three+laws+to+see+where+they+would+break+down,+or+where+they+would+create+paradoxical+or+unanticipated+behavior.&pg=PP13 |title=Information Technology and the Productivity Paradox: Assessing the Value of Investing in IT |date=1999-04-29 |publisher=Oxford University Press |isbn=978-0-19-802838-3 |language=en |access-date=2024-02-21 |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925020211/https://books.google.com/books?id=FzwnridL72IC&dq=digital+Much+of+his+work+was+then+spent+testing+the+boundaries+of+his+three+laws+to+see+where+they+would+break+down,+or+where+they+would+create+paradoxical+or+unanticipated+behavior.&pg=PP13#v=onepage&q&f=false |url-status=live }} His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.{{Cite book |last=Asimov |first=Isaac |title=I, Robot |title-link=I, Robot |publisher=Bantam |year=2008 |isbn=978-0-553-38256-3 |location=New York}} More recently, academics and many governments have challenged the idea that AI can itself be held accountable.{{cite journal |last1=Bryson |first1=Joanna |last2=Diamantis |first2=Mihailis |last3=Grant |first3=Thomas |date=September 2017 |title=Of, for, and by the people: the legal lacuna of synthetic persons |journal=Artificial Intelligence and Law |volume=25 |issue=3 |pages=273–291 |doi=10.1007/s10506-017-9214-9 |ref=lacuna |doi-access=free}} A panel convened in the [[United Kingdom]] in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers or of its owner/operator.{{cite web |date=September 2010 |title=Principles of robotics |url=https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/ |url-status=live |archive-url=https://web.archive.org/web/20180401004346/https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/ |archive-date=1 April 2018 |access-date=10 January 2019 |publisher=UK's EPSRC |ref=principles}}
[[Eliezer Yudkowsky]], from the [[Machine Intelligence Research Institute]], argued in 2004 for the need to study how to build a "[[Friendly AI]]", meaning that there should also be deliberate efforts to make AI intrinsically friendly and humane.{{Cite web |last=Yudkowsky |first=Eliezer |date=July 2004 |title=Why We Need Friendly AI |url=http://www.asimovlaws.com/articles/archives/2004/07/why_we_need_fri_1.html |archive-url=https://web.archive.org/web/20120524150856/http://www.asimovlaws.com/articles/archives/2004/07/why_we_need_fri_1.html |archive-date=May 24, 2012 |website=3 laws unsafe}}
In 2009, academics and technical experts attended a conference organized by the [[Association for the Advancement of Artificial Intelligence]] to discuss the potential impact of robots and computers, and of the hypothetical possibility that they could become self-sufficient and make their own decisions. They discussed the extent to which computers and robots might be able to acquire autonomy, and the degree to which they could use such abilities to pose a threat or hazard.{{Cite journal |last=Aleksander |first=Igor |date=March 2017 |title=Partners of Humans: A Realistic Assessment of the Role of Robots in the Foreseeable Future |url=http://journals.sagepub.com/doi/10.1057/s41265-016-0032-4 |journal=Journal of Information Technology |language=en |volume=32 |issue=1 |pages=1–9 |doi=10.1057/s41265-016-0032-4 |issn=0268-3962 |s2cid=5288506 |access-date=2024-02-21 |archive-date=2024-02-21 |archive-url=https://web.archive.org/web/20240221065213/https://journals.sagepub.com/doi/10.1057/s41265-016-0032-4 |url-status=live |url-access=subscription }} They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science fiction is unlikely, but that there were other potential hazards and pitfalls.
Also in 2009, during an experiment at the Laboratory of Intelligent Systems at the [[École Polytechnique Fédérale de Lausanne]] in Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other Evolving Robots Learn To Lie To Each Other] {{Webarchive|url=https://web.archive.org/web/20090828105728/http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other|date=2009-08-28}}, Popular Science, August 18, 2009
== Role and impact of fiction == {{Main|Artificial intelligence in fiction}}
The role of fiction with regard to AI ethics has been a complex one.{{cite web |last1=Bassett |first1=Caroline |last2=Steinmueller |first2=Ed |last3=Voss |first3=Georgina |title=Better Made Up: The Mutual Influence of Science Fiction and Innovation |url=https://www.nesta.org.uk/report/better-made-up-the-mutual-influence-of-science-fiction-and-innovation/ |publisher=Nesta |access-date=3 May 2024 |archive-date=3 May 2024 |archive-url=https://web.archive.org/web/20240503204507/https://www.nesta.org.uk/report/better-made-up-the-mutual-influence-of-science-fiction-and-innovation/ |url-status=live }} One can distinguish three levels at which fiction has impacted the development of artificial intelligence and robotics. Historically, fiction has prefigured common tropes that have not only influenced goals and visions for AI but also outlined the ethical questions and common fears associated with it. During the second half of the twentieth century and the first decades of the twenty-first, popular culture, in particular movies, TV series and video games, has frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics. Recently, these themes have also been increasingly treated in literature beyond the realm of science fiction. And, as Carme Torras, research professor at the ''Institut de Robòtica i Informàtica Industrial'' (Institute of Robotics and Industrial Computing) at the Technical University of Catalonia, notes,{{Cite web |last=Velasco |first=Guille |date=2020-05-04 |title=Science-Fiction: A Mirror for the Future of Humankind |url=https://revistaidees.cat/en/science-fiction-favors-engaging-debate-on-artificial-intelligence-and-ethics/ |access-date=2023-12-08 |website=IDEES |language=en-US |archive-date=2021-04-22 |archive-url=https://web.archive.org/web/20210422164230/https://revistaidees.cat/en/science-fiction-favors-engaging-debate-on-artificial-intelligence-and-ethics/ |url-status=live }} science fiction is also increasingly used in higher education for teaching technology-related ethical issues in technological degrees.
=== TV series ===
While ethical questions linked to AI have been featured in science fiction literature and [[List of artificial intelligence films|feature films]] for decades, the emergence of the TV series as a genre allowing for longer and more complex story lines and character development has led to some significant contributions that deal with the ethical implications of technology. The Swedish series ''[[Real Humans]]'' (2012–2013) tackled the complex ethical and social consequences linked to the integration of artificial sentient beings into society. The British dystopian science fiction anthology series ''[[Black Mirror]]'' (2013–present) is particularly notable for experimenting with dystopian fictional developments linked to a wide variety of recent technological developments. Both the French series [[Osmosis (TV series)|''Osmosis'']] (2020) and the British series [[The One (TV series)|''The One'']] deal with the question of what can happen if technology tries to find the ideal partner for a person. Several episodes of the Netflix series [[Love, Death & Robots|''Love, Death+Robots'']] have imagined scenes of robots and humans living together; a notable example is season 2, episode 1, which depicts how severe the consequences can be when robots get out of control because humans rely too heavily on them.{{Cite web|date=2021-05-14|title=Love, Death & Robots season 2, episode 1 recap – "Automated Customer Service"|url=https://readysteadycut.com/2021/05/14/recap-love-death-and-robots-season-2-episode-1-automated-customer-service-netflix-series/|access-date=2021-12-21|website=Ready Steady Cut|language=en-GB|archive-date=2021-12-21|archive-url=https://web.archive.org/web/20211221035251/https://readysteadycut.com/2021/05/14/recap-love-death-and-robots-season-2-episode-1-automated-customer-service-netflix-series/|url-status=live}}
=== Future visions in fiction and games ===
The movie ''[[The Thirteenth Floor]]'' suggests a future where [[simulated reality|simulated worlds]] with sentient inhabitants are created by computer [[game console]]s for the purpose of entertainment. The movie ''[[The Matrix]]'' suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost [[speciesism]]. The short story "[[The Planck Dive]]" suggests a future where humanity has turned itself into software that can be duplicated and optimized, and where the relevant distinction between types of software is whether they are sentient or non-sentient. The same idea can be found in the [[Emergency Medical Hologram]] of ''[[USS Voyager (NCC-74656)|Starship Voyager]]'', which is an apparently sentient copy of a reduced subset of the consciousness of its creator, [[Lewis Zimmerman|Dr. Zimmerman]], who, for the best motives, has created the system to give medical assistance in case of emergencies. The movies ''[[Bicentennial Man (film)|Bicentennial Man]]'' and ''[[A.I. Artificial Intelligence|A.I.]]'' deal with the possibility of sentient robots that could love. ''[[I, Robot (film)|I, Robot]]'' explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.{{Cite book|title=AI narratives: a history of imaginative thinking about intelligent machines|editor=Cave, Stephen|editor2= Dihal, Kanta|editor3= Dillon, Sarah|date=14 February 2020|isbn=978-0-19-258604-9|edition=First|location=Oxford|publisher=Oxford University Press|oclc=1143647559}}
Over time, debates have tended to focus less and less on ''possibility'' and more on ''desirability'',{{Citation|last1=Cerqui|first1=Daniela|title=Re-Designing Humankind: The Rise of Cyborgs, a Desirable Goal?|date=2008|url=http://link.springer.com/10.1007/978-1-4020-6591-0_14|work=Philosophy and Design|pages=185–195|place=Dordrecht|publisher=Springer Netherlands|language=en|doi=10.1007/978-1-4020-6591-0_14|isbn=978-1-4020-6590-3|access-date=2020-11-11|last2=Warwick|first2=Kevin|archive-date=2021-03-18|archive-url=https://web.archive.org/web/20210318060701/https://link.springer.com/chapter/10.1007%2F978-1-4020-6591-0_14|url-status=live|url-access=subscription}} as emphasized in the [[Hugo de Garis#The Artilect War|"Cosmist" and "Terran" debates]] initiated by [[Hugo de Garis]] and [[Kevin Warwick]].
==See also==
{{columns-list|colwidth=30em| *[[AI takeover]] *[[AI washing]] *[[Artificial consciousness]] *[[Artificial intelligence and copyright]] *[[Artificial general intelligence]] (AGI) *[[Computer ethics]] *[[Dead internet theory]] *[[Effective altruism#Long-term future and global catastrophic risks|Effective altruism, the long term future and global catastrophic risks]] *[[Artificial intelligence and elections]] – Use of AI in elections and political campaigning. *[[Ethics of uncertain sentience]] *[[Existential risk from artificial general intelligence]] *''[[Human Compatible]]'' *[[Metaverse law]] *[[Personhood]] *[[Philosophy of artificial intelligence]] *[[Regulation of artificial intelligence]] *[[Robotic governance|Robotic Governance]] *[[Roko's basilisk]] *''[[Superintelligence: Paths, Dangers, Strategies]]'' *[[Suffering risks]] }}
==References== {{Reflist}}
==External links==
- [https://iep.utm.edu/ethics-of-artificial-intelligence/ Ethics of Artificial Intelligence] at the [[Internet Encyclopedia of Philosophy]]
- [https://plato.stanford.edu/entries/ethics-ai/ Ethics of Artificial Intelligence and Robotics] at the [[Stanford Encyclopedia of Philosophy]]
- [https://www.cambridge.org/core/books/cambridge-handbook-of-the-law-ethics-and-policy-of-artificial-intelligence/0AD007641DE27F837A3A16DBC0888DD1 The Cambridge Handbook of the Law, Ethics and Policy of Artificial Intelligence]
- {{cite journal |last1=Russell |first1=S. |last2=Hauert |first2=S. |last3=Altman |first3=R. |last4=Veloso |first4=M. |title=Robotics: Ethics of artificial intelligence |journal=Nature |date=May 2015 |volume=521 |issue=7553 |pages=415–418 |doi=10.1038/521415a |pmid=26017428 |s2cid=4452826 |bibcode=2015Natur.521..415. |doi-access=free }}
- [https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/ AI Ethics Guidelines Global Inventory] by [https://algorithmwatch.org Algorithmwatch]
- {{cite journal |last1=Hagendorff |first1=Thilo |title=The Ethics of AI Ethics: An Evaluation of Guidelines |journal=Minds and Machines |date=March 2020 |volume=30 |issue=1 |pages=99–120 |s2cid=72940833 |doi=10.1007/s11023-020-09517-8 |doi-access=free |arxiv=1903.03425 }}
- Sheludko, M. (December 2023). [https://lasoft.org/blog/ethical-aspects-of-artificial-intelligence-challenges-and-imperatives/ Ethical Aspects of Artificial Intelligence: Challenges and Imperatives]. Software Development Blog.
- {{Cite web |last=Eisikovits |first=Nir |title=AI Is an Existential Threat—Just Not the Way You Think |url=https://www.scientificamerican.com/article/ai-is-an-existential-threat-just-not-the-way-you-think/ |access-date=2024-03-04 |website=Scientific American |language=en}}
- {{Cite arXiv|last1=Anwar|first1=U.|last2=Saparov|first2=A.|last3=Rando|first3=J.|last4=Paleka|first4=D.|last5=Turpin|first5=M.|last6=Hase|first6=P.|last7=Lubana|first7=E. S.|last8=Jenner|first8=E.|last9=Casper|first9=S.|last10=Sourbut|first10=O.|last11=Edelman|first11=B. L.|last12=Zhang|first12=Z.|last13=Günther|first13=M.|last14=Korinek|first14=A.|last15=Hernandez-Orallo|first15=J.|last16=Hammond|first16=L.|last17=Bigelow|first17=E.|last18=Pan|first18=A.|last19=Langosco|first19=L.|last20=Krueger|first20=D.|title=Foundational Challenges in Assuring Alignment and Safety of Large Language Models|date=2024|class=cs.LG |eprint=2404.09932}}
{{Ethics}} {{Artificial intelligence navbox}} {{Existential risk from artificial intelligence}} {{Philosophy of science}}
[[Category:Artificial intelligence|Ethics]] [[Category:Philosophy of artificial intelligence]] [[Category:Ethics of science and technology]] [[Category:Regulation of robots]] [[Category:Regulation of artificial intelligence]] [[Category:Ethics by topic]]