Ethics


YouTube search... ...Google search


There are many efforts underway to address the ethical issues raised by artificial intelligence. Some focus on developing ethical guidelines for the development and use of AI, while others pursue technical solutions to mitigate AI's risks. Guidelines and technical solutions are only part of the answer, however: it is equally important to hold open, transparent discussions about the potential risks and benefits of AI, and to involve stakeholders from all sectors of society in the development of AI technologies.


The Ethical Side of Data Usage | Veritone
Machine learning requires data, and many companies have lots of data that is useful for many very important tasks. However, there are many questions about how this data should be used, shared, and applied. Additionally, companies walk a fine line in deciding how much to let customers and users know about the data they have on them. This panel will explore the ethical side of data usage from an industry perspective. For more details, visit us at https://Veritone.com Veritone is a leading provider of artificial intelligence technology and solutions. The company’s proprietary operating system, aiWARE™, orchestrates an expanding ecosystem of machine learning models to transform audio, video and other data sources into actionable intelligence. Its open architecture enables customers in the media and entertainment, legal and compliance, and government sectors to easily deploy applications that leverage the power of AI to dramatically improve operational efficiency and effectiveness.

Use of Artificial Intelligence by the U.S. and Its Adversaries
After discussing the state of artificial intelligence expertise, technologies, and applications in the United States, China, and Russia, experts will evaluate the ways in which Beijing and Moscow can use AI to improve their influence operations, cyberattacks, and battlefield capabilities. Speakers will also consider how the United States can counter any advantages that AI provides Russia and China in the propaganda, cyber, and military domains. Speakers include: Brian Drake, Director of Artificial Intelligence and Machine Learning, DIA Future Capabilities and Innovation Office; Elsa Kania, Adjunct Senior Fellow, Technology and National Security Program, CNAS; Dr. Margarita Konaev, Research Fellow, CSET; Colonel P.J. Maykish, USAF, Director of Analysis, National Security Commission on Artificial Intelligence; and Moderator, Charles Clancy, Chief Futurist and Senior Vice President/General Manager, MITRE

Kathryn Hume, Ethical Algorithms: Bias and Explainability in Machine Learning
Ethics of AI Lab, Centre for Ethics, University of Toronto, March 20, 2018. https://ethics.utoronto.ca Kathryn Hume, integrate.ai

Yi Zeng on "Brain-inspired Artificial Intelligence and Ethics of Artificial Intelligence"
Yi Zeng of the Institute of Automation of the Chinese Academy of Sciences on "Brain-inspired Artificial Intelligence and Ethics of Artificial Intelligence" at a LASER/LAst Dialogues www.scaruffi.com/leonardo/sep2020.html

CRISPR, AI, and the Ethics of Scientific Discovery
EthicsinSociety (Introductions by Professor Rob Reich, President Marc Tessier-Lavigne, and grad student Margaret Guo end at 13:52.) Twin revolutions at the start of the 21st century are shaking up the very idea of what it means to be human. Computer vision and image recognition are at the heart of the AI revolution. And CRISPR is a powerful new technique for genetic editing that allows humans to intervene in evolution. Jennifer Doudna and Fei-Fei Li, pioneering scientists in the fields of gene editing and artificial intelligence, respectively, discuss the ethics of scientific discovery. Russ Altman moderated the conversation.

DOD Officials Discuss Artificial Intelligence Ethics
Dana Deasy, the Defense Department’s chief information officer, and Air Force Lt. Gen John N.T. Shanahan, director of the DOD’s Joint Artificial Intelligence Center, discuss the adoption of ethical principles for artificial intelligence at a Pentagon press briefing, Feb. 21, 2020.

CS-E3210 Machine Learning: Basic Principles - Ethics and the GDPR
Alexander Jung. Guest talk by Maria Rehbinder, Senior Legal Counsel at Aalto University and Certified Information Privacy Professional (CIPP/E), and Richard Darst, Aalto Science-IT Coordinator.

Google Head of Ethical AI Research on Data Biases and Ethics
Margaret (Meg) Mitchell, Co-Head of the Ethical Research Group at Google AI, addresses data biases, algorithms, regulation, and more.

Michael Kearns: Algorithmic Fairness, Privacy & Ethics | Lex Fridman Podcast #50
I really enjoyed this conversation with Michael. Here's the outline: 0:00 - Introduction 2:45 - Influence from literature and journalism 7:39 - Are most people good? 13:05 - Ethical algorithm 24:28 - Algorithmic fairness of groups vs individuals 33:36 - Fairness tradeoffs 46:29 - Facebook, social networks, and algorithmic ethics 58:05 - Machine learning 59:19 - Algorithm that determines what is fair 1:01:25 - Computer scientists should think about ethics 1:05:59 - Algorithmic privacy 1:11:50 - Differential privacy 1:19:10 - Privacy by misinformation 1:22:31 - Privacy of data in society 1:27:49 - Game theory 1:29:40 - Nash equilibrium 1:30:35 - Machine learning and game theory 1:34:52 - Mutual assured destruction 1:36:56 - Algorithmic trading 1:44:09 - Pivotal moment in graduate school

Ethics and Bias in Artificial Intelligence - 18th Vienna Deep Learning Meetup
The Vienna Deep Learning Meetup and the Centre for Informatics and Society of TU Wien jointly organized an evening of discussion on the topic of Ethics and Bias in AI. As promising as machine learning techniques are in terms of their potential to do good, the technologies raise a number of ethical questions and are prone to biases that can subvert their well-intentioned goals.

Machine learning systems, from simple spam filtering or recommender systems to Deep Learning and AI, have already arrived at many different parts of society. Which web search results, job offers, product ads and social media posts we see online, even what we pay for food, mobility or insurance - all these decisions are already being made or supported by algorithms, many of which rely on statistical and machine learning methods. As they permeate society more and more, we also discover the real-world impact of these systems due to the inherent biases they carry. For instance, criminal risk scoring used to determine bail for defendants in US district courts has been found to be biased against black people, and analysis of word embeddings has been shown to reaffirm gender stereotypes due to biased training data. While a general consensus seems to exist that such biases are almost inevitable, solutions range from embracing the bias as a factual representation of an unfair society to mathematical approaches that try to measure and combat bias in machine learning training data and the resulting algorithms.

Besides producing biased results, many machine learning methods and applications raise complex ethical questions. Should governments use such methods to determine the trustworthiness of their citizens? Should the use of systems known to have biases be tolerated to benefit some while disadvantaging others? Is it ethical to develop AI technologies that might soon replace many jobs currently performed by humans? And how do we keep AI and automation technologies from widening society's divides, such as the digital divide or income inequality?

This event provides a platform for multidisciplinary debate in the form of keynotes and a panel discussion with international experts from diverse fields.

Keynotes:

  • Prof. Moshe Vardi: "Deep Learning and the Crisis of Trust in Computing"
  • Prof. Sarah Spiekermann-Hoff: “The Big Data Illusion and its Impact on Flourishing with General AI”

Panelists:

  • Prof. Moshe Vardi, Karen Ostrum George Distinguished Service Professor in Computational Engineering, Rice University
  • Prof. Peter Purgathofer, Centre for Informatics and Society / Institute for Visual Computing & Human-Centered Technology, TU Wien
  • Prof. Sarah Spiekermann-Hoff, Institute for Management Information Systems, WU Vienna
  • Prof. Mark Coeckelbergh, Professor of Philosophy of Media and Technology, Department of Philosophy, University of Vienna
  • Dr. Christof Tschohl, Scientific Director at Research Institute AG & Co KG

Moderator: Markus Mooslechner, Terra Mater Factual Studios
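The mathematical approaches to measuring bias mentioned above typically start by quantifying a fairness metric. As a minimal illustrative sketch (not drawn from the event itself; the data and numbers are entirely hypothetical), the following Python snippet computes the demographic parity difference, that is, the gap in favorable-decision rates between two groups, on a toy bail-decision dataset:

    # Toy sketch: quantifying group bias via the demographic parity difference.
    # All data below is hypothetical and for illustration only.

    def positive_rate(outcomes):
        """Fraction of cases receiving the favorable decision (e.g., granted bail)."""
        return sum(outcomes) / len(outcomes)

    # 1 = favorable decision, 0 = unfavorable; two hypothetical demographic groups.
    group_a = [1, 1, 0, 1, 1, 0, 1, 1]
    group_b = [0, 1, 0, 0, 1, 0, 0, 1]

    # A difference of 0.0 means both groups receive favorable decisions at the
    # same rate; a large gap suggests the system treats the groups differently.
    gap = positive_rate(group_a) - positive_rate(group_b)
    print(f"Demographic parity difference: {gap:.3f}")  # prints 0.375

Demographic parity is only one of several competing fairness criteria; as the discussions above note, such metrics can conflict with one another, so the choice of metric is itself an ethical decision.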

Values, Rights, & Religion

Montreal Declaration for Responsible AI

One effort to develop ethical guidelines for AI is the Montreal Declaration for Responsible AI, developed by a group of experts from around the world in 2018. The declaration calls for the development of AI that is beneficial to humanity and that respects human rights and dignity.

Partnership on AI

The Partnership on AI (PAI) is a non-profit coalition committed to the responsible use of artificial intelligence. It researches best practices for artificial intelligence systems and works to educate the public about AI. PAI is also among the efforts underway to develop technical solutions that mitigate the risks of AI: the partnership has developed a set of AI principles and is working on projects that address issues such as bias in AI systems and the safety of autonomous vehicles.

Publicly announced September 28, 2016, its founding members are Amazon, Facebook, Google, DeepMind, Microsoft, and IBM, with interim co-chairs Eric Horvitz of Microsoft Research and Mustafa Suleyman of DeepMind. Apple joined the consortium as a founding member in January 2017, and Apple's head of advanced development for Siri, Tom Gruber, joined the Partnership on AI's board that same month. In October 2017, Terah Lyons joined the Partnership on AI as the organization's founding executive director. As of 2019, more than 100 partners from academia, civil society, industry, and nonprofits are member organizations.

The PAI's mission is to promote the beneficial use of AI through research, education, and public engagement. The PAI works to ensure that AI is developed and used in a way that is safe, ethical, and beneficial to society.

The PAI's work is guided by a set of AI principles, which were developed by the PAI's members and endorsed by the PAI's board of directors. The principles are:

  • AI should be developed and used for beneficial purposes.
  • AI should be used in a way that respects human rights and dignity.
  • AI should be developed and used in a way that is safe and secure.
  • AI should be developed and used in a way that is fair and unbiased.
  • AI should be developed and used in a way that is transparent and accountable.
  • AI should be developed and used in a way that is understandable and interpretable.
  • AI should be developed and used in a way that is aligned with societal values.


The PAI's work is divided into four areas:

  • Research: The PAI supports research on the societal and ethical implications of AI.
  • Education: The PAI provides educational resources on AI to the public.
  • Public engagement: The PAI engages with the public about AI through events, publications, and other activities.
  • Policy: The PAI works to develop and promote policies that promote the beneficial use of AI.

The PAI is a valuable resource for anyone interested in the responsible use of artificial intelligence.

Asilomar AI Principles


One of the most well-known efforts to develop ethical guidelines for AI is the Asilomar AI Principles. These principles were developed by a group of experts in AI, ethics, and law in 2017. The principles outline a set of values that should guide the development and use of AI, including safety, transparency, accountability, and fairness.


Stupidity

Dietrich Bonhoeffer, a German theologian and anti-Nazi dissident, proposed a profound and cautionary theory about stupidity in his letters and writings, particularly during his imprisonment by the Nazis. His "Theory of Stupidity" highlights stupidity as a societal and moral issue, arguing that it can be more dangerous than outright evil. Below is an overview of his perspective:

  • Stupidity as a Moral Weakness: Bonhoeffer viewed stupidity not merely as a lack of intelligence but as a moral failing—a lack of critical reflection, self-awareness, and responsibility. Stupidity, in this sense, is a choice to relinquish independent thought and judgment. He linked it to a failure to exercise freedom responsibly, often driven by conformity, fear, or an uncritical acceptance of authority.
  • The Danger of Stupidity: Bonhoeffer believed stupidity is more dangerous than malice because it is impervious to reason. Unlike evil, which can be confronted with moral arguments or force, stupidity resists logic and appeals to rationality. Stupid individuals or groups can unknowingly perpetuate harm, acting as tools for those with malicious intent.
  • Collective Phenomenon: Stupidity, Bonhoeffer argued, often manifests in groups or societies under oppressive systems. People tend to abandon critical thinking and align with the crowd to avoid standing out or facing repercussions. He observed that totalitarian regimes, like the Nazis, exploit and encourage collective stupidity by demanding blind loyalty and discouraging dissent.
  • Loss of Autonomy: Stupid individuals, according to Bonhoeffer, stop thinking for themselves and become passive. Their moral and intellectual faculties are outsourced to a leader or ideology, making them manipulable.
  • Hope Through Enlightenment: Bonhoeffer maintained that stupidity can be overcome, not through confrontation or ridicule, but through education and fostering critical thinking. He called for the cultivation of wisdom and courage in individuals, enabling them to resist manipulation and embrace independent thought.

Modern Implications: Bonhoeffer's insights remain relevant, especially in discussions about the dangers of groupthink, misinformation, and the erosion of critical thinking in political and social contexts. They caution societies to value wisdom, encourage dialogue, and remain vigilant against the allure of simplistic or authoritarian solutions to complex problems.

Cipolla's Five Laws of Human Stupidity: Carlo Cipolla's framework is a humorous yet insightful way of understanding human behavior and its consequences.

  • Always and inevitably, everyone underestimates the number of stupid individuals in circulation. This law asserts that stupidity is far more prevalent than we assume. Even in groups of educated or seemingly competent individuals, stupidity can appear unexpectedly.
  • The probability that a person is stupid is independent of any other characteristic of that person. Stupidity does not discriminate based on education, social status, profession, or any other factor. It is a universal trait found in all demographics.
  • A stupid person is someone who causes harm to another person or group while deriving no personal gain, and possibly even incurring self-harm. This is the "Golden Law of Stupidity." It highlights the irrational and self-destructive nature of stupidity, as it leads to harm without logical benefit to the perpetrator.
  • Non-stupid people always underestimate the damaging power of stupid individuals. Non-stupid individuals often fail to recognize the potential harm caused by stupidity. They either underestimate its impact or assume it can be controlled.
  • A stupid person is the most dangerous type of person. Stupid people are more dangerous than malicious individuals because their actions are unpredictable and irrational. A malicious person may act out of calculated self-interest, but a stupid person's behavior harms everyone, including themselves.




How Can AI Help?

AI can assist in understanding and disseminating the insights from Dietrich Bonhoeffer's "Theory of Stupidity" and Carlo Cipolla's "Five Laws of Human Stupidity" in several ways:

  • Summarization and Simplification: AI can summarize and distill complex theories into accessible, easy-to-understand formats for various audiences. Examples include concise bullet points, infographics, or video scripts for educational purposes.
  • Comparative Analysis: AI can draw connections between Bonhoeffer's and Cipolla's theories, highlighting shared themes and unique aspects. It can provide a comparative framework to help readers better understand their relevance to historical and contemporary issues.
  • Educational Tools: AI can create quizzes, discussion prompts, and lesson plans to teach these theories in classrooms or workshops. Interactive simulations can illustrate concepts, such as the dangers of groupthink or the societal impact of stupidity.
  • Content Creation: AI can generate articles, blog posts, or social media content to raise awareness about these theories. It can tailor the content to specific audiences, such as students, professionals, or policymakers.
  • Detection of Modern Implications: AI can analyze current events, media, and social trends to identify instances where Bonhoeffer's and Cipolla's insights are relevant. It can help flag situations where collective stupidity or groupthink might be at play, encouraging critical discussions.
  • Facilitation of Dialogue: AI-powered chatbots or forums can facilitate discussions, offering counterpoints or probing questions to encourage deeper reflection on these theories. This could be useful for academic settings or public forums.
  • Misinformation and Stupidity Mitigation: AI can assist in combating misinformation by fact-checking and promoting critical thinking, addressing Bonhoeffer’s concerns about the erosion of reason. Tools like language analysis and sentiment tracking can help identify patterns of irrational or harmful behavior in digital spaces; a toy sketch of this idea follows this list.
  • Personal Development Tools: AI can create self-assessment tools to help individuals identify cognitive biases or tendencies to conform uncritically.
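As a concrete, deliberately simplistic illustration of the pattern-flagging idea in the list above, the Python sketch below scores a text for conformity-pressure phrases. The phrase list and threshold are invented for this example; a production system would rely on trained language models rather than keyword matching.

    # Toy sketch: flagging possible groupthink/conformity language in text.
    # The marker phrases and the threshold are invented for illustration; a
    # real system would use trained classifiers, not keyword matching.

    CONFORMITY_PHRASES = [
        "everyone agrees", "no one questions", "just trust the",
        "do not ask why", "only a fool would doubt",
    ]

    def groupthink_score(text: str) -> float:
        """Return the fraction of marker phrases that occur in the text."""
        lowered = text.lower()
        hits = sum(phrase in lowered for phrase in CONFORMITY_PHRASES)
        return hits / len(CONFORMITY_PHRASES)

    sample = "Everyone agrees with the plan, and no one questions the leadership."
    score = groupthink_score(sample)
    print(f"Groupthink score: {score:.2f}")  # 2 of 5 phrases match -> 0.40
    if score > 0.25:  # threshold chosen arbitrarily for the demo
        print("Flagged for human review: possible conformity pressure.")

Such a score should only ever prompt human judgment rather than replace it, in keeping with Bonhoeffer's warning against outsourcing one's moral and intellectual faculties.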


Debating

YouTube search... ...Google search

LIVE DEBATE – IBM Project Debater
At Intelligence Squared U.S., we’ve debated AI before – the risks, the rewards, and whether it can change the world – but for the first time, we’re debating with AI. In partnership with IBM, Intelligence Squared U.S. is hosting a unique debate between a world-class champion debater and an AI system. IBM Project Debater is the first AI system designed to debate humans on complex topics using a combination of pioneering research developed by IBM researchers, including: data-driven speechwriting and delivery, listening comprehension, and modeling human dilemmas. First debuted in a small closed-door event in June 2018, Project Debater will now face its toughest opponent yet in front of its largest-ever audience, with our own John Donvan in the moderator’s seat. The topic will not be revealed to Project Debater and the champion human debater until shortly before the debate begins.

Two robots debate the future of humanity
Hanson Robotics Limited's Ben Goertzel, Sophia and Han at RISE 2017. Now for something that’s never been done onstage before. While they may not be human, our next guests are ready to discuss the future of humanity, and how they see their kind flourishing over the coming years.

Debating IBM's Artificial Intelligence - BBC Click
Computer scientists around the world are working on ways to make artificial intelligence indistinguishable from humans - with varying degrees of success. One way this is being tested is in debates between people and computers. This week IBM’s AI system was on stage at Cambridge University and Jen Copestake was in the audience to see the results.

AI Learns the Art of Debate
Project Debater is the first AI system that can debate humans on complex topics. The goal is to help people build persuasive arguments and make well-informed decisions. Learn more: http://www.ibm.com/projectdebater.