Ethics

 
{{#seo:
|title=PRIMO.ai
|titlemode=append
|keywords=artificial, intelligence, machine, learning, models, algorithms, data, singularity, moonshot, TensorFlow, Google, Nvidia, Microsoft, Azure, Amazon, AWS, Facebook
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
}}
 
[http://www.google.com/search?q=ethic+standards+deep+machine+learning+ML+artificial+intelligence ...Google search]
 
* [[Case Studies]]
** [[Risk, Compliance and Regulation]]
** [[Government Services]]
** [[Law]]
*** [http://www.nextgov.com/emerging-tech/2020/01/white-house-proposes-light-touch-regulatory-approach-artificial-intelligence/162276/ U.S. 10 Principles - White House Proposes 'Light-Touch Regulatory Approach' for Artificial Intelligence | Brandi Vincent - Nextgov]
*** [http://www.xinhuanet.com/english/2019-05/26/c_138091724.htm Beijing publishes AI ethical standards, calls for int'l cooperation | Xinhua]
*** [http://eugdpr.org/ EU General Data Protection Regulation GDPR.org]  ...[[Privacy#General Data Protection Regulations (GDPR)|GDPR]]
** [[Defense]]
*** [http://www.ai.mil/docs/08_21_20_responsible_ai_champions_pilot.pdf Responsible AI Champions Pilot |] [[Defense#Joint Artificial Intelligence Center (JAIC)|Department of Defense Joint Artificial Intelligence Center (JAIC)]]  ...DoD AI Principles  ...Themes  ...Tactics
* [[Other Challenges]] in Artificial Intelligence
* [[Explainable / Interpretable AI]]
* [[Bias and Variances]]
* [[Privacy]]
* [http://www.partnershiponai.org/ Partnership on AI] brings together diverse, global voices to realize the promise of artificial intelligence
* [http://montrealethics.ai/ Montreal AI Ethics Institute] creating tangible and applied technical and policy research in the ethical, safe, and inclusive development of AI
* [http://www.engadget.com/2019/02/08/amazon-microsoft-facial-recognition-laws Amazon joins Microsoft in calling for regulation of facial recognition tech | Saqib Shah - engadget]
* [http://www.washingtonpost.com/opinions/mark-zuckerberg-the-internet-needs-new-rules-lets-start-in-these-four-areas/2019/03/29/9e6f0504-521a-11e9-a3f7-78b7525a8d5f_story.html The Internet needs new rules. Let’s start in these four areas. | Mark Zuckerberg - The Washington Post]
* [http://www.newstatesman.com/science-tech/technology/2019/06/how-big-tech-funds-debate-ai-ethics How Big Tech funds the debate on AI ethics | Oscar Williams - NewStatesman and NS Tech]
* [http://edition.cnn.com/2019/04/08/tech/ai-guidelines-eu/index.html Europe is making AI rules now to avoid a new tech crisis | Ivana Kottasová - CNN Business]
* [http://www.reuters.com/article/us-oecd-technology/oecd-members-including-u-s-back-guiding-principles-to-make-ai-safer-idUSKCN1SS1V5 OECD members, including U.S., back guiding principles to make AI safer | Leigh Thomas - Reuters]
* [http://singularityhub.com/2019/03/11/3-practical-solutions-to-offset-automations-impact-on-work/ 3 Practical Solutions to Offset Automation’s Impact on Work | Moran Cerf, Ryan Burke and Scott Payne - Singularity Hub]
* [http://www.ft.com/content/4fd088a4-021b-11e9-bf0f-53b8511afd73 EU backs AI regulation while China and US favour technology | Siddharth Venkataramakrishnan - The Financial Times Limited]
* [http://www.telegraph.co.uk/technology/2019/04/09/could-tough-new-rules-regulate-big-tech-backfire/ Could tough new rules to regulate big tech backfire? | Harry de Quetteville & Matthew Field - The Telegraph]
* [http://www.wired.com/story/google-says-wants-rules-ai-kinda-sorta/ Google Says It Wants Rules for the Use of AI—Kinda, Sorta | Tom Simonite - WIRED]
* [http://www.nature.com/articles/d41586-019-01413-1 Don’t let industry write the rules for AI | Yochai Benkler - Nature]
* [http://blog.datarobot.com/the-algorithmic-accountability-act-of-2019-taking-the-right-steps-toward-ai-success The Algorithmic Accountability Act of 2019: Taking the Right Steps Toward AI Success | Colin Priest - DataRobot]
 
 
Leading institutes and companies have published a set of ethical standards for AI research, and Europe is making AI rules now to avoid a new tech crisis.
  
<youtube>uA1ihJMe4Us</youtube>
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>q92Wj_8UQOw</youtube>
<b>The Ethical Side of Data Usage | Veritone
</b><br>Machine learning requires data, and many companies have lots of data that is useful for many very important tasks. However, there are many questions about how this data should be used, shared, and applied. Additionally, companies walk a fine line with just how much they want to let customers and users know about the data they have on them. This panel will explore the ethical side of data usage from an industry perspective. For more details, visit http://Veritone.com. Veritone is a leading provider of artificial intelligence technology and solutions. The company’s proprietary operating system, aiWARE™, orchestrates an expanding ecosystem of machine learning models to transform audio, video and other data sources into actionable intelligence. Its open architecture enables customers in the media and entertainment, legal and compliance, and government sectors to easily deploy applications that leverage the power of AI to dramatically improve operational efficiency and effectiveness.
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>YeZUkgHdxv0</youtube>
<b>Use of Artificial Intelligence by the U.S. and Its Adversaries
</b><br>After discussing the state of artificial intelligence expertise, technologies, and applications in the United States, [[Government Services#China|China]], and [[Government Services#Russia|Russia]], experts will evaluate the ways in which Beijing and Moscow can use AI to improve their influence operations, cyberattacks, and battlefield capabilities. Speakers will also consider how the United States can counter any advantages that AI provides [[Government Services#Russia|Russia]] and [[Government Services#China|China]] in the propaganda, cyber, and military domains. Speakers include: Brian Drake, Director of Artificial Intelligence and Machine Learning, DIA Future Capabilities and Innovation Office; Elsa Kania, Adjunct Senior Fellow, Technology and National Security Program, CNAS; Dr. Margarita Konaev, Research Fellow, CSET; Colonel P.J. Maykish, USAF, Director of Analysis, National Security Commission on Artificial Intelligence; and Moderator, Charles Clancy, Chief Futurist and Senior Vice President/General Manager, MITRE.
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>-Vl_eSM8cRY</youtube>
<b>Kathryn Hume, Ethical Algorithms: Bias and Explainability in Machine Learning
</b><br>Ethics of AI Lab, Centre for Ethics, University of Toronto, March 20, 2018  http://ethics.utoronto.ca  Kathryn Hume, integrate.ai
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>WxcpQW3afkw</youtube>
<b>Yi Zeng on "Brain-inspired Artificial Intelligence and Ethics of Artificial Intelligence"
</b><br>Yi Zeng of the Institute of Automation of the [[Government Services#China|Chinese]] Academy of Sciences on "Brain-inspired Artificial Intelligence and Ethics of Artificial Intelligence" at a LASER/LAst Dialogues event  www.scaruffi.com/leonardo/sep2020.html
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>61Uvlt8TUfY</youtube>
<b>CRISPR, AI, and the Ethics of Scientific Discovery
</b><br>EthicsinSociety  (Introductions by Professor Rob Reich, President Marc Tessier-Lavigne, and grad student Margaret Guo end at 13:52.)
Twin revolutions at the start of the 21st century are shaking up the very idea of what it means to be human. Computer vision and image recognition are at the heart of the AI revolution. And CRISPR is a powerful new technique for genetic editing that allows humans to intervene in evolution. Jennifer Doudna and [[Creatives#Fei-Fei Li|Fei-Fei Li]], pioneering scientists in the fields of gene editing and artificial intelligence, respectively, discuss the ethics of scientific discovery. Russ Altman moderated the conversation.
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>i6TtgHLprZ0</youtube>
<b>DOD Officials Discuss Artificial Intelligence Ethics
</b><br>Dana Deasy, the Defense Department’s chief information officer, and Air Force Lt. Gen John N.T. Shanahan, director of the DOD’s Joint Artificial Intelligence Center, discuss the adoption of ethical principles for artificial intelligence at a Pentagon press briefing, Feb. 21, 2020.
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>aE8PePfUR9Y</youtube>
<b>CS-E3210 Machine Learning: Basic Principles - Ethics and the [[Privacy#General Data Protection Regulations (GDPR)|GDPR]]
</b><br>Alexander Jung. Guest talk by Maria Rehbinder, Senior Legal Counsel at Aalto University and Certified Information Privacy Professional (CIPP/E), and Richard Darst, Aalto Science-IT Coordinator.
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>mt0i7Tiju88</youtube>
<b>[[Google]] Head of Ethical AI Research on Data Biases and Ethics
</b><br>Margaret (Meg) Mitchell, Co-Head of the Ethical AI research group at [[Google]] AI, addresses data biases, algorithms, regulation, and more.
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>AzdxbzHtjgs</youtube>
<b>Michael Kearns: Algorithmic Fairness, Privacy & Ethics | [[Creatives#Lex Fridman|Lex Fridman]] Podcast #50
</b><br>I really enjoyed this conversation with Michael. Here's the outline:  0:00 - Introduction  2:45 - Influence from literature and journalism  7:39 - Are most people good?  13:05 - Ethical algorithm  24:28 - Algorithmic fairness of groups vs individuals  33:36 - Fairness tradeoffs  46:29 - Facebook, social networks, and algorithmic ethics  58:04 - Machine learning  59:19 - Algorithm that determines what is fair  1:01:25 - Computer scientists should think about ethics  1:05:59 - Algorithmic privacy  1:11:50 - Differential privacy  1:19:10 - Privacy by misinformation  1:22:31 - Privacy of data in society  1:27:49 - Game theory  1:29:40 - Nash equilibrium  1:30:35 - Machine learning and game theory  1:34:52 - Mutual assured destruction  1:36:56 - Algorithmic trading  1:44:09 - Pivotal moment in graduate school
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>_zwBCDmlvv8</youtube>
<b>Ethics and Bias in Artificial Intelligence - 18th Vienna Deep Learning Meetup
</b><br>The Vienna Deep Learning Meetup and the Centre for Informatics and Society of TU Wien jointly organized an evening of discussion on the topic of Ethics and Bias in AI. As promising as machine learning techniques are in terms of their potential to do good, the technologies raise a number of ethical questions and are prone to biases that can subvert their well-intentioned goals.

Machine learning systems, from simple spam filtering or recommender systems to Deep Learning and AI, have already arrived at many different parts of society. Which web search results, job offers, product ads and social media posts we see online, even what we pay for food, mobility or insurance - all these decisions are already being made or supported by algorithms, many of which rely on statistical and machine learning methods. As they permeate society more and more, we also discover the real-world impact of these systems due to the inherent biases they carry. For instance, criminal risk scoring to determine bail for defendants in US district courts has been found to be biased against black people, and analysis of word embeddings has been shown to reaffirm gender stereotypes due to biased training data. While a general consensus seems to exist that such biases are almost inevitable, solutions range from embracing the bias as a factual representation of an unfair society to mathematical approaches trying to determine and combat bias in machine learning training data and the resulting algorithms. (A minimal fairness-metric sketch follows these video tables.)

Besides producing biased results, many machine learning methods and applications raise complex ethical questions. Should governments use such methods to determine the trustworthiness of their citizens? Should the use of systems known to have biases be tolerated to benefit some while disadvantaging others? Is it ethical to develop AI technologies that might soon replace many jobs currently performed by humans? And how do we keep AI and automation technologies from widening society's divides, such as the digital divide or income inequality?

This event provides a platform for multidisciplinary debate in the form of keynotes and a panel discussion with international experts from diverse fields.

Keynotes:
* Prof. Moshe Vardi: "Deep Learning and the Crisis of Trust in Computing"
* Prof. Sarah Spiekermann-Hoff: “The Big Data Illusion and its Impact on Flourishing with General AI”

Panelists (Ethics and Bias in AI):
* Prof. Moshe Vardi, Karen Ostrum George Distinguished Service Professor in Computational Engineering, Rice University
* Prof. Peter Purgathofer, Centre for Informatics and Society / Institute for Visual Computing & Human-Centered Technology, TU Wien
* Prof. Sarah Spiekermann-Hoff, Institute for Management Information Systems, WU Vienna
* Prof. Mark Coeckelbergh, Professor of Philosophy of Media and Technology, Department of Philosophy, University of Vienna
* Dr. Christof Tschohl, Scientific Director at Research Institute AG & Co KG

Moderator: Markus Mooslechner, Terra Mater Factual Studios
|}
|}<!-- B -->
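
The group-fairness ideas raised in the Kearns conversation and the Vienna meetup above can be made concrete with a few lines of code. The sketch below is illustrative only and is not taken from either talk; the toy decisions, the group labels, and the 80% threshold are assumptions. It computes per-group selection rates for a set of binary decisions, the demographic-parity difference between the lowest- and highest-rate groups, and the disparate-impact ratio checked by the informal "80% rule".

<syntaxhighlight lang="python">
# Minimal sketch: group-fairness metrics for binary decisions.
# Hypothetical data and names; illustrative only, not a production fairness audit.

def selection_rate(decisions):
    """Fraction of favorable (1) decisions in a group."""
    return sum(decisions) / len(decisions) if decisions else 0.0

def fairness_report(decisions, groups):
    """Demographic-parity difference and disparate-impact ratio
    between the lowest- and highest-rate groups."""
    by_group = {}
    for decision, group in zip(decisions, groups):
        by_group.setdefault(group, []).append(decision)
    rates = {g: selection_rate(ds) for g, ds in by_group.items()}
    ordered = sorted(rates, key=rates.get)            # group keys, lowest rate first
    low, high = ordered[0], ordered[-1]
    dp_diff = rates[high] - rates[low]                # demographic-parity difference
    di_ratio = rates[low] / rates[high] if rates[high] else float("nan")
    return rates, dp_diff, di_ratio

if __name__ == "__main__":
    # Toy example: 1 = favorable decision (e.g. loan approved), 0 = denied.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
    groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
    rates, dp_diff, di_ratio = fairness_report(decisions, groups)
    print("selection rates:", rates)                  # A: ~0.667, B: ~0.167
    print("demographic-parity difference:", round(dp_diff, 3))
    print("disparate-impact ratio:", round(di_ratio, 3))
    print("flagged by the 80% rule:", di_ratio < 0.8)  # True for this toy data
</syntaxhighlight>

Numbers like these describe group-level outcomes only; the Kearns outline's "fairness of groups vs individuals" and "fairness tradeoffs" items are a reminder that a low ratio is a prompt for investigation rather than a verdict, and that improving one fairness metric can worsen another.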

= Debating =
[http://www.youtube.com/results?search_query=debater+debating+AI YouTube search...]
[http://www.google.com/search?q=debater+debating+AI ...Google search]

* [http://www.research.ibm.com/artificial-intelligence/project-debater/ Project Debater | ][[IBM]]
* [http://www.hansonrobotics.com/ Hanson Robotics]

{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>m3u-1yttrVw</youtube>
<b>LIVE DEBATE – [[IBM]] Project Debater
</b><br>At Intelligence Squared U.S., we’ve debated AI before – the risks, the rewards, and whether it can change the world – but for the first time, we’re debating with AI.
In partnership with [[IBM]], Intelligence Squared U.S. is hosting a unique debate between a world-class champion debater and an AI system. IBM Project Debater is the first AI system designed to debate humans on complex topics using a combination of pioneering research developed by [[IBM]] researchers, including: data-driven speechwriting and delivery, listening comprehension, and modeling human dilemmas. First debuted in a small closed-door event in June 2018, Project Debater will now face its toughest opponent yet in front of its largest-ever audience, with our own John Donvan in the moderator’s seat. The topic will not be revealed to Project Debater and the champion human debater until shortly before the debate begins.
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>1y3XdwTa1cA</youtube>
<b>Two robots debate the future of humanity
</b><br>Hanson Robotics Limited's Ben Goertzel, Sophia and Han at RISE 2017. Now for something that’s never been done onstage before. While they may not be human, our next guests are ready to discuss the future of humanity, and how they see their types flourish over the coming years.
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>xF1J8FFq03M</youtube>
<b>Debating [[IBM]]'s Artificial Intelligence - BBC Click
</b><br>Computer scientists around the world are working on ways to make artificial intelligence indistinguishable from humans - with varying degrees of success. One way this is being tested is in debates between people and computers. This week [[IBM]]’s AI system was on stage at Cambridge University and Jen Copestake was in the audience to see the results.
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>muCU38g3pNg</youtube>
<b>Human beats [[IBM]]'s AI in debate competition
</b><br>[[IBM]]'s 6-year-old artificial intelligence debating system, dubbed Miss Debater, just got trounced by world-renowned debater Harish Natarajan. Follow The Resident at http://www.twitter.com/TheResident
|}
|}<!-- B -->
