AI Governance

{{#seo:
|title=PRIMO.ai
|titlemode=append
|keywords=ChatGPT, artificial, intelligence, machine, learning, GPT-4, GPT-5, NLP, NLG, NLC, NLU, models, data, singularity, moonshot, Sentience, AGI, Emergence, Moonshot, Explainable, TensorFlow, Google, Nvidia, Microsoft, Azure, Amazon, AWS, Hugging Face, OpenAI, Meta, LLM, metaverse, assistants, agents, digital twin, IoT, Transhumanism, Immersive Reality, Generative AI, Conversational AI, Perplexity, Bing, You, Bard, Ernie, Prompt Engineering, LangChain, Video/Image, Vision, End-to-End Speech, Synthesize Speech, Speech Recognition, Stanford, MIT
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
 
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-4GCWLBVJ7T"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'G-4GCWLBVJ7T');
</script>
 
}}
[https://www.youtube.com/results?search_query=ai+Governance YouTube]
[https://www.quora.com/search?q=ai%20Governance ... Quora]
[https://www.google.com/search?q=ai+Governance ...Google search]
[https://news.google.com/search?q=ai+Governance ...Google News]
[https://www.bing.com/news/search?q=ai+Governance&qft=interval%3d%228%22 ...Bing News]
  
* [[Risk, Compliance and Regulation]] ... [[Ethics]] ... [[Privacy]] ... [[Law]] ... [[AI Governance]] ... [[AI Verification and Validation]]
* [[Analytics]] ... [[Visualization]] ... [[Graphical Tools for Modeling AI Components|Graphical Tools]] ... [[Diagrams for Business Analysis|Diagrams]] & [[Generative AI for Business Analysis|Business Analysis]] ... [[Requirements Management|Requirements]] ... [[Loop]] ... [[Bayes]] ... [[Network Pattern]]
* [[Cybersecurity]] ... [[Open-Source Intelligence - OSINT|OSINT]] ... [[Cybersecurity Frameworks, Architectures & Roadmaps|Frameworks]] ... [[Cybersecurity References|References]] ... [[Offense - Adversarial Threats/Attacks|Offense]] ... [[National Institute of Standards and Technology (NIST)|NIST]] ... [[U.S. Department of Homeland Security (DHS)|DHS]] ... [[Screening; Passenger, Luggage, & Cargo|Screening]] ... [[Law Enforcement]] ... [[Government Services|Government]] ... [[Defense]] ... [[Joint Capabilities Integration and Development System (JCIDS)#Cybersecurity & Acquisition Lifecycle Integration|Lifecycle Integration]] ... [[Cybersecurity Companies/Products|Products]] ... [[Cybersecurity: Evaluating & Selling|Evaluating]]
* [[Policy]] ... [[Policy vs Plan]] ... [[Constitutional AI]] ... [[Trust Region Policy Optimization (TRPO)]] ... [[Policy Gradient (PG)]] ... [[Proximal Policy Optimization (PPO)]]
* [[Data Science]] ... [[Data Governance|Governance]] ... [[Data Preprocessing|Preprocessing]] ... [[Feature Exploration/Learning|Exploration]] ... [[Data Interoperability|Interoperability]] ... [[Algorithm Administration#Master Data Management (MDM)|Master Data Management (MDM)]] ... [[Bias and Variances]] ... [[Benchmarks]] ... [[Datasets]]
* [[Data Quality]] ... [[AI Verification and Validation|validity]], [[Evaluation - Measures#Accuracy|accuracy]], [[Data Quality#Data Cleaning|cleaning]], [[Data Quality#Data Completeness|completeness]], [[Data Quality#Data Consistency|consistency]], [[Data Quality#Data Encoding|encoding]], [[Data Quality#Zero Padding|padding]], [[Data Quality#Data Augmentation, Data Labeling, and Auto-Tagging|augmentation, labeling, auto-tagging]], [[Data Quality#Batch Norm(alization) & Standardization|normalization, standardization]], and [[Data Quality#Imbalanced Data|imbalanced data]]
* [[Architectures]] for AI ... [[Generative AI Stack]] ... [[Enterprise Architecture (EA)]] ... [[Enterprise Portfolio Management (EPM)]] ... [[Architecture and Interior Design]]
* [[Strategy & Tactics]] ... [[Project Management]] ... [[Best Practices]] ... [[Checklists]] ... [[Project Check-in]] ... [[Evaluation]] ... [[Evaluation - Measures|Measures]]
* [https://www.cio.com/article/3328495/tackling-artificial-intelligence-using-architecture.html Tackling artificial intelligence using architecture | Daniel Lambert - CIO]
  
 
= AI Governance =
 
{| class="wikitable" style="width: 550px;"
||
<youtube>3T7Gpwhtc6Q</youtube>
<b>Shahar Avin – AI Governance
</b><br>Shahar is a senior researcher at the Center for the Study of Existential Risk in Cambridge. In his past life he was a Google engineer; he now spends most of his time thinking about how to prevent the risks that could arise if companies like Google deploy powerful AI systems, including by leading AI Governance role-playing workshops (https://intelligencerising.org/).

Transcript & Audio: https://theinsideview.ai/shahar
|}

<youtube>XxmYOT_ZUeI</youtube>
<b>Keep your AI under Control - Governance of AI
</b><br>Dolf van der Haven - Artificial Intelligence (AI) is becoming widespread and will soon reach the mainstream. With its increasing capabilities, however, how do we ensure that AI keeps doing what we want it to do? What governance frameworks, standards, and methods do we have to keep AI within the bounds of what it was designed for? This presentation looks at governance and management of AI, including applicable ISO standards, ethics, and risks.
|}
|}<!-- B -->
<youtube>bSTYiIgjgrk</youtube>
<b>Fireside Chat: AI governance | Markus Anderljung | Ben Garfinkel | EA Global: Virtual 2020
</b><br>Markus Anderljung and Ben Garfinkel discuss how they got into the field of AI governance and how the field has developed over the past few years. They discuss the question, "How sure are we about this AI stuff?", and finish with an update on GovAI's latest research and how to pursue a career in AI governance. Markus is a Project Manager at the Centre for the Governance of AI ("GovAI"). He is focused on growing GovAI and making their research relevant to important stakeholders. He has a background in history and philosophy of science, with a focus on evidence-based policy and philosophy of economics. Before joining GovAI, Markus was the Executive Director of Effective Altruism Sweden. Ben is a Research Fellow at the Future of Humanity Institute and a DPhil student at Oxford’s Department of Politics and International Relations. Ben’s research interests include the security and [[privacy]] implications of artificial intelligence, the causes of interstate war, and the methodological challenge of forecasting and reducing technological risks. He previously earned degrees in Physics and in Mathematics and Philosophy from Yale University.
|}
|}<!-- B -->
<b>Model Governance and Explainable AI
</b><br>This meetup was recorded in Washington, D.C. on May 22nd, 2019. We are thrilled to host Nick Schmidt and Dr. Bryce Stephens of BLDS partners for an informed discussion about machine learning for high-impact and highly regulated real-world applications. Our panelists will address policy, regulatory, and technical concerns regarding the use of AI for automated decision-making in areas like credit lending and employment. We'll also leave lots of time for audience questions. The discussion will be moderated by Patrick Hall of H2O.ai. Presenters: Nick Schmidt, Director and Head of the AI/ML Innovation Practice, BLDS LLC; Dr. Bryce Stephens, Director, BLDS LLC; Patrick Hall, Senior Director of Product, H2O.ai. Bios:
Nicholas Schmidt is a Partner and the A.I. Practice Leader at BLDS, LLC. In these roles, Nick specializes in the application of statistics and economics to questions of law, regulatory compliance, and best practices in model governance. His work involves developing techniques that allow his clients to make their A.I. models fairer and more inclusive. He has also helped his clients understand and implement methods that open “black-box” A.I. models, enabling a clearer understanding of A.I.’s decision-making process. Bryce Stephens provides economic research, econometric analysis, and compliance advisory services, with a specific focus on issues related to consumer financial protection, such as the Equal Credit Opportunity Act (ECOA), and emerging analytical methods. Prior to joining BLDS, Dr. Stephens spent over seven years as an economist and Section Chief in the Office of Research at the Consumer Financial Protection Bureau. At the Bureau, he led a team of economists and analysts that conducted analysis and supported policy [[development]] on fair lending-related supervisory exams, enforcement matters, rulemakings, and other policy initiatives. Before joining the Bureau, Dr. Stephens served as an economic litigation consultant, conducting research and econometric analysis across a broad range of practice areas including fair lending and consumer finance; labor, employment, and earnings; product liability; and healthcare. Patrick Hall is Senior Director for data science products at H2O.ai, where he focuses mainly on model interpretability and model management. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Prior to joining H2O.ai, Patrick held global customer-facing roles and research and [[development]] roles at SAS Institute.
|}
|<!-- M -->
||
<youtube>k0jF-UMC1b4</youtube>
<b>AI [[Ethics]], Policy, and Governance at Stanford - Day One
</b><br>Join the Stanford Institute for Human-Centered Artificial Intelligence (HAI) via livestream on Oct. 28-29 for our 2019 fall conference on AI [[Ethics]], Policy, and Governance. With experts from academia, industry, civil society, and government, we’ll explore critical and emerging issues around understanding and guiding AI’s human and societal impact to benefit humanity. The program starts at 15 minutes, 30 seconds.
|}
|}<!-- B -->

Latest revision as of 14:56, 18 September 2023

CPDP 2019: AI Governance: role of legislators, tech companies and standard bodies
Organised by the Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg. Chair: Mark Cole, University of Luxembourg (LU). Moderator: Erik Valgaeren, Stibbe (BE). Speakers: Alain Herrmann, National Commission for Data Protection (LU); Christian Wagner, University of Nottingham (UK); Jan Schallaböck, iRights/ISO (DE); Janna Lingenfelder, IBM/ISO (DE). AI calls for a "coordinated action plan", as recently stated by the European Commission. With its societal and ethical implications, it is a matter of general impact across sectors, going beyond security and trustworthiness or the creation of a regulatory framework. Hence this panel intends to address the topic of AI governance: whether such governance is needed and, if so, how to ensure its consistency. It will also discuss whether existing structures and bodies are adequate to deal with such governance, or whether we perhaps need to create new structures and mandate them with this task. Where do we stand, and where are we heading, in terms of how we are collectively dealing with the soon-to-be almost ubiquitous phenomenon of AI? Do we need AI governance? If so, who should be in charge of it? Is there a need to ensure consistency of such governance? What are the risks? Do we know them, and are we in the right position to address them? Are existing structures and bodies sufficient to address these issues, or do we perhaps need to create new ones?


What is Enterprise AI Model Governance? [Applied AI ML in Business]
Enterprise machine learning model governance, or enterprise AI governance, will be an important topic in the next few years. Along with AI governance within an enterprise, we need an end-to-end AI governance and machine learning model governance operation. Everything about applied artificial intelligence and machine learning in the real world. Mind Data Intelligence is Brian Ka Chan - applied AI strategist, technology/data/analytics executive, ex-Oracle architect, ex-SAP specialist. "Artificial intelligence for everyone" is my vision for the channel, which also covers fintech, smart cities, and the latest cutting-edge technologies. The goal of the channel is to share AI & machine learning knowledge, expand common sense, and demystify AI myths. We want everyone from all walks of life to understand artificial intelligence.

AI Model Governance in a High Compliance Industry
Model governance defines a collection of best practices for data science: versioning, reproducibility, experiment tracking, automated CI/CD, and others. Within a high-compliance setting, where the data used for training or inference contains protected health information (PHI) or similarly sensitive data, additional requirements such as strong identity management, role-based access control, approval workflows, and a full audit trail are added. This webinar summarizes requirements and best practices for establishing a high-productivity data science team within a high-compliance environment. It then demonstrates how these requirements can be met using John Snow Labs’ Healthcare AI Platform.
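To make the practices above concrete, here is a minimal, hypothetical sketch of a model-registry entry that combines versioning, a content hash for reproducibility, role-based access control, and an audit trail. All names and fields are illustrative assumptions, not the API of any particular governance platform.

```python
import hashlib
from datetime import datetime, timezone

# Roles permitted to register a model version (hypothetical RBAC policy).
ALLOWED_REGISTER_ROLES = {"ml-engineer", "compliance-officer"}

def fingerprint(artifact_bytes: bytes) -> str:
    """Content hash so a model artifact can later be verified bit-for-bit."""
    return hashlib.sha256(artifact_bytes).hexdigest()

def register_model(name, version, artifact_bytes, training_data_ref,
                   user, role, audit_log):
    """Record a model version, enforcing role-based access and logging the event."""
    if role not in ALLOWED_REGISTER_ROLES:
        # Denied attempts are also audited, per the full-audit-trail requirement.
        audit_log.append({"event": "register_denied", "user": user,
                          "model": f"{name}:{version}"})
        raise PermissionError(f"role {role!r} may not register models")
    record = {
        "name": name,
        "version": version,                     # explicit model versioning
        "sha256": fingerprint(artifact_bytes),  # reproducibility check
        "training_data": training_data_ref,     # lineage back to the dataset
        "registered_by": user,
        "registered_at": datetime.now(timezone.utc).isoformat(),
    }
    audit_log.append({"event": "register", "user": user,
                      "model": f"{name}:{version}"})
    return record

audit = []
rec = register_model("credit-risk", "1.4.0", b"model-weights",
                     "s3://example-bucket/train.parquet",
                     "alice", "ml-engineer", audit)
print(rec["sha256"][:12], audit[-1]["event"])
```

In a real high-compliance deployment these records would live in a tamper-evident store behind an identity provider; the sketch only shows how the individual requirements compose.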