|title=PRIMO.ai
|titlemode=append
|keywords=ChatGPT, artificial, intelligence, machine, learning, GPT-4, GPT-5, NLP, NLG, NLC, NLU, models, data, singularity, moonshot, Sentience, AGI, Emergence, Explainable, TensorFlow, Google, Nvidia, Microsoft, Azure, Amazon, AWS, Hugging Face, OpenAI, Meta, LLM, metaverse, assistants, agents, digital twin, IoT, Transhumanism, Immersive Reality, Generative AI, Conversational AI, Perplexity, Bing, You, Bard, Ernie, prompt Engineering, LangChain, Video/Image, Vision, End-to-End Speech, Synthesize Speech, Speech Recognition, Stanford, MIT
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools

<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-4GCWLBVJ7T"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'G-4GCWLBVJ7T');
</script>
}}
[https://www.youtube.com/results?search_query=ai+Governance YouTube]
[https://www.quora.com/search?q=ai%20Governance ...Quora]
[https://www.google.com/search?q=ai+Governance ...Google search]
[https://news.google.com/search?q=ai+Governance ...Google News]
[https://www.bing.com/news/search?q=ai+Governance&qft=interval%3d%228%22 ...Bing News]
* [[Risk, Compliance and Regulation]] ... [[Ethics]] ... [[Privacy]] ... [[Law]] ... [[AI Governance]] ... [[AI Verification and Validation]]
* [[Analytics]] ... [[Visualization]] ... [[Graphical Tools for Modeling AI Components|Graphical Tools]] ... [[Diagrams for Business Analysis|Diagrams]] & [[Generative AI for Business Analysis|Business Analysis]] ... [[Requirements Management|Requirements]] ... [[Loop]] ... [[Bayes]] ... [[Network Pattern]]
* [[Cybersecurity]] ... [[Open-Source Intelligence - OSINT|OSINT]] ... [[Cybersecurity Frameworks, Architectures & Roadmaps|Frameworks]] ... [[Cybersecurity References|References]] ... [[Offense - Adversarial Threats/Attacks|Offense]] ... [[National Institute of Standards and Technology (NIST)|NIST]] ... [[U.S. Department of Homeland Security (DHS)|DHS]] ... [[Screening; Passenger, Luggage, & Cargo|Screening]] ... [[Law Enforcement]] ... [[Government Services|Government]] ... [[Defense]] ... [[Joint Capabilities Integration and Development System (JCIDS)#Cybersecurity & Acquisition Lifecycle Integration|Lifecycle Integration]] ... [[Cybersecurity Companies/Products|Products]] ... [[Cybersecurity: Evaluating & Selling|Evaluating]]
* [[Policy]] ... [[Policy vs Plan]] ... [[Constitutional AI]] ... [[Trust Region Policy Optimization (TRPO)]] ... [[Policy Gradient (PG)]] ... [[Proximal Policy Optimization (PPO)]]
* [[Data Science]] ... [[Data Governance|Governance]] ... [[Data Preprocessing|Preprocessing]] ... [[Feature Exploration/Learning|Exploration]] ... [[Data Interoperability|Interoperability]] ... [[Algorithm Administration#Master Data Management (MDM)|Master Data Management (MDM)]] ... [[Bias and Variances]] ... [[Benchmarks]] ... [[Datasets]]
* [[Data Quality]] ... [[AI Verification and Validation|validity]], [[Evaluation - Measures#Accuracy|accuracy]], [[Data Quality#Data Cleaning|cleaning]], [[Data Quality#Data Completeness|completeness]], [[Data Quality#Data Consistency|consistency]], [[Data Quality#Data Encoding|encoding]], [[Data Quality#Zero Padding|padding]], [[Data Quality#Data Augmentation, Data Labeling, and Auto-Tagging|augmentation, labeling, auto-tagging]], [[Data Quality#Batch Norm(alization) & Standardization|normalization, standardization]], and [[Data Quality#Imbalanced Data|imbalanced data]]
* [[Architectures]] for AI ... [[Generative AI Stack]] ... [[Enterprise Architecture (EA)]] ... [[Enterprise Portfolio Management (EPM)]] ... [[Architecture and Interior Design]]
* [[Strategy & Tactics]] ... [[Project Management]] ... [[Best Practices]] ... [[Checklists]] ... [[Project Check-in]] ... [[Evaluation]] ... [[Evaluation - Measures|Measures]]
* [https://www.cio.com/article/3328495/tackling-artificial-intelligence-using-architecture.html Tackling artificial intelligence using architecture | Daniel Lambert - CIO]
= AI Governance =
{| class="wikitable" style="width: 550px;"
||
<youtube>3T7Gpwhtc6Q</youtube>
<b>Shahar Avin–AI Governance
</b><br>Why Companies Should be Leading on AI Governance by Jade Leung from EA Global 2018: London. Centre for Effective Altruism
|}
<youtube>XxmYOT_ZUeI</youtube>
<b>Keep your AI under Control - Governance of AI
</b><br>Shahar is a senior researcher at the Center for the Study of Existential Risk in Cambridge. In his past life he was a Google engineer; these days he spends most of his time thinking about how to prevent the risks that could arise if companies like Google deploy powerful AI systems, by leading AI Governance role-playing workshops (https://intelligencerising.org/).

Transcript & Audio: https://theinsideview.ai/shahar
|}
|}<!-- B -->
<youtube>bSTYiIgjgrk</youtube>
<b>Fireside Chat: AI governance | Markus Anderljung | Ben Garfinkel | EA Global: Virtual 2020
</b><br>Markus Anderljung and Ben Garfinkel discuss how they got into the field of AI governance and how the field has developed over the past few years. They discuss the question, "How sure are we about this AI stuff?", and finish with an update on GovAI's latest research and how to pursue a career in AI governance. Markus is a Project Manager at the Centre for the Governance of AI ("GovAI"). He is focused on growing GovAI and making their research relevant to important stakeholders. He has a background in history and philosophy of science, with a focus on evidence-based policy and philosophy of economics. Before joining GovAI, Markus was the Executive Director of Effective Altruism Sweden.

Ben is a Research Fellow at the Future of Humanity Institute and a DPhil student at Oxford’s Department of Politics and International Relations. Ben’s research interests include the security and [[privacy]] implications of artificial intelligence, the causes of interstate war, and the methodological challenge of forecasting and reducing technological risks. He previously earned degrees in Physics and in Mathematics and Philosophy from Yale University.
|}
|}<!-- B -->
<b>Model Governance and Explainable AI
</b><br>This meetup was recorded in Washington, D.C. on May 22nd, 2019. We are thrilled to host Nick Schmidt and Dr. Bryce Stephens of BLDS Partners for an informed discussion about machine learning for high-impact and highly-regulated real-world applications. Our panelists will address policy, regulatory, and technical concerns regarding the use of AI for automated decision-making in areas like credit lending and employment. We'll also leave lots of time for audience questions. The discussion will be moderated by Patrick Hall of H2O.ai. Presenters: Nick Schmidt, Director and Head of the AI/ML Innovation Practice, BLDS LLC; Dr. Bryce Stephens, Director, BLDS LLC; Patrick Hall, Senior Director of Product, H2O.ai. Bios:

Nicholas Schmidt is a Partner and the A.I. Practice Leader at BLDS, LLC. In these roles, Nick specializes in the application of statistics and economics to questions of law, regulatory compliance, and best practices in model governance. His work involves developing techniques that allow his clients to make their A.I. models fairer and more inclusive. He has also helped his clients understand and implement methods that open “black-box” A.I. models, enabling a clearer understanding of A.I.’s decision-making process.

Bryce Stephens provides economic research, econometric analysis, and compliance advisory services, with a specific focus on issues related to consumer financial protection, such as the Equal Credit Opportunity Act (ECOA), and emerging analytical methods. Prior to joining BLDS, Dr. Stephens spent over seven years as an economist and Section Chief in the Office of Research at the Consumer Financial Protection Bureau. At the Bureau, he led a team of economists and analysts that conducted analysis and supported policy [[development]] on fair-lending-related supervisory exams, enforcement matters, rulemakings, and other policy initiatives. Before joining the Bureau, Dr. Stephens served as an economic litigation consultant, conducting research and econometric analysis across a broad range of practice areas including: fair lending and consumer finance; labor, employment, and earnings; product liability; and healthcare.

Patrick Hall is senior director for data science products at H2O.ai, where he focuses mainly on model interpretability and model management. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Prior to joining H2O.ai, Patrick held global customer-facing roles and research and [[development]] roles at SAS Institute.
|}
|<!-- M -->
||
<youtube>k0jF-UMC1b4</youtube>
<b>AI [[Ethics]], Policy, and Governance at Stanford - Day One
</b><br>Join the Stanford Institute for Human-Centered Artificial Intelligence (HAI) via livestream on Oct. 28-29 for our 2019 fall conference on AI [[Ethics]], Policy, and Governance. With experts from academia, industry, civil society, and government, we’ll explore critical and emerging issues around understanding and guiding AI’s human and societal impact to benefit humanity. The program starts at 15 minutes, 30 seconds.
|}
|}<!-- B -->
Latest revision as of 14:56, 18 September 2023