AI Governance




Shahar Avin–AI Governance
Why Companies Should Be Leading on AI Governance, a talk by Jade Leung from EA Global 2018: London (Centre for Effective Altruism).

Keep your AI under Control - Governance of AI
Shahar is a senior researcher at the Centre for the Study of Existential Risk in Cambridge. In his past life he was a Google engineer; now he spends most of his time thinking about how to prevent the risks that could occur if companies like Google end up deploying powerful AI systems, in part by leading AI governance role-playing workshops (https://intelligencerising.org/).

Transcript & Audio: https://theinsideview.ai/shahar

CPDP 2019: AI Governance: role of legislators, tech companies and standard bodies.
Organised by the Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg. Chair: Mark Cole, University of Luxembourg (LU). Moderator: Erik Valgaeren, Stibbe (BE). Speakers: Alain Herrmann, National Commission for Data Protection (LU); Christian Wagner, University of Nottingham (UK); Jan Schallaböck, iRights/ISO (DE); Janna Lingenfelder, IBM/ISO (DE). AI calls for a "coordinated action plan", as recently stated by the European Commission. With its societal and ethical implications, it is a matter of general impact across sectors, going beyond security and trustworthiness or the creation of a regulatory framework. Hence this panel intends to address the topic of AI governance: whether such governance is needed and, if so, how to ensure its consistency. It will also discuss whether existing structures and bodies are adequate to deal with such governance, or if we perhaps need to think about creating new structures and mandating them with this task. Where do we stand and where are we heading in terms of how we are collectively dealing with the soon-to-be almost ubiquitous phenomenon of AI? Do we need AI governance? If so, who should be in charge of it? Is there a need to ensure consistency of such governance? What are the risks? Do we know them, and are we in the right position to address them? Are existing structures and bodies sufficient to address these issues, or do we perhaps need to create new ones?

Fireside Chat: AI governance | Markus Anderljung | Ben Garfinkel | EA Global: Virtual 2020
Markus Anderljung and Ben Garfinkel discuss how they got into the field of AI governance and how the field has developed over the past few years. They discuss the question, "How sure are we about this AI stuff?", and finish with an update on GovAI's latest research and how to pursue a career in AI governance. Markus is a Project Manager at the Centre for the Governance of AI ("GovAI"). He is focused on growing GovAI and making their research relevant to important stakeholders. He has a background in history and philosophy of science, with a focus on evidence-based policy and philosophy of economics. Before joining GovAI, Markus was the Executive Director of Effective Altruism Sweden. Ben is a Research Fellow at the Future of Humanity Institute and a DPhil student at Oxford’s Department of Politics and International Relations. Ben’s research interests include the security and privacy implications of artificial intelligence, the causes of interstate war, and the methodological challenge of forecasting and reducing technological risks. He previously earned degrees in Physics and in Mathematics and Philosophy from Yale University.

Model Governance and Explainable AI
This meetup was recorded in Washington, D.C. on May 22nd, 2019. We are thrilled to host Nick Schmidt and Dr. Bryce Stephens of BLDS Partners for an informed discussion about machine learning for high-impact and highly regulated real-world applications. Our panelists will address policy, regulatory, and technical concerns regarding the use of AI for automated decision-making in areas like credit lending and employment. We'll also leave lots of time for audience questions. The discussion will be moderated by Patrick Hall of H2O.ai.

Presenters: Nick Schmidt, Director and Head of the AI/ML Innovation Practice, BLDS LLC; Dr. Bryce Stephens, Director, BLDS LLC; Patrick Hall, Senior Director of Product, H2O.ai.

Bios: Nicholas Schmidt is a Partner and the A.I. Practice Leader at BLDS, LLC. In these roles, Nick specializes in the application of statistics and economics to questions of law, regulatory compliance, and best practices in model governance. His work involves developing techniques that allow his clients to make their A.I. models fairer and more inclusive. He has also helped his clients understand and implement methods that open "black-box" A.I. models, enabling a clearer understanding of A.I.'s decision-making process. Bryce Stephens provides economic research, econometric analysis, and compliance advisory services, with a specific focus on issues related to consumer financial protection, such as the Equal Credit Opportunity Act (ECOA), and emerging analytical methods. Prior to joining BLDS, Dr. Stephens spent over seven years as an economist and Section Chief in the Office of Research at the Consumer Financial Protection Bureau. At the Bureau, he led a team of economists and analysts that conducted analysis and supported policy development on fair lending supervisory exams, enforcement matters, rulemakings, and other policy initiatives. Before joining the Bureau, Dr. Stephens served as an economic litigation consultant, conducting research and econometric analysis across a broad range of practice areas, including fair lending and consumer finance; labor, employment, and earnings; product liability; and healthcare. Patrick Hall is Senior Director for data science products at H2O.ai, where he focuses mainly on model interpretability and model management. Patrick is also currently an adjunct professor in the Department of Decision Sciences at George Washington University, where he teaches graduate classes in data mining and machine learning. Prior to joining H2O.ai, Patrick held global customer-facing roles and research and development roles at SAS Institute.

AI Ethics, Policy, and Governance at Stanford - Day One
Join the Stanford Institute for Human-Centered Artificial Intelligence (HAI) via livestream on Oct. 28-29 for our 2019 fall conference on AI Ethics, Policy, and Governance. With experts from academia, industry, civil society, and government, we’ll explore critical and emerging issues around understanding and guiding AI’s human and societal impact to benefit humanity. The program starts at 15 minutes, 30 seconds.

What is Enterprise AI Model Governance? [Applied AI ML in Business] AI ML DL Introduction
Enterprise Machine Learning Model Governance, or Enterprise AI Governance, will be an important topic in the next few years. Along with AI governance within an enterprise, we need an end-to-end AI governance and machine learning model governance operation. Everything about applied artificial intelligence and machine learning in the real world. Mind Data Intelligence is Brian Ka Chan - Applied AI Strategist, Technology/Data/Analytics Executive, ex-Oracle Architect, ex-SAP Specialist. "Artificial intelligence for everyone" is my vision for the channel, which will also cover fintech, smart cities, and other cutting-edge technologies. The goal of the channel is to share AI and machine learning knowledge, expand common sense, and demystify AI myths. We want everyone from all walks of life to understand artificial intelligence.

AI Model Governance in a High Compliance Industry
Model governance defines a collection of best practices for data science – versioning, reproducibility, experiment tracking, automated CI/CD, and others. Within a high-compliance setting where the data used for training or inference contains protected health information (PHI) or similarly sensitive data, additional requirements such as strong identity management, role-based access control, approval workflows, and a full audit trail are added. This webinar summarizes requirements and best practices for establishing a high-productivity data science team within a high-compliance environment. It then demonstrates how these requirements can be met using John Snow Labs' Healthcare AI Platform.
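The requirements listed above (versioning, reproducibility via data hashing, role-based access control, approval workflows, and an audit trail) can be sketched in a few lines. The following is a minimal illustrative sketch only, assuming an in-memory registry; the class, role, and user names are hypothetical and do not reflect any vendor's API, including John Snow Labs'.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    name: str
    version: str
    data_hash: str          # hash of the training data, for reproducibility
    approved: bool = False  # set True only via the approval workflow

class ModelRegistry:
    # Role-based access control: which roles may perform each action.
    PERMISSIONS = {
        "register": {"data_scientist"},
        "approve": {"model_risk_officer"},
    }

    def __init__(self):
        self._models = {}
        self.audit_log = []  # append-only audit trail of every action

    def _audit(self, user, action, target):
        self.audit_log.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "action": action,
            "target": target,
        })

    def _check(self, user, role, action):
        # Denied attempts are audited too, then rejected.
        if role not in self.PERMISSIONS[action]:
            self._audit(user, f"denied:{action}", "-")
            raise PermissionError(f"role '{role}' may not {action}")

    def register(self, user, role, name, version, training_data: bytes):
        self._check(user, role, "register")
        record = ModelRecord(
            name, version, hashlib.sha256(training_data).hexdigest()
        )
        self._models[(name, version)] = record
        self._audit(user, "register", f"{name}:{version}")
        return record

    def approve(self, user, role, name, version):
        self._check(user, role, "approve")
        self._models[(name, version)].approved = True
        self._audit(user, "approve", f"{name}:{version}")

registry = ModelRegistry()
registry.register("alice", "data_scientist", "credit_risk", "1.0", b"training bytes")
registry.approve("bob", "model_risk_officer", "credit_risk", "1.0")
print(len(registry.audit_log))  # two audited actions so far
```

Separating the role that registers a model from the role that approves it mirrors the approval workflows described above: no single person can both ship and sign off on a model, and every attempt, allowed or denied, lands in the audit trail.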