* [http://www.cio.com/article/3328495/tackling-artificial-intelligence-using-architecture.html Tackling artificial intelligence using architecture | Daniel Lambert - CIO]

= AI Governance =
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>3ZJg-2D2QIA</youtube>
<b>CPDP 2019: AI Governance: role of legislators, tech companies and standard bodies.
</b><br>Organised by the Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg. Chair: Mark Cole, University of Luxembourg (LU). Moderator: Erik Valgaeren, Stibbe (BE). Speakers: Alain Herrmann, National Commission for Data Protection (LU); Christian Wagner, University of Nottingham (UK); Jan Schallaböck, iRights/ISO (DE); Janna Lingenfelder, IBM/ISO (DE). AI calls for a “coordinated action plan”, as recently stated by the European Commission. With its societal and ethical implications, it is a matter of general impact across sectors, going beyond security and trustworthiness or the creation of a regulatory framework. Hence this panel addresses the topic of AI governance: whether such governance is needed and, if so, how to ensure its consistency. It also discusses whether existing structures and bodies are adequate to deal with such governance, or whether we perhaps need to think about creating new structures and mandating them with this task. Where do we stand, and where are we heading, in terms of how we are collectively dealing with the soon-to-be almost ubiquitous phenomenon of AI? Do we need AI governance? If so, who should be in charge of it? Is there a need to ensure consistency of such governance? What are the risks? Do we know them, and are we in the right position to address them? Are existing structures and bodies sufficient to address these issues, or do we perhaps need to create new ones?
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>XxmYOT_ZUeI</youtube>
<b>Keep your AI under Control - Governance of AI
</b><br>Dolf van der Haven - Artificial Intelligence (AI) is becoming widespread and will soon reach the mainstream. With its increasing capabilities, however, how do we ensure that AI keeps doing what we want it to do? What governance frameworks, standards and methods do we have to control AI so that it stays within the bounds of what it was designed for? This presentation looks at the governance and management of AI, including applicable ISO standards, ethics and risks. Join BrightTALK's LinkedIn Group for BI & Analytics! http://bit.ly/BrightTALKBI
|}
|}<!-- B -->

{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>AVDIQvJVhso</youtube>
<b>HH1
</b><br>BB1
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>_PH5NQqlYQ8</youtube>
<b>HH2
</b><br>BB2
|}
|}<!-- B -->

{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>h4bRMYQE0Os</youtube>
<b>HH3
</b><br>BB3
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>h4bRMYQE0Os</youtube>
<b>HH4
</b><br>BB4
|}
|}<!-- B -->
