Difference between revisions of "Law Enforcement"
<b>How AI Could Reinforce Biases In The Criminal Justice System
</b><br>Increasingly, algorithms and machine learning are being implemented at various touch points throughout the criminal justice system, from deciding where to deploy police officers to aiding in bail and sentencing decisions. The question is, will this technology make the system fairer for minorities and low-income residents, or will it simply amplify our human biases? We all know humans are imperfect. We are subject to biases and stereotypes, and when these come into play in the criminal justice system, the most disadvantaged communities end up suffering.
It's easy to imagine that there's a better way, that one day we'll find a tool that can make neutral, dispassionate decisions about policing and punishment. Some think that day has already arrived. Around the country, police departments and courtrooms are turning to artificial intelligence algorithms to help them decide everything from where to deploy police officers to whether to release defendants on bail. Supporters believe that the technology will lead to increased objectivity, ultimately creating safer communities. Others, however, say that the data fed into these algorithms is encoded with human bias, meaning the technology will simply reinforce historical disparities. Learn more about the ways in which communities, police officers, and judges across the U.S. are using these algorithms to make decisions about public safety and people's lives.
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>E_vzxAtoH9Q</youtube>
<b>CPDP 2020: Regulating Artificial Intelligence in Criminal Justice?
</b><br>MODERATOR: Juraj Sajfert
SPEAKERS: Katalin Ligeti, University of Luxembourg (LU); Anna Moscibroda, DG JUST (EU); Lani Cossette, Microsoft (BE); Frank Schuermans, Supervisory Body for Police Information (BE)
Panel Description: AI can make predictions about where, when, and by whom crimes are likely to be committed. AI can also estimate how likely it is that a suspect, defendant, or convict will flee or commit further crimes. Against the backdrop that AI enables predictive policing and predictive justice, what should the EU’s legal and policy responses be, in particular after the adoption of the Artificial Intelligence Ethics Guidelines? One approach is to count on the vitality of recently adopted data protection laws, in particular the Law Enforcement Directive (EU) 2016/680. Another approach would be to launch a regulatory reform process, either inside or outside the classical data protection realm. This panel will look at the usefulness and reliability of AI for criminal justice and will critically assess the different regulatory avenues the new European Commission might consider.
- How does the idea of “trustworthy AI” translate into the area of criminal law?
- Should we not ban the use of predictive policing systems or the use of AI in criminal law cases, on the basis of ethics?
- Does the new European Commission plan to propose legislation in this area? If yes, what would be the objectives of such new laws? Should the actors leading such a reform be different from the ones that were leading the EU data protection reform?
- Is it possible to develop predictive justice and predictive policing, and still respect the requirements of the GDPR and Directive (EU) 2016/680?
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>134huBl7MAA</youtube>
<b>Artificial Intelligence: The World According to AI | Targeted by Algorithm (Ep1) | The Big Picture
</b><br>Artificial intelligence is already here.
There's a lot of debate and hype about AI, and it has tended to focus on the extreme possibilities of a technology still in its infancy. From self-aware computers and killer robots taking over the world, to a fully automated world where humans are made redundant by machines, the brave new world of artificial intelligence is prophesied by some to be a doomed, scary place, no place for people. For others, AI is ushering in great technological advances for humanity, helping the world communicate, manufacture, trade, and innovate faster, longer, better. But in between these competing utopian and dystopian visions, AI is allowing new ways of maintaining an old order. It is being used across public and private spheres to make decisions about the lives of millions of people around the world, and sometimes those decisions can mean life or death. "Communities, particularly vulnerable communities, children, people of colour, women are often characterised by these systems, in quite misrepresentative ways," says Safiya Umoja Noble, author of the book Algorithms of Oppression. In episode one of The Big Picture: The World According to AI, we chart the evolution of artificial intelligence from its post-World War II origins and dissect the mechanisms by which existing prejudices are built into the very systems that are supposed to be free of human bias. We shed a harsh light on computerised targeting everywhere from foreign drone warfare to civilian policing. In the UK, we witness the trialling of revolutionary new facial recognition technology by the London Metropolitan Police Service. We examine how these technologies, which are far from proven, are being sold as new policing solutions to maintain order in some of the world's biggest cities. The Big Picture: The World According to AI explores how artificial intelligence is being used today, and what it means to those on its receiving end. Watch Episode 2 here: https://youtu.be/dtDZ-a57a7k
|}
|}<!-- B -->
Revision as of 23:04, 3 November 2020
Youtube search... ...Google search