{{#seo:
|title=PRIMO.ai
|titlemode=append
|keywords=ChatGPT, artificial, intelligence, machine, learning, GPT-4, GPT-5, NLP, NLG, NLC, NLU, models, data, singularity, moonshot, Sentience, AGI, Emergence, Explainable, TensorFlow, Google, Nvidia, Microsoft, Azure, Amazon, AWS, Hugging Face, OpenAI, Meta, LLM, metaverse, assistants, agents, digital twin, IoT, Transhumanism, Immersive Reality, Generative AI, Conversational AI, Perplexity, Bing, You, Bard, Ernie, prompt engineering, LangChain, Video/Image, Vision, End-to-End Speech, Synthesize Speech, Speech Recognition, Stanford, MIT
|description=Helpful resources for your journey with artificial intelligence; videos, articles, techniques, courses, profiles, and tools
 
<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-4GCWLBVJ7T"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'G-4GCWLBVJ7T');
</script>
 
}}
[https://www.youtube.com/results?search_query=Politics+election+vote+campaign+democracy+artificial+intelligence+deep+machine+learning YouTube search...]
[https://www.google.com/search?q=Politics+election+vote+campaign+democracy+artificial+intelligence+deep+machine+learning  ...Google search]
  
* [[Prescriptive Analytics|Prescriptive &]] [[Predictive Analytics]] ... [[Operations & Maintenance|Predictive Operations]] ... [[Forecasting]] ... [[Excel#Excel - Forecasting|with Excel]] ... [[Market Trading]] ... [[Sports Prediction]] ... [[Marketing]] ... [[Politics]]
* [[Policy]] ... [[Policy vs Plan]] ... [[Constitutional AI]] ... [[Trust Region Policy Optimization (TRPO)]] ... [[Policy Gradient (PG)]] ... [[Proximal Policy Optimization (PPO)]]
* [[Humor]] ... [[Writing/Publishing]] ... [[Storytelling]] ... [[AI Generated Broadcast Content|Broadcast]] ... [[Journalism|Journalism/News]] ... [[Podcasts]] ... [[Books, Radio & Movies - Exploring Possibilities]]
 
* [[Case Studies]]
** [[Social Science]]
** [[Economics]]
* [[Cybersecurity]] ... [[Open-Source Intelligence - OSINT |OSINT]] ... [[Cybersecurity Frameworks, Architectures & Roadmaps | Frameworks]] ... [[Cybersecurity References|References]] ... [[Offense - Adversarial Threats/Attacks| Offense]] ... [[National Institute of Standards and Technology (NIST)|NIST]] ... [[U.S. Department of Homeland Security (DHS)| DHS]] ... [[Screening; Passenger, Luggage, & Cargo|Screening]] ... [[Law Enforcement]] ... [[Government Services|Government]] ... [[Defense]] ... [[Joint Capabilities Integration and Development System (JCIDS)#Cybersecurity & Acquisition Lifecycle Integration| Lifecycle Integration]] ... [[Cybersecurity Companies/Products|Products]] ... [[Cybersecurity: Evaluating & Selling|Evaluating]]
* [[Video/Image]] ... [[Vision]] ... [[Enhancement]] ... [[Fake]] ... [[Reconstruction]] ... [[Colorize]] ... [[Occlusions]] ... [[Predict image]] ... [[Image/Video Transfer Learning]] ... [[Art]] ... [[Photography]]
* [[What is Artificial Intelligence (AI)? | Artificial Intelligence (AI)]] ... [[Generative AI]] ... [[Machine Learning (ML)]] ... [[Deep Learning]] ... [[Neural Network]] ... [[Reinforcement Learning (RL)|Reinforcement]] ... [[Learning Techniques]]
* [[Conversational AI]] ... [[ChatGPT]] | [[OpenAI]] ... [[Bing/Copilot]] | [[Microsoft]] ... [[Gemini]] | [[Google]] ... [[Claude]] | [[Anthropic]] ... [[Perplexity]] ... [[You]] ... [[phind]] ... [[Ernie]] | [[Baidu]]
** [https://minnesotareformer.com/2023/01/31/what-chatgpt-can-teach-us-about-election-misinformation/ What ChatGPT can teach us about election misinformation | Max Hailperin]
** [https://www.washingtonpost.com/politics/2023/01/23/chatgpt-is-now-writing-legislation-is-this-future/ ChatGPT is now writing legislation. Is this the future? | Cristiano Lima - The Washington Post] ... help write a bill aimed at restricting it: ChatGPT, the artificial intelligence [[Assistants#Chatbot | Chatbot]].
** [https://spectrum.ieee.org/ai-lobbyist AI Goes to K Street: ChatGPT Turns Lobbyist | Edd Gent - IEEE Spectrum] ... Automated influence campaigns could spell trouble for society ... able to predict 75 percent of the time whether a summary of a U.S. congressional bill was relevant to a specific company. What’s more, the AI was able to then draft a letter to the bill’s sponsor arguing for changes to the legislation.
* [https://www.japantimes.co.jp/commentary/2023/09/12/world/ai-international-politics/ Will generative AI hold power in international relations? | Makoto Shiono - The Japan Times] ... can be applied to information warfare and other domains, we can see the risk
* [https://www.wsj.com/tech/ai/fcc-bans-ai-artificial-intelligence-voices-in-robocalls-texts-3ea20d9f FCC Bans AI Voices in Unsolicited Robocalls | Ginger Adams Otis - Wall Street Journal (WSJ)] ... Ruling comes amid investigation of AI-generated robocalls in New Hampshire mimicking President Biden’s voice
Artificial intelligence (AI) has the potential to support politics in a number of ways, including:

* <b>Predicting election outcomes</b>. AI can be used to analyze large amounts of data, such as voter demographics, past election results, and social media activity, to predict the outcome of elections. This information can be used by political campaigns to target their messages and resources more effectively.
* <b>Personalizing political communication</b>. AI can be used to personalize political communication, such as ads and emails, to the individual voter. This can be done by using data about the voter's demographics, interests, and online activity to tailor the message to their specific needs and concerns.
* <b>Automating tasks</b>. AI can be used to automate tasks that are currently performed by humans, such as voter registration, campaign fundraising, and constituent outreach. This can free up human resources to focus on other tasks, such as policy development and constituent engagement.
* <b>Analyzing policy options</b>. AI can be used to analyze large amounts of data to identify potential policy options and their likely impact. This information can be used by policymakers to make more informed decisions about public policy.
* <b>Overseeing elections</b>. AI can be used to oversee elections to prevent voter fraud and other irregularities. This can be done by using AI to monitor voter registration rolls, verify voter identification, and count votes.

However, there are also some potential risks associated with the use of AI in politics, such as:

* <b>Bias</b>. AI systems can be biased if they are trained on data that is itself biased. This could lead to AI systems making unfair or discriminatory decisions.
* <b>Privacy concerns</b>. AI systems collect and analyze large amounts of data about individuals. This data could be used to track individuals' political activity or to target them with political ads.
* <b>Misinformation</b>. AI systems could be used to create and spread misinformation about political candidates or issues. This could undermine public trust in the political process.
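The first bullet above, predicting outcomes from voter demographics and past behavior, is at heart a supervised-learning problem. The following is a minimal sketch in plain NumPy, on entirely synthetic data: the feature names (age, past turnout, online engagement) and their effect sizes are invented for illustration, not drawn from any real electorate, and the model is an ordinary logistic regression fitted by gradient descent.

```python
# Minimal sketch (synthetic data): predicting a binary vote outcome from
# voter features with logistic regression, implemented in plain NumPy.
# Feature names and effect sizes are hypothetical, not from a real electorate.
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Hypothetical features: age (years), past turnout rate (0-1), online engagement (0-1)
age = rng.uniform(18, 90, n)
turnout = rng.uniform(0, 1, n)
engagement = rng.uniform(0, 1, n)
# Design matrix with an intercept column; features roughly centered/scaled
X = np.column_stack([np.ones(n), (age - 50) / 20, turnout - 0.5, engagement - 0.5])

# Synthetic "ground truth": older, higher-turnout voters lean toward candidate A
logit = 0.8 * X[:, 1] + 2.0 * X[:, 2] - 1.0 * X[:, 3]
y = (logit + rng.normal(0, 0.5, n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit by batch gradient descent on the logistic log-loss
w = np.zeros(X.shape[1])
for _ in range(500):
    grad = X.T @ (sigmoid(X @ w) - y) / n
    w -= 1.0 * grad

acc = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

A real campaign pipeline would add held-out evaluation and far richer features, but the shape, a labeled voter file in, a propensity score out, is the same.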
 
  
 
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>YbvHDtpbr68</youtube>
<b>Election tech: How political campaigns use data and AI
</b><br>Chris Wilson, CEO of WPA Intelligence and former head of analytics for the Ted Cruz campaign, explains how election data modeling works.
 
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>yKITbszPPx0</youtube>
<b>Election tech: The future of politics is AI, big data, and social media
</b><br>Strategist and election tech pioneer Joe Trippi shares the history of political data modeling, what business and politics have in common, and why the future of both depends on machine learning.
 
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>SZY6_3nS4WM</youtube>
<b>Tech Talk: Machine Intelligence and Political Campaigns
</b><br>In this video, Mike Williams combines his years of government experience in Washington, DC with his passion for machine intelligence. Mike provides a baseline understanding of both political campaigns and machine intelligence before diving deeper into Bayesian machine learning and its application in collaborative filtering -- one of the methodologies for recommendation systems such as Netflix -- as a means to better target individual voters, as well as groups of voters. Watch to learn how the political campaign has become one of the most advanced and efficient startups of all time.
 
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>xDnAFFWZYME</youtube>
<b>How will AI impact the year of elections?
</b><br>As nations globally approach a critical juncture with 68 countries holding elections, the rise of AI-generated synthetic media presents both challenges and opportunities for the electoral process. In this event, experts from multidisciplinary backgrounds explored the multifaceted impact of artificial intelligence on political elections.

They address the fine balance between the need for regulation and the drive for innovation in AI, alongside the media's crucial role in ensuring accurate and fair political discourse in the face of deepfakes and disinformation.
 
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>NFDYimMNcSY</youtube>
 
<b>Christopher Wylie explains how AI can manipulate political discourse
</b><br>Cambridge Analytica whistleblower Christopher Wylie speaks to The Democracy Project on how artificial intelligence affects what users see on social media. He references an error that appeared on YouTube, where the Notre Dame fire was mistakenly affiliated with 9/11. Special thanks to SFU Public Square for arranging this interview. Check out our website: https://thedemocracyproject.ca
 
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>auSNAzKGzdg</youtube>
<b>The Electome: Where political journalism meets AI
</b><br>Built at the Laboratory for Social Machines (LSM) with support from Twitter and Knight Foundation, The Electome is a data project aimed at improving journalism and electoral politics in the social-media age. During the 2016 US presidential election, The Electome used machine learning, network science, and other artificial-intelligence techniques to track the public response to the campaign, with a focus on policy issues. Dozens of stories were published with news organizations including The Washington Post, CNN, and Vice. The Electome was also an official partner of the Commission on Presidential Debates, providing data and suggested questions to the moderators. One of its analytic tools was the focus of an exhibit at the Newseum in Washington, D.C. LSM is the only science lab in the world with access to Twitter’s full output of approximately 500 million tweets per day.
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>A5wFejK6U-0</youtube>
<b>The Age of Machine Learning Politics | Brett Horvath
</b><br>Ignite Talks is a fast-paced geek event started in 2006 by Brady Forrest and Bre Pettis. Since the first Ignite took place in Seattle around 10 years ago, Ignite has become an international phenomenon, with Ignite events produced in Helsinki, Tunisia, Paris, New York City and over 350 other locations in between.
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>7Um3iTU6KUQ</youtube>
<b>Deep Learning Lecture 9: Using [[Keras]] to Predict Political Parties (June 2019 update)
</b><br>We'll talk in more depth about how to use [[Keras]] with different kinds of classification problems, how to integrate [[Keras]] with [[Python#scikit-learn| scikit-learn]] and k-fold [[Cross-Validation]], and do an exercise where you predict the political parties of congressmen based only on their votes on 16 different issues.
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>DwUDXOP_TCw</youtube>
<b>Using Artificial Intelligence to predict election outcomes
</b><br>Erin Kelly, CEO of Advanced Symbolics, explains how the company's AI platform (known as "Polly") can predict trends in market research and even election outcomes based on sample riding information gathered from online sources.
|}
|}<!-- B -->
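The lecture above pairs [[Keras]] with k-fold [[Cross-Validation]] to predict parties from 16 recorded votes. The cross-validation mechanics can be sketched without any deep-learning stack; the version below is plain NumPy on a synthetic 16-issue vote matrix (both "parties" and their vote tendencies are invented) and swaps the neural network for a simple nearest-centroid classifier, so only the k-fold evaluation loop mirrors the workflow the lecture describes.

```python
# Sketch of k-fold cross-validation on a synthetic "16 votes -> party" task.
# Parties and vote probabilities are invented; only the CV mechanics mirror
# the Keras/scikit-learn workflow mentioned in the lecture.
import numpy as np

rng = np.random.default_rng(42)
n, n_issues, k = 300, 16, 5

# Party 0 tends to vote yes on the first 8 issues, party 1 on the last 8
party = rng.integers(0, 2, n)
p_yes = np.where(
    np.arange(n_issues) < 8,
    np.where(party[:, None] == 0, 0.8, 0.3),
    np.where(party[:, None] == 0, 0.3, 0.8),
)
votes = (rng.random((n, n_issues)) < p_yes).astype(float)

def nearest_centroid_predict(train_X, train_y, test_X):
    # Classify by distance to each party's mean vote vector
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in (0, 1)])
    dists = ((test_X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return dists.argmin(axis=1)

# k-fold CV: hold each fold out once, train on the rest
idx = rng.permutation(n)
folds = np.array_split(idx, k)
scores = []
for i in range(k):
    test_idx = folds[i]
    train_idx = np.concatenate([folds[j] for j in range(k) if j != i])
    pred = nearest_centroid_predict(votes[train_idx], party[train_idx], votes[test_idx])
    scores.append((pred == party[test_idx]).mean())

print(f"mean CV accuracy over {k} folds: {np.mean(scores):.2f}")
```

Swapping the centroid classifier for a Keras model changes only the inner fit/predict call; the fold bookkeeping is identical.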
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>XnDbsm_USrg</youtube>
<b>How will government and politics be transformed by technology?
</b><br>The Institute for Government was delighted to welcome Jamie Susskind to discuss his new book Future Politics: Living Together in a World Transformed by Tech. In his book, he argues that those who control digital technology – mainly technology firms and the state – will increasingly use data to control our lives. He suggests that the government must take advantage of the digital age to strengthen democracy. In conversation with Gavin Freeguard, Programme Director and Head of Data and Transparency at the Institute for Government, they discussed what these issues mean for policymakers.
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>_m2dRDQEC1A</youtube>
<b>Could deepfakes weaken democracy? | The Economist
</b><br>Videos can be faked to make people say things they never actually said. This poses dangers for democracy. Can you spot ALL the deep fake interviews in this film?
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>PazlKN_FuWQ</youtube>
<b>The Rise of the Weaponized AI Propaganda Machine by Berit Anderson
</b><br>Silicon Valley spent the last 10 years building digital addiction machines. And during the 2016 U.S. election, [[Government Services#Russia|Russia]], Trump and their allies hijacked them. All across Europe, elections have been targeted by [[Government Services#Russia|Russian]] propagandists determined to aggravate underlying cultural divides. Alt-right data & AI strategists at Cambridge Analytica are targeting democracies in India, Australia, Kenya, and South America. As platforms struggle to determine their role in the new emerging world order, our biggest strategic advantage as technologists has become fluency in three areas: the motivations and behavior of the international actors at play, a systems understanding of the political and economic drivers of technology, and a deep focus on how to protect the tools that we build every day. Berit Anderson is the CEO and Editor-in-Chief of Scout.ai, which creates media to help you anticipate the impacts of technology. She frequently speaks about her work on Weaponized AI Propaganda and its impact on international democracy. In 2017 she won a debate with the former prime minister of Sweden about whether the internet is a force for democracy.
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>ZvrzTqqy028</youtube>
<b>How AI Inference Threats Might Influence the Outcome of the 2020 Election
</b><br>Karel Baloun, Software Architect and Entrepreneur, UC Berkeley
Ken Chang, Cybersecurity Researcher, UC Berkeley
Matthew Holmes, Cybersecurity Student, UC Berkeley

This session reviews what we learned from the 2016 US election, which was interfered with by [[Government Services#Russia|Russia]], and inspects how inference threats have interfered with elections across the international community. The session will also analyze the patterns of disinformation and misinformation used in past elections and how artificial intelligence might be applied to influence the outcome of the 2020 election. Pre-requisites: understanding of Data [[Privacy]] Engineering concepts and a general information technology background with an interest in inference threats.
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>oNqG3YOOHtA</youtube>
<b>Controlling the Narrative with Artificial Intelligence
</b><br>Speaker: Dustin Heart. Abstract: Social media has had a tremendous impact on this current election season, and campaigns are scrambling to adapt. Behind the scenes, many companies have emerged with methods to engage potential voters, and in many ways have been too obvious in their attempts to be seen. This talk highlights many of the strategies that these campaigners are using, counter-methods to detect this (to help remove said content from social media), and the realities and [[ethics]] of these applications.
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>_b-kXQo-KjY</youtube>
<b>Stanford HAI 2019 Fall Conference - AI, Democracy and Elections
</b><br>Renee DiResta, Research Manager, Stanford Internet Observatory; Andy Grotto, Research Scholar, Cyber Policy Center, and Director, Program on Geopolitics, Technology and Governance, Stanford University; Nathaniel Persily, James B. McClatchy Professor of Law, Stanford Law School, Stanford University. Moderator: Michael McFaul, Ken Olivier and Angela Nomellini Professor of International Studies in Political Science, Director and Senior Fellow at the Freeman Spogli Institute for International Studies, and the Peter and Helen Bing Senior Fellow at the Hoover Institution, Stanford University.
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>6VGqyWvg-Oc</youtube>
<b>Big Data and Its Impact on Democracy
</b><br>Martin Hilbert discussed the impact of big data, computational analysis and machine learning on the democratic process. In this conversation, Hilbert addressed both challenges and opportunities presented by emerging big data technologies. Speaker Biography: Martin Hilbert is an associate professor of communication at the University of California, Davis. Prior to his current position, Hilbert created and coordinated the Information Society program of the United Nations Regional Commission for Latin America and the Caribbean. In his 15 years as a United Nations economic affairs officer, he delivered hands-on technical assistance in the field of digital [[development]] to presidents, government experts, legislators, diplomats, non-governmental organizations and companies in more than 20 countries.
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>Ah9H4-QSBLo</youtube>
<b>Don’t blame bots, fake news is spread by humans | Sinan Aral | TEDxCERN
</b><br>Fake news disrupts not only society but also the economy and the deep roots of democracy. Sometimes its impact can even be measured in terms of people killed by the misinformation it spreads.
Sinan Aral, a scientist, entrepreneur and investor with a PhD in IT economics, applied econometrics and statistics, has run some of the largest randomised experiments in digital social networks like [[Meta|Facebook]] and Twitter to measure the impact of persuasive messages and peer influence on our economy, our society and our public health. Having conducted the most extensive longitudinal study of false news spread on Twitter, which was published on the cover of Science this March, Aral has proven that false news diffuses farther, faster, deeper, and more broadly than the truth online. But why? The answer will leave you astonished, as the main cause for such an effective spread of false news is not bots, it’s…us. So, how can we be sure that something is real? As well as teaching at MIT as a Professor of IT & Marketing and Professor in the Institute for Data, Systems and Society, Aral is currently a founding partner at Manifest Capital and on the Advisory Boards of the Alan Turing Institute, the British National Institute for Data Science, in London and C6 Bank, the first all-digital bank of Brazil, in Sao Paulo.
|}
|}<!-- B -->
{|<!-- T -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>C7kGOt12sZg</youtube>
<b>Discussion: Do deepfakes pose a threat to democracy?
</b><br>This is a recording of a breakaway discussion from the "Riga StratCom Dialogue 2019" that took place in Riga, 11 June. Speakers: Mr James McLeod-Hatch (Research Director, M&C Saatchi World Services, Great Britain), Dr Gabriele Rizzo (Lead Scientist, Strategic Innovation & Principal Futurist, Leonardo, Italy) and Prof Dr Bjorn Ommer (Professor, Heidelberg University, Germany). The discussion is moderated by Ms Margo Gontar, journalist and independent media expert on disinformation, Ukraine. Machine learning technologies can be used to distort reality and twist facts like never before. Artificial intelligence enables video and audio productions to offer a completely fabricated ‘reality’. What is more, it has never been so easy or cheap to produce and share such content. The question is to what extent these so-called “deep fakes” will influence our societies and political processes. Can “deep fakes” tip the scale in tight elections and open the doors of high public office to someone who really doesn’t meet the requirements of the job? Do we see party organisations employing robots to make phone calls in the hopes of getting more votes? Fundamentally, is democracy threatened by “deep fakes”?
|}
|<!-- M -->
| valign="top" |
{| class="wikitable" style="width: 550px;"
||
<youtube>Qf-k0iW_Kys</youtube>
<b>Megan Smith — The (Inclusive) Future of Work, AI, and Democracy
</b><br>How can we make sure the technology we build includes everyone who makes up the United States? Megan Smith, 3rd US CTO under President Obama and now the CEO of Shift7, discusses what equitable AI looks like at the 5th Annual Lesbians Who Tech + Allies NY Summit.
|}
|}<!-- B -->

= AI is Transforming Campaign Strategies =

There have been several notable instances where AI has been used in the political arena, primarily as a tool to assist human candidates and engage with voters. Here are a few examples:

<b>Campaign Tools and Data Analysis:</b> AI has been increasingly used in political campaigns for data analysis, voter targeting, and strategizing. Companies and political consultants use AI algorithms to analyze voter data, predict voting behaviors, and tailor campaign messages. This is particularly evident in recent US presidential campaigns, where sophisticated data analytics have played a crucial role.

<b>AI Chatbots for Voter Engagement:</b> Several political campaigns have employed AI chatbots to interact with voters, answer questions, and gather feedback. These chatbots can provide information about a candidate's platform, help voters find their polling places, and even assist with voter registration processes.
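At their simplest, voter-engagement chatbots like these are retrieval systems: match an incoming question to a known intent and return a canned answer. A toy keyword-overlap sketch follows; every intent and answer in it is an invented placeholder, not any real campaign's content.

```python
# Toy sketch of a campaign FAQ chatbot using bag-of-words keyword overlap.
# Intents and answers are invented placeholders, not any real campaign's content.
import re

FAQ = {
    "where is my polling place": "Enter your address on the election office website to find your polling place.",
    "how do i register to vote": "You can register online or by mail; deadlines vary by state.",
    "what is the candidate's position on education": "The education platform is summarized on the campaign's Issues page.",
}

def tokens(text: str) -> set:
    """Lowercase bag-of-words tokenization."""
    return set(re.findall(r"[a-z']+", text.lower()))

def answer(question: str) -> str:
    """Return the canned answer whose stored question shares the most words."""
    q = tokens(question)
    best = max(FAQ, key=lambda k: len(q & tokens(k)))
    if not q & tokens(best):  # nothing matched at all
        return "Sorry, I don't know -- a campaign volunteer will follow up."
    return FAQ[best]

print(answer("How do I register?"))
```

Production systems replace the word-overlap score with embedding similarity or a language model, but the retrieval shape (question in, matched intent, canned answer out) is the same.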

<b>Policy Simulation and Decision Support:</b> While not a candidate, AI systems are being used by think tanks and government agencies to simulate the outcomes of various policy decisions. These AI tools help policymakers understand the potential impacts of their decisions and craft more effective policies.

<b>AI in Debates and Public Discourse:</b> Some organizations have experimented with AI systems to participate in public debates or provide real-time fact-checking during political events. For example, IBM's Watson has been used in various capacities to analyze debates and provide insights based on vast amounts of data.

= Role of AI in Governance =

AI candidates have started to appear in various parts of the world, reflecting a growing interest in integrating artificial intelligence into political processes.

* SAM (New Zealand): SAM is an AI chatbot created by entrepreneur Nick Gerritsen. Launched in 2017, SAM's goal is to engage with voters and understand their concerns. While not an official candidate, SAM represents a new approach to political engagement and aims to enhance democratic participation by providing a platform for citizens to interact with AI on political issues.
* AI-powered Campaign in Moscow (Russia): In 2019, a Russian AI named "Alisa" was used in a municipal election campaign in Moscow. Although Alisa was not an official candidate, the AI assisted a human candidate by analyzing voter preferences and helping shape the campaign's strategy. This highlighted the potential for AI to play a supporting role in political campaigns.
* XiaoIce (China): Developed by Microsoft, XiaoIce is an AI chatbot that has been used for various applications, including political engagement. While not a candidate, XiaoIce has participated in public discussions and debates, demonstrating the potential for AI to engage in political discourse and provide information to the public.

These examples illustrate the diverse ways AI is being explored and utilized in political contexts, from direct candidacies to supporting roles in campaigns and voter engagement. As AI technology continues to evolve, its influence on politics is likely to grow, prompting further discussions about the role of AI in democratic processes.

== AI Mayor (Japan) ==
In a notable event in Japanese political history, a candidate who promised to use artificial intelligence (AI) to guide his decision-making process ran for mayor of Tama City in 2018. Michihito Matsuda, a real person, spearheaded this unconventional campaign, asserting that AI could provide impartial and balanced decisions for the city's governance. To emphasize his commitment to this innovative approach, Matsuda used an AI avatar on his campaign posters, capturing the imagination and serious consideration of many voters. This unique strategy highlighted the potential of AI in political leadership and governance, appealing to those who believe technology could enhance objectivity and efficiency in public administration. Despite the innovative approach and significant public interest, Matsuda's AI-backed campaign secured about 4,000 votes, placing him third in the mayoral race. Although he did not win, the campaign's impact was significant, demonstrating a growing willingness among the electorate to explore the integration of AI in politics. The idea of an AI-influenced governance model, as championed by Matsuda, opens up a dialogue on the future of political processes and the potential role of technology in achieving fairer and more balanced decision-making. There has been no indication of whether Matsuda or his AI avatar plans to run in future elections, but the campaign has undoubtedly left a lasting impression on the political landscape in Japan.

<youtube>KomiEpEik-Q</youtube>

== AI Steve (UK) ==
The upcoming UK general elections are stirring up excitement with the introduction of AI Steve, a groundbreaking AI candidate aiming to become Britain's first AI Member of Parliament (MP). AI Steve is the digital embodiment of Steven Endacott, a businessman who has embraced technology to engage with voters on a massive scale. This AI persona can manage up to 10,000 conversations simultaneously, allowing voters to ask questions and voice their concerns directly. This unprecedented use of AI in politics raises intriguing questions about the future of political representation and the role of technology in governance. With AI Steve, voters are prompted to consider whether they are comfortable with an AI candidate representing their interests and how such a candidate would function within the traditional structures of Parliament. If elected, it remains unclear whether AI Steve or Steven Endacott would physically take the seat in Parliament, posing a unique dilemma about the intersection of human and artificial intelligence in political leadership. This scenario challenges conventional notions of representation and accountability, compelling voters to reflect on the implications of an AI-driven political landscape. As AI continues to permeate various facets of life, the prospect of an AI MP like Steve could be a pivotal moment in redefining how democratic processes evolve in the digital age.

<youtube>aQacXaa4qPI</youtube>

Latest revision as of 20:40, 19 June 2024

YouTube search... ...Google search


Artificial intelligence (AI) has the potential to support politics in a number of ways, including:

  • Predicting election outcomes. AI can be used to analyze large amounts of data, such as voter demographics, past election results, and social media activity, to predict the outcome of elections. This information can be used by political campaigns to target their messages and resources more effectively.
  • Personalizing political communication. AI can be used to personalize political communication, such as ads and emails, to the individual voter. This can be done by using data about the voter's demographics, interests, and online activity to tailor the message to their specific needs and concerns.
  • Automating tasks. AI can be used to automate tasks that are currently performed by humans, such as voter registration, campaign fundraising, and constituent outreach. This can free up human resources to focus on other tasks, such as policy development and constituent engagement.
  • Analyzing policy options. AI can be used to analyze large amounts of data to identify potential policy options and their likely impact. This information can be used by policymakers to make more informed decisions about public policy.
  • Overseeing elections. AI can be used to oversee elections to prevent voter fraud and other irregularities. This can be done by using AI to monitor voter registration rolls, verify voter identification, and count votes.
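The first bullet above — predicting outcomes from voter data — can be sketched in a few lines. The example below is a minimal illustration using scikit-learn on entirely synthetic, hypothetical voter features (age, income bracket, past turnout rate); real campaign models draw on far richer data and much more careful validation.

```python
# Toy sketch: predict individual vote choice from synthetic voter data.
# All features and the label-generating rule here are invented for
# illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.integers(18, 90, n),   # age (hypothetical)
    rng.integers(1, 6, n),     # income bracket 1-5 (hypothetical)
    rng.random(n),             # past turnout rate (hypothetical)
])
# Fabricated ground truth: vote choice driven mostly by turnout, plus noise
y = ((X[:, 2] + 0.01 * (X[:, 0] - 50) / 40
      + rng.normal(0, 0.3, n)) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]  # per-voter support probability
print(f"test accuracy: {model.score(X_test, y_test):.2f}")
```

A campaign would use the per-voter probabilities, not just the accuracy, to decide where to spend persuasion and turnout resources.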

However, there are also some potential risks associated with the use of AI in politics, such as:

  • Bias. AI systems can be biased if they are trained on data that is itself biased. This could lead to AI systems making unfair or discriminatory decisions.
  • Privacy concerns. AI systems collect and analyze large amounts of data about individuals. This data could be used to track individuals' political activity or to target them with political ads.
  • Misinformation. AI systems could be used to create and spread misinformation about political candidates or issues. This could undermine public trust in the political process.


Big Data, AI and Cambridge Analytica
ReedSmithLLP

Election tech: How political campaigns use data and AI
Chris Wilson, CEO WPA Intelligence and former head of analytics for the Ted Cruz campaign, explains how election data modeling works.

Election tech: The future of politics is AI, big data, and social media
Strategist and election tech pioneer Joe Trippi shares the history of political data modeling, what business and politics have in common, and why the future of both depends on machine learning.

Tech Talk: Machine Intelligence and Political Campaigns
In this video, Mike Williams combines his years of government experience in Washington, DC with his passion for machine intelligence. Mike provides a baseline understanding of both political campaigns and machine intelligence before diving deeper into Bayesian machine learning and its application in collaborative filtering -- one of the methodologies for recommendation systems such as Netflix -- as a means to better target individual voters, as well as groups of voters. Watch to learn how the political campaign has become one of the most advanced and efficient startups of all time.
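The collaborative-filtering idea from the talk can be illustrated with a toy example: given a small voter-by-issue interest matrix (all values invented for illustration), score unseen issues for a voter by weighting the interests of similar voters. Production systems, as the talk notes, use Bayesian models rather than this simple cosine-similarity sketch.

```python
# Toy voter-targeting sketch via collaborative filtering.
# Rows are voters, columns are issues; 1 = expressed interest, 0 = unknown.
# The matrix is entirely hypothetical.
import numpy as np

R = np.array([
    [1, 1, 0, 0, 1],
    [1, 1, 1, 0, 0],
    [0, 0, 1, 1, 0],
    [0, 1, 0, 1, 1],
], dtype=float)

def recommend(R, voter):
    """Suggest one unknown issue for a voter, weighted by voter similarity."""
    norms = np.linalg.norm(R, axis=1, keepdims=True)
    sims = (R @ R[voter]) / (norms.ravel() * norms[voter] + 1e-9)
    sims[voter] = 0.0                  # exclude the voter themselves
    scores = sims @ R                  # similarity-weighted issue scores
    scores[R[voter] > 0] = -np.inf     # only rank issues not yet seen
    return int(np.argmax(scores))

print("suggest issue", recommend(R, voter=0))
```

The suggested issue would then drive which message that voter receives — the same mechanic a recommender like Netflix uses to pick which title to surface.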

How will AI impact the year of elections?
As nations globally approach a critical juncture with 68 countries partaking in elections, the rise of AI-generated synthetic media presents both challenges and opportunities for the electoral process. In this event, experts from multidisciplinary backgrounds explored the multifaceted impact of artificial intelligence on political elections.

They address the fine balance between the need for regulation and the drive for innovation in AI, alongside the media’s crucial role in ensuring accurate and fair political discourse in the face of deepfakes and disinformation.

Christopher Wylie explains how AI can manipulate political discourse
Cambridge Analytica whistleblower Christopher Wylie speaks to The Democracy Project about how artificial intelligence affects what users see on social media. He references an error that appeared on YouTube, where the Notre Dame fire was mistakenly associated with 9/11. Special thanks to SFU Public Square for arranging this interview. Check out our website: https://thedemocracyproject.ca

Can We Replace Politicians With Machines? | Alvin Carpio | TEDxOTHRegensburg
In his talk, he discusses whether it is possible to replace politicians with machines, and touches upon the wider implications of automation and machine-learning on humanity. Alvin has spent the last decade campaigning on issues of social justice, human rights, and public policy. Earlier this year he was listed on Forbes 30 under 30 EMEA for his work. In 2016 he founded The Fourth Group, a global community creating a new politics for the fourth industrial revolution (https://www.thefourthgroup.org).

A bold idea to replace politicians | César Hidalgo
César Hidalgo has a radical suggestion for fixing our broken political system: automate it! In this provocative talk, he outlines a bold idea to bypass politicians by empowering citizens to create personalized AI representatives that participate directly in democratic decisions. Explore a new way to make collective decisions and expand your understanding of democracy.

The Electome: Where political journalism meets AI
Built at the Laboratory for Social Machines (LSM) with support from Twitter and Knight Foundation, The Electome is a data project aimed at improving journalism and electoral politics in the social-media age. During the 2016 US presidential election, The Electome used machine learning, network science, and other artificial-intelligence techniques to track the public response to the campaign, with a focus on policy issues. Dozens of stories were published with news organizations including The Washington Post, CNN, and Vice. The Electome was also an official partner of the Commission on Presidential Debates, providing data and suggested questions to the moderators. One of its analytic tools was the focus of an exhibit at the Newseum in Washington, D.C. LSM is the only science lab in the world with access to Twitter’s full output of approximately 500 million tweets per day.

The Age of Machine Learning Politics | Brett Horvath
Ignite Talks is a fast-paced geek event started in 2006 by Brady Forrest and Bre Pettis. Since the first Ignite took place in Seattle around 10 years ago, Ignite has become an international phenomenon, with Ignite events produced in Helsinki, Tunisia, Paris, New York City and over 350 other locations in between.

Deep Learning Lecture 9: Using Keras to Predict Political Parties (June 2019 update)
We'll talk in more depth about how to use Keras with different kinds of classification problems, how to integrate Keras with scikit-learn and k-fold Cross-Validation, and do an exercise where you predict the political parties of congressmen based only on their votes on 16 different issues.
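The exercise described above — predicting a legislator's party from their votes on 16 issues — can be sketched without Keras using scikit-learn alone. The vote data below is synthetic (members vote with a fabricated party line 80% of the time); the actual lecture uses real congressional voting records and a Keras network wrapped for k-fold cross-validation.

```python
# Toy sketch of the lecture's exercise: classify party from 16 yes/no votes.
# Parties, party lines, and votes are all fabricated for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_members, n_issues = 400, 16
party = rng.integers(0, 2, n_members)            # hypothetical party label
party_line = rng.integers(0, 2, (2, n_issues))   # each party's position
agree = rng.random((n_members, n_issues)) < 0.8  # vote with party 80% of time
votes = np.where(agree, party_line[party], 1 - party_line[party])

clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, votes, party, cv=5)  # 5-fold CV, as in the lecture
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Because party discipline is strong in the synthetic data, even a linear model separates the parties well; the lecture's Keras network tackles the noisier real dataset the same way.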

Using Artificial Intelligence to predict election outcomes
Erin Kelly, CEO of Advanced Symbolics, explains how the company's AI platform (known as "Polly") can predict trends in market research and even election outcomes based on sample riding information from online information.

How will government and politics be transformed by technology?
The Institute for Government was delighted to welcome Jamie Susskind to discuss his new book Future Politics: Living together in a World Transformed by Tech. In his book, he argues that those who control digital technology – mainly technology firms and the state – will increasingly use data to control our lives. He suggests that the government must take advantage of the digital age to strengthen democracy. In conversation with Gavin Freeguard, Programme Director and Head of Data and Transparency, at the Institute for Government, they discussed what these issues mean for policymakers.

Could deepfakes weaken democracy? | The Economist
Videos can be faked to make people say things they never actually said. This poses dangers for democracy. Can you spot ALL the deep fake interviews in this film?

The Rise of the Weaponized AI Propaganda Machine by Berit Anderson
Silicon Valley spent the last 10 years building digital addiction machines. And during the 2016 U.S. Election, Russia, Trump and their allies hijacked them. All across Europe, elections have been targeted by Russian propagandists determined to aggravate underlying cultural divides. Alt-right data & AI strategists at Cambridge Analytica are targeting democracies in India, Australia, Kenya, and South America. As platforms struggle to determine their role in the new emerging world order, our biggest strategic advantage as technologists has become fluency in three areas: The motivations and behavior of the international actors at play, a systems understanding of the political and economic drivers of technology, and a deep focus on how to protect the tools that we build every day. Berit Anderson is the CEO and Editor-in-Chief of Scout.ai, which creates media to help you anticipate the impacts of technology. She frequently speaks about her work on Weaponized AI Propaganda and its impact on international democracy. In 2017 she won a debate with the former prime minister of Sweden about whether the internet is a force for democracy.

How AI Inference Threats Might Influence the Outcome of 2020 Election
Karel Baloun, Software Architect and Entrepreneur, UC Berkeley; Ken Chang, Cybersecurity Researcher, UC Berkeley; Matthew Holmes, Cybersecurity Student, UC Berkeley

What have we learned from Russian interference in the 2016 US election? This session will inspect how inference threats have interfered with elections across the international community. It will also analyze the patterns of disinformation and misinformation used in past elections and how artificial intelligence might be applied to influence the outcome of the 2020 election. Pre-requisites: an understanding of data privacy engineering concepts and a general information technology background, with an interest in inference threats.

Controlling the Narrative with Artificial Intelligence
Speaker: Dustin Heart. Abstract: Social media has had a tremendous impact on the current election season, and campaigns are scrambling to adapt. Behind the scenes, many companies have emerged with methods to engage potential voters, and in many ways have been too obvious in their attempts to be seen. This talk highlights many of the strategies these campaigners are using, counter-methods to detect them (to help remove such content from social media), and the realities and ethics of these applications.

Stanford HAI 2019 Fall Conference - AI, Democracy and Elections
Renee DiResta, Research Manager, Stanford Internet Observatory; Andy Grotto, Research Scholar, Cyber Policy Center, and Director, Program on Geopolitics, Technology and Governance, Stanford University; Nathaniel Persily, James B. McClatchy Professor of Law, Stanford Law School, Stanford University. Moderator: Michael McFaul, Ken Olivier and Angela Nomellini Professor of International Studies in Political Science, Director and Senior Fellow at the Freeman Spogli Institute for International Studies, and the Peter and Helen Bing Senior Fellow at the Hoover Institution, Stanford University

Big Data and Its Impact on Democracy
Martin Hilbert discussed the impact of big data, computational analysis and machine learning on the democratic process. In this conversation, Hilbert addressed both challenges and opportunities presented by emerging big data technologies. Speaker Biography: Martin Hilbert is an associate professor of communication at the University of California, Davis. Prior to his current position, Hilbert created and coordinated the Information Society program of the United Nations Regional Commission for Latin America and the Caribbean. In his 15 years as a United Nations economic affairs officer, he delivered hands-on technical assistance in the field of digital development to presidents, government experts, legislators, diplomats, non-governmental organizations and companies in more than 20 countries.

Don’t blame bots, fake news is spread by humans | Sinan Aral | TEDxCERN
Fake news disrupts not only society but also the economy and the deep roots of democracy. Sometimes its impact can even be measured in people killed by the misinformation it spreads. Sinan Aral, a scientist, entrepreneur and investor with a PhD in IT economics, applied econometrics and statistics, has run some of the largest randomised experiments in digital social networks like Facebook and Twitter to measure the impact of persuasive messages and peer influence on our economy, our society and our public health. Having conducted the most extensive longitudinal study of false news spread on Twitter, which was published on the cover of Science this March, Aral has shown that false news diffuses farther, faster, deeper, and more broadly than the truth online. But why? The answer will leave you astonished, as the main cause of such an effective spread of false news is not bots, it's…us. So, how can we be sure that something is real? As well as teaching at MIT as a Professor of IT & Marketing and a Professor in the Institute for Data, Systems and Society, Aral is currently a founding partner at Manifest Capital and serves on the advisory boards of the Alan Turing Institute, the British national institute for data science in London, and C6 Bank, the first all-digital bank of Brazil, in Sao Paulo.

Discussion: Do deepfakes pose a threat to democracy?
This is a recording of a breakaway discussion from the "Riga StratCom Dialogue 2019" that took place in Riga, 11 June. Speakers: Mr James McLeod-Hatch (Research Director, M&C Saatchi World Services, Great Britain), Dr Gabriele Rizzo (Lead Scientist, Strategic Innovation & Principal Futurist, Leonardo, Italy) and Prof Dr Bjorn Ommer (Professor, Heidelberg University, Germany). The discussion is moderated by Ms Margo Gontar - Journalist and independent media expert on disinformation, Ukraine. Machine learning technologies can be used to distort reality and twist facts like never before. Artificial Intelligence enables video and audio productions to offer a completely fabricated ‘reality’. What is more, it has never been so easy or cheap to produce and share such content. The question is, to what extent these so-called “deep fakes” will influence our societies and political processes. Can “deep fakes” tip the scale in tight elections and open the doors of high public office to someone who really doesn’t meet the requirements of the job? Do we see party organisations employing robots to make phone calls in the hopes of getting more votes? Fundamentally, is democracy threatened by “deep fakes”?

Megan Smith — The (Inclusive) Future of Work, AI, and Democracy
How can we make sure the technology we build includes everyone who makes up the United States? Megan Smith, 3rd US CTO under President Obama and now CEO of Shift7, discusses what equitable AI looks like at the 5th Annual Lesbians Who Tech + Allies NY Summit.

AI is Transforming Campaign Strategies

There have been several notable instances where AI has been used in the political arena, primarily as a tool to assist human candidates and engage with voters. Here are a few examples:

Campaign Tools and Data Analysis: AI has been increasingly used in political campaigns for data analysis, voter targeting, and strategizing. Companies and political consultants use AI algorithms to analyze voter data, predict voting behaviors, and tailor campaign messages. This is particularly evident in recent US presidential campaigns, where sophisticated data analytics have played a crucial role.

AI Chatbots for Voter Engagement: Several political campaigns have employed AI chatbots to interact with voters, answer questions, and gather feedback. These chatbots can provide information about a candidate's platform, help voters find their polling places, and even assist with voter registration processes.
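A bare-bones illustration of such a chatbot is a keyword match against a small FAQ. Every question and answer below is invented for illustration; deployed campaign bots use large language models and far more careful safeguards.

```python
# Minimal sketch of a voter-engagement FAQ bot: pick the FAQ entry
# sharing the most words with the voter's question. All entries are
# hypothetical placeholders.
faq = {
    "where do i vote": "Enter your address at the election office site to find your polling place.",
    "how do i register": "Registration forms are available online and at public libraries.",
    "what is the platform": "The candidate's platform is summarized on the campaign website.",
}

def answer(question: str) -> str:
    """Return the FAQ answer with the largest word overlap, if any."""
    words = set(question.lower().replace("?", "").split())
    best = max(faq, key=lambda k: len(words & set(k.split())))
    return faq[best] if words & set(best.split()) else "Sorry, I don't know."

print(answer("Where do I go to vote?"))
```

Real deployments add intent classification, escalation to human staff, and logging so the campaign can gather the feedback mentioned above.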

Policy Simulation and Decision Support: While not a candidate, AI systems are being used by think tanks and government agencies to simulate the outcomes of various policy decisions. These AI tools help policymakers understand the potential impacts of their decisions and craft more effective policies.

AI in Debates and Public Discourse: Some organizations have experimented with AI systems to participate in public debates or provide real-time fact-checking during political events. For example, IBM's Watson has been used in various capacities to analyze debates and provide insights based on vast amounts of data.

Role of AI in Governance

AI candidates have started to appear in various parts of the world, reflecting a growing interest in integrating artificial intelligence into political processes.

  • SAM (New Zealand): SAM is an AI chatbot created by entrepreneur Nick Gerritsen. Launched in 2017, SAM's goal is to engage with voters and understand their concerns. While not an official candidate, SAM represents a new approach to political engagement and aims to enhance democratic participation by providing a platform for citizens to interact with AI on political issues.
  • AI-powered Campaign in Moscow (Russia): In 2019, a Russian AI named "Alisa" was used in a municipal election campaign in Moscow. Although Alisa was not an official candidate, the AI assisted a human candidate by analyzing voter preferences and helping shape the campaign's strategy. This highlighted the potential for AI to play a supporting role in political campaigns.
  • XiaoIce (China): Developed by Microsoft, XiaoIce is an AI chatbot that has been used for various applications, including political engagement. While not a candidate, XiaoIce has participated in public discussions and debates, demonstrating the potential for AI to engage in political discourse and provide information to the public.

These examples illustrate the diverse ways AI is being explored and utilized in political contexts, from direct candidacies to supporting roles in campaigns and voter engagement. As AI technology continues to evolve, its influence on politics is likely to grow, prompting further discussions about the role of AI in democratic processes.

AI Mayor (Japan)

In a notable event in Japanese political history, a candidate who promised to use artificial intelligence (AI) to guide his decision-making process ran for mayor of Tama City in 2018. Michihito Matsuda, a real person, spearheaded this unconventional campaign, asserting that AI could provide impartial and balanced decisions for the city's governance. To emphasize his commitment to this innovative approach, Matsuda used an AI avatar on his campaign posters, capturing the imagination and serious consideration of many voters. This unique strategy highlighted the potential of AI in political leadership and governance, appealing to those who believe technology could enhance objectivity and efficiency in public administration. Despite the innovative approach and significant public interest, Matsuda's AI-backed campaign resulted in him securing about 4,000 votes, placing him third in the mayoral race. Although he did not win, the campaign's impact was significant, demonstrating a growing willingness among the electorate to explore the integration of AI in politics. The idea of an AI-influenced governance model, as championed by Matsuda, opens up a dialogue on the future of political processes and the potential role of technology in achieving fairer and more balanced decision-making. There has been no indication whether Matsuda or his AI avatar plans to run in future elections, but the campaign has undoubtedly left a lasting impression on the political landscape in Japan.

AI Steve (UK)

The upcoming UK general elections are stirring up excitement with the introduction of AI Steve, a groundbreaking AI candidate aiming to become Britain's first AI Member of Parliament (MP). AI Steve is the digital embodiment of Steven Endacott, a businessman who has embraced technology to engage with voters on a massive scale. This AI persona can manage up to 10,000 conversations simultaneously, allowing voters to ask questions and voice their concerns directly. This unprecedented use of AI in politics raises intriguing questions about the future of political representation and the role of technology in governance. With AI Steve, voters are prompted to consider whether they are comfortable with an AI candidate representing their interests and how such a candidate would function within the traditional structures of Parliament. If elected, it remains unclear whether AI Steve or Steven Endacott would physically take the seat in Parliament, posing a unique dilemma about the intersection of human and artificial intelligence in political leadership. This scenario challenges conventional notions of representation and accountability, compelling voters to reflect on the implications of an AI-driven political landscape. As AI continues to permeate various facets of life, the prospect of an AI MP like Steve could be a pivotal moment in redefining how democratic processes evolve in the digital age.