Fake
Revision as of 21:33, 30 March 2023
YouTube ... Quora ... Google search ... Google News ... Bing News
- Generative Adversarial Network (GAN)
- Journalism/News
- Capabilities
  - Video/Image ... Vision ... Colorize ... Image/Video Transfer Learning
  - End-to-End Speech ... Synthesize Speech ... Speech Recognition
- Attention Mechanism ... Transformer Model ... Generative Pre-trained Transformer (GPT)
- Generative AI ... OpenAI's ChatGPT ... Perplexity ... Microsoft's Bing ... You ... Google's Bard ... Baidu's Ernie
- The new AI tools spreading fake news in politics and business | Hannah Murphy - Financial Times
- Making Deepfakes Gets Cheaper and Easier Thanks to A.I. | The New York Times
- How deep learning fakes videos (Deepfakes) and how to detect it? | Jonathan Hui - Medium
Disinformation is the deliberate spreading of false narratives through news, email, and social media. Artificial intelligence can generate fakes through the process of generative modeling. Generative models are AI algorithms that learn to generate data similar to a given dataset, which can include images, audio, text, or other types of data.
One popular type of generative model is a Generative Adversarial Network (GAN). GANs consist of two neural networks: a generator and a discriminator. The generator network learns to generate data that is similar to a given dataset, while the discriminator network learns to distinguish between the generated data and real data.
During training, the generator and discriminator networks compete against each other in a game-like process: the generator tries to create data that fools the discriminator into judging it real, while the discriminator tries to tell the real data apart from the generated data. Through this process, the generator network gradually learns to generate data that is increasingly difficult for the discriminator to distinguish from real data.
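The adversarial game above can be sketched end-to-end with a toy one-dimensional example: a linear generator tries to imitate samples from a Gaussian, and a logistic-regression discriminator tries to tell real from fake. This is a minimal sketch, not a production GAN; all parameter names and hyperparameters are illustrative, and real GANs use deep networks with an autodiff framework instead of these hand-derived gradients.

```python
import math
import random

random.seed(0)

def sigmoid(s):
    s = max(-60.0, min(60.0, s))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-s))

# Real data: samples from N(4, 1) -- the distribution the generator must imitate.
REAL_MEAN, REAL_STD = 4.0, 1.0

# Generator G(z) = a*z + b ; Discriminator D(x) = sigmoid(w*x + c)
a, b = 1.0, 0.0   # generator parameters (fakes start centered at 0)
w, c = 0.0, 0.0   # discriminator parameters
lr, batch, steps = 0.05, 64, 2000

for _ in range(steps):
    reals = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(batch)]
    zs = [random.gauss(0.0, 1.0) for _ in range(batch)]
    fakes = [a * z + b for z in zs]

    # Discriminator step: ascend log D(real) + log(1 - D(fake)).
    gw = gc = 0.0
    for x in reals:
        p = sigmoid(w * x + c)
        gw += (1.0 - p) * x      # d/dw of log D(x)
        gc += (1.0 - p)          # d/dc of log D(x)
    for x in fakes:
        p = sigmoid(w * x + c)
        gw += -p * x             # d/dw of log(1 - D(x))
        gc += -p
    w += lr * gw / (2 * batch)
    c += lr * gc / (2 * batch)

    # Generator step: ascend log D(G(z)) (the "non-saturating" generator loss).
    ga = gb = 0.0
    for z, x in zip(zs, fakes):
        p = sigmoid(w * x + c)
        ga += (1.0 - p) * w * z  # chain rule through D and then G
        gb += (1.0 - p) * w
    a += lr * ga / batch
    b += lr * gb / batch

# The generated mean (b) should drift from 0 toward the real mean.
print(f"generated mean ~ {b:.2f} (real mean {REAL_MEAN})")
```

As the generator's output distribution approaches the real one, the discriminator's advantage shrinks and the generator's gradient vanishes, which is the equilibrium the prose above describes.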
While generative models have many useful applications, they can also be used to generate fakes, such as fake images, videos, or text. For example, a GAN could be trained to generate realistic-looking images of people who do not actually exist, or to generate text that mimics the writing style of a particular author. This has raised concerns about the potential use of AI-generated fakes for malicious purposes, such as spreading disinformation or creating fake identities.
Sassy Justice
YouTube search ... Google search
- The creators of South Park have a new weekly deepfake satire show | Karen Hao - MIT Technology Review ... It’s the first example of a recurring production that will rely on deepfakes as part of its core premise.
From South Park's Trey Parker & Matt Stone w/ Peter Serafinowicz
Solving the Problem
YouTube search ... Google search
- Fake News Challenge - Exploring how artificial intelligence technologies could be leveraged to combat fake news | Fake News Challenge (FNC)
- Journalist's Toolbox | Society of Professional Journalists
Emergent
- Craig Silverman
- Verification Handbook - For Disinformation And Media Manipulation | DataJournalism.com
- BuzzFeed News
- Radio Host Craig Silverman Says He Was Fired Mid-Show For Criticizing Trump: 'I See Corruption and Blatant Dishonesty' | Khaleda Rahman
- Emergent: a novel data-set for stance classification | William Ferreira and Andreas Vlachos - a new real-world dataset derived from the digital journalism project Emergent
- Emergent.info Blog
Emergent is part of a research project with the Tow Center for Digital Journalism at Columbia University that focuses on how unverified information and rumor are reported in the media. It aims to develop best practices for debunking misinformation.
Emergent | Craig Silverman and Adam Hooper - A real-time rumor tracker
So how does Emergent work? Silverman and a research assistant comb through social media and news websites using a variety of feeds, alerts, and filters, then enter claims that need debunking into the database and assign what Silverman calls a “truthiness” rating that marks each report as supporting the claim (i.e. stating it to be true), debunking it in some way, or simply repeating it.
At that point, an algorithm takes over and watches the URLs of the stories or posts that Silverman and his assistant entered into the database to see whether the content has changed, that is, been updated with a correction or with evidence suggesting the claim is true or false. If there is enough evidence, the status of the claim is changed, but that decision is always made by a human.
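The monitoring step described above can be sketched as a small change-detector: each watched URL's content is fingerprinted, and a changed fingerprint flags the claim for human review (the "truthiness" rating itself stays manual, as in Emergent's workflow). The names `ClaimTracker` and `content_fingerprint` are illustrative; this is a sketch of the idea, not Emergent's actual implementation.

```python
import hashlib

def content_fingerprint(html: str) -> str:
    # Hash the page content so any edit -- e.g. a published correction --
    # produces a different fingerprint.
    return hashlib.sha256(html.encode("utf-8")).hexdigest()

class ClaimTracker:
    """Watches the pages cited for a claim; flags changed pages for human review."""

    def __init__(self):
        self.sources = {}  # url -> last seen fingerprint

    def watch(self, url: str, html: str) -> None:
        """Register a page as a source for a tracked claim."""
        self.sources[url] = content_fingerprint(html)

    def check(self, url: str, html: str) -> bool:
        """Return True if the page changed since last seen (needs human review)."""
        new = content_fingerprint(html)
        changed = self.sources.get(url) != new
        self.sources[url] = new
        return changed

# Usage: an unchanged page is not flagged; an edited page is.
tracker = ClaimTracker()
tracker.watch("http://example.com/story", "<p>Claim: X happened.</p>")
unchanged = tracker.check("http://example.com/story", "<p>Claim: X happened.</p>")
edited = tracker.check("http://example.com/story", "<p>Correction: X did not happen.</p>")
print(unchanged, edited)  # an edit flips the flag
```

Hashing the raw page makes the detector sensitive to any edit, including trivial ones; a real system would likely normalize boilerplate before fingerprinting, but either way the decision to change a claim's status remains with a human, as the text notes.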