Fake
 
* [http://www.ft.com/content/55a39e92-8357-11ea-b872-8db45d5f6714 The new AI tools spreading fake news in politics and business | Hannah Murphy - Financial Times]
 
* [[Generative AI]] ... [[OpenAI]]'s [[ChatGPT]] ... [[Perplexity]] ... [[Microsoft]]'s [[BingAI]] ... [[You]] ... [[Google]]'s [[Bard]]
 

Revision as of 16:04, 8 March 2023



Disinformation — the deliberate spreading of false narratives through news, email and social media

Sassy Justice


From South Park's Trey Parker & Matt Stone w/ Peter Serafinowicz


Solving the Problem




Emergent

Emergent is part of a research project with the Tow Center for Digital Journalism at Columbia University that focuses on how unverified information and rumor are reported in the media. It aims to develop best practices for debunking misinformation.


Emergent | Craig Silverman and Adam Hooper - A real-time rumor tracker


So how does Emergent work? Silverman and a research assistant comb through social media and news websites using a variety of feeds, alerts and filters, then enter claims that need debunking into the database. Each report is assigned what Silverman calls a "truthiness" rating that marks it as supporting the claim (i.e. stating it to be true), debunking it in some way, or simply repeating it.
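The claim-and-rating workflow described above could be modeled roughly as follows. This is an illustrative Python sketch, not Emergent's actual schema: the `Stance`, `Report`, and `Claim` names and the three rating values are assumptions drawn only from the description in this section.

```python
from dataclasses import dataclass, field
from enum import Enum

class Stance(Enum):
    # Each report either supports the claim, debunks it, or merely repeats it
    SUPPORTING = "supporting"
    DEBUNKING = "debunking"
    REPEATING = "repeating"

@dataclass
class Report:
    url: str        # the story or post making the claim
    stance: Stance  # the human-assigned "truthiness" rating

@dataclass
class Claim:
    text: str
    reports: list = field(default_factory=list)

    def tally(self) -> dict:
        """Count reports by stance, giving a rough picture of the claim's support."""
        counts = {s: 0 for s in Stance}
        for r in self.reports:
            counts[r.stance] += 1
        return counts

# Example: two reports on one tracked claim
claim = Claim("Example rumor circulating online")
claim.reports.append(Report("https://example.com/a", Stance.SUPPORTING))
claim.reports.append(Report("https://example.com/b", Stance.REPEATING))
```

A human enters every report and its stance by hand; the tally only summarizes those judgments, it does not decide the claim's status.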

At that point, an algorithm takes over and watches the URLs of the stories or posts that Silverman and his assistant entered into the database, to see whether the content has changed — that is, been updated with a correction or with evidence that the claim is true or false. If there is enough evidence, the status of the claim is changed, but that decision is always made by a human.
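The URL-watching step could be sketched as a simple change detector: store a fingerprint of each page's content and flag the URL for human review when the fingerprint changes. This is a minimal assumption-laden sketch (the `UrlWatcher` class and hash-based comparison are illustrative, not Emergent's implementation):

```python
import hashlib

class UrlWatcher:
    """Tracks a content fingerprint per URL; a changed fingerprint means the
    page was edited (e.g. a correction was added), so a human should
    re-review the associated claim's status."""

    def __init__(self):
        self.fingerprints = {}  # url -> last seen content hash

    @staticmethod
    def _fingerprint(content: bytes) -> str:
        # A cryptographic hash is a cheap, reliable way to detect any edit
        return hashlib.sha256(content).hexdigest()

    def check(self, url: str, content: bytes) -> bool:
        """Record the latest content; return True if it differs from last seen."""
        fp = self._fingerprint(content)
        changed = url in self.fingerprints and self.fingerprints[url] != fp
        self.fingerprints[url] = fp
        return changed

watcher = UrlWatcher()
watcher.check("https://example.com/story", b"original article text")  # first sighting, nothing to compare
```

Note that the detector only surfaces *that* something changed; deciding whether the change confirms or debunks the claim remains, as the text says, a human judgment.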