Fake
[https://www.bing.com/news/search?q=ai+fake+news+social+media&qft=interval%3d%228%22 ...Bing News]
 
* [[Video/Image]] ... [[Vision]] ... [[Enhancement]] ... [[Fake]] ... [[Reconstruction]] ... [[Colorize]] ... [[Occlusions]] ... [[Predict image]] ... [[Image/Video Transfer Learning]] ... [[Art]] ... [[Photography]]
* [[End-to-End Speech]] ... [[Synthesize Speech]] ... [[Speech Recognition]] ... [[Music]]
* [[Humor]] ... [[Writing/Publishing]] ... [[Storytelling]] ... [[AI Generated Broadcast Content|Broadcast]] ... [[Journalism|Journalism/News]] ... [[Podcasts]] ... [[Books, Radio & Movies - Exploring Possibilities]]
* [[Attention]] Mechanism ... [[Transformer]] ... [[Generative Pre-trained Transformer (GPT)]] ... [[Generative Adversarial Network (GAN)|GAN]] ... [[Bidirectional Encoder Representations from Transformers (BERT)|BERT]]
* [[What is Artificial Intelligence (AI)? | Artificial Intelligence (AI)]] ... [[Generative AI]] ... [[Machine Learning (ML)]] ... [[Deep Learning]] ... [[Neural Network]] ... [[Reinforcement Learning (RL)|Reinforcement]] ... [[Learning Techniques]]
* [[Conversational AI]] ... [[ChatGPT]] | [[OpenAI]] ... [[Bing/Copilot]] | [[Microsoft]] ... [[Gemini]] | [[Google]] ... [[Claude]] | [[Anthropic]] ... [[Perplexity]] ... [[You]] ... [[phind]] ... [[Grok]] | [https://x.ai/ xAI] ... [[Groq]] ... [[Ernie]] | [[Baidu]]
* [[Immersive Reality]] ... [[Metaverse]] ... [[Omniverse]] ... [[Transhumanism]] ... [[Religion]]
* [[Creatives]] ... [[History of Artificial Intelligence (AI)]] ... [[Neural Network#Neural Network History|Neural Network History]] ... [[Rewriting Past, Shape our Future]] ... [[Archaeology]] ... [[Paleontology]]
* [[Reading Material & Glossary|Reading/Glossary]] ... [[Courses & Certifications|Courses/Certs]] ... [[Podcasts]] ... [[Books, Radio & Movies - Exploring Possibilities]] ... [[Help Wanted]]
 
* [https://www.ft.com/content/55a39e92-8357-11ea-b872-8db45d5f6714 The new AI tools spreading fake news in politics and business | Hannah Murphy - Financial Times]
* [https://www.nytimes.com/2023/03/12/technology/deepfakes-cheapfakes-videos-ai.html Making Deepfakes Gets Cheaper and Easier Thanks to A.I. | The New York Times]
* [https://medium.com/@jonathan_hui/how-deep-learning-fakes-videos-deepfakes-and-how-to-detect-it-c0b50fbf7cb9 How deep learning fakes videos (Deepfakes) and how to detect it? | Jonathan Hui - Medium]
* [https://techcrunch.com/2022/08/24/deepfakes-for-all-uncensored-ai-art-model-prompts-ethics-questions/ Deepfakes for all: Uncensored AI art model prompts ethics questions | Kyle Wiggers - TechCrunch]
* [https://www.npr.org/2023/06/08/1181097435/desantis-campaign-shares-apparent-ai-generated-fake-images-of-trump-and-fauci DeSantis campaign shares apparent AI-generated fake images of Trump and Fauci | Shannon Bond - NPR]
* [https://www.defensenews.com/information-warfare/2023/08/01/us-military-targets-deepfakes-misinformation-with-ai-powered-tool/ US military targets deepfakes, misinformation with AI-powered tool | Colin Demarest & Jaime Moore-Carrillo - Defense News]
* [https://www.abc.net.au/news/2023-10-02/ai-tom-hanks-dental-plan-ad-scam/102924118 Actor Tom Hanks warns fans against trusting AI-generated video promoting dental insurance as video begins circulating online | Brianna Morris-Grant - ABC News]
* [https://aimojo.pro/top-10-free-undress-ai-tools-exploring-safe-practices-for-image-manipulation/ Top 15 Free Undress AI Tools 2024: Remove clothes with AI] ... 'Undress AI Apps' are a type of deepfake tool that manipulates images to make subjects appear without clothes.
* [https://www.responsible.ai/a-look-at-global-deepfake-regulation-approaches/ A Look at Global Deepfake Regulation Approaches | Amanda Lawson - Responsible AI Institute (RAI Institute)]
  
  
While generative models have many useful applications, they can also be used to generate fakes, such as fake images, videos, or text. For example, a GAN could be trained to generate realistic-looking images of people who do not actually exist, or to generate text that mimics the writing style of a particular author. This has raised concerns about the potential use of AI-generated fakes for malicious purposes, such as spreading disinformation or creating fake identities.

_____________________________

To rework a famous saying, a fake picture is worth a thousand fake words, and with the increasing democratization of this kind of technology it's going to become harder and harder to trust what we see on the web. As Joshua Rothman notes in the New Yorker, this presents a double-edged sword: not only will people be able to create forgeries to twist the public discourse, public figures will also have plausible deniability for anything they're caught doing on camera. [https://singularityhub.com/2018/12/24/nvidias-fake-faces-are-a-masterpiece-but-have-deeper-implications Nvidia's Fake Faces Are a Masterpiece—But Have Deeper Implications | Edd Gent - SingularityHub]
<youtube>5L2YAIk0vSc</youtube>

<youtube>vJFEMzEATJQ</youtube>

<youtube>qaqRLopz0wA</youtube>

<youtube>IZZQdn89mso</youtube>

<youtube>zAibdueUxkg</youtube>

<youtube>njKP3FqW3Sk</youtube>

<youtube>2edOMMREazo</youtube>

<youtube>DglrYx9F3UU</youtube>

<youtube>dMF2i3A9Lzw</youtube>

<youtube>ghTb2kZSpZE</youtube>

<youtube>ttGUiwfTYvg</youtube>
  
 
Disinformation is the deliberate spreading of false narratives through news, email, and social media. Artificial intelligence can generate fakes through the process of generative modeling. Generative models are AI algorithms that learn to generate data similar to a given dataset, including images, audio, text, and other types of data.

One popular type of generative model is the Generative Adversarial Network (GAN). GANs consist of two neural networks: a generator and a discriminator. The generator network learns to generate data similar to a given dataset, while the discriminator network learns to distinguish between the generated data and real data.

During training, the generator and discriminator compete against each other in a game-like process: the generator attempts to create data that will fool the discriminator into thinking it is real, while the discriminator attempts to correctly tell the real data from the generated data. Through this process, the generator gradually learns to produce data that is increasingly difficult for the discriminator to distinguish from real data.
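The generator-versus-discriminator game described above can be sketched end-to-end on a toy one-dimensional problem. This is a minimal illustration under stated assumptions, not production code: the "generator" is a single affine map a*z + b, the "discriminator" is logistic regression, both are updated with hand-derived gradients of the usual non-saturating GAN objectives, and the target distribution and all hyperparameters are arbitrary choices for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sample_real(n):
    # "Real" data the generator must imitate: a 1-D Gaussian.
    return rng.normal(loc=4.0, scale=1.25, size=n)

# Generator g(z) = a*z + b; discriminator d(x) = sigmoid(w*x + c).
a, b = 1.0, 0.0   # generator parameters
w, c = 0.1, 0.0   # discriminator parameters
lr = 0.03

for step in range(3000):
    z = rng.normal(size=64)        # noise fed to the generator
    fake = a * z + b               # generated samples
    real = sample_real(64)

    # Discriminator step: ascend log d(real) + log(1 - d(fake)),
    # i.e. learn to score real data high and generated data low.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator step: ascend the non-saturating objective log d(fake),
    # i.e. learn to produce samples the discriminator scores as real.
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(size=1000) + b
print(f"generated mean {samples.mean():.2f} (real mean 4.0)")
```

Even this toy shows classic GAN behavior: the generated mean drifts toward the real mean, but the two players oscillate rather than converging cleanly, and the scale parameter tends to collapse (a one-dimensional cousin of mode collapse). Practical GANs replace the affine map and logistic regression with deep networks trained via a framework's automatic differentiation.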

= Sassy Justice =

From South Park's Trey Parker & Matt Stone w/ Peter Serafinowicz


= Solving the Problem =



== Emergent ==

Emergent is part of a research project with the Tow Center for Digital Journalism at Columbia University that focuses on how unverified information and rumor are reported in the media. It aims to develop best practices for debunking misinformation.


Emergent | Craig Silverman and Adam Hooper - A real-time rumor tracker


So how does Emergent work? Silverman and a research assistant comb through social media and news websites using a variety of feeds, alerts and filters, and then enter claims that need debunking into the database and assign what Silverman calls a “truthiness” rating that marks each report as supporting the claim (i.e. stating it to be true), debunking it in some way or simply repeating it.

At that point, an algorithm takes over, and watches the URLs of the stories or posts that Silverman and his assistant entered into the database to see whether the content has been changed — that is, updated with a correction or some evidence that suggests it’s true or false. If there’s enough evidence, the status of the claim is changed, but that decision is always made by a human.
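The human-plus-algorithm workflow described above can be sketched as a small data model: humans enter claims with a per-source "truthiness" stance, an automated watcher re-fingerprints each article's content to detect corrections or updates, and the claim's status is only ever changed by a person. Everything below (class and function names, the stance labels, the example URL and headlines) is a hypothetical illustration of that workflow, not Emergent's actual code.

```python
import hashlib
from dataclasses import dataclass, field

@dataclass
class Source:
    url: str
    stance: str              # "supports", "debunks", or "repeats" the claim
    content_hash: str = ""
    changed: bool = False    # set when the page content changes

@dataclass
class Claim:
    text: str
    status: str = "unverified"   # only a human changes this
    sources: list = field(default_factory=list)

def fingerprint(content: str) -> str:
    return hashlib.sha256(content.encode("utf-8")).hexdigest()

def watch(source: Source, current_content: str) -> None:
    """Automated step: flag a source whose article was updated."""
    h = fingerprint(current_content)
    if source.content_hash and h != source.content_hash:
        source.changed = True    # surfaces the source for human re-review
    source.content_hash = h

def human_review(claim: Claim, new_status: str) -> None:
    """The final call on a claim's status is always made by a human."""
    claim.status = new_status

# Usage: a claim with one article that is later corrected.
claim = Claim("Shark swims down flooded highway")
src = Source("https://example.com/shark-story", stance="repeats")
claim.sources.append(src)

watch(src, "Breaking: shark seen on highway")        # first crawl: baseline
watch(src, "CORRECTION: the shark photo was fake")   # second crawl: changed
if src.changed:
    human_review(claim, "false")
print(claim.status)  # -> false
```

Hashing the whole page is the crudest possible change detector; a real tracker would extract the article body first so that ad rotation or layout tweaks don't trigger false alarms.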