Generative Pre-trained Transformer (GPT)

== Generative Pre-trained Transformer (GPT-3) ==
 
* [http://openai.com/blog/openai-api/ OpenAI API] ...today the API runs models with weights from the GPT-3 family with many speed and throughput improvements. (a minimal call sketch follows this list)
 
* [http://medium.com/@praveengovi.analytics/gpt-3-by-openai-outlook-and-examples-f234f9c62c41 GPT-3 by OpenAI – Outlook and Examples | Praveen Govindaraj | Medium]
 
* [http://www.gwern.net/GPT-3 GPT-3 Creative Fiction | R. Gwern]
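
A minimal sketch of calling the API from Python with the openai client; the engine name, API key placeholder, prompt, and sampling settings below are illustrative assumptions, not values from this article:

<pre>
import openai  # pip install openai

openai.api_key = "sk-..."  # your secret API key

# Request a text completion from a GPT-3-family engine.
response = openai.Completion.create(
    engine="davinci",  # assumed engine name
    prompt="Explain the transformer architecture in one sentence:",
    max_tokens=64,
    temperature=0.7,
)
print(response["choices"][0]["text"])
</pre>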
 
Sushant Kumar's micro-site - Replace your 'word' in the following URL to see what GPT-3 generates: http://thoughts.sushant-kumar.com/word
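
Because the micro-site is just a URL pattern, it can be scripted; a small sketch (the word is a hypothetical example, and the generated thought comes back embedded in the page HTML):

<pre>
import requests

word = "serendipity"  # hypothetical example word
page = requests.get("http://thoughts.sushant-kumar.com/" + word)
# The GPT-3-generated thought is embedded in the returned HTML.
print(page.text[:500])
</pre>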
 
<hr>
 
<youtube>5fqxPOaaqi0</youtube>
 
<youtube>8psgEDhT1MM</youtube>
 
<youtube>lQnLwUfwgyA</youtube>
 
== GPT Impact to Development ==
 
* [[Development]]
 
* [http://analyticsindiamag.com/will-the-much-hyped-gpt-3-impact-the-coders/ Will The Much-Hyped GPT-3 Impact The Coders? | Analytics India Magazine]
 
* [http://twitter.com/sharifshameem/status/1283322990625607681 With GPT-3, I built a layout generator where you just describe any layout you want, and it generates the JSX code for you. | Sharif Shameem] - [http://debuild.co/ debuild]
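
The layout generator in the tweet above relies on few-shot prompting: GPT-3 is shown a couple of description-to-JSX pairs and asked to complete the next one. A sketch of that pattern; the example pairs, engine, and settings are assumptions, not Shameem's actual prompt:

<pre>
import openai

# Prime the model with description -> JSX pairs, then append a new description.
prompt = (
    "description: a button that says hello\n"
    "JSX: <button>hello</button>\n\n"
    "description: a large green heading that says Welcome\n"
    "JSX: <h1 style={{color: 'green'}}>Welcome</h1>\n\n"
    "description: an email signup form with a submit button\n"
    "JSX:"
)

response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=100,
    temperature=0.2,
    stop=["\n\n"],  # stop after one generated snippet
)
print(response["choices"][0]["text"].strip())
</pre>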
  
 
<youtube>fZSFNUT6iY8</youtube>
 
== Generative Pre-trained Transformer (GPT-2) ==
 
a text-generating bot based on a model with 1.5 billion parameters. ...Ultimately, OpenAI's researchers kept the full thing to themselves, only releasing a pared-down 117 million parameter version of the model (which we have dubbed "GPT-2 Junior") as a safer demonstration of what the full GPT-2 model could do.[http://arstechnica.com/information-technology/2019/02/researchers-scared-by-their-own-work-hold-back-deepfakes-for-text-ai/ Twenty minutes into the future with OpenAI’s Deep Fake Text AI | Sean Gallagher]
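
The pared-down 117 million parameter checkpoint is publicly downloadable (it is the "gpt2" model on the Hugging Face hub); a minimal sampling sketch, with the library choice, prompt, and settings as assumptions:

<pre>
from transformers import pipeline  # pip install transformers

# "gpt2" is the released 117M-parameter GPT-2 checkpoint.
generator = pipeline("text-generation", model="gpt2")
sample = generator("The researchers kept the full model to themselves,",
                   max_length=40, do_sample=True)
print(sample[0]["generated_text"])
</pre>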
 
<youtube>0OtZ8dUFxXA</youtube>

<youtube>8ypnLjwpzK8</youtube>

<youtube>rEGB7-FlPRs</youtube>
 
<youtube>UULqu7LQoHs</youtube>
 
<youtube>OJGDmXDt5QA</youtube>
 
<youtube>M6EXmoP5jX8</youtube>
 
<youtube>gg-JnQ1E2ek</youtube>
 
<youtube>LWDbAoPyQAk</youtube>
 
<youtube>H1lncbq8NC0</youtube>
 
<youtube>u1_qMdb0kYU</youtube>
 
<youtube>0n95f-eqZdw</youtube>
 
<youtube>LjkubM5IIos</youtube>
  
 
== r/SubSimulator ==
 
Subreddit populated entirely by AI personifications of other subreddits -- all posts and comments are generated automatically using GPT-2 language models fine-tuned on each subreddit's posts and comments, which results in coherent and realistic simulated content.
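
A hedged sketch of the per-subreddit fine-tuning idea using the Hugging Face transformers library; the file path, data format, and hyperparameters are illustrative assumptions:

<pre>
from transformers import (GPT2Tokenizer, GPT2LMHeadModel, TextDataset,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

# One plain-text file of scraped posts/comments per subreddit (hypothetical path).
dataset = TextDataset(tokenizer=tokenizer,
                      file_path="askreddit_comments.txt", block_size=128)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="gpt2-askreddit", num_train_epochs=1),
    data_collator=collator,
    train_dataset=dataset,
)
trainer.train()  # the tuned model now imitates that subreddit's style
</pre>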


== GetBadNews ==

* Get Bad News game - Can you beat my score? Play the fake news game! Drop all pretense of ethics and choose the path that builds your persona as an unscrupulous media magnate. Your task is to get as many followers as you can while slowly building up fake credibility as a news site.
