Generative Pre-trained Transformer (GPT)

* [http://openai.com/blog/openai-api/ OpenAI API] ...today the API runs models with weights from the GPT-3 family with many speed and throughput improvements.
 
* [http://medium.com/@praveengovi.analytics/gpt-3-by-openai-outlook-and-examples-f234f9c62c41 GPT-3 by OpenAI – Outlook and Examples | Praveen Govindaraj | Medium]
 
 
<youtube>lQnLwUfwgyA</youtube>
 

Revision as of 10:54, 19 July 2020



GPT-3

Why did OpenAI choose to release an API instead of open-sourcing the models? There are three main reasons we did this.

  1. First, commercializing the technology helps us pay for our ongoing AI research, safety, and policy efforts.
  2. Second, many of the models underlying the API are very large, taking a lot of expertise to develop and deploy and making them very expensive to run. This makes it hard for anyone except larger companies to benefit from the underlying technology. We’re hopeful that the API will make powerful AI systems more accessible to smaller businesses and organizations.
  3. Third, the API model allows us to more easily respond to misuse of the technology. Since it is hard to predict the downstream use cases of our models, it feels inherently safer to release them via an API and broaden access over time, rather than release an open source model where access cannot be adjusted if it turns out to have harmful applications.
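The reasons above can be made concrete by looking at what using the API involved. A minimal sketch, assuming the API's shape at launch in mid-2020: an authenticated POST to a per-engine completions endpoint. The engine name "davinci" and the parameter names here are taken from that era's documentation and should be treated as assumptions, not a current reference.

```python
import json
import urllib.request

# Endpoint shape as the OpenAI API looked at launch (mid-2020); an assumption
# for illustration, not a reference for the current API.
API_URL = "https://api.openai.com/v1/engines/davinci/completions"

def build_completion_request(prompt, api_key, max_tokens=64, temperature=0.7):
    """Build (but do not send) a completion request for the 2020-era API."""
    payload = json.dumps({
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",  # per-user key: access can be revoked
        },
        method="POST",
    )

req = build_completion_request("Why release an API instead of the weights?", "sk-...")
# Sending the request requires a real key: urllib.request.urlopen(req) would
# return JSON whose "choices" list holds the generated text.
```

Note how the access-control argument in point 3 falls out of this design: every call carries a revocable key, so misuse can be cut off per user, which is impossible once weights are released.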

GPT-2

A text-generating bot based on a model with 1.5 billion parameters. Ultimately, OpenAI's researchers kept the full model to themselves, releasing only a pared-down 117-million-parameter version (which the author dubbed "GPT-2 Junior") as a safer demonstration of what the full GPT-2 model could do. Twenty minutes into the future with OpenAI’s Deep Fake Text AI | Sean Gallagher
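Whatever its parameter count, a GPT-style model generates text the same way: at each step it outputs one score (logit) per vocabulary token, the scores are turned into a probability distribution, and the next token is sampled. A toy sketch of that loop's core, with made-up logits standing in for a real model's output (the vocabulary and numbers here are illustrative only):

```python
import math
import random

def softmax(logits):
    # Subtract the max before exponentiating for numerical stability.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(logits, k=3, rng=random):
    """Top-k sampling: keep the k highest-scoring tokens, renormalize, draw one."""
    top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    probs = softmax([logits[i] for i in top])
    return rng.choices(top, weights=probs, k=1)[0]

# Toy stand-in for a model's output: one logit per vocabulary entry.
vocab = ["the", "cat", "sat", "on", "mat"]
logits = [2.0, 0.5, 1.5, -1.0, 0.1]
random.seed(0)
token_id = sample_next_token(logits, k=3)
print(vocab[token_id])
```

In a real model the logits come from a transformer conditioned on all previous tokens, and the sampled token is appended to the context before the next step; that loop is what produces the coherent passages described above.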




r/SubSimulator

Subreddit populated entirely by AI personifications of other subreddits: all posts and comments are generated automatically using GPT-2, resulting in coherent and realistic simulated content.


GetBadNews

  • Get Bad News game - Play the fake news game! Drop all pretense of ethics and choose the path that builds your persona as an unscrupulous media magnate. Your task is to get as many followers as you can.
