
Generative Pre-trained Transformer 5 (GPT-5) is a hypothetical AI system expected to be the next generation of OpenAI's GPT series of large language models (LLMs). GPT-5 has not been released, and there is no official information about its development or capabilities. (Source: "When will GPT 5 be released, and what should you expect from it?", Eray Eliaçık, Dataconomy.)

  • GPT-5 might have 100 times more parameters than GPT-3, which had 175 billion. That would put GPT-5 at roughly 17.5 trillion parameters, making it one of the largest neural networks ever created.
  • GPT-5 might use 200 to 400 times more compute than GPT-3, whose training is cited here at about 3.14 exaFLOPs. At the upper end, GPT-5 would use up to 1.26 zettaFLOPs of compute, which is claimed to exceed the combined computing power of all the supercomputers in the world.
  • GPT-5 might work with longer context and be trained with a different loss function than GPT-3, which used cross-entropy loss. This could improve its ability to generate coherent and relevant text across different domains and tasks.
  • GPT-5 might reach Artificial General Intelligence (AGI), the level of intelligence at which an AI system can perform any task a human can. Some experts believe GPT-5 could pass the Turing test, a test of whether a machine can exhibit human-like behavior in conversation.
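The scaling arithmetic behind the first two bullets can be checked directly. This is a minimal sketch: the multipliers are the speculative figures quoted above, not confirmed numbers, and the variable names are my own.

```python
# Hypothetical scaling arithmetic for the GPT-5 speculation above.
# All GPT-5 figures are speculative multipliers, not confirmed values.

GPT3_PARAMS = 175e9          # GPT-3 parameter count (175 billion)
GPT3_TRAIN_FLOPS = 3.14e18   # training compute figure cited above (3.14 exaFLOPs)

param_multiplier = 100       # speculated: 100x more parameters
compute_multiplier = 400     # speculated upper bound: 400x more compute

gpt5_params = GPT3_PARAMS * param_multiplier
gpt5_flops = GPT3_TRAIN_FLOPS * compute_multiplier

print(f"Speculated GPT-5 parameters: {gpt5_params:.3g}")       # 1.75e+13 (17.5 trillion)
print(f"Speculated GPT-5 compute:    {gpt5_flops:.3g} FLOPs")  # 1.26e+21 (~1.26 zettaFLOPs)
```

Multiplying 175 billion by 100 gives the 17.5 trillion parameters in the first bullet, and 400 × 3.14 exaFLOPs gives the ~1.26 zettaFLOPs in the second.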

Google DeepMind's 'Model evaluation for extreme risks'

Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills. We explain why model evaluation is critical for addressing extreme risks. Developers must be able to identify dangerous capabilities (through "dangerous capability evaluations") and the propensity of models to apply their capabilities for harm (through "alignment evaluations"). These evaluations will become critical for keeping policymakers and other stakeholders informed, and for making responsible decisions about model training, deployment, and security.
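The abstract's two evaluation types feed a deployment decision: dangerous capability evaluations ask what a model *can* do, alignment evaluations ask whether it *would* apply those capabilities for harm. The toy sketch below illustrates how the two axes could combine into a coarse gating rule; the dataclass, function, scores, and thresholds are all hypothetical, since the paper prescribes no specific API.

```python
# Illustrative sketch only: a toy gating rule combining the two evaluation
# types named in the abstract. Names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class EvalResult:
    dangerous_capability_score: float  # from "dangerous capability evaluations" (0-1)
    misuse_propensity_score: float     # from "alignment evaluations" (0-1)

def deployment_decision(result: EvalResult,
                        capability_threshold: float = 0.5,
                        propensity_threshold: float = 0.5) -> str:
    """Return a coarse recommendation based on both evaluation axes."""
    if result.dangerous_capability_score < capability_threshold:
        return "deploy"  # no extreme-risk capability identified
    if result.misuse_propensity_score < propensity_threshold:
        return "deploy with safeguards"  # capable, but low propensity for harm
    return "hold: escalate to policymakers and security review"

print(deployment_decision(EvalResult(0.2, 0.1)))  # deploy
print(deployment_decision(EvalResult(0.9, 0.8)))  # hold: escalate ...
```

The point of the sketch is the structure, not the numbers: capability and propensity are assessed separately, and only their combination determines whether training, deployment, or security decisions need escalation.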