Generative Pre-trained Transformer (GPT)



Writing/Publishing - SynthPub


Generative Pre-trained Transformer 5 (GPT-5)

See page... GPT-5 | OpenAI ... what will the future bring?

Generative Pre-trained Transformer 4 (GPT-4 & GPT-4o)

See page... GPT-4 | OpenAI ... can accept prompts of both text and images. This means it can take images as well as text as input, giving it the ability to describe the humor in unusual images, summarize text from screenshots, and answer exam questions that contain diagrams. GPT-4 is rumored to have more than 1 trillion parameters.

Generative Pre-trained Transformer 3 (GPT-3 & GPT-3.5)

What is GPT-3? Showcase, possibilities, and implications
What is going on in AI research lately? GPT-3 crashed the party; let's see what it is and what it can do, hoping we do not forget how problematic it might also become. GPT-3 paper: Brown, Tom B., Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. "Language models are few-shot learners." arXiv preprint arXiv:2005.14165 (2020). https://arxiv.org/pdf/2005.14165.pdf
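The "few-shot learners" claim in the paper is that GPT-3 can pick up a task from a handful of examples placed directly in the prompt, with no fine-tuning or gradient updates. A minimal sketch of such a prompt, echoing the English-to-French example from the paper (plain Python, no API call assumed):

# A few-shot prompt in the style described in the GPT-3 paper:
# the task is specified entirely in-context, with a few worked
# examples followed by the query the model should complete.
few_shot_prompt = """Translate English to French:

sea otter => loutre de mer
peppermint => menthe poivrée
plush giraffe => girafe en peluche
cheese =>"""

# A GPT-3-class model is expected to continue the text with "fromage";
# no parameter updates are involved, only conditioning on the prompt.
print(few_shot_prompt)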

14 Cool Apps Built on OpenAI's GPT-3 API
14 cool applications just built on top of OpenAI's GPT-3 (Generative Pre-trained Transformer) API (currently in private beta).

GPT Impact on Development

OpenAI Model Generates Python Code
Credits: Microsoft, OpenAI
This code completion engine can write an entire function from just the name! OpenAI demonstrates what happens when you train a language model on thousands of GitHub Python repositories. Source Clip: https://youtu.be/fZSFNUT6iY8

Generative Python Code with GPT
A quest to teach neural networks, via transformers, to write Python code. Project name: Generative Python Transformers!

GPT 3 Explanation And Demo Reaction | Should You Be Scared ? Techies Reaction To GPT3 AI OpenAI
In this video we look at some of the demos of and reactions to GPT-3 across social media. The tweets and demos shown in the video are linked below, so please reach out if you have any questions.

Code 10x Faster With This CRAZY New AI Tool (GPT-3)
In this FREE LIVE training, Aaron and Naz will show you the new cutting-edge machine learning AI, OpenAI's GPT-3.

Build 2 projects using GPT-3 in just a couple of minutes. Bare bones: branding generator, chat bot
Co-founded by Elon Musk, OpenAI wants to make AI safe and accessible. A year ago the startup released GPT-2, a language model that was at the time deemed too powerful to release; eventually OpenAI made it available. This year they have trained a far more powerful model, at least an order of magnitude larger than GPT-2. I was one of the lucky 750+ people granted beta access by OpenAI to see what can be built with the GPT-3 API. The model costs millions of dollars to train, which puts it out of reach for most organizations. This video skims the surface of what you can get done with this amazing new model.
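For context, a bare-bones GPT-3 chat bot of the kind shown in the video looked roughly like the sketch below. It uses the legacy openai.Completion.create endpoint and the "davinci" engine as they existed around the 2020 private beta; the OpenAI SDK has since changed, so treat this as a historical sketch rather than a recipe for today's API.

import openai

openai.api_key = "YOUR_API_KEY"  # beta testers received a key from OpenAI

def chat(history, user_message):
    """Append the user's message to a running transcript and ask GPT-3
    to continue it as the AI side of the conversation."""
    prompt = history + f"Human: {user_message}\nAI:"
    response = openai.Completion.create(
        engine="davinci",        # the largest GPT-3 engine in the beta
        prompt=prompt,
        max_tokens=150,
        temperature=0.7,
        stop=["Human:", "AI:"],  # keep the model from writing both sides
    )
    reply = response.choices[0].text.strip()
    return history + f"Human: {user_message}\nAI: {reply}\n", reply

history = "The following is a conversation with a helpful AI assistant.\n"
history, reply = chat(history, "Suggest a name for a specialty coffee brand.")
print(reply)

The same prompt-plus-completion pattern covers both of the video's projects: a branding generator is just a different preamble and query, with the model completing the text that follows.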

Custom GPTs

Custom GPTs are personalized versions of AI models like ChatGPT that can be tailored for specific tasks or projects. They represent a significant advancement in AI implementation, allowing businesses and individuals to customize AI tools to meet unique challenges and operational needs.

OpenAI Platform

OpenAI allows Plus and Enterprise users to create custom GPTs that can browse the web, create images, and run code. Users can upload knowledge files, modify the GPT's appearance, and define its actions.

OpenAI GPT Store

The OpenAI GPT Store provides a platform for users to create, share, and monetize their custom GPTs, expanding what AI assistants like ChatGPT can do. It allows ChatGPT Plus users to create and share their own custom chatbots, known as GPTs (Generative Pre-trained Transformers), gives developers a way to monetize them, and offers users a wide range of AI tools and capabilities to explore.

OpenAI GPT Builder

With the GPT Builder, users can tailor GPTs for specific tasks or topics by combining instructions, knowledge, and capabilities. It enables users to build AI agents without the need for coding skills, making it accessible to a wide range of individuals, including educators, coaches, and anyone interested in building helpful tools.

To create a GPT using the GPT Builder, users can access the builder interface through the OpenAI platform at chat.openai.com/gpts/editor or by selecting "My GPTs" after logging in. The builder interface provides a split screen with a Create panel where users can enter prompts and instructions to build their chatbot, and a Preview panel that allows users to interact with the chatbot as they build it, making it easier to refine and customize the GPT.

The GPT Builder also offers features such as the ability to add images to the GPT, either by asking the builder to create an image or by uploading custom images. Additionally, GPTs can be granted access to web browsing, DALL-E (an image generation model), and OpenAI's Code Interpreter tool for writing and executing software. The builder interface also includes a Knowledge section where users can upload custom data to enhance the capabilities of their GPTs.

Let's build GPT: from scratch, in code, spelled out - Andrej Karpathy

Let's build GPT: from scratch, in code, spelled out.
Andrej Karpathy: We build a Generatively Pretrained Transformer (GPT), following the paper "Attention is All You Need" and OpenAI's GPT-2 / GPT-3. We talk about connections to ChatGPT, which has taken the world by storm. We watch GitHub Copilot, itself a GPT, help us write a GPT (meta :D!). I recommend people watch the earlier makemore videos to get comfortable with the autoregressive language modeling framework and basics of tensors and PyTorch nn, which we take for granted in this video.
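The central building block of the lecture is a masked ("decoder") self-attention head. A condensed PyTorch sketch of one head in the spirit of the lecture/nanoGPT code (variable names B, T, C follow the video's convention; the hyperparameter values below are arbitrary):

import torch
import torch.nn as nn
from torch.nn import functional as F

class Head(nn.Module):
    """One head of masked (causal) self-attention, as built in the video."""

    def __init__(self, n_embd, head_size, block_size):
        super().__init__()
        self.key = nn.Linear(n_embd, head_size, bias=False)
        self.query = nn.Linear(n_embd, head_size, bias=False)
        self.value = nn.Linear(n_embd, head_size, bias=False)
        # lower-triangular mask so tokens cannot attend to the future
        self.register_buffer("tril", torch.tril(torch.ones(block_size, block_size)))

    def forward(self, x):
        B, T, C = x.shape                                    # batch, time, channels
        k = self.key(x)                                      # (B, T, head_size)
        q = self.query(x)                                    # (B, T, head_size)
        # scaled attention scores; scaling keeps the softmax from saturating
        wei = q @ k.transpose(-2, -1) * k.shape[-1] ** -0.5  # (B, T, T)
        wei = wei.masked_fill(self.tril[:T, :T] == 0, float("-inf"))
        wei = F.softmax(wei, dim=-1)
        v = self.value(x)                                    # (B, T, head_size)
        return wei @ v                                       # (B, T, head_size)

# quick smoke test
x = torch.randn(4, 8, 32)                                    # (B=4, T=8, C=32)
head = Head(n_embd=32, head_size=16, block_size=8)
print(head(x).shape)                                         # torch.Size([4, 8, 16])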


Suggested exercises:

  • EX1: The n-dimensional tensor mastery challenge: Combine the `Head` and `MultiHeadAttention` into one class that processes all the heads in parallel, treating the heads as another batch dimension (answer is in nanoGPT).
  • EX2: Train the GPT on your own dataset of choice! What other data could be fun to blabber on about? (A fun advanced suggestion if you like: train a GPT to do addition of two numbers, i.e. a+b=c. You may find it helpful to predict the digits of c in reverse order, as the typical addition algorithm (that you're hoping it learns) would proceed right to left too. You may want to modify the data loader to simply serve random problems and skip the generation of train.bin, val.bin. You may want to mask out the loss at the input positions of a+b that just specify the problem using y=-1 in the targets (see CrossEntropyLoss ignore_index; a minimal sketch of this masking follows the exercise list). Does your Transformer learn to add? Once you have this, swole doge project: build a calculator clone in GPT, for all of +-*/. Not an easy problem. You may need Chain of Thought traces.)
  • EX3: Find a dataset that is very large, so large that you can't see a gap between train and val loss. Pretrain the transformer on this data, then initialize with that model and finetune it on tiny shakespeare with a smaller number of steps and lower learning rate. Can you obtain a lower validation loss by the use of pretraining?
  • EX4: Read some transformer papers and implement one additional feature or change that people seem to use. Does it improve the performance of your GPT?
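For EX2, the loss-masking suggestion can be sketched as follows: encode each problem as one character sequence, set the targets for the a+b= prompt positions to -1, and let CrossEntropyLoss's ignore_index skip them so only the (reversed) answer digits are scored. The encoding and helper below are illustrative, not taken from the lecture:

import torch
import torch.nn.functional as F

# toy character vocabulary for addition problems, e.g. "53+81=431" (134 reversed)
chars = "0123456789+="
stoi = {ch: i for i, ch in enumerate(chars)}

def make_example(a, b):
    """Encode one addition problem; the digits of the answer are reversed,
    as the exercise suggests."""
    prompt = f"{a}+{b}="
    answer = str(a + b)[::-1]
    ids = torch.tensor([stoi[ch] for ch in prompt + answer])
    x = ids[:-1]                  # inputs
    y = ids[1:].clone()           # next-token targets
    y[: len(prompt) - 1] = -1     # mask out targets that only restate the prompt
    return x, y

x, y = make_example(53, 81)
logits = torch.randn(len(x), len(chars), requires_grad=True)  # stand-in for GPT output
loss = F.cross_entropy(logits, y, ignore_index=-1)            # only answer digits scored
print(loss.item())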

Chapters:

  • 00:00:00 intro: ChatGPT, Transformers, nanoGPT, Shakespeare
baseline language modeling, code setup
  • 00:07:52 reading and exploring the data
  • 00:09:28 tokenization, train/val split
  • 00:14:27 data loader: batches of chunks of data
  • 00:22:11 simplest baseline: bigram language model, loss, generation
  • 00:34:53 training the bigram model
  • 00:38:00 port our code to a script
Building the "self-attention"
  • 00:42:13 version 1: averaging past context with for loops, the weakest form of aggregation
  • 00:47:11 the trick in self-attention: matrix multiply as weighted aggregation
  • 00:51:54 version 2: using matrix multiply
  • 00:54:42 version 3: adding softmax
  • 00:58:26 minor code cleanup
  • 01:00:18 positional encoding
  • 01:02:00 THE CRUX OF THE VIDEO: version 4: self-attention
  • 01:11:38 note 1: attention as communication
  • 01:12:46 note 2: attention has no notion of space, operates over sets
  • 01:13:40 note 3: there is no communication across batch dimension
  • 01:14:14 note 4: encoder blocks vs. decoder blocks
  • 01:15:39 note 5: attention vs. self-attention vs. cross-attention
  • 01:16:56 note 6: "scaled" self-attention. why divide by sqrt(head_size)
Building the Transformer
  • 01:19:11 inserting a single self-attention block to our network
  • 01:21:59 multi-headed self-attention
  • 01:24:25 feedforward layers of transformer block
  • 01:26:48 residual connections
  • 01:32:51 layernorm (and its relationship to our previous batchnorm)
  • 01:37:49 scaling up the model! creating a few variables. adding dropout
Notes on Transformer
  • 01:42:39 encoder vs. decoder vs. both (?) Transformers
  • 01:46:22 super quick walkthrough of nanoGPT, batched multi-headed self-attention
  • 01:48:53 back to ChatGPT, GPT-3, pretraining vs. finetuning, RLHF
  • 01:54:32 conclusions

Corrections:

  • 00:57:00 Oops "tokens from the future cannot communicate", not "past". Sorry! :)
  • 01:20:05 Oops I should be using the head_size for the normalization, not C
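Both the 01:16:56 chapter and the 01:20:05 correction concern the 1/sqrt(head_size) scaling: without it, the variance of the attention logits grows with head_size and the softmax saturates toward one-hot weights. A quick numeric check in standard PyTorch (the shapes below are arbitrary):

import torch

torch.manual_seed(0)
B, T, head_size = 32, 64, 16

q = torch.randn(B, T, head_size)
k = torch.randn(B, T, head_size)

wei_raw = q @ k.transpose(-2, -1)         # variance grows with head_size
wei_scaled = wei_raw * head_size ** -0.5  # variance brought back to ~1

print(wei_raw.var().item())               # roughly head_size (about 16)
print(wei_scaled.var().item())            # roughly 1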

Transformers: The best idea in AI | Andrej Karpathy and Lex Fridman
Clip from the Lex Fridman Podcast full episode.

GUEST BIO: Andrej Karpathy is a legendary AI researcher, engineer, and educator. He's the former director of AI at Tesla, a founding member of OpenAI, and an educator at Stanford.
