|title=PRIMO.ai

|titlemode=append
|keywords=ChatGPT, artificial, intelligence, machine, learning, GPT-4, GPT-5, NLP, NLG, NLC, NLU, models, data, singularity, moonshot, Sentience, AGI, Emergence, Moonshot, Explainable, TensorFlow, Google, Nvidia, Microsoft, Azure, Amazon, AWS, Hugging Face, OpenAI, Meta, LLM, metaverse, assistants, agents, digital twin, IoT, Transhumanism, Immersive Reality, Generative AI, Conversational AI, Perplexity, Bing, You, Bard, Ernie, Prompt Engineering, LangChain, Video/Image, Vision, End-to-End Speech, Synthesize Speech, Speech Recognition, Stanford, MIT
|description=Helpful resources for your journey with artificial intelligence; Attention, GPT, chat, videos, articles, techniques, courses, profiles, and tools

<!-- Google tag (gtag.js) -->
<script async src="https://www.googletagmanager.com/gtag/js?id=G-4GCWLBVJ7T"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());

  gtag('config', 'G-4GCWLBVJ7T');
</script>
}}
 
[https://www.youtube.com/results?search_query=GPT-4+Language+Multimodal+Model YouTube]
[https://www.bing.com/news/search?q=GPT-4+Language+Multimodal+Model&qft=interval%3d%228%22 ...Bing News]
  
* [[Large Language Model (LLM)]] ... [[Large Language Model (LLM)#Multimodal|Multimodal]] ... [[Foundation Models (FM)]] ... [[Generative Pre-trained Transformer (GPT)|Generative Pre-trained]] ... [[Transformer]] ... [[GPT-4]] ... [[GPT-5]] ... [[Attention]] ... [[Generative Adversarial Network (GAN)|GAN]] ... [[Bidirectional Encoder Representations from Transformers (BERT)|BERT]]
* [[Conversational AI]] ... [[ChatGPT]] | [[OpenAI]] ... [[Bing/Copilot]] | [[Microsoft]] ... [[Gemini]] | [[Google]] ... [[Claude]] | [[Anthropic]] ... [[Perplexity]] ... [[You]] ... [[phind]] ... [[Ernie]] | [[Baidu]]
* [[Natural Language Processing (NLP)]] ... [[Natural Language Generation (NLG)|Generation (NLG)]] ... [[Natural Language Classification (NLC)|Classification (NLC)]] ... [[Natural Language Processing (NLP)#Natural Language Understanding (NLU)|Understanding (NLU)]] ... [[Language Translation|Translation]] ... [[Summarization]] ... [[Sentiment Analysis|Sentiment]] ... [[Natural Language Tools & Services|Tools]]
* [https://openai.com/product/gpt-4 GPT-4 |] [[OpenAI]]
* [https://openai.com/research/gpt-4 Research Paper |] [[OpenAI]]
* [[What is Artificial Intelligence (AI)? | Artificial Intelligence (AI)]] ... [[Generative AI]] ... [[Machine Learning (ML)]] ... [[Deep Learning]] ... [[Neural Network]] ... [[Reinforcement Learning (RL)|Reinforcement]] ... [[Learning Techniques]]
* [[Agents]] ... [[Robotic Process Automation (RPA)|Robotic Process Automation]] ... [[Assistants]] ... [[Personal Companions]] ... [[Personal Productivity|Productivity]] ... [[Email]] ... [[Negotiation]] ... [[LangChain]]
* [[Video/Image]] ... [[Vision]] ... [[Enhancement]] ... [[Fake]] ... [[Reconstruction]] ... [[Colorize]] ... [[Occlusions]] ... [[Predict image]] ... [[Image/Video Transfer Learning]]
* [[End-to-End Speech]] ... [[Synthesize Speech]] ... [[Speech Recognition]] ... [[Music]]
* [[Analytics]] ... [[Visualization]] ... [[Graphical Tools for Modeling AI Components|Graphical Tools]] ... [[Diagrams for Business Analysis|Diagrams]] & [[Generative AI for Business Analysis|Business Analysis]] ... [[Requirements Management|Requirements]] ... [[Loop]] ... [[Bayes]] ... [[Network Pattern]]
* [[Development]] ... [[Notebooks]] ... [[Development#AI Pair Programming Tools|AI Pair Programming]] ... [[Codeless Options, Code Generators, Drag n' Drop|Codeless]] ... [[Hugging Face]] ... [[Algorithm Administration#AIOps/MLOps|AIOps/MLOps]] ... [[Platforms: AI/Machine Learning as a Service (AIaaS/MLaaS)|AIaaS/MLaaS]]
* [[Prompt Engineering (PE)]] ... [[Prompt Engineering (PE)#PromptBase|PromptBase]] ... [[Prompt Injection Attack]]
* [[Singularity]] ... [[Artificial Consciousness / Sentience|Sentience]] ... [[Artificial General Intelligence (AGI)|AGI]] ... [[Inside Out - Curious Optimistic Reasoning|Curious Reasoning]] ... [[Emergence]] ... [[Moonshots]] ... [[Explainable / Interpretable AI|Explainable AI]] ... [[Algorithm Administration#Automated Learning|Automated Learning]]
 
 
* [https://arxiv.org/pdf/2303.12712.pdf Sparks of Artificial General Intelligence: Early experiments with GPT-4 | S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y. Tat Lee, Y. Li, S. Lundberg, H. Nori, H. Palangi, M. Ribeiro, Y. Zhang -] [[Microsoft]] Research
* [https://www.zdnet.com/article/what-is-gpt-4-heres-everything-you-need-to-know/ What is GPT-4? Here's everything you need to know | Sabrina Ortiz - ZDnet]
  
  
== GPT-4o ==
[https://www.youtube.com/results?search_query=Generative+Pre+trained+Transformer+GPT4o+AI YouTube]
[https://www.quora.com/search?q=Generative%20Pre%20trained%20Transformer%20GPT4o%20AI ... Quora]
[https://www.google.com/search?q=Generative+Pre+trained+Transformer+GPT4o+AI ...Google search]
[https://news.google.com/search?q=Generative+Pre+trained+Transformer+GPT4o+AI ...Google News]
[https://www.bing.com/news/search?q=Generative+Pre+trained+Transformer+GPT4o+AI&qft=interval%3d%228%22 ...Bing News]

GPT-4o is [[OpenAI]]'s latest advanced AI model, described as a multimodal model integrating text, vision, and audio capabilities. It offers significant improvements over its predecessors, including faster processing and enhanced capabilities in understanding and generating text, images, and audio content ([[OpenAI]], [[Azure]]).

One of the standout features of GPT-4o is its advanced voice-to-voice capability, which allows real-time, seamless voice interactions without relying on separate models. It has also set new benchmarks in multilingual support and vision tasks, scoring higher than GPT-4 on the Massive Multitask Language Understanding (MMLU) benchmark.

GPT-4o supports over 50 languages, covering about 97% of the world's speakers, and features a more efficient tokenizer that reduces the number of tokens required, particularly for non-Latin-alphabet languages. This makes it more cost-effective and accessible for users across different languages.

This model is available to [[ChatGPT]] Plus and Team users, with plans to expand to Enterprise users soon. It is also accessible in a limited capacity to free users, with certain usage limits in place. GPT-4o now powers [[ChatGPT]], enhancing its ability to provide more accurate and insightful responses across various inputs.

<youtube>WkB2bvYi73k</youtube>
<youtube>GPNq0WiXa50</youtube>

== GPT-4 ==
GPT-4 can accept prompts of both text and images. This means it can take images as well as text as input, giving it the ability to describe the humor in unusual images, summarize text from screenshots, and answer exam questions that contain diagrams. It is rumored to have more than 1 trillion parameters; its short-term [[memory]] extends to around 64,000 words, while GPT-3.5's short-term [[memory]] is around 8,000 words.
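
The text-plus-image prompting described above can be sketched as a request payload in the OpenAI Chat Completions message format, where a user message carries a list of content parts. This only constructs the body; actually sending it requires the `openai` client and an API key, and the model id and image URL below are placeholders, not values from this page.

```python
# Sketch of a multimodal (text + image) chat message in the OpenAI
# Chat Completions content-part format. Build-only: no network call.

def build_multimodal_message(question: str, image_url: str) -> dict:
    """Pair a text question with an image in a single user message."""
    return {
        "role": "user",
        "content": [
            {"type": "text", "text": question},
            {"type": "image_url", "image_url": {"url": image_url}},
        ],
    }

payload = {
    "model": "gpt-4-turbo",  # placeholder; any vision-capable model id
    "messages": [build_multimodal_message(
        "What is unusual about this image?",
        "https://example.com/diagram.png",  # placeholder URL
    )],
}
print(payload["messages"][0]["content"][0]["text"])
```

A client would pass this payload to the chat-completions endpoint; the model then answers about both the text and the referenced image.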
  
  

== GPT4All ==
 
* [https://atlas.nomic.ai/map/gpt4all_data_clean Dataset viewer | NOMIC.ai]
* [https://s3.amazonaws.com/static.nomic.ai/gpt4all/2023_GPT4All_Technical_Report.pdf Tech report: GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo | Y. Anand, Z. Nussbaum, B. Duderstadt, B. Schmidt, & A. Mulyar - NOMIC.ai]

A chatbot trained on a massive collection of clean assistant data, including code, stories, and dialogue. Demo, data, and code to train an assistant-style large language model with ~800k GPT-3.5-Turbo generations based on LLaMa.
 
  
 
{|<!-- T -->
||
<youtube>DDfUoQWnrfM</youtube>
<b>GPT4ALL: Install '[[ChatGPT]]' Locally (weights & [[fine-tuning]]!) - Tutorial
</b><br>Matthew Berman - In this video, I walk you through installing the newly released GPT4ALL large language model on your local computer. This model is brought to you by the fine people at Nomic AI, furthering the open-source LLM mission. GPT4ALL is trained using the same technique as Alpaca, which is an assistant-style large language model with ~800k GPT-3.5-Turbo Generations based on LLaMa. IMO, it works even better than Alpaca and is super fast. This is basically like having ChatGPT on your local computer. Easy install. Nomic AI was also kind enough to include the weights in addition to the quantized model.
|}
 
<youtube>GhRNIuTA2Z0</youtube>
<b>Is GPT4All your new personal ChatGPT?
</b><br>In this video we are looking at the GPT4ALL model, which is an interesting (even though not for commercial use) project of taking a LLaMa model and [[finetuning]] it with a lot more instruction tasks than Alpaca.

* [https://colab.research.google.com/drive/1NWZN15plz8rxrk-9OcxNwwIk1V1MfBsJ?usp=sharing Colab | Sam Witteveen]
|}
|}<!-- B -->

== Edge Impulse ==
Using GPT-4o to train a 2,000,000x smaller model (that runs directly on device)

The latest generation LLMs are absolutely astonishing — thanks to their multimodal capabilities you can ask questions in natural language about things you can see or hear in the real world ("is there a person without a hard hat standing close to a machine?") and get relatively fast and reliable answers. But these large LLMs have downsides: they're absolutely huge, so you need to run them in the cloud, adding high latency (often seconds per inference), high cost (think about the tokens you'll burn running inference 24/7), and high power (a constant network connection is needed).

In this video we're distilling knowledge from a large multimodal LLM (GPT-4o) and putting it into a tiny model, which we can run directly on device for ultra-low latency and without the need for a network connection, scaling even to microcontrollers with kilobytes of RAM if needed. Training was done fully unsupervised; all labels were set by GPT-4o, including deciding when to throw out data, then trained onto a transfer learning model with default settings.

One of the models we train has 800K parameters (an NVIDIA TAO model with MobileNet backend), a cool 2,200,000x fewer parameters than GPT-4o :-) with similar accuracy on this very narrow and specific task.

The GPT-4o labeling block and TAO transfer learning models are available for enterprise customers in Edge Impulse. There's a 2-week free trial available; sign up at [https://edgeimpulse.com Edge Impulse]

<youtube>Jou0aRgGiis</youtube>
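
The label-then-distill pattern above can be illustrated with a toy sketch: a large "teacher" model assigns labels (and may reject samples), and a tiny "student" — here a nearest-centroid classifier — is trained only on those labels. The `teacher_label` function is a made-up stand-in for GPT-4o's role; this is not the Edge Impulse pipeline itself.

```python
# Toy knowledge-distillation-via-labeling sketch: teacher labels the data
# (or discards it), student learns per-class centroids and runs cheaply.

def teacher_label(x):
    """Stand-in for the large model's label; None means 'throw this sample out'."""
    if x is None:
        return None
    return "hard_hat" if x[0] > 0.5 else "no_hard_hat"

def train_student(samples):
    """Average the features of each teacher-assigned class into a centroid."""
    sums, counts = {}, {}
    for x in samples:
        y = teacher_label(x)
        if y is None:
            continue  # teacher rejected this sample
        sums.setdefault(y, [0.0] * len(x))
        counts[y] = counts.get(y, 0) + 1
        sums[y] = [a + b for a, b in zip(sums[y], x)]
    return {y: [v / counts[y] for v in sums[y]] for y in sums}

def student_predict(centroids, x):
    """Classify by nearest centroid (squared Euclidean distance)."""
    dist = lambda c: sum((a - b) ** 2 for a, b in zip(c, x))
    return min(centroids, key=lambda y: dist(centroids[y]))

data = [(0.9, 0.1), (0.8, 0.2), (0.1, 0.9), (0.2, 0.8), None]
centroids = train_student(data)
print(student_predict(centroids, (0.95, 0.05)))  # → hard_hat
```

The student never sees human labels — only the teacher's — which is the "fully unsupervised" property the video describes, at a tiny fraction of the parameter count.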

== PrivateGPT ==
* [https://github.com/imartinez/privateGPT PrivateGPT | GitHub]
* [https://www.privategpt.io PrivateGPT]
* [https://openaimaster.com/what-is-privategpt What is PrivateGPT? How It Works, Benefits & Use]

PrivateGPT allows you to chat directly with your documents (PDF, TXT, and CSV) completely locally, securely, privately, and open-source. PrivateGPT is a project that uses [https://github.com/nomic-ai/gpt4all GPT4All] for a specific task: querying over documents using the [[LangChain]] framework. By default it uses the [https://github.com/nomic-ai/gpt4all GPT4All] model (though any model can be used) and sentence_[[transformer]] [[embedding]]s, which can also be replaced by any [[embedding]]s that [[LangChain]] supports. PrivateGPT is built with [[LangChain]], GPT4All, [[LLaMA|LlamaCpp]], Chroma, and SentenceTransformers. You can ingest documents and ask questions without an internet connection. PrivateGPT works by ingesting your documents into a vector store and then using a [[Large Language Model (LLM)]] to answer questions about the information contained in those documents. PrivateGPT can be used offline without connecting to any online servers or adding any API keys from [[OpenAI]] or [[Database#Pinecone|Pinecone]]; it runs a [[Large Language Model (LLM)]] locally on your computer. This makes it possible to use PrivateGPT without an internet connection and ensures that your data remains private and secure. You can set up PrivateGPT by installing the required dependencies, downloading the LLM, and configuring the environment variables in the `.env` file. Once set up, you can ingest your documents into the vector store and then use PrivateGPT to ask questions about the information contained in those documents.
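
The ingest-then-query flow described above can be sketched with a toy in-memory vector store. Real PrivateGPT uses sentence-transformer embeddings, Chroma, and a local LLM; the bag-of-words "embedding" and list-based store here are illustrative stand-ins for that pattern only.

```python
# Toy ingest/query sketch of the PrivateGPT pattern: embed chunks into a
# vector store, then retrieve the chunks most similar to a question
# (a local LLM would then answer using the retrieved text).
import math
from collections import Counter

def embed(text):
    """Crude stand-in embedding: a bag-of-words term-frequency vector."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

store = []  # the "vector store": (embedding, original chunk) pairs

def ingest(chunk):
    store.append((embed(chunk), chunk))

def query(question, k=1):
    """Return the k stored chunks most similar to the question."""
    q = embed(question)
    return [c for _, c in sorted(store, key=lambda p: -cosine(q, p[0]))[:k]]

ingest("PrivateGPT keeps all document data on the local machine")
ingest("The tokenizer reduces token counts for non-Latin scripts")
print(query("where does my document data stay?"))
```

Swapping `embed` for a sentence-transformer model and `store` for a persistent vector database recovers, in outline, what PrivateGPT's ingest and query scripts do.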
<b>SentenceTransformers</b> is a [[Python]] framework for state-of-the-art sentence, text, and image [[embedding]]s. It is based on PyTorch and [[Transformer]]s and offers a large collection of pre-trained models tuned for various tasks. You can use this framework to compute sentence/text [[embedding]]s for more than 100 languages. These [[embedding]]s can then be compared, for example with cosine similarity, to find sentences with a similar meaning. This can be useful for semantic textual similarity, semantic search, or paraphrase mining. You can install the library with: <code>pip install -U sentence-transformers</code>

<youtube>A3F5riM5BNE</youtube>
<youtube>jxSPx1bfl2M</youtube>
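
The comparison step mentioned above — cosine similarity between embedding vectors — looks like this in isolation. The 4-dimensional vectors are made up for illustration; real sentence-transformer embeddings have hundreds of dimensions and come from <code>model.encode(sentence)</code>.

```python
# Cosine similarity between two embedding vectors: the dot product
# divided by the product of the vectors' Euclidean norms.
import math

def cosine_similarity(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

emb_a = [0.1, 0.3, 0.5, 0.1]    # e.g. "A man is eating food."
emb_b = [0.1, 0.28, 0.55, 0.1]  # e.g. "A man is eating a meal."
emb_c = [0.9, 0.05, 0.0, 0.4]   # e.g. "A cheetah chases its prey."

# Sentences with similar meaning should score closer to 1.0.
print(cosine_similarity(emb_a, emb_b) > cosine_similarity(emb_a, emb_c))  # → True
```

Scores near 1.0 indicate near-identical direction (similar meaning under the model), near 0.0 indicate unrelated text, which is exactly how semantic search ranks candidates.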

GPT-4, known as Prometheus, can be used on:

One of ChatGPT-4's most dazzling new features is the ability to handle not only words, but pictures too, in what is being called "multimodal" technology. A user will have the ability to submit a picture alongside text — both of which ChatGPT-4 will be able to process and discuss. The ability to input video is also on the horizon. - Everything You Need to Know About ChatGPT-4 | Alex Millson - Bloomberg, Time


Latest revision as of 08:20, 2 June 2024