PaLM
YouTube ... Quora ... Google search ... Google News ... Bing News
- Multimodal Language Models
- Large Language Model (LLM) ... Natural Language Processing (NLP) ... Generation ... Classification ... Understanding ... Translation ... Tools & Services
- Assistants ... Agents ... Negotiation ... HuggingGPT ... LangChain
- Attention Mechanism ... Transformer Model ... Generative Pre-trained Transformer (GPT)
- Generative AI ... Conversational AI ... OpenAI's ChatGPT ... Perplexity ... Microsoft's Bing ... You ... Google's Bard ... Baidu's Ernie
- Capabilities
- Development ... AI Pair Programming Tools ... Analytics ... Visualization ... Diagrams for Business Analysis
- Prompt Engineering (PE)
- Foundation Models (FM)
- Singularity ... Moonshots ... Emergence ... Explainable / Interpretable AI ... AGI ... Inside Out - Curious Optimistic Reasoning ... Automated Learning
- 8 Potentially Surprising Things To Know About Large Language Models (LLMs) | Dhanshree Shripad Shenwai - MarkTechPost
- This AI Paper Introduces SELF-REFINE: A Framework For Improving Initial Outputs From LLMs Through Iterative Feedback And Refinement | Aneesh Tickoo - MarkTechPost
- Meet LMQL: An Open Source Programming Language and Platform for Large Language Model (LLM) Interaction | Tanya Malhotra - MarkTechPost
One of the more interesting, but seemingly academic, concerns of the new era of AI sucking up everything on the web was that AIs will eventually start to absorb other AI-generated content and regurgitate it in a self-reinforcing loop. Not so academic after all, it appears, because Bing just did it: when asked, it produced verbatim a COVID-19 conspiracy theory that disinformation researchers had coaxed out of ChatGPT just last month. AI is eating itself: Bing’s AI quotes COVID disinfo sourced from ChatGPT | Devin Coldewey, Frederic Lardinois - TechCrunch
Multimodal
Multimodal Language Models: a Multimodal Language Model (MLM), also called a Multimodal Large Language Model (MLLM), is a type of Large Language Model (LLM) that combines text with other kinds of information, such as images, videos, audio, and other sensory data. This allows MLLMs to solve some of the problems of the current generation of LLMs and unlock new applications that were impossible with text-only models. What you need to know about multimodal language models | Ben Dickson - TechTalks
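To make the idea concrete, here is a minimal sketch of text-plus-image prompting with an open multimodal model. It uses BLIP-2 through the Hugging Face transformers library, which is an illustrative choice not named on this page; the image URL is a placeholder.

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Load an open multimodal model (illustrative choice; any MLLM with a
# text+image interface would do).
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

# Placeholder image URL; swap in any picture you want the model to read.
image = Image.open(requests.get("https://example.com/photo.jpg", stream=True).raw)

# A single prompt mixing both modalities: the image plus a text question.
inputs = processor(images=image,
                   text="Question: what is in this image? Answer:",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(output_ids[0], skip_special_tokens=True))
```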
- GPT-4 | OpenAI ... can accept prompts of both text and images. This means it can take images as well as text as input, giving it the ability to describe the humor in unusual images, summarize text from screenshots, and answer exam questions that contain diagrams. Rumored to be more than 1 trillion parameters. A request sketch follows this list.
- Kosmos-1 | Microsoft ... can perceive general modalities, learn in context (i.e., few-shot), and follow instructions (i.e., zero-shot). It can analyze images for content, solve visual puzzles, perform visual text recognition, and pass visual IQ tests. 1.6B parameters.
- PaLM-E | Google ... an Embodied Multimodal Language Model that directly incorporates real-world continuous sensor modalities into language models and thereby establishes the link between words and percepts. It was developed by Google to be a model for robotics and can solve a variety of tasks on multiple types of robots and for multiple modalities (images, robot states, and neural scene representations). PaLM-E is also a generally capable vision-and-language model: it can perform visual tasks, such as describing images, detecting objects, or classifying scenes, and is also proficient at language tasks, like quoting poetry, solving math equations, or generating code. 562B parameters. See the embedding-fusion sketch after this list.
- Multimodal-CoT (Multimodal Chain-of-Thought Reasoning) | GitHub ... incorporates language (text) and vision (images) modalities into a two-stage framework that separates rationale generation from answer inference. Under 1B parameters. See the two-stage sketch after this list.
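For the GPT-4 entry above, a minimal sketch of a mixed text-and-image request, assuming OpenAI's chat completions API with a vision-capable model; the model name and image URL are placeholders.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One user message carrying both modalities: a text question plus an image URL.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder: any vision-capable GPT-4 variant
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is unusual about this image?"},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```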
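For PaLM-E, the key architectural idea is that continuous sensor readings are projected into the same embedding space as word tokens, so the language model consumes them like ordinary text. A hypothetical PyTorch sketch of that fusion step; all names and dimensions here are made up for illustration.

```python
import torch
import torch.nn as nn

class SensorToTokens(nn.Module):
    """Hypothetical sketch: map continuous robot-state readings into the
    language model's token-embedding space, in the spirit of PaLM-E."""
    def __init__(self, sensor_dim: int, embed_dim: int):
        super().__init__()
        self.project = nn.Linear(sensor_dim, embed_dim)  # learned encoder

    def forward(self, token_embeds, sensor_readings):
        # sensor_readings: (batch, n_readings, sensor_dim)
        sensor_embeds = self.project(sensor_readings)
        # Prepend the sensor "tokens" to the word embeddings; the transformer
        # then attends over words and percepts in a single sequence.
        return torch.cat([sensor_embeds, token_embeds], dim=1)

# Made-up sizes: a 256-dim robot state, a 4096-dim LM embedding space.
fuser = SensorToTokens(sensor_dim=256, embed_dim=4096)
words = torch.randn(1, 10, 4096)   # 10 word embeddings
state = torch.randn(1, 3, 256)     # 3 sensor readings
lm_input = fuser(words, state)     # (1, 13, 4096), fed to the language model
```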
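For Multimodal-CoT, a sketch of the two-stage split described above. Here `model` stands in for any text+image generator; the interface is hypothetical, not the authors' actual API.

```python
def multimodal_cot(model, question: str, image) -> str:
    """Two-stage Multimodal-CoT pipeline (hypothetical model interface).

    Stage 1 generates a rationale from both modalities; stage 2 infers the
    final answer with the rationale appended to the question.
    """
    # Stage 1: rationale generation, conditioned on text AND image.
    rationale = model.generate(text=f"{question}\nExplain step by step.",
                               image=image)
    # Stage 2: answer inference, re-reading the image with the rationale.
    answer = model.generate(text=f"{question}\nRationale: {rationale}\n"
                                 "Give the final answer.",
                            image=image)
    return answer
```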