= PaLM =

 
* [[Foundation Models (FM)]]
 
 
* [[Singularity]] ... [[Moonshots]] ... [[Emergence]] ... [[Explainable / Interpretable AI]] ... [[Artificial General Intelligence (AGI)| AGI]] ... [[Inside Out - Curious Optimistic Reasoning]] ... [[Algorithm Administration#Automated Learning|Automated Learning]]
 
* [https://www.marktechpost.com/2023/04/06/8-potentially-surprising-things-to-know-about-large-language-models-llms/ 8 Potentially Surprising Things To Know About Large Language Models (LLMs) | Dhanshree Shripad Shenwai - MarkTechPost]
 
* [https://www.marktechpost.com/2023/04/07/this-ai-paper-introduce-self-refine-a-framework-for-improving-initial-outputs-from-llms-through-iterative-feedback-and-refinement/ This AI Paper Introduces SELF-REFINE: A Framework For Improving Initial Outputs From LLMs Through Iterative Feedback And Refinement | Aneesh Tickoo - MarkTechPost]
 
* [https://www.marktechpost.com/2023/04/11/meet-lmql-an-open-source-programming-language-and-platform-for-large-language-model-llm-interaction/ Meet LMQL: An Open Source Programming Language and Platform for Large Language Model (LLM) Interaction | Tanya Malhotra - MarkTechPost]
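The SELF-REFINE idea linked above — a model iteratively critiquing and revising its own output — reduces to a simple loop. A minimal sketch, assuming toy stand-in functions (`generate`, `feedback`, `refine` are hypothetical placeholders for real LLM calls, not the paper's API):

```python
# Sketch of a SELF-REFINE-style loop: produce an initial output, ask the
# model for feedback on it, and refine until the feedback says to stop.
# All three functions below are toy stand-ins for actual LLM prompts.

def generate(prompt: str) -> str:
    return f"draft answer to: {prompt}"

def feedback(output: str) -> str:
    # A real implementation would prompt the LLM to critique `output`.
    return "STOP" if "refined" in output else "be more specific"

def refine(output: str, critique: str) -> str:
    return f"refined ({critique}): {output}"

def self_refine(prompt: str, max_iters: int = 4) -> str:
    output = generate(prompt)
    for _ in range(max_iters):
        critique = feedback(output)
        if critique == "STOP":  # the model judges the output good enough
            break
        output = refine(output, critique)
    return output

print(self_refine("What is PaLM-E?"))
```

The loop terminates either when the critique signals acceptance or after a fixed iteration budget, which is the essential control flow of iterative self-feedback frameworks.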
 
  
 
<hr>
 
 
 
One of the more interesting, but seemingly academic, concerns of the new era of AI sucking up everything on the web was that AIs would eventually start to absorb other AI-generated content and regurgitate it in a self-reinforcing loop. Not so academic after all: [[Bing]] just did it. When asked, it reproduced verbatim a [[COVID-19]] conspiracy theory that disinformation researchers had coaxed out of [[ChatGPT]] only a month earlier. [https://techcrunch.com/2023/02/08/ai-is-eating-itself-bings-ai-quotes-covid-disinfo-sourced-from-chatgpt/ AI is eating itself: Bing’s AI quotes COVID disinfo sourced from ChatGPT | Devin Coldewey, Frederic Lardinois - TechCrunch]
 
 
 
<hr>
 
 
 
[[Large Language Model (LLM)#Multimodal|Multimodal Language Model]]
 
= <span id="Multimodal"></span>Multimodal =
 
A Multimodal Language Model (MLM), also called a Multimodal Large Language Model (MLLM), is a type of Large Language Model (LLM) that combines text with other kinds of information, such as images, video, audio, and other sensory data. This allows MLLMs to solve some of the problems of the current generation of LLMs and unlocks new applications that were impossible with text-only models. [https://bdtechtalks.com/2023/03/13/multimodal-large-language-models/ What you need to know about multimodal language models | Ben Dickson - TechTalks]
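The core mechanism behind most MLLMs is mapping non-text modalities into the same token space the language model already reads. A toy sketch of that idea, projecting an "image" feature vector into the text-embedding dimension and prepending it to the text tokens (all dimensions and the linear projection here are illustrative assumptions, not any specific model's architecture):

```python
import numpy as np

# Toy illustration of multimodal input: project image features into the
# text embedding space, then feed image and text embeddings to the LLM
# as a single sequence it can attend over.
rng = np.random.default_rng(0)

d_image, d_model, n_text_tokens = 512, 64, 5

image_features = rng.standard_normal(d_image)            # e.g. output of a vision encoder
text_embeddings = rng.standard_normal((n_text_tokens, d_model))

# In a real model this projection is learned; random here for the sketch.
W_proj = rng.standard_normal((d_image, d_model)) / np.sqrt(d_image)
image_token = image_features @ W_proj                    # shape (d_model,)

# The language model then treats the projected image token like any other token.
sequence = np.vstack([image_token, text_embeddings])
print(sequence.shape)  # (6, 64)
```

Once the image is represented as ordinary tokens, the rest of the model needs no modality-specific machinery, which is why this "project and prepend" pattern recurs across MLLM designs.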
 
 
 
* [[GPT-4]] [https://openai.com/product/gpt-4 GPT-4] | [[OpenAI]]  ... can accept prompts of both text and images. This means it can take images as well as text as input, giving it the ability to describe the humor in unusual images, summarize text from screenshots, and answer exam questions that contain diagrams. Rumored to be more than 1 trillion parameters.
 
* [[Kosmos-1]] [https://arxiv.org/abs/2302.14045 Kosmos-1] | [[Microsoft]]  ... can perceive general modalities, learn in context (i.e., few-shot), and follow instructions (i.e., zero-shot). It can analyze images for content, solve visual puzzles, perform visual text recognition, and pass visual IQ tests. 1.6B
 
* [[PaLM-E]] [https://ai.googleblog.com/2023/03/palm-e-embodied-multimodal-language.html PaLM-E] | [[Google]] ... an Embodied Multimodal Language Model that directly incorporates real-world continuous sensor modalities into language models and thereby establishes the link between words and percepts. It was developed by Google to be a model for robotics and can solve a variety of tasks on multiple types of robots and for multiple modalities (images, robot states, and neural scene representations). PaLM-E is also a generally-capable vision-and-language model. It can perform visual tasks, such as describing images, detecting objects, or classifying scenes, and is also proficient at language tasks, like quoting poetry, solving math equations or generating code. 562B
 
* [https://arxiv.org/abs/2302.00923 Multimodal-CoT (Multimodal Chain-of-Thought Reasoning)] [https://github.com/amazon-science/mm-cot GitHub] ... incorporates language (text) and vision (images) modalities into a two-stage framework that separates rationale generation and answer inference. Under 1B
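The two-stage split that Multimodal-CoT describes — first generate a rationale from text plus image, then infer the answer conditioned on that rationale — can be sketched as below. Both stage functions are hypothetical stand-ins for the fine-tuned models, not the repository's actual API:

```python
# Sketch of Multimodal-CoT's two-stage inference: stage 1 produces a
# rationale from the question and image features; stage 2 answers the
# question augmented with that rationale. Toy stand-ins for real models.

def generate_rationale(question: str, image_features: list[float]) -> str:
    # Stage 1: a fine-tuned model would fuse text and vision inputs here.
    return f"rationale for '{question}' using {len(image_features)} image features"

def infer_answer(question: str, rationale: str) -> str:
    # Stage 2: answer inference conditioned on the generated rationale.
    return f"answer({question} | {rationale})"

question = "Which object is heavier?"
image = [0.1, 0.2, 0.3]  # stand-in for a vision encoder's output

rationale = generate_rationale(question, image)
answer = infer_answer(question, rationale)
print(answer)
```

Separating rationale generation from answer inference is the paper's key design choice: the answer model sees a rationale grounded in the image, rather than having to reason and answer in a single pass.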
 

Revision as of 09:04, 29 April 2023
