PaLM-E




PaLM-E is an embodied multimodal language model that directly incorporates real-world continuous sensor modalities into a language model, thereby establishing the link between words and percepts. It was developed by Google as a model for robotics and can solve a variety of tasks on multiple types of robots and for multiple modalities (images, robot states, and neural scene representations). PaLM-E is also a generally capable vision-and-language model: it can perform visual tasks, such as describing images, detecting objects, or classifying scenes, and is proficient at language tasks, such as quoting poetry, solving math equations, or generating code. Its largest variant, PaLM-E-562B, has 562 billion parameters.
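
The core mechanism is that continuous observations (e.g. images) are encoded into vectors in the same embedding space as the language model's word tokens, so they can be interleaved with text and processed by the language model as ordinary input. The sketch below illustrates this idea in PyTorch under simplifying assumptions: all module names and sizes are hypothetical, the vision encoder is a stand-in for the ViT used in the paper, and a tiny Transformer stands in for the PaLM backbone. It is a minimal illustration of the injection scheme, not the actual implementation.

```python
import torch
import torch.nn as nn

class PaLMESketch(nn.Module):
    """Minimal sketch of the PaLM-E idea: continuous observations are
    encoded, projected into the language model's embedding space, and
    interleaved with word embeddings as ordinary "tokens"."""

    def __init__(self, vocab_size=32000, d_model=512, n_img_tokens=4):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, d_model)
        # Stand-in vision encoder (the paper uses a ViT); any encoder
        # yielding a fixed-size feature vector per image works here.
        self.vision_enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(1024), nn.ReLU())
        # Projection mapping image features to n_img_tokens pseudo-word
        # embeddings living in the LM's input space.
        self.proj = nn.Linear(1024, n_img_tokens * d_model)
        self.n_img_tokens, self.d_model = n_img_tokens, d_model
        # Stand-in for the decoder-only LM backbone (PaLM in the paper).
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.lm = nn.TransformerEncoder(layer, num_layers=2)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, prefix_ids, image, suffix_ids):
        # Embed the text on either side of the image placeholder.
        prefix = self.word_emb(prefix_ids)               # (B, Tp, D)
        suffix = self.word_emb(suffix_ids)               # (B, Ts, D)
        # Encode the image and reshape into a short sequence of
        # continuous "tokens" in the same space as word embeddings.
        feats = self.vision_enc(image)                   # (B, 1024)
        img_tokens = self.proj(feats).view(-1, self.n_img_tokens, self.d_model)
        # Interleave: text ... <image tokens> ... text, then run the LM.
        seq = torch.cat([prefix, img_tokens, suffix], dim=1)
        return self.lm_head(self.lm(seq))                # next-token logits

model = PaLMESketch()
logits = model(
    torch.randint(0, 32000, (1, 5)),   # text tokens before the image
    torch.randn(1, 3, 32, 32),         # dummy image observation
    torch.randint(0, 32000, (1, 5)),   # text tokens after the image
)
print(logits.shape)  # torch.Size([1, 14, 32000]) -> 5 + 4 + 5 tokens
```

Because the image tokens enter the model exactly like word embeddings, the same scheme extends to other continuous modalities (robot states, neural scene representations) by swapping in a different encoder.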


https://palm-e.github.io/videos/palm-e-teaser.mp4