PaLM

PaLM-E is an embodied multimodal language model that directly incorporates real-world continuous sensor modalities into a language model, thereby establishing the link between words and percepts. It was developed by Google as a model for robotics and can solve a variety of tasks on multiple types of robots and across multiple modalities (images, robot states, and neural scene representations). PaLM-E is also a generally capable vision-and-language model: it can perform visual tasks, such as describing images, detecting objects, or classifying scenes, and is also proficient at language tasks, like quoting poetry, solving math equations, or generating code. Its largest variant, PaLM-E-562B, has 562 billion parameters.
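To make that mechanism concrete, here is a minimal sketch of the core PaLM-E idea: continuous observations are encoded, linearly projected into the language model's token-embedding space, and interleaved with ordinary text embeddings so the transformer consumes them like any other tokens. The dimensions, function names, and random stand-in encoders below are illustrative assumptions, not Google's actual implementation.

```python
# Minimal sketch of the PaLM-E mechanism: sensor readings are encoded,
# projected into the LM's token-embedding space, and interleaved with
# text embeddings. All dimensions, names, and the random stand-in
# encoders are illustrative assumptions, not Google's implementation.
import numpy as np

D_MODEL = 512           # assumed LM embedding width
PATCH_FEATURES = 256    # assumed vision-encoder output width

rng = np.random.default_rng(0)
W_proj = rng.normal(scale=0.02, size=(PATCH_FEATURES, D_MODEL))
EMBED_TABLE = rng.normal(scale=0.02, size=(1000, D_MODEL))  # toy vocabulary

def encode_image(image: np.ndarray) -> np.ndarray:
    """Stand-in for a ViT-style encoder: image -> (n_patches, PATCH_FEATURES)."""
    n_patches = 16
    return rng.normal(size=(n_patches, PATCH_FEATURES))

def embed_text(token_ids: list[int]) -> np.ndarray:
    """Stand-in for the LM's token-embedding lookup."""
    return EMBED_TABLE[np.asarray(token_ids)]

def build_multimodal_prefix(token_ids, image):
    """Project image patches into the LM space and prepend them to the text.

    The combined sequence is fed to the language model exactly as if
    every row were an ordinary token embedding.
    """
    img_emb = encode_image(image) @ W_proj    # (n_patches, D_MODEL)
    txt_emb = embed_text(token_ids)           # (n_tokens, D_MODEL)
    return np.concatenate([img_emb, txt_emb], axis=0)

prefix = build_multimodal_prefix([101, 42, 7], np.zeros((224, 224, 3)))
print(prefix.shape)   # (16 + 3, 512): one sequence of "tokens" for the LM
```

The design point this illustrates is that the language model itself needs no architectural change: multimodal inputs enter purely through the embedding sequence, so the same decoder can be kept frozen or fine-tuned end to end.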

PaLM was trained with Pathways, Google's system for orchestrating large-scale training, which ran the job on 6,144 TPU v4 chips in parallel across two Cloud TPU v4 Pods. PaLM has demonstrated "breakthrough capabilities" on numerous particularly challenging language tasks, including language comprehension and generation, reasoning, and code-related tasks. It can even generate explicit explanations for scenarios that require a complex combination of multi-step logical inference, world knowledge, and deep language understanding, such as providing high-quality explanations for novel jokes not found on the web. PaLM's ability to understand humor and make logical inferences helps Google address problems that previously would have required someone with specialized expertise, and this grasp of the nuances of human language points toward better, more natural interactions between people and machines.
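The Pathways/TPU detail above amounts, in large part, to large-scale data parallelism (PaLM also used model parallelism within each pod). The following is a minimal, hypothetical sketch in JAX of sharding a training batch across every available chip; the toy linear model, SGD update, and shapes are assumptions for illustration, not PaLM's training code.

```python
# Hypothetical sketch of multi-chip data parallelism in JAX. On a TPU
# pod, jax.devices() lists every chip (6,144 in PaLM's case); run
# locally it falls back to whatever CPUs/GPUs are present.
import numpy as np
import jax
import jax.numpy as jnp
from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

devices = np.array(jax.devices())
mesh = Mesh(devices, axis_names=("data",))    # one mesh axis: data parallel

def loss_fn(params, batch):
    # Toy linear model standing in for a very large transformer (assumption).
    preds = batch["x"] @ params
    return jnp.mean((preds - batch["y"]) ** 2)

@jax.jit
def train_step(params, batch):
    grads = jax.grad(loss_fn)(params, batch)
    return params - 1e-3 * grads              # plain SGD, for illustration

n = len(devices)
params = jnp.zeros((8, 1))
batch = {
    "x": jnp.ones((4 * n, 8)),                # leading axis split over chips
    "y": jnp.ones((4 * n, 1)),
}

# Shard the batch along its leading axis. jit then compiles a program in
# which each chip computes gradients on its own shard; the gradient
# all-reduce across chips is inserted automatically by the compiler.
batch = jax.device_put(batch, NamedSharding(mesh, P("data")))
params = train_step(params, batch)
print(params.shape)
```

Run on a TPU pod, the same program scales unchanged as `jax.devices()` enumerates thousands of chips; that single-program-over-many-chips property is what systems like Pathways build on.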