ALFRED
- Data Science ... Governance ... Preprocessing ... Exploration ... Interoperability ... Master Data Management (MDM) ... Bias and Variances ... Benchmarks ... Datasets
- Embodied AI
- AlfWorld
- Robotics ... Vehicles ... Drones ... 3D Model ... Point Cloud
- Simulation ... Simulated Environment Learning ... World Models ... Minecraft: Voyager
ALFRED (Action Learning From Realistic Environments and Directives) is a benchmark for learning a mapping from natural language instructions and egocentric vision to sequences of actions for household tasks. It includes long, compositional tasks with non-reversible state changes to shrink the gap between research benchmarks and real-world applications. ALFRED consists of expert demonstrations in interactive visual environments for 25k natural language directives. These directives contain both high-level goals like "Rinse off a mug and place it in the coffee maker" and low-level language instructions like "Walk to the coffee maker on the right."
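To make the structure of a directive concrete, below is a minimal Python sketch of how one episode might be represented. The `Directive` class and its field names (`goal`, `step_instructions`, `expert_actions`) are illustrative assumptions for this page, not ALFRED's actual data schema or API; the action names are only meant to suggest the kind of discrete actions an expert demonstration contains.

<pre>
from dataclasses import dataclass, field
from typing import List

# Illustrative sketch only: these names are assumptions, not ALFRED's actual schema.
@dataclass
class Directive:
    """One ALFRED-style task: a high-level goal, step-by-step natural language
    instructions, and the expert action sequence an agent should reproduce
    from egocentric vision."""
    goal: str                                                    # high-level goal
    step_instructions: List[str] = field(default_factory=list)   # low-level instructions
    expert_actions: List[str] = field(default_factory=list)      # expert demonstration actions

example = Directive(
    goal="Rinse off a mug and place it in the coffee maker",
    step_instructions=[
        "Walk to the coffee maker on the right.",
        "Pick up the dirty mug from the coffee maker.",
        "Rinse the mug in the sink.",
        "Put the clean mug back in the coffee maker.",
    ],
    # Hypothetical discrete actions; the benchmark defines its own action space.
    expert_actions=["MoveAhead", "PickupObject", "PutObject", "ToggleObjectOn"],
)

# A learned model would map (goal, step_instructions, egocentric frames) -> actions.
print(example.goal)
print(len(example.step_instructions), "low-level instructions")
</pre>

In this framing, a model evaluated on the benchmark consumes the language fields together with egocentric visual observations and must output the full action sequence, which is where the long-horizon, compositional nature of the tasks becomes challenging.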