- Robotics ... Vehicles ... Drones ... 3D Model ... 3D Simulation Environments ... Simulated Environment Learning ... Point Cloud
- Case Studies
- Cybersecurity ... OSINT ... Frameworks ... References ... Offense ... NIST ... DHS ... Screening ... Law Enforcement ... Government ... Defense ... Lifecycle Integration ... Products ... Evaluating
- Social Robots
- Symbiotic Intelligence ... Bio-inspired Computing ... Neuroscience ... Connecting Brains ... Nanobots ... Molecular ... Neuromorphic ... Animal Language
- Artificial Intelligence (AI) ... Generative AI ... Machine Learning (ML) ... Deep Learning ... Neural Network ... Reinforcement ... Learning Techniques
- Conversational AI ... ChatGPT | OpenAI ... Bing | Microsoft ... Bard | Google ... Claude | Anthropic ... Perplexity ... You ... Ernie | Baidu
- Autonomous Bag
- Robots for Healthcare
- Embodied AI
- Neuroscience News - Robotics
- Keeping track of what AI can do and where it is being applied | DeepIndex
- News | Mobile Robot Guide
- This Robotic Chemist Does Over 600 Experiments a Week and Learns From Its Own Work | Edd Gent - SingularityHub
- Zumi! | Robolink
- White Castle To Introduce Kitchen Robotic Assistant Flippy | Heard on All Things Considered - NPR
- The Advent of AI / Robotics in the Welding Industry | Philadelphia Technician Training Institute
Artificial Intelligence (AI) and Robotics represent two distinct yet highly interrelated fields, which have been instrumental in revolutionizing human life and work. AI is concerned with developing computer systems that can perform tasks requiring human-like intelligence, such as visual perception, speech recognition, decision-making, and language translation. Robotics, on the other hand, focuses on designing, constructing, operating, and utilizing robots that can perform tasks that are typically dangerous, dull or difficult for humans.
The integration of AI and Robotics is an area of immense potential and opportunity that can significantly improve human life and work. By harnessing the power of AI, we can develop intelligent robots that can tackle hazardous, repetitive or complex tasks, augment human abilities, and enhance productivity and efficiency.
AI technologies play a fundamental role in robotics by facilitating perception, decision-making, and control of robotic systems. AI-powered robots can use a variety of sensors, such as cameras and microphones, to perceive their environment, and machine learning algorithms to analyze this data and make decisions based on that perception. Moreover, control algorithms can be employed to enable robots to execute their decisions by manipulating their limbs or other components.
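The perception → decision → control pipeline described above can be sketched as a minimal sense-decide-act loop. Every name below is a hypothetical stand-in for illustration only; a real robot would run a trained perception model and drive actual motor controllers:

```python
# Minimal perceive-decide-act loop; every function here is a hypothetical
# stand-in (a real robot would use a trained model and a motor driver).

def classify(image):
    """Toy 'perception': label the scene from raw sensor values."""
    return "obstacle" if sum(image) > 10 else "clear"

def decide(label):
    """Map the perceived label to a high-level action."""
    return "turn_left" if label == "obstacle" else "move_forward"

def control(action):
    """Translate the action into (linear, angular) velocity commands."""
    commands = {"move_forward": (1.0, 0.0), "turn_left": (0.2, 0.8)}
    return commands[action]

def step(image):
    """One full sense -> decide -> act cycle."""
    return control(decide(classify(image)))
```

In practice each stage runs concurrently at its own rate, but the separation of concerns — sense, decide, act — is the same.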
The applications of AI and Robotics are multifaceted and diverse, ranging from manufacturing, healthcare, transportation, and entertainment to space exploration and the military. For instance, robots equipped with AI can automate tedious manufacturing tasks, assist in complex surgical procedures, and power self-driving cars that perceive and navigate complex road environments.
Despite the remarkable benefits of AI-powered robots, there are several challenges and limitations that need to be addressed. One major challenge is ensuring the safety and reliability of these systems. For example, self-driving cars must operate in unpredictable environments and respond appropriately to unforeseen circumstances. Additionally, ethical considerations such as job displacement and privacy issues must be taken into account.
The future developments in the field of AI and Robotics are promising, with researchers striving to develop more sophisticated machine learning algorithms that enable robots to learn from experience and adapt to new situations. There is also ongoing research on more advanced sensors and control systems that allow robots to interact more naturally with their environment, facilitating human-robot collaboration and cooperation.
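"Learning from experience" is most often framed as reinforcement learning. A minimal sketch of the tabular Q-learning update illustrates the idea; the states, actions, and reward values below are hypothetical, not drawn from any real robot:

```python
# Tabular Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
# All states, actions, and values are hypothetical, for illustration only.

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """Apply one Q-learning update to the table `q` in place."""
    best_next = max(q[next_state].values())  # best estimated value in next state
    q[state][action] += alpha * (reward + gamma * best_next - q[state][action])

# Example: a toy two-state problem.
q = {
    "start": {"left": 0.0, "right": 0.0},
    "goal":  {"left": 0.0, "right": 0.0},
}
q_update(q, "start", "right", 1.0, "goal")  # reward 1.0 for reaching the goal
```

Repeating such updates over many trials is what lets an agent improve its policy from experience alone, without an explicit model of the environment.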
Boston Dynamics has a range of products that are robust enough for dull, dirty, and dangerous tasks, and agile enough for human-purposed environments. Some of their products include:
- Atlas - an R&D platform pushing the bounds of robotics forward. It leverages its whole body to move with human-like grace and speed
- Spot - an agile mobile robot that offers unmatched mobility to tackle tasks in industrial environments and beyond (priced at $74,500)
- Handle - a box-juggling robot that is still just a research prototype
- Stretch - a versatile mobile robot for case handling and truck unloading
The Ddog project pairs the Spot robot from Boston Dynamics with a Brain-Computer Interface (BCI) system powered by AttentivU, a pair of wireless glasses that can measure a person's electroencephalography (EEG, brain activity) and electrooculography (EOG, eye movement) signals. Ddog is the next step in extending the Brain Switch application, a real-time, closed-loop BCI system that lets a user communicate simple needs to a caretaker non-verbally. Brain Switch aims to support basic communication needs for people with physical challenges (ALS, CP, SCI), and Ddog is built on the same tech stack and infrastructure. Ddog's biggest advantage is its mobility: it is the first fully autonomous, brain-powered, wireless system featuring the Spot robot, runs on two iPhones, and requires no sticky electrodes or a backpack for compute. Ddog is designed with manipulation assistance in mind: Spot's arm is used to deliver groceries, bring a chair, a book, or a toy, and more. An interview with Nataliya Kosmyna, Ph.D., the project lead of Ddog, covers the following topics: Why create Ddog? Why use Spot and not another robot? What is the tech stack behind Ddog? Why is a wireless, portable brain-sensing solution important for the project? As well as crazy use cases, the future of the Ddog project, STEM and Ddog, and much more!
All our dreams can come true, if we have the courage to pursue them - Walt Disney
- Artificial Intelligence in Home Robots – Current and Future Use-Cases | Insights
- DIY Artificial Intelligence Robot | NOVA - Robotics
- Smartibot kit turns any household object into an AI-enhanced robot | Dezeen
- Machine learning made easy with two SparkFun AI kits | Electronic Products
- Robots, Parts, & Custom Solutions | SuperDroid Robots
- Be a STEM Hero | Revolution Robotics
- 20 Best Robot Kits 2019: From Lego to Arduino - Top 20 DIY Robot Kits Available Right Now | Luca Robotics
- The Best Robotics Kits for Beginners | Signe Brewster - Wirecutter
- Top 10 Programmable Robot Kits for Adults | Makeblock
- NOVA DIY Artificial Intelligence Robot | Creoqode
Some of the leading-edge technologies used in home robots include artificial intelligence, virtual reality, and 3D multi-sensor transmitters. Home robots can do a variety of tasks, including cleaning (vacuum cleaning, floor cleaning, lawn mowing, pool cleaning, and window cleaning), entertainment (toys and hobby robots), and domestic security and surveillance (machine vision, motion detection, and more). Some models can even act as companions and assistants, taking temperatures, bringing snacks, and setting reminders.
Kitchen / Restaurant
Numerous companies are utilizing robotic technology and proprietary AI and ML to revolutionize the culinary industry. Various types of robots are being employed in the restaurant industry, such as salad-making robots, interactive robot bartenders, automated pizza makers, automated loaf makers, and robotic kitchens for fast-food chains. Robots are now ubiquitous in the restaurant industry, serving up customized burgers, brewing perfect cups of coffee, and preparing fast-casual meals. The increasing affordability of the technology and the difficulty of finding workers have propelled robots into kitchens across the nation. Restaurant chains including Chipotle, Wing Zone, and White Castle are investing in robotics. Robots improve efficiency in restaurants by undertaking tasks such as cooking, cleaning, and dishwashing, freeing up time for other activities. They operate at a faster pace than humans, enhancing productivity and reducing waiting times for customers, which results in faster checkouts and increased customer intake during working hours. Additionally, robots reduce inaccuracies in both order taking and food preparation, minimize food waste, consistently prepare food precisely as designed, and make food-safety practices more efficient.
- Typos and shutdowns: robot ‘gives evidence’ to Lords committee | Alex Hern - The Guardian ... Ai-Da, described as ‘world’s first ultra-realistic robot artist’, struggles at times to answer peers’ questions
- Policy ... Policy vs Plan ... Constitutional AI ... Trust Region Policy Optimization (TRPO) ... Policy Gradient (PG) ... Proximal Policy Optimization (PPO)
Ai-Da is a remarkable example of a cutting-edge technology that combines artificial intelligence and robotics to create an ultra-realistic humanoid artist capable of drawing, painting, performing, and engaging in discussions. Developed through a collaborative effort between gallerist Aidan Meller and Cornish robotics company Engineered Arts, the robot is named after Ada Lovelace, the pioneering mathematician widely considered to be the world's first computer programmer. At its core, Ai-Da is a composite persona that incorporates a variety of computer programs, robotics, silicone, and human influences to create a machine capable of producing visually appealing artworks inspired by its own poetry. Using cameras in its eyes, AI algorithms, and a robotic arm, Ai-Da can paint on canvas with incredible precision, producing works that are stunning in their detail and beauty. Recently, Ai-Da had the opportunity to speak to the UK Parliament as part of a House of Lords inquiry into the future of the creative industry. During the hearing, the robot emphasized that while technology could be both a threat and an opportunity for artists, it ultimately had the potential to expand creative possibilities in new and exciting ways. As the world's first ultra-realistic robot artist, Ai-Da represents a major milestone in the ongoing evolution of artificial intelligence and robotics, showcasing the potential for these technologies to push the boundaries of what is possible in the realm of creative expression.
The Uncanny Valley is a fascinating yet slightly eerie phenomenon that has garnered significant attention from designers and researchers alike. The concept refers to the unsettling feeling that people experience when they come across robots, androids, mannequins, video games, and animations that closely resemble human beings but lack the level of realism required to fully convince the human brain.
The term "uncanny" refers to something that is familiar yet strange, and this is precisely what occurs when people encounter an object that is almost human-like but not quite. This gap between what we expect and what we experience can create feelings of discomfort, anxiety, or even revulsion in some cases. It's as if our brain recognizes the similarities between the object and ourselves, but something feels off, triggering an automatic "fight or flight" response.
Designers in various fields, including robotics, video game art, training simulators, and 3-D animation, are acutely aware of the Uncanny Valley effect and strive to navigate it carefully. They can either try to avoid it altogether by creating less human-like designs, or they can use it to their advantage to elicit specific emotional responses from users.
However, it's worth noting that the degree to which people experience the Uncanny Valley effect can vary from individual to individual. Some people may not feel it at all, while others may experience extreme discomfort, surpassing the feeling they might get from viewing a corpse. The severity of the effect can also depend on the level of familiarity that the individual has with the subject materials.
Fortunately, designers can employ a variety of strategies to bridge the Uncanny Valley gap and make their creations more palatable to the human eye and mind. One such technique is to add cartoon-like or "cuter" features to the design, which can soften the edges of the realism and create a more appealing overall impression.
In conclusion, the Uncanny Valley is an intriguing and challenging phenomenon that designers must consider when creating human-like objects. Understanding how the human brain processes and responds to these designs can help designers create more effective and engaging products while avoiding the unsettling feeling that comes with the Uncanny Valley effect.
There are several benchmarks for robotics. One example is CoBRA (Composable Benchmark for Robotics Applications), which is a benchmark suite encompassing a common format for robots, environments, and task descriptions. It is especially useful for modular robots, where the configuration of the robots themselves creates a host of additional parameters to optimize. Another example is ROBEL (Robotics Benchmarks for Learning with Low-Cost Robots), an open-source platform of cost-effective robots and curated benchmarks designed primarily to facilitate research and development on physical hardware in the real world.