The new Bi-Touch system, designed by scientists at the University of Bristol based at the Bristol Robotics Laboratory, allows robots to carry out manual tasks by sensing what to do from a digital assistant.
The findings, published in IEEE Robotics and Automation Letters, show how an AI agent interprets its environment through tactile and proprioceptive feedback and then controls the robot's behavior, enabling precise sensing, smooth interaction, and effective object manipulation.
These developments could revolutionize industries such as fruit picking and home services and ultimately replicate touch in prosthetic limbs.
Lead author Yijiong Lin from the Department of Engineering explained: “Our Bi-Touch system allows us to easily train AI agents in a virtual world within a couple of hours to perform bimanual tasks tailored to touch. More importantly, these agents can be applied directly from the virtual world to the real world without further training.
“Tactile bimanual agents can solve tasks and smoothly manipulate delicate objects even amid unexpected perturbations.”
Bimanual manipulation with tactile feedback will be key to human-level robot dexterity. However, the topic is less explored than single-arm settings, partly because of the limited availability of suitable hardware and the complexity of designing effective controllers for tasks with relatively large state-action spaces. By leveraging recent advances in AI and robotic tactile sensing, the research team was able to develop a tactile dual-arm robotic system.
The researchers built a virtual world (simulation) containing two robot arms equipped with tactile sensors. They then designed reward functions and a goal-update mechanism to encourage the robotic agents to learn the bimanual tasks, and developed a real-world tactile dual-arm robot system to which the trained agents could be applied directly.
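The goal-update idea can be illustrated with a minimal sketch. This is hypothetical code, not the authors' open-source implementation; the function name, step size, and tolerance are all illustrative assumptions. The point is that once the agent gets close enough to its current training goal, the goal advances toward the final target, so learning proceeds through achievable intermediate steps.

```python
# Hypothetical sketch of a goal-update mechanism: advance the training
# goal toward the final target once the current subgoal is reached.

def update_goal(current_pos, goal, final_goal, step=0.02, tol=0.01):
    """Return the next training goal for a 1-D reaching task."""
    if abs(goal - current_pos) < tol:          # current subgoal achieved
        return min(goal + step, final_goal)    # advance, but never overshoot
    return goal                                # keep working on the same goal
```

A curriculum like this keeps rewards reachable early in training instead of asking the agent to hit one distant target from scratch.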
The robot learns bimanual skills through Deep Reinforcement Learning (Deep-RL), one of the most advanced techniques in the field of robot learning. It is designed to teach robots to do things by trial and error, similar to training a dog with rewards and punishments.
In robotic manipulation, the robot learns to make decisions by attempting different behaviors to achieve a given task, for example lifting an object without dropping or breaking it. When it succeeds it receives a reward, and when it fails it learns what not to do. Over time, the agent uses these rewards and penalties to figure out the best way to grasp the object. The AI agents are effectively blind, relying only on proprioceptive feedback (the body's ability to sense movement, action, and position) and tactile feedback.
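As a toy illustration of this trial-and-error loop (a simplified stand-in, not the paper's Deep-RL setup; the environment, reward values, and action set are invented for the example), imagine an agent choosing a grip force: too little drops the object, too much crushes it, and a running value estimate per action lets it settle on a grip that is gentle but secure.

```python
import random

def grip_outcome(force):
    """Hypothetical environment: too little force drops, too much crushes."""
    if force < 2:
        return -1.0            # dropped the object -> penalty
    if force > 4:
        return -1.0            # crushed the object -> penalty
    return 1.0                 # lifted successfully -> reward

def train(actions=(1, 2, 3, 4, 5), episodes=500, seed=0):
    """Learn by trial and error which grip force earns the most reward."""
    rng = random.Random(seed)
    value = {a: 0.0 for a in actions}     # running value estimate per action
    counts = {a: 0 for a in actions}
    for _ in range(episodes):
        a = rng.choice(actions)           # explore by trying an action
        r = grip_outcome(a)               # reward or penalty from the trial
        counts[a] += 1
        value[a] += (r - value[a]) / counts[a]   # incremental mean update
    return max(value, key=value.get)      # best-known grip force
```

Real Deep-RL replaces this table of values with a deep neural network and the toy environment with the tactile simulation, but the reward-driven learning loop is the same idea.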
They were able to successfully lift fragile items, such as a single Pringle crisp, using a dual-arm robot.
Co-author Professor Nathan Lepora added: “Our Bi-Touch system represents a promising approach, with affordable software and hardware, for learning bimanual behaviors through touch in simulation that can be applied directly to the real world. Our tactile dual-arm robot simulation is open source, which allows further research on a wider variety of tasks and makes it well suited to developing other downstream applications.”
Yijiong concluded: “Our Bi-Touch system enables a tactile dual-arm robot to learn in simulation and then smoothly perform a variety of manipulation tasks in the real world.
“Now we can easily train AI agents in virtual worlds within hours to perform two-handed tasks tailored to touch.”
Paper: “Bi-Touch: Bimanual Tactile Manipulation via Sim-to-Real Deep Reinforcement Learning” by Yijiong Lin, Alex Church, Max Yang, Haoran Li, John Lloyd, Dandan Zhang, and Nathan F. Lepora. IEEE Robotics and Automation Letters, vol. 8, no. 9, pp. 5472–5479, September 2023, doi: 10.1109/LRA.2023.3295991.