Researchers at the University of California, Berkeley have developed algorithms that enable robots to learn motor tasks through trial and error, much as humans do.
They demonstrated their technique, a type of reinforcement learning, by having a robot complete various tasks — putting a clothes hanger on a rack, assembling a toy plane, screwing a cap on a water bottle, and more — without pre-programmed details about its surroundings.
“What we’re reporting on here is a new approach to empowering a robot to learn,” said Pieter Abbeel, a professor of electrical engineering and computer sciences at UC Berkeley.
“The challenge of putting robots into real-life settings, like homes or offices, is that those environments are constantly changing. The robot must be able to perceive and adapt to its surroundings,” added co-researcher Trevor Darrell.
The researchers turned to a new branch of artificial intelligence known as deep learning.
In the world of artificial intelligence, deep learning programmes create “neural nets” in which layers of artificial neurons process overlapping raw sensory data, whether it be sound waves or image pixels.
This helps the robot recognise patterns and categories among the data it is receiving.
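The layered idea can be illustrated with a toy feed-forward net. This is a minimal sketch, not the researchers' actual architecture: the patch size, layer widths, and random weights are all illustrative assumptions, chosen only to show how stacked layers of artificial neurons turn raw pixel data into category scores.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # Simple non-linearity applied between layers
    return np.maximum(0.0, x)

# Toy "image": a flattened 8x8 patch of raw pixel intensities
pixels = rng.random(64)

# Two stacked layers of artificial neurons. Each layer multiplies the
# previous layer's output by a weight matrix, so later layers respond
# to overlapping combinations of the raw sensory inputs.
w1 = rng.normal(scale=0.1, size=(32, 64))  # layer 1: 64 pixels -> 32 features
w2 = rng.normal(scale=0.1, size=(4, 32))   # layer 2: 32 features -> 4 categories

hidden = relu(w1 @ pixels)
scores = w2 @ hidden

# Softmax turns raw scores into a probability over the four categories,
# i.e. which pattern the net "recognises" in the patch
probs = np.exp(scores) / np.exp(scores).sum()
category = int(np.argmax(probs))
```

In a trained network the weights would be fitted to data rather than drawn at random; the point here is only the shape of the computation, with each layer building on the one below it.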
In the experiment, the researchers worked with a Willow Garage Personal Robot 2 (PR2), which they nicknamed BRETT, short for Berkeley Robot for the Elimination of Tedious Tasks.
They presented BRETT with a series of motor tasks, such as placing blocks into matching openings or stacking Lego blocks.
When given the relevant coordinates for the beginning and end of the task, the PR2 could master a typical assignment in about 10 minutes.
When the robot was not given the locations of the objects in the scene and had to learn vision and control together, the learning process took about three hours.
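The trial-and-error principle behind this kind of learning can be sketched in a few lines. This is not the team's actual reinforcement-learning algorithm; it is a deliberately simplified stand-in (a random hill-climbing search on a one-dimensional reaching task, with a made-up target position and reward function) meant only to show how a robot can improve from a reward signal alone, without a pre-programmed model of its surroundings.

```python
import random

random.seed(0)

TARGET = 0.8  # hypothetical goal position, e.g. where a cap seats on a bottle

def reward(action):
    """Score an attempt: higher is better, peaking when the action hits the target."""
    return -(action - TARGET) ** 2

# Trial and error: keep the current best action, perturb it slightly,
# and adopt the perturbation only when it scores better. The "robot"
# never sees the target directly, only the reward for each attempt.
best_action, best_score = 0.0, reward(0.0)
for trial in range(200):
    candidate = best_action + random.gauss(0.0, 0.1)
    score = reward(candidate)
    if score > best_score:
        best_action, best_score = candidate, score

# After enough trials, best_action has drifted close to the target
```

Real systems replace the single scalar action with a neural-network policy over joint torques and camera images, which is why learning vision and control together takes hours rather than minutes.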
The findings are scheduled to be presented at the International Conference on Robotics and Automation (ICRA) in Seattle on May 28.