It is 2015 already, so why can’t robots clean our houses and prepare our meals? Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory say that despite extensive research, Artificial Intelligence (AI) engineers still struggle to get household robots to understand which objects they should manipulate. Although object recognition is one of the most widely studied topics in AI, even the best object detectors still pick the wrong items too often.
In a forthcoming paper for the International Journal of Robotics Research, lead author Lawson Wong and his team present a new algorithm that should help household robots better identify objects in cluttered environments such as kitchens, which are filled with hundreds of utensils and ingredients. Wong argues that household robots should take advantage of their mobility and their static environments by imaging objects from multiple perspectives before making judgments about the objects' identities.
"If you just took the output of looking at it from one viewpoint, there's a lot of stuff that might be missing," Wong says, "or it might be the angle of illumination or something blocking the object that causes a systematic error in the detector."
In their quest for better object recognition, the MIT team first took an approach that used a standard algorithm to evaluate many perspectives. That system recognized four times as many objects, and more accurately, than one that uses a single perspective. After much trial and error, the authors then came up with a new algorithm. Their clustering-based approach is equally accurate but identifies objects as much as ten times faster than standard tracking-based algorithms, making it more practical for real use with household robots.
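The general strategy behind such a clustering-based approach can be sketched in a few lines: pool the detections from every viewpoint, group them by estimated position, and let each group vote on its label, so a single bad view is outvoted. The Python below is a toy illustration of that idea under simplified assumptions (2-D positions, a greedy centroid-distance clustering, and an arbitrary radius threshold); the function names and parameters are invented for this sketch and are not the paper's actual algorithm.

```python
# Toy sketch of multi-view detection aggregation (illustrative only, not
# the MIT team's algorithm): detections pooled across viewpoints are
# clustered by position, then each cluster votes on the object's label.
from collections import Counter


def cluster_detections(detections, radius=0.1):
    """Greedily group ((x, y), label) detections by centroid distance."""
    clusters = []
    for pos, label in detections:
        for cluster in clusters:
            # Join the first cluster whose centroid lies within `radius`.
            cx = sum(p[0] for p, _ in cluster) / len(cluster)
            cy = sum(p[1] for p, _ in cluster) / len(cluster)
            if (pos[0] - cx) ** 2 + (pos[1] - cy) ** 2 <= radius ** 2:
                cluster.append((pos, label))
                break
        else:
            clusters.append([(pos, label)])
    return clusters


def identify_objects(detections, radius=0.1):
    """Majority-vote on labels within each spatial cluster."""
    results = []
    for cluster in cluster_detections(detections, radius):
        votes = Counter(label for _, label in cluster)
        results.append(votes.most_common(1)[0][0])
    return results


# Three viewpoints see the same two objects; one view mislabels the mug.
views = [
    [((0.00, 0.00), "mug"), ((1.00, 0.00), "bowl")],
    [((0.02, 0.01), "mug"), ((1.01, 0.02), "bowl")],
    [((0.01, 0.03), "cup"), ((0.99, 0.01), "bowl")],  # detector error
]
pooled = [d for view in views for d in view]
print(sorted(identify_objects(pooled)))  # → ['bowl', 'mug']
```

The point of the example is that the single erroneous "cup" detection is absorbed into the mug's cluster and outvoted, which is exactly the benefit of combining multiple perspectives that Wong describes.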