Anyone who has used Apple’s Siri would likely agree that today’s voice-activated technology is far from perfect. While most people are good at focusing on a single voice and ignoring background noise, computers struggle with this skill. Engineers at Duke University have now developed a sensor that will improve computers’ listening skills by determining the direction of a sound and separating it from the background noise.
The new device combines metamaterials with compressive sensing. It looks like a thick plastic disc, honeycombed with holes and divided into dozens of pie-like slices. The depth of the honeycomb cavities varies from hole to hole, giving each slice of the "pie" a unique pattern. When a sound wave reaches the device, the holes distort it slightly, imprinting a signature that depends on which slice the wave passes over. A microphone on the other side of the device picks up the distorted sound and transmits it to a computer, which identifies the sound and its direction from that particular distortion.
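The decoding step can be illustrated with a toy sketch. The paper's actual compressive-sensing reconstruction is not reproduced here; everything below (random 8-tap "signatures" standing in for the hole-depth patterns, a least-squares match against each candidate slice) is a simplified assumption made only to show the idea of identifying direction from a direction-dependent distortion.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SLICES = 36   # hypothetical number of pie slices
SIG_LEN = 8     # length of each slice's toy distortion filter
SOUND_LEN = 256

# Each slice imprints a unique impulse response on the sound.
# In the real device the pattern comes from the honeycomb hole depths;
# here we just use random filters as stand-ins.
signatures = rng.normal(size=(N_SLICES, SIG_LEN))

def measure(sound, slice_idx):
    """Toy model: the microphone records the sound convolved with the
    distortion signature of the slice it passed over, plus noise."""
    clean = np.convolve(sound, signatures[slice_idx], mode="full")
    return clean + 0.05 * rng.normal(size=clean.size)

def identify_slice(sound, recording):
    """Pick the slice whose signature best explains the recording:
    smallest least-squares residual against the known source sound."""
    residuals = []
    for sig in signatures:
        predicted = np.convolve(sound, sig, mode="full")
        gain = predicted @ recording / (predicted @ predicted)
        residuals.append(np.linalg.norm(recording - gain * predicted))
    return int(np.argmin(residuals))

sound = rng.normal(size=SOUND_LEN)  # a known test signal
true_slice = 17
recording = measure(sound, true_slice)
print(identify_slice(sound, recording))  # should recover the true slice
```

Because each slice's filter is distinct, the correct candidate leaves only the measurement noise as residual, while every other candidate leaves a large mismatch; picking the minimum recovers the direction.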
The researchers confirmed the device’s accuracy in laboratory tests and published their results in the Proceedings of the National Academy of Sciences earlier this month. While the proof-of-concept prototype is six inches wide, in the future the technology could be miniaturized so that it could be used in consumer electronics and medical devices. “I think it could be combined with any medical imaging device that uses waves, such as ultrasound, to not only improve current sensing methods, but to create entirely new ones,” said lead author Abel Xie. “With the extra information, it should also be possible to improve the sound fidelity and increase functionalities for applications like hearing aids and cochlear implants. One obvious challenge is to make the system physically small. It is challenging, but not impossible, and we are working toward that goal.”