Smart speakers may be cool, but the technology on sale now is just the start. Current offerings are limited to what they can hear, and even then they can’t tell whether the voice belongs to an actual human, a radio or, for that matter, a parrot. The next generation of smart speakers will respond to your deeds as well as your words, because they’ll be watching you.
In May, researchers at Carnegie Mellon University in Pittsburgh demonstrated an idea they call SurfaceSight, which is intended to give smart speakers vision as well as hearing. The team, led by Chris Harrison and Gierad Laput, fitted SurfaceSight to an Amazon Echo speaker. The prototype bounced a revolving beam of electromagnetic waves off its surroundings and, by measuring how long the reflections took to return, built up a 360-degree picture of what was around it. It can thus be trained to recognise hand gestures and respond to them.
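To make the time-of-flight idea concrete, here is a minimal sketch in Python of how round-trip echo times at each angle could be turned into a 360-degree distance profile. The function names and sample figures are illustrative assumptions, not the SurfaceSight code itself.

```python
# Minimal sketch of time-of-flight ranging for a 360-degree scan.
# All names and numbers are illustrative, not taken from SurfaceSight.

SPEED_OF_LIGHT = 299_792_458.0  # metres per second

def echo_time_to_distance(round_trip_seconds: float) -> float:
    """Convert a round-trip echo time into a one-way distance in metres."""
    return round_trip_seconds * SPEED_OF_LIGHT / 2.0

def build_scan(echo_times_by_degree: dict[int, float]) -> dict[int, float]:
    """Turn per-angle echo times (0-359 degrees) into a distance profile."""
    return {angle: echo_time_to_distance(t)
            for angle, t in echo_times_by_degree.items()}

# Example: an object roughly 30 cm away returns the beam in about 2 nanoseconds.
sample = {0: 2.0e-9, 90: 6.7e-9, 180: 2.0e-9, 270: 1.0e-8}
for angle, distance in build_scan(sample).items():
    print(f"{angle:3d} deg -> {distance * 100:.1f} cm")
```

A gesture recogniser would then look for characteristic changes in this distance profile over time, such as a hand sweeping across a band of angles.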
The LG WK7 talks to Google and can play music over Wi-Fi, but its Bluetooth performance leaves a lot to be desired.
But the software doesn’t just handle gestures; it can also identify common household objects. This means, for example, that the device can detect the cooking utensils and ingredients laid out on a work surface and check that everything needed to prepare a specific dish is to hand. If it detects a smartphone, it can pair with it to access music and data.
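The "is everything out for this recipe?" check boils down to comparing what the sensor has recognised against what a dish requires. The sketch below shows the idea; the recipe contents and detected labels are made up for the example.

```python
# Illustrative check of detected objects against a recipe's requirements.
# The item names here are hypothetical, not SurfaceSight's object classes.

REQUIRED_FOR_PANCAKES = {"mixing bowl", "whisk", "frying pan", "flour", "eggs", "milk"}

def missing_items(detected: set[str], required: set[str]) -> set[str]:
    """Return the required items the sensor has not spotted on the work surface."""
    return required - detected

detected_on_counter = {"mixing bowl", "whisk", "flour", "eggs"}
missing = missing_items(detected_on_counter, REQUIRED_FOR_PANCAKES)
if missing:
    print("Still needed:", ", ".join(sorted(missing)))
else:
    print("Everything is ready.")
```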
Swipe the air and it can skip a music track or change a slide in a PowerPoint presentation. And while it cannot (yet) recognise individual people, it can count how many are in the room and tell which way they are facing, which is handy if you’re worried that your presentation may not be getting the attention it deserves. It can even tell those rude rear-facing people to pay attention.
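Once a gesture has been recognised, routing it to the right action is a simple dispatch step. The sketch below assumes hypothetical gesture labels and handlers; it is not SurfaceSight's API.

```python
# Illustrative mapping of recognised gestures to actions.
# Gesture labels and handler names are assumptions for this example.

def skip_track() -> None:
    print("Skipping to the next track")

def next_slide() -> None:
    print("Advancing the presentation")

GESTURE_ACTIONS = {
    "swipe_left": skip_track,
    "swipe_right": next_slide,
}

def handle_gesture(label: str) -> None:
    """Dispatch a recognised gesture label to its action, ignoring unknown ones."""
    action = GESTURE_ACTIONS.get(label)
    if action:
        action()

handle_gesture("swipe_left")   # -> Skipping to the next track
handle_gesture("wave")         # unknown gesture, silently ignored
```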