Imagine silently mouthing something to yourself, and your AI assistant knows what you’re trying to say. It could be via your glasses or your earbuds or your phone’s camera. Apple just purchased a company, called Q.ai, that’s trying to do this exact thing. That sounds weird and sci-fi, and yet to me, as someone who’s been looking at smart glasses and wearables for a long time, it sounds very familiar, too.
Apple’s investment in this Israeli startup isn’t small. The acquisition cost around $2 billion, according to the original report by the Financial Times and outlets like Reuters, making it Apple’s biggest purchase since its splashy acquisition of Beats a decade ago. Unlike Beats, though, hardly anyone has heard of Q.ai. At least, not yet. Still, the possibilities for new interfaces are intriguing: another key piece is being added to the ever-expanding puzzle of future personal tech interfaces.
Q.ai isn’t a company I’ve ever met with or gotten a demo from, but one of its founders, Aviad Maizels, also founded PrimeSense, the company whose infrared depth-sensing technology powered the room-scanning 3D capabilities of Microsoft’s Kinect camera for the Xbox years ago. Apple acquired PrimeSense in 2013; that tech became the TrueDepth camera array behind Face ID, and it also lives in Apple’s Vision Pro for near-range hand tracking.
Judging by what’s been reported about its patents, Q.ai’s technology tracks small facial movements and emotional expressions with optical sensors, which could enable silent command input to an AI interface or the recognition of other subtle facial cues. The Israeli site GeekTime goes into a bit more detail, saying the tech would measure muscle and lip movements and that the sensor might need to be near your mouth.
CNET reached out to Apple and Q.ai for comment, but neither responded immediately.
Part of a new interface system for wearables and glasses?
I just wrote about how Apple is already showing signs of moving toward an ecosystem of connected AI wearables: pins, glasses, earbuds, watches or some combination thereof. Any of these wearables could potentially use what Q.ai is developing. It sounds like headphones and glasses are the two most likely areas, though, and with reports that the next generation of AirPods will have infrared cameras onboard, the pieces look even more ready to connect.
Even mixed-reality headsets like the Vision Pro could leverage Q.ai’s tech. The Vision Pro can already recognize facial expressions with its eye-tracking cameras, downward-facing cameras and infrared sensors. But interacting with the Vision Pro still feels a little awkward to me: I use my eyes to gaze and my hands to pinch things, but I have to say "Hey Siri" to make audio requests. I’d rather my interactions feel more natural and subtle. Maybe this new acquisition could help.
As augmented reality artist and researcher Helen Papagiannis notes in her recent newsletter, "Apple’s rumoured AI pin makes sense less as a standalone product and more as a node in Apple’s ecosystem, drawing on shared sensing, intelligence, and context across devices working in concert with AirPods and, eventually, glasses."
Existing smart glasses like Meta’s, and upcoming ones from Google, rely mostly on voice for interaction. Being able to do that silently could be a huge advantage, but other input methods beyond voice are also emerging. Meta has a wrist-worn neural band, with an eventual goal of adding eye tracking to its glasses as well. Google’s glasses will also work with watch-based gestures.
I’m also more than a little concerned about privacy. Any technology that can lip-read and recognize subtle expressions could be used to track and listen in on your intent from a distance. How would this tech be used privately and reliably? Or would being able to just quietly mouth requests be more private than the voice commands I use now?
More to come beyond lip-reading?
I still want interfaces that don’t require speaking at all. Meta’s electromyography-based neural band points to more sophisticated ways that wrist gestures could evolve to work with glasses and earbuds. Another Israeli company, Wearable Devices, has its own neural band, called Mudra, and is aiming to expand its subtle input capabilities, which are derived from the electrical impulses of motor neurons.
Electroencephalography, which measures brain signals, is another direction. While some companies are exploring EEG for brain-computer interfaces, the technology is still used primarily as a sensing system for health and medical applications.
Count Q.ai’s tech among the interfaces that might make the wearable computers we use feel more connected to us. That’s weird and creepy, but it’s also where I think most of the glasses, wearables and VR/AR companies are already headed. This isn’t an outlier. Apple’s move is another part of the trend.
