This AI technology can understand words that are never spoken aloud.
The wearable lets users communicate silently, without producing any sound.
Researchers at Pohang University of Science and Technology (POSTECH) have created a groundbreaking wearable device that translates silent speech into audible voice by analyzing subtle movements in neck muscles. The research, led by Professor Sung-Min Park and Dr. Sunguk Hong, was published in Cyborg and Bionic Systems, marking a major advancement in communication between humans and machines.
From Muscle Movements to Spoken Words
This technology is based on a fundamental concept: speech extends beyond sound. When someone speaks – or even tries to speak silently – small muscular and skin movements occur around the neck, forming an "invisible map" of the intended speech.
To capture these movements, the researchers developed a device known as a multiaxial strain mapping sensor. It integrates a miniaturized camera with flexible silicone embedded with reference markers, enabling it to detect even minimal changes in skin deformation. This sensor is designed for everyday use, easily worn around the neck, and can recalibrate automatically when shifted.
The data collected is processed through artificial intelligence, which interprets the strain patterns and reconstructs the intended words or phrases. By aligning this information with voice synthesis customized to the individual’s vocal characteristics, the system can produce speech that closely mirrors the person’s actual voice, even in the absence of sound.
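The pipeline described above can be pictured in miniature: track reference markers on the skin, convert their displacements into multiaxial strain values, and match the resulting strain pattern against known silent-speech templates. The sketch below is purely illustrative and assumes nothing from the published system; the marker layout, the "templates," and the nearest-template matcher are simplified stand-ins for the team's camera-based sensor and trained AI model.

```python
import math

# Illustrative sketch only: estimate planar strain from tracked skin markers,
# then match the strain pattern to the nearest known silent-speech template.
# Marker positions, axes, and templates here are hypothetical examples.

def strain(rest_pair, deformed_pair):
    """Engineering strain between two markers: fractional change in distance."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    return dist(*deformed_pair) / dist(*rest_pair) - 1.0

def strain_vector(rest_markers, frame_markers, pairs):
    """Multiaxial strain: one strain value per marker pair (one per axis)."""
    return [strain((rest_markers[i], rest_markers[j]),
                   (frame_markers[i], frame_markers[j]))
            for i, j in pairs]

def classify(vec, templates):
    """Nearest-template match, a toy stand-in for the learned AI model."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(templates, key=lambda word: sq_dist(vec, templates[word]))

# Resting positions of three skin markers and the two axes between them.
rest = {0: (0.0, 0.0), 1: (1.0, 0.0), 2: (0.0, 1.0)}
pairs = [(0, 1), (0, 2)]  # horizontal and vertical axes

# Toy strain signatures for two "words" (invented for this example).
templates = {"hello": [0.08, -0.02], "yes": [-0.03, 0.06]}

# One camera frame: skin stretched ~8% horizontally, contracted ~2% vertically.
frame = {0: (0.0, 0.0), 1: (1.08, 0.0), 2: (0.0, 0.98)}

vec = strain_vector(rest, frame, pairs)
print(classify(vec, templates))  # → hello
```

In the real device this classification step would feed a personalized voice synthesizer rather than print a label, but the structure, deformation in, intended words out, is the same.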
A Practical Leap Over Existing Systems
Conventional voice restoration approaches rely on techniques such as electromyography (EMG) or electroencephalography (EEG), which often require cumbersome electrode setups and can be uncomfortable for prolonged use.
The approach taken by the POSTECH team removes these obstacles by providing a lightweight, wearable option. During trials, the system showed impressive accuracy in reconstructing speech, even in noisy environments like industrial workplaces where traditional microphones tend to falter.
Real-World Impact and Future Potential
The potential benefits of this technology are extensive. It could offer a new way for individuals who have lost their voices from vocal cord damage or laryngeal surgery to communicate again using their unique voice profiles.
Beyond healthcare, this system could facilitate silent communication in settings where talking aloud is not practical – such as libraries, meetings, or loud work environments. It also paves the way for more natural interactions between humans and AI, allowing for intentions to be converted into speech without physical vocalization.
Looking Ahead
The researchers aim to refine the technology for wider real-world use, enhancing accuracy and broadening language capabilities. Future versions may integrate more smoothly with consumer devices, potentially revolutionizing communication in both personal and professional contexts.
As AI and wearable technology continue to converge, innovations like this represent a shift toward more intuitive and subtle forms of interaction – where even unvoiced thoughts can be conveyed.
Moinak Pal covers the technology sector, with a focus on consumer technology and automotive advancements.
