Hearables Will Monitor Your Brain and Body to Augment Your Life

Illustration of a woman with a hearing aid.
Illustration: Anders Wenngren

The eyes, it’s been said, are windows to the soul. I’d argue that the real portals are the ears.

Consider that, at this very moment, a cacophony of biological conversations is blasting through dime-size patches of skin just inside and outside the openings to your ear canals. There, blood is coursing through your veins, its pressure rising and falling as you react to stress and excitement, its levels of oxygen changing in response to the air around you and the way your body is using the air you breathe. Here we can also detect the electrical signals that zip through the cortex as it responds to the sensory information around us. And in that patch of skin itself, changing electrical conductivity signals moments of anticipation and emotional intensity.

The ear is like a biological equivalent of a USB port. It is unparalleled not only as a point for “writing” to the brain, as happens when our earbuds transmit the sounds of our favorite music, but also for “reading” from the brain. Soon, wearable devices that tuck into our ears—I call them hearables—will monitor our biological signals to reveal when we are emotionally stressed and when our brains are being overtaxed. When we are struggling to hear or understand, these gadgets will proactively help us focus on the sounds we want to hear. They’ll also reduce the sounds that cause us stress, and even connect to other devices around us, like thermostats and lighting controls, to let us feel more at ease in our surroundings. They will be a technology that is truly empathetic—a goal I have been working toward as chief scientist at Dolby Laboratories and an adjunct professor at Stanford University.

How will future generations of hearables make life better? I’ve envisioned many scenarios, but here are four to get things started.

Scenario 1: Say you’re trying to follow a basketball game on TV while cooking in your kitchen. You’re having trouble following the action, and your hearables know there’s a problem because they’ve detected an increase in your mental stress, based on changes in your blood pressure and brain waves. They can figure out exactly where you’re trying to direct your attention by pairing that stress increase with variations in the electrical signals created in your ear as you move your eyes. Then the hearables will automatically increase the volume of sounds coming from that direction. That’s a pretty simple fix that could make you a little more relaxed when you finally get to the dinner table.

Illustration of a pair of unhappy people with loud noises and people nearby.
Illustrations: Anders Wenngren
Illustration showing comparison of signal versus the noise.

Scenario 2: You find yourself at a popular new restaurant, where loud music and reverberant acoustics make it difficult to hold conversations (even if you don’t have hearing loss). Your hearables are monitoring your mental effort, again by tracking your brain waves, determining when you are struggling to hear. They then appropriately adjust the signal-to-noise ratio and directionality of their built-in mics to make it easier for you to understand what people nearby are saying. These hearables can distinguish your friends from the patrons you wish to ignore, based on audio fingerprints that the device previously collected.

Illustration of a pair of happy people because the loud noises and people are muted.

They can even figure out exactly whom you are trying to hear by tracking your attention, even if you can’t see the person directly. We’ve all been at a party where we heard our names in a conversation across the room and wanted to be able to teleport into the conversation. Soon we’ll be able to do just that.

Illustration of unhappy parents driving with loud children in the back.

Scenario 3: You’re driving your car, with your partner next to you and your two children in the back seat. You’re tired from hours of driving, and the children are getting noisy, which makes it hard for you to pay attention to the directions coming from your smartphone. And then your daughter announces she needs to go potty—now. Your hearables detect the growing commotion in the back seat, as well as the mental stress it’s putting on you. The devices increase the smartphone volume while also instructing the car to adjust seat temperature and airflow—guided by your previously tracked biometrics—and perhaps start softly playing your favorite music. They even understand your child’s “I need to go potty” as a bathroom request, analyze it to determine its urgency, and decide whether the navigation app should immediately search for the closest rest stop or look for one near a restaurant—because they also heard your other daughter say she’s hungry.

Illustration of a baby crying, and an illustration of an ear, a smartphone, and a thermometer.

Scenario 4: You’ve been wearing a hearable for a few years now. Recently, the device has been detecting specific changes in the spectral quality and patterns of sounds when you speak. After tracking this trend for several months, your hearable suggests that you schedule an appointment with your physician because these changes can correlate with heart disease.

Nothing in these scenarios is beyond what we’ll be able to accomplish in the next five years. For all of them, the necessary hardware or software is available now or well on its way in laboratories.

Start with the earpieces themselves. Wireless earbuds such as Apple’s AirPods, which partially occlude the ear canal, and fully occlusive ones, such as Nuheara’s IQbuds or Bose’s Sleepbuds, show how advances in miniaturization and battery technology have enabled small, lightweight form factors that weren’t possible just a decade ago. Nonetheless, small form factors mean limits on battery size, and battery life is still a make-or-break feature: powering a large number of sensors and machine-learning algorithms for days at a time, without recharging, will require innovation. Some near-term solutions will improve computational efficiency by distributing processing among the hearable, the user’s mobile device, and the cloud. But look out for some transformative innovation in energy storage over the next few years. A personal favorite from the past year was Widex’s debut of a fuel cell for in-ear devices capable of reaching a full day’s charge in seconds.

As for sensors, we already have ones that can monitor heart rate optically, as done today by many wearable fitness trackers, and others that can measure heart rate activity electrically, as done today by the Apple Watch Series 4. We have sensors that monitor blood oxygenation, familiar to many from the finger-clip monitor used in doctors’ offices but already migrating into fitness trackers. We have rings, watches, and patches that track physical and emotional stress by measuring galvanic skin response; blood pressure monitors that look like bracelets; headbands that track brain waves; and patches that monitor attention and drowsiness by tracking eye movement. All of these devices are steadily shrinking and getting more reliable every day, on a trajectory that will soon make them appropriate for use in hearables. Indeed, positioning these kinds of monitors in the ear, where blood vessels and nerves run close to the surface, can give them a sensitivity advantage compared with their placement in wrist-worn or other types of wearables.

There are still quite a few challenges in getting the ergonomics of hearables right. The main one is contact—that is, how designers can ensure adequate continuous contact between an earbud and the skin of the outer ear canal. Such contact is essential if the devices are going to do any kind of accurate biological sensing. Natural behaviors like sneezing, chewing, or yawning can briefly break contact or, at a minimum, change the impedance of the earbud-skin connection. That can give a hearable’s algorithms an incomplete or inaccurate assessment of the wearer’s mental or physical states. I don’t think this will be a huge obstacle. Ultimately, the solutions will depend as much on insightful algorithms that take into account the variability caused by movement as they do on hardware design.
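
To make that concrete, here is a minimal sketch, in Python, of one such algorithm: it flags windows whose signal energy departs sharply from the recording’s baseline (a crude stand-in for contact loss or movement), so that downstream stress or attention estimates can ignore them. The window length and threshold are assumptions for illustration, not values from any shipping device.

```python
import numpy as np

def mask_motion_artifacts(signal, fs, win_s=0.5, z_thresh=4.0):
    """Return a boolean mask that is True where samples look usable,
    False where short-term energy suggests movement or lost contact."""
    x = np.asarray(signal, dtype=float)   # in-ear sensor samples (PPG, EEG, ...)
    win = max(1, int(win_s * fs))         # window length in samples
    n_win = len(x) // win
    usable = np.ones(len(x), dtype=bool)

    # Short-term RMS energy of each window.
    rms = np.array([np.sqrt(np.mean(x[i * win:(i + 1) * win] ** 2))
                    for i in range(n_win)])

    # Robust z-scores (median and MAD) so the artifacts themselves
    # don't inflate the baseline statistics.
    med = np.median(rms)
    mad = np.median(np.abs(rms - med)) + 1e-12
    z = 0.6745 * (rms - med) / mad

    for i, zi in enumerate(z):
        if abs(zi) > z_thresh:
            usable[i * win:(i + 1) * win] = False
    return usable
```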

Speaking of software, the abilities of AI-based virtual assistants have, of course, blossomed in recent years. Your smart speaker is much more useful than it was even six months ago. And by no means has its ability to understand your commands finished improving. In coming years, it will become adept at anticipating your needs and wants, and this capability will transfer directly to hearables.

Today’s virtual assistants, such as Amazon’s Alexa and Apple’s Siri, rely on the cloud for the powerful processing needed to respond to requests. But artificial neural network chips coming soon from IBM, Mythic, and other companies will often allow such intensive processing to be carried out in a hearable itself, eliminating the need for an Internet connection and allowing near-instantaneous reaction times. Occasionally the hearable will have to resort to cloud processing because of the complexity of an algorithm. In these cases, fifth-generation (5G) cellular, which promises median speeds in the hundreds of megabits per second and network latencies in the single-digit milliseconds, could keep the overall round-trip delay to as little as tens of milliseconds.

These hearables won’t have to be as tiny as earbuds that fit fully inside the entrances to the ear canals. Today, some people wear their Apple AirPods for several hours every day, and it wasn’t so long ago that Bluetooth earpieces for making voice calls were fashion accessories.

In terms of form factor, designers are already starting to move beyond the classic in-ear monitor. Bose recently introduced Bose Frames, which have directional speakers integrated into a sunglasses frame. Such a frame could house many hearable features in a design most of us are already comfortable wearing for extended periods. Most likely, designers will come up with other options as well. The point is that with the right balance of form, battery life, and user benefits, an AI-powered hearable can become something that some people will think nothing of wearing for hours at a time, and perhaps most of the day.

What might we expect from early offerings? Much of the advanced research in hearables right now is focusing on cognitive control of a hearing aid. The point is to distinguish where the sounds people are paying attention to are coming from—independently of the position of their heads or where their eyes are focused—and determine whether their brains are working unusually hard, most likely because they’re struggling to hear someone. Today, hearing aids generally just amplify all sounds, making them unpleasant for users in noisy environments. The most expensive hearing aids today do have some smarts—some use machine learning along with GPS mapping to determine which volume and noise reduction settings are best for a certain location, applying those when the wearer enters that area.

This kind of device will be attractive to pretty much all of us, not just people struggling with some degree of hearing loss. The sounds and demands of our environments are constantly changing and introducing different types of competing noise, reverberant acoustics, and attention distractors. A device that helps us create a “cone of silence” (remember the 1960s TV comedy “Get Smart”?) or gives us superhuman hearing and the ability to direct our attention to any point in a room will transform how we interact with one another and our environments.

Within five years, a new wave of smart hearing aids will be able to recognize stress, both current and anticipatory. These intelligent devices will do this by collecting and combining several kinds of physiological data and then using deep-learning tools to tune the analysis to individuals, getting better and better at spotting and predicting rising stress levels. The data they use will most likely include pulse rate, gathered using optical or electrical sensors, given that a rising heart rate and shifts in heart rate variability are basic indicators of stress.
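
As a rough illustration of the kind of raw material such devices would start from, the sketch below (a hypothetical example, not any product’s algorithm) turns a window of inter-beat intervals into two common heart-based stress indicators: mean heart rate and RMSSD, a standard heart-rate-variability measure that tends to fall as stress rises. A real system would feed features like these, along with many others, into a model tuned to the individual.

```python
import numpy as np

def hrv_features(ibi_ms):
    """Summarize a window of inter-beat intervals (in milliseconds) into
    two stress-related features: mean heart rate and RMSSD, the root mean
    square of successive differences between beats."""
    ibi = np.asarray(ibi_ms, dtype=float)
    mean_hr = 60_000.0 / ibi.mean()               # beats per minute
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2))   # milliseconds
    return {"mean_hr_bpm": mean_hr, "rmssd_ms": rmssd}

# Illustrative windows: a calmer one, then a faster, less variable one.
print(hrv_features([820, 840, 810, 835, 825]))
print(hrv_features([640, 645, 642, 644, 641]))
```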

These hearables will likely also use miniature electrodes, placed on their surfaces, to sense the weak electric fields around the brain—what we sometimes refer to as brain waves. Future hearables will use software to translate fluctuations in these fields at different frequencies into electroencephalograms (EEGs) with millisecond resolution. Decades of research have helped scientists draw insights into a person’s state of mind from changes in EEGs. For example, devices often look for waves in the frequency range between 7.5 and 14 hertz, known as alpha waves (the exact boundaries are debatable), to track stress and relaxation. Fluctuations in the beta waves—between approximately 14 and 30 Hz—can indicate increased or reduced anxiety. And the interaction between the alpha and beta ranges is a good overall marker of stress.
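
To show how those band measurements are typically computed, here is a minimal Python sketch that estimates alpha and beta power from a single EEG channel using Welch’s method from SciPy. The sampling rate and the random stand-in signal are assumptions for illustration; a real hearable would use its own cleaned-up in-ear recordings.

```python
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs, band):
    """Integrate the power spectral density of one EEG channel over a
    frequency band (in Hz), using Welch's method."""
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), 2 * fs))
    lo, hi = band
    mask = (freqs >= lo) & (freqs <= hi)
    return np.trapz(psd[mask], freqs[mask])

fs = 256                                    # assumed sampling rate, Hz
eeg = np.random.randn(30 * fs)              # stand-in for 30 s of in-ear EEG
alpha = band_power(eeg, fs, (7.5, 14.0))    # relaxation-related band
beta = band_power(eeg, fs, (14.0, 30.0))    # anxiety/arousal-related band
print("alpha/beta ratio:", alpha / beta)    # one crude stress marker
```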

These hearables may also measure sweat, both its quantity and composition. We all know we tend to sweat more when we’re under stress. Sensors that measure variations in the conductance of our skin due to changes in our sweat glands—known as the galvanic skin response—are starting to show up in wrist wearables and will be easily adapted to the ear. (You may be familiar with this technology for its role in polygraph machines.) Microfluidic devices that gather sweat from the skin can also help measure the rate of sweating. And researchers recently demonstrated that a sensor can detect cortisol—a steroid hormone released when a person is under stress—in sweat.
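
In software, “measuring galvanic skin response” often reduces to counting these short-lived rises in conductance. The snippet below is a deliberately simple sketch of that idea; the window length and the minimum rise that counts as a response are illustrative assumptions, not clinical values.

```python
import numpy as np

def count_scrs(conductance_uS, fs, min_rise_uS=0.05, window_s=3.0):
    """Count skin conductance responses: brief rises in skin conductance
    (microsiemens) that accompany stress or emotional arousal."""
    x = np.asarray(conductance_uS, dtype=float)
    step = max(1, int(window_s * fs))
    responses = 0
    for i in range(0, len(x) - step, step):
        segment = x[i:i + step]
        rise = segment.max() - segment[0]   # how far conductance climbed
        if rise >= min_rise_uS:
            responses += 1
    return responses
```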

These hearables will combine this physiological picture of user stress with the sounds picked up by their microphones in order to understand the context in which the user is operating. They will then automatically adjust their settings to the environment and the situation, and amplify the sounds the user most cares about while they screen out other sounds that could interfere with comprehension.

A team from Columbia University, Hofstra University, and the Neurological Institute recently took the idea of decoding the brain’s electrical activity to reduce listening stress a step further. The researchers demonstrated that brain-wave detection combined with machine learning can help people cope with multiple people talking simultaneously, and they devised an algorithm that reliably identifies the person a listener is most interested in hearing. The experiment involved recording brain waves while a research subject listened to someone speaking. Then the researchers mixed the sounds of the target speaker with a random interfering speaker. While the subject struggled to follow the story being narrated by the target speaker in the mix of sound, the researchers compared the current brain-wave signals with those of the previous recording to identify the target of attention. Using an algorithm to split the voices into separate channels, the system was then able to amplify the voice of the target speaker. While the researchers used implanted electrodes to conduct this experiment, a version of this technology could migrate into a noninvasive brain-monitoring hearable that increases the volume of a target speaker’s voice and dampens other sounds.
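
The heart of such an attention decoder can be surprisingly compact. The sketch below is an illustration of the general stimulus-reconstruction idea rather than the Columbia team’s actual algorithm: it assumes you already have preprocessed EEG (samples by channels) and the two candidate speakers’ audio envelopes resampled to the same rate, trains a ridge-regression decoder on a clean listening session, and then picks whichever speaker’s envelope best matches the envelope reconstructed from the mixed-scene EEG. Real systems also separate the voices automatically and account for time lags and filtering.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_decoder(eeg_train, env_train, alpha=1.0):
    """Learn a linear map from EEG channels to the envelope of the speech
    the listener attended to during a clean training recording."""
    return Ridge(alpha=alpha).fit(eeg_train, env_train)

def attended_speaker(decoder, eeg_test, env_a, env_b):
    """Reconstruct an envelope from EEG recorded in a two-talker mix and
    pick the speaker whose real envelope correlates better with it."""
    recon = decoder.predict(eeg_test)
    r_a = np.corrcoef(recon, env_a)[0, 1]
    r_b = np.corrcoef(recon, env_b)[0, 1]
    return "A" if r_a >= r_b else "B"
```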

Tomorrow’s hearables could also improve our physical as well as mental well-being. For example, a hearable could diagnose and treat tinnitus, the so-called ringing in the ears often caused by the loss of sensory cells in the cochlea or damage to neural cells along the route from the cochlea to the brain. To diagnose the problem, the hearable could play a sound that the wearer adjusts to match the tone of the ringing. Then, to treat the condition, the hearable can take advantage of its proximity to a vagus nerve branch to train the brain to stop the neural activity that caused the annoying ringing sound in the first place.
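
The tone-matching step is easy to picture in code. A hypothetical sketch: generate a pure tone and let the wearer nudge its frequency up or down until it matches the ringing they perceive; the sampling rate, amplitude, and candidate pitches below are arbitrary illustrative choices.

```python
import numpy as np

def pure_tone(freq_hz, seconds=1.0, fs=48_000, amplitude=0.2):
    """Generate one channel of a sine tone at the requested pitch, which
    the hearable replays while the wearer adjusts freq_hz to match the
    ringing they hear."""
    t = np.arange(int(seconds * fs)) / fs
    return amplitude * np.sin(2 * np.pi * freq_hz * t)

# Illustrative candidate pitches the wearer could step through.
candidates_hz = [2000, 3000, 4000, 6000, 8000]
tones = {f: pure_tone(f) for f in candidates_hz}
```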

Vagus nerve applications like this are in the early research phase, but they hold much promise for hearables. The vagus nerves are among a dozen major cranial nerve pairs. They run from the brain to the stomach, one on each side of the body, with branches that go through the skin near the nerve that carries the sensory information from the inner ear to the brain. Doctors stimulate these nerves electrically to treat epilepsy and depression, and researchers are testing such stimulation for the treatment of heart problems, digestive disorders, inflammation, and other mental and physical maladies.

In the case of tinnitus, researchers are stimulating the vagus nerve to boost the activity of neurotransmitters, which are helpful in learning, memory, and bootstrapping the brain to essentially rewire itself when necessary, a phenomenon called neuroplasticity. Such stimulation through hearables could also be used to address post-traumatic stress disorder (PTSD), to treat addiction, or to enhance learning of physical movements.

Privacy and security are two big interrelated challenges facing developers of hearables on the path to widespread adoption. While there are many benefits that may come from having wearables constantly monitoring our internal states and our reactions to our surroundings, people may be reluctant to submit to that kind of constant scrutiny. Businesses, too, will have concerns—for example, that hearables could be hacked to eavesdrop on meetings.

But for a hearable to provide maximum value, it must spend as much time as possible monitoring its owner’s daily life. Otherwise, the machine-learning systems that support it won’t have the detailed history and constant fresh supply of information that any AI device needs to keep improving.

There are also some thorny legal issues to consider. A couple of years ago, prosecutors in Arkansas subpoenaed Amazon to release information collected by an Echo device that was in a home where a murder occurred. Will people and businesses shy away from using smart hearables because the data they collect could one day be used against them?

Various strategies already exist to address such concerns. For example, to protect the voiceprints that voice-based biometric systems use for authentication, some systems dice the audio data into chunks that are encrypted with keys that change constantly. Using this technique, a breach involving one chunk won’t give a hacker access to the rest of that user’s audio data. The technique also ensures that a hacker won’t be able to obtain enough audio data to create a virtual copy of the user’s voice to fool a bank’s voice-authentication system. Additionally, developments in cryptography that allow algorithms to work with fully encrypted data without decrypting it will most likely be important for the security of future hearables.
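
A toy Python version of that chunk-wise scheme appears below, using the Fernet primitive from the “cryptography” package: each audio slice is encrypted under its own freshly generated key, so compromising one key exposes only one chunk. This is a sketch of the idea, not any vendor’s implementation; in a real deployment the keys would be held and rotated by a separate key-management service, and the encrypted-computation approach mentioned above is a different technique again.

```python
from cryptography.fernet import Fernet

def encrypt_audio(audio_bytes, chunk_size=4096):
    """Split raw audio bytes into chunks, each encrypted under its own key.
    Returns (key, ciphertext) pairs; in practice the keys would be stored
    and rotated by a key-management service, never beside the data."""
    encrypted = []
    for i in range(0, len(audio_bytes), chunk_size):
        key = Fernet.generate_key()                    # fresh key per chunk
        token = Fernet(key).encrypt(audio_bytes[i:i + chunk_size])
        encrypted.append((key, token))
    return encrypted

def decrypt_audio(encrypted):
    """Reassemble the audio, given access to every chunk's key."""
    return b"".join(Fernet(key).decrypt(token) for key, token in encrypted)
```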

Another, perhaps thornier, challenge involves mental health. Because hearables can monitor our internal states, tomorrow’s hearables will learn things about us that we may not know—or even be prepared to know.

For example, research by IBM found that an algorithm could predict the likelihood of the onset of psychosis within five years and diagnose schizophrenia with up to 83 percent accuracy simply by analyzing certain components of someone’s speech. But what would a hearable—perhaps one bought simply to improve hearing in restaurants and keep an eye on the user’s blood pressure—do with that kind of information? A hearable that listens to its owner for weeks, months, or years will have deep insight into that person’s mind. If it concludes that someone needs help, should it alert a family member, physician, or even law enforcement? Clearly, there are ethical questions to answer. Early intervention and knowledge are enormous benefits, but they will come at a cost: Either we’ll have to be willing to give up some of our privacy, or we’ll have to invest in a better infrastructure to protect it. Some questions raised by AI-powered hearables aren’t easily answered.

I am confident that we can meet these challenges. The benefits of hearables will far outweigh the negatives—and that’s a strong motivation for doing the work needed to sort out the issues of privacy and security. When we do, hearables will constantly and silently assess and anticipate our needs and state of mind while helping us cope with the world around us. They will be our true life partners.

This article appears in the May 2019 print issue as “Here come the Hearables.”

About the Author

Poppy Crum is chief scientist at Dolby Laboratories and an adjunct professor at Stanford University.
