Sounds of depression
Artificial intelligence could predict depression by analyzing speech
Mental health clinicians typically use a patient’s answers to specific questions about lifestyle, mood, or past mental illness to diagnose depression. But in a September conference paper, researchers at the Massachusetts Institute of Technology described a new artificial intelligence model they say can predict whether a person is depressed, based solely on raw text and audio from patient interviews, regardless of the topic of conversation.
“The first hints we have that a person is happy, excited, sad, or has some serious cognitive condition, such as depression, is through their speech,” said study co-author Tuka Alhanai, a researcher in MIT’s Computer Science and Artificial Intelligence Laboratory, in a press release.
The researchers trained the artificial intelligence model on a set of 142 audio, text, and video interviews of patients, not all of whom were depressed. The model gradually learned to associate certain speech patterns with depression. Its key innovation, according to the researchers, is the ability to detect those patterns in new individuals without any other diagnostic information.
“The model sees sequences of words or speaking style, and determines that these patterns are more likely to be seen in people who are depressed or not depressed,” Alhanai said. “Then, if it sees the same sequences in new subjects, it can predict if they’re depressed too.”
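For readers who want a concrete picture of what “learning sequences” looks like, here is a minimal sketch in Python (using PyTorch) of the general approach: a recurrent network reads a sequence of per-utterance feature vectors from an interview and emits a probability of depression. Every name, dimension, and architectural choice below is an illustrative assumption, not the MIT team’s actual model.

```python
import torch
import torch.nn as nn

class DepressionClassifier(nn.Module):
    """Illustrative sequence classifier (assumed architecture, not the
    paper's): an LSTM reads a sequence of per-utterance feature vectors
    (e.g., word embeddings or audio features) in order, and a linear
    head turns the final hidden state into a depressed/not-depressed
    probability."""
    def __init__(self, feat_dim=128, hidden_dim=64):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, x):             # x: (batch, seq_len, feat_dim)
        _, (h_n, _) = self.lstm(x)    # h_n: (1, batch, hidden_dim)
        return torch.sigmoid(self.head(h_n[-1]))  # (batch, 1) probability

# Toy usage: one interview of 20 utterances, each a 128-dim feature
# vector. Random numbers stand in for real transcript/audio features.
interview = torch.randn(1, 20, 128)
prob_depressed = DepressionClassifier()(interview)
```

Because the network sees only the sequence of features, not the questions asked, a model like this can in principle score any conversation, which is what makes the MIT result “context-free.”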
The MIT scientists hope technology like theirs could lead to computer apps that help people monitor their own mental health. But they believe it could also help doctors identify mental distress during ordinary conversations with patients.
“Every patient will talk differently, and if the model sees changes maybe it will be a flag to the doctors,” said co-author James Glass, a senior research scientist at MIT.
Equipped to the teeth
The U.S. Air Force is testing a two-way wireless communication system that gives new meaning to the term “talking head.”
Dubbed the “Molar Mic,” the system consists of a tiny microphone and speaker set in a custom-fitted mouthpiece that clips onto the user’s back teeth. The wearer can talk and hear without any external microphone or earpiece: the device conducts sound through the teeth and jawbone, turning them into an alternative auditory path.
In September, the Pentagon awarded a $10 million contract to Sonitus Technologies, developer of the Molar Mic, to provide the Air Force with secure, invisible communications gear usable in all kinds of tactical environments, according to military news website Defense One.
Air Force pararescue teams tested the device last year during Hurricane Harvey rescue operations in Houston. —M.C.
Scammer surge
A new report predicts nearly half of all calls to mobile phones in 2019 will be fraudulent. First Orion, a company that provides call screening and blocking services, analyzed more than 50 billion cell phone calls over an 18-month period. The portion of calls identified as mobile scams rose from 3.7 percent of total calls in 2017 to 29.2 percent in 2018, and the company projects it will reach 44.6 percent by early next year.
First Orion says many scammers are using “neighborhood spoofing,” a technique that disguises their actual phone number and displays it as a local number on a call recipient’s caller ID. Call-blocking apps can only block known numbers, not legitimate numbers that scammers temporarily hijack. —M.C.
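A toy sketch in Python, with entirely made-up phone numbers, shows why a static blocklist is powerless against neighborhood spoofing: the forged caller ID shares the recipient’s area code and prefix, so each call presents a fresh, plausibly local number that appears on no list of known scam numbers.

```python
# Illustrative sketch only (not any real app's logic): a scammer forges
# a caller ID matching the victim's area code and prefix, so a static
# blocklist of previously reported numbers almost never matches.
import random

blocklist = {"501-555-0199"}  # hypothetical previously reported scam number

def spoof_local_number(victim_number: str) -> str:
    """Forge a caller ID sharing the victim's area code and prefix."""
    area, prefix, _ = victim_number.split("-")
    return f"{area}-{prefix}-{random.randint(0, 9999):04d}"

victim = "501-555-0123"
caller_id = spoof_local_number(victim)  # e.g., "501-555-4821"
print(caller_id, "blocked:", caller_id in blocklist)  # almost always False
```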