ChatterBaby is a free app that uses artificial intelligence to analyse changes in frequency and in the sound-to-silence ratio of babies' cries. The AI-powered translator, developed by UCLA computational neuropsychologist Ariana Anderson, compares a baby's sounds with those in its database. Its signal-processing and machine-learning algorithms analyse this information and can determine the baby's needs with over 90% accuracy. Because the system improves as it gathers more data, parents must sign a research consent form allowing UCLA to collect and store the audio files before they can use the app.
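As a rough illustration of the kind of signal processing involved, the sketch below (in Python, using the librosa audio library) extracts two simple cues from a recording: how the dominant frequency moves over time, and what fraction of frames contain sound rather than silence. It is a minimal sketch of the general technique, not ChatterBaby's actual pipeline; the function name and thresholds are invented for illustration.

```python
import numpy as np
import librosa

def cry_features(path, sr=16000, silence_db=-40.0):
    """Illustrative feature extraction for a cry recording: dominant
    frequency over time plus a proxy for the sound-to-silence ratio.
    All thresholds are arbitrary choices for this sketch."""
    y, sr = librosa.load(path, sr=sr, mono=True)

    # Frame-level energy; frames quieter than silence_db (relative to
    # the loudest frame) are treated as silence.
    rms = librosa.feature.rms(y=y)[0]
    db = librosa.amplitude_to_db(rms, ref=np.max)
    sound_fraction = float((db > silence_db).mean())

    # Short-time spectrum; follow the strongest frequency in each frame.
    S = np.abs(librosa.stft(y))
    freqs = librosa.fft_frequencies(sr=sr)
    dominant = freqs[S.argmax(axis=0)]

    return {
        "sound_fraction": sound_fraction,          # proxy for the sound-to-silence ratio
        "mean_dominant_hz": float(dominant.mean()),
        "dominant_hz_std": float(dominant.std()),  # how much the frequency jumps around
    }
```

A real classifier would use many more features than these two, but the principle is the same: each cry is reduced to a vector of numbers that a model can compare against its database.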
To develop the algorithm, thousands of cries were recorded. First, cries caused by pain, such as when a baby receives a vaccine or an ear piercing, were collected. Then came cries reflecting other states, such as hunger, anxiety and fear. The recordings were meticulously labeled by a panel that included veteran mothers.
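With a labelled corpus like that in hand, training becomes a standard supervised-learning step. The sketch below uses random placeholder data in place of the real recordings; the feature values, label names, and model choice are all assumptions for illustration, not details of the actual study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder data standing in for the labelled corpus: each row is a
# feature vector (e.g. the output of cry_features above) and each
# label is the category a panel assigned. Values are random here.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 3))                   # 300 "cries", 3 features each
labels = rng.choice(["pain", "hunger", "fear"], size=300)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, labels, cv=5)  # held-out folds estimate accuracy
print(f"cross-validated accuracy: {scores.mean():.2f}")
```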
Researchers believe the app could in the future also detect whether irregularities in cry patterns are early signals of autism in children, becoming a useful tool for diagnosing the condition. There is some evidence that cries carry neurological clues, for example in acoustic features such as pitch, energy, and resonance, which can be visualized and quantified through specialized software to detect specific conditions. This could be especially relevant for communities that medical research usually neglects: children of colour, for example, are typically diagnosed with autism one to two years later than white children. Earlier diagnosis means better treatment and a better chance at integration.
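The acoustic features mentioned here can be quantified with widely available audio tools. The sketch below, again assuming librosa, computes a pitch contour, frame-level energy, and the spectral centroid as a crude stand-in for resonance; the feature choices and pitch range are assumptions of this sketch, and real clinical work would rely on validated methods.

```python
import numpy as np
import librosa

def acoustic_profile(path, sr=16000):
    """Quantify pitch, energy, and a rough resonance stand-in for one
    recording. Feature choices here are illustrative, not clinical."""
    y, sr = librosa.load(path, sr=sr, mono=True)

    # Pitch (fundamental frequency) contour via the YIN estimator;
    # infant cries generally sit higher than adult speech, hence the range.
    f0 = librosa.yin(y, fmin=200, fmax=1000, sr=sr)

    # Frame-level energy.
    rms = librosa.feature.rms(y=y)[0]

    # Spectral centroid as a crude proxy for resonance/brightness;
    # proper formant analysis would need a dedicated method such as LPC.
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

    return {
        "median_pitch_hz": float(np.median(f0)),
        "pitch_range_hz": float(np.percentile(f0, 95) - np.percentile(f0, 5)),
        "mean_energy": float(rms.mean()),
        "mean_centroid_hz": float(centroid.mean()),
    }
```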
Ariana Anderson and her team will combine data from periodic parent surveys with the audio files to build a single, more powerful machine-learning model.
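One straightforward way to combine the two sources is to encode survey answers as numbers and concatenate them with the audio-derived features, so a single model sees both. The survey fields below are hypothetical, invented only to illustrate the idea; nothing here reflects the team's actual questionnaire or model design.

```python
import numpy as np

def combine_features(audio_features, survey_answers):
    """Concatenate audio-derived features with encoded survey answers
    into one vector per recording. The survey fields are invented for
    this sketch and do not reflect the team's actual questionnaire."""
    survey_vector = [
        float(survey_answers["age_months"]),
        1.0 if survey_answers["recently_fed"] else 0.0,
        1.0 if survey_answers["recently_slept"] else 0.0,
    ]
    audio_vector = [
        audio_features["sound_fraction"],
        audio_features["mean_dominant_hz"],
        audio_features["dominant_hz_std"],
    ]
    return np.array(audio_vector + survey_vector)

# Example: one row of a combined training set.
row = combine_features(
    {"sound_fraction": 0.62, "mean_dominant_hz": 440.0, "dominant_hz_std": 85.0},
    {"age_months": 4, "recently_fed": True, "recently_slept": False},
)
```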