In the 21st century, visual technology has advanced tremendously. Compact cameras now fit in our pockets, and satellite imagery assists in everything from guiding people through unfamiliar cities to gathering intelligence. Yet, just as we rely on our other senses when sight fails us, scientists and technology companies have developed ways to use sound to tackle global problems.
A promising field of innovation is sound-based technologies. These have evolved significantly and are now being deployed to combat challenges like wildlife poaching, natural disasters, and crime. Soon, these cutting-edge sound technologies could even be available in everyday people's pockets. Below are ten significant problems being solved through the power of sound.
10. Poaching

One protected region of South America’s Atlantic Forest stretches across 900 square miles (2,330 square kilometers) and is crucial to the survival of the endangered jaguar. Vast as it is, it represents only a small part of the jaguar’s habitat. Poaching and deforestation have driven a dramatic decline: only about 300 jaguars remain in the entire forest, a third of them within this protected zone.
To safeguard the jaguars from poachers, a conservation initiative in Brazil tested a new mapping technology to predict where poachers might strike. This innovative system relied on audio data, captured by recorders strategically placed high in trees, hidden from view. These devices could detect the sound of gunshots from up to 1.2 miles (1.9 kilometers) away.
After seven months of recording, the collected data was used to create a predictive map with 82% accuracy, forecasting where poachers were likely to target next. This technology enables park rangers to adjust their patrol routes, ensuring they focus on areas with a higher likelihood of poaching activity.
9. Gun Crime

SoundThinking, a U.S.-based company, also utilizes gunshot detection technology, but its goal is to tackle urban gun crimes rather than poaching in remote jungles. The technology, known as ShotSpotter, uses a network of acoustic sensors positioned throughout a city to identify gunshots. By calculating the time it takes for the sound to reach multiple sensors, it pinpoints the location of the shot.
This information is relayed to emergency services in near real time, enabling faster responses and improving the chances of preventing further gun-related incidents. Sound that reaches the sensors along indirect paths can complicate the calculation, but the company’s website asserts that the system can guide authorities to within 82 feet (25 meters) of the shooting location, increasing the likelihood of finding critical evidence.
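In simplified form, the idea behind multi-sensor gunshot localization is that each sensor timestamps the arrival of the bang, and the point whose predicted arrival-time differences best match the observed ones is the likely source. The sketch below is a toy 2D version with made-up sensor positions and a brute-force grid search; it illustrates the time-difference principle, not ShotSpotter's actual algorithm.

```python
import math

# Hypothetical 2D sensor layout (meters); real deployments use many
# rooftop sensors with precise, GPS-synchronized clocks.
SENSORS = [(0.0, 0.0), (400.0, 0.0), (0.0, 400.0), (400.0, 400.0)]
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def arrival_times(source):
    """Time for the sound to reach each sensor from a given point."""
    return [math.dist(source, s) / SPEED_OF_SOUND for s in SENSORS]

def locate(times, step=5.0):
    """Grid search: find the point whose predicted arrival-time
    *differences* (relative to sensor 0) best match the observed ones."""
    observed = [t - times[0] for t in times]
    best, best_err = None, float("inf")
    x = 0.0
    while x <= 400.0:
        y = 0.0
        while y <= 400.0:
            pred = arrival_times((x, y))
            diffs = [p - pred[0] for p in pred]
            err = sum((d - o) ** 2 for d, o in zip(diffs, observed))
            if err < best_err:
                best, best_err = (x, y), err
            y += step
        x += step
    return best

# Simulate a shot at a known spot and recover it from timings alone.
shot = (120.0, 250.0)
print(locate(arrival_times(shot)))  # → (120.0, 250.0)
```

Only the differences between arrival times are used, which is why the absolute moment of the shot never needs to be known.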
Despite its widespread use in 150 U.S. cities, ShotSpotter has faced its share of controversy. Some critics have questioned the technology’s effectiveness, while others have raised concerns about potential biases in its detection system.
8. Cave Mapping

While ShotSpotter focuses on detecting gunfire in urban areas, another innovative sound-based technology from 2011 took a different approach. A Massachusetts company called Acentech developed a system that mapped cave interiors using the echoes produced by gunshots.
To use this technique, a gun had to be fired four or five times into the cave, with about five seconds between each shot. After 15 to 20 seconds, the sound would echo back into two microphones positioned at the cave entrance, and the collected data would then be displayed on a laptop. This method of cave mapping is reminiscent of how bats use sonar for navigation.
Known as echolocation, this technique is also similar to the one used by Batman and Lucius Fox in the final scenes of the film The Dark Knight. However, the real-world version did not produce the same 3D imagery as seen in the movie. Instead, the cave-mapping technology generated basic graphs and written descriptions of the cave’s interior.
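The core arithmetic behind echo ranging is simple: sound travels out to a surface and back, so the distance is the speed of sound times the delay, halved. A minimal sketch, using the article's 15-second echo delay purely as an illustration:

```python
# Turning an echo delay into a distance estimate, the basic relation
# behind gunshot-echo cave mapping. The delay value is illustrative.
SPEED_OF_SOUND = 343.0  # m/s in air

def echo_distance(delay_s):
    """Sound makes a round trip, so halve distance = speed * time."""
    return SPEED_OF_SOUND * delay_s / 2.0

# A 15-second round trip implies a reflecting surface ~2.6 km away.
print(echo_distance(15.0))  # → 2572.5
```

Real systems like Acentech's must also untangle overlapping echoes from many surfaces, which is why multiple shots and two microphones were needed.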
7. Room Mapping and Design

Echolocation doesn’t always require something as loud as a gunshot. In 2013, researchers in Switzerland developed a system that could accurately map a room to the millimeter using just a finger snap and four microphones. What’s even more impressive is that the placement of the microphones doesn't impact the system’s effectiveness.
The algorithm behind the system processes the data from the microphones, factoring in both the distance between them and the distance from the sound to the room’s walls. Despite the tiny time gaps between each sound, the algorithm is able to filter and process them to generate a precise 3D map of the room.
Rather than testing the technology in a simple environment, the researchers used it to map the interior of Lausanne Cathedral. This was more of a demonstration of the algorithm’s capabilities. The researchers envisioned real-world applications, such as designing new buildings like concert halls or auditoriums, where the system could help architects predict and fine-tune a room’s acoustics.
6. Tsunamis and Earthquakes

Echolocation, commonly used to map caves and cathedrals, works by emitting sound waves that bounce back after hitting surfaces. A similar method is employed beneath the ocean to trace faults in the earth’s plates, located several meters below the seafloor. This task is crucial because understanding where these faults are positioned can significantly help save lives.
For instance, the Palos Verdes Fault Zone in California presents the risk of a sudden large-scale shift between plates that could trigger a tsunami. By identifying areas in the fault zone where such movements are frequent, scientists can track how quickly and often the fault shifts, offering a clearer understanding of the potential risks to coastal regions and offshore oil platforms.
Seismic reflection is a technique that uses seismic waves, generated by earthquakes or explosions, to produce profiles of the earth’s subsurface layers. The frequency of these waves determines the depth at which scientists can observe. To map fault lines that lie just meters beneath the surface, higher frequencies are used. While seismic waves are not strictly sound waves, some of them behave like sound waves when they travel through air.
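The frequency–depth trade-off can be made concrete with back-of-envelope numbers: wavelength equals wave speed divided by frequency, and a common rule of thumb in seismic surveying puts the smallest resolvable layer near a quarter wavelength. The velocities and frequencies below are illustrative, not from the article:

```python
# Rough link between survey frequency and resolvable detail.
# Rule of thumb: vertical resolution ~ wavelength / 4.
def wavelength(velocity_m_s, frequency_hz):
    return velocity_m_s / frequency_hz

def vertical_resolution(velocity_m_s, frequency_hz):
    return wavelength(velocity_m_s, frequency_hz) / 4.0

# In sediments with a P-wave speed of ~1600 m/s, a 10 Hz survey
# resolves ~40 m layers; a 400 Hz survey resolves ~1 m layers.
print(vertical_resolution(1600.0, 10.0))   # → 40.0
print(vertical_resolution(1600.0, 400.0))  # → 1.0
```

The catch is that higher frequencies attenuate faster, which is why fine detail is only achievable for shallow structures like near-seafloor faults.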
5. Volcanic Eruptions

Volcanic eruptions are another type of natural disaster where sound technology plays a key role in safeguarding lives. In fact, it has been offering protection for over a decade. A sound-based warning system, developed by geophysicist Maurizio Ripepe, predicted 57 out of 59 eruptions that occurred between 2010 and 2018 at Mount Etna, the largest active volcano in Europe.
The system operates by detecting infrasound waves, which are low-frequency vibrations that humans cannot hear. Despite their inaudibility, they exist and are known to be generated by volcanic activity before eruptions occur.
These waves are produced when gases rising through magma stir the air inside a volcano’s chambers, much as blowing into a musical instrument creates sound. By detecting this infrasound, authorities can be alerted to a potential eruption in time to take action.
4. Sunspots and Solar Flares

Sound has not only assisted scientists in predicting major events on Earth but also in tracking some of the most intense phenomena within the entire solar system. Sunspots, dark patches on the Sun's surface, are believed to be caused by fluctuations in the Sun's magnetic field. When these sunspots grow large, they can precede solar storms and flares, which have the potential to affect Earth, making it essential to monitor them in advance.
Due to their potential impact on GPS systems, communications, and possibly even electrical grids, six telescopes positioned around the world ensure the Sun's activity is constantly monitored. One method employed by these telescopes is called 'helioseismology,' which involves listening for sound wave changes coming from within the Sun.
Normally, these waves travel freely, but intense magnetic fields can alter their path. By detecting these alterations, scientists can infer a sunspot before it becomes visible. In practice, the method reveals a sunspot that already exists on the far side of the Sun, which typically rotates into view a few days later.
3. Predicting the Stock Market

Most people are not skilled at hiding their true thoughts, knowledge, or emotions when speaking. This applies even to CEOs and managers, despite their polished public speaking. Researchers from Germany have leveraged this human tendency to predict the future earnings of companies, with their findings suggesting that analyzing vocal cues may be even more accurate than analyzing financial data.
While some analysts have long valued vocal cues, modern software can analyze them in ways far beyond human capability. For instance, a seemingly dull, routine presentation might actually carry an urgent warning when examined through the lens of sound structure—such as frequency, amplitude, and more.
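Crude stand-ins for the kinds of acoustic features such software extracts are loudness (RMS energy) and the zero-crossing rate, a rough proxy for frequency content. The research pipelines are far more sophisticated; this sketch, with synthetic "voice" frames, only illustrates the idea of pulling numbers out of sound structure:

```python
import math

def rms(frame):
    """Root-mean-square amplitude: a simple loudness measure."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def zero_crossing_rate(frame):
    """Fraction of adjacent sample pairs that change sign:
    a crude proxy for dominant frequency."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if a * b < 0)
    return crossings / (len(frame) - 1)

# Two synthetic frames at an 8 kHz sample rate: a quiet low-pitched
# tone versus a louder, higher-pitched one.
calm = [0.2 * math.sin(2 * math.pi * 100 * n / 8000) for n in range(800)]
tense = [0.8 * math.sin(2 * math.pi * 300 * n / 8000) for n in range(800)]

print(rms(tense) > rms(calm),
      zero_crossing_rate(tense) > zero_crossing_rate(calm))  # → True True
```

Tracked over the course of a call, shifts in features like these are the raw material the researchers' models learn from.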
The German researchers tested their system on real recordings of calls between analysts and managers ahead of major earnings announcements. Because the recordings dated from 2019 to 2022, they could not profit from the predictions themselves; had they acted on them at the time, however, they would have outperformed the market by nearly 9%.
2. Identifying Health Issues

A person’s voice conveys far more information than just the words they speak. You can often tell their origin, emotional state, or even if they’re intoxicated or ill. While these may be the more obvious indicators, they are just the surface. Subtle nuances, such as the nasal sounds caused by a cold, are only the beginning of what can be uncovered from speech when it comes to identifying health conditions.
Conditions like depression, Parkinson’s, and even cancer can leave noticeable marks on how someone communicates. Over $100 million has been invested in AI initiatives aiming to identify these changes in speech to enhance and accelerate the process of diagnosing serious health conditions. One proposed method is using personal devices, such as phones or smart assistants like Alexa, to detect concerning shifts in speech patterns.
Among the most promising uses for this technology is in diagnosing Parkinson’s disease, where it has been shown to identify the illness with up to 98.6% accuracy from a simple “aaah” sound made by the person’s voice.
1. Mechanical Malfunctions

Today, many factories across the globe are highly efficient, fast, and largely operate without human workers. These factories are often massive and filled with complex machinery, which raises the question: what happens when one of these machines malfunctions? It could take a long time just to pinpoint the issue, let alone fix it.
While this was once the case, AI has provided a solution. Even more impressively, AI is working to prevent machine failures, or to warn workers of impending breakdowns, by listening to the machines. Listening to machines is not a new idea: a few factories still employ people to do it by ear, though the practice is rare. Now, sensors can be installed that listen beyond the range of human hearing.
The sensors capture the sounds produced by the machines, and these recordings help train machine learning algorithms to recognize what normal sounds should be, and what sounds indicate a potential failure. This technology even has the capability to predict the sound of a failure that has never been heard before. The hope is that this innovation will help factories avoid expensive downtime in the future.
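A minimal version of that learn-the-normal approach: fit the spread of a simple loudness feature over recordings of a healthy machine, then flag any sound that falls far outside it. Real systems learn models over rich spectral features; the feature, frames, and threshold here are invented for illustration.

```python
import math

def loudness(frame):
    """Mean absolute amplitude: a simple loudness feature."""
    return sum(abs(s) for s in frame) / len(frame)

def fit_baseline(normal_frames):
    """Learn mean and spread of the feature over normal recordings."""
    feats = [loudness(f) for f in normal_frames]
    mean = sum(feats) / len(feats)
    var = sum((x - mean) ** 2 for x in feats) / len(feats)
    return mean, math.sqrt(var)

def is_anomalous(frame, baseline, threshold=3.0):
    """Flag frames more than `threshold` standard deviations out."""
    mean, std = baseline
    return abs(loudness(frame) - mean) > threshold * std

# Twenty synthetic "normal" recordings with slightly varying volume,
# then one much louder "rattle" the model has never heard.
normal = [[(0.28 + 0.002 * i) * math.sin(0.1 * n) for n in range(200)]
          for i in range(20)]
baseline = fit_baseline(normal)
rattle = [0.9 * math.sin(0.1 * n) for n in range(200)]
print(is_anomalous(rattle, baseline))  # → True
```

Because the detector models what normal sounds like rather than cataloguing known faults, it can flag a failure mode it has never heard before, which is exactly the property the article describes.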
