Digital Bioacoustics: Will AI Allow People To Talk With Their Pets In The Future?

Written by Jillian Green


Introduction

Have you ever been envious of the show Martha Speaks? Or of Binx talking with Dani in Hocus Pocus? Imagine a reality where all of this is possible! Digital bioacoustics is an emerging field concerned with recording and interpreting the sounds animals make, particularly in the wild. Scientists across the globe have developed minuscule microphones that can attach to animals, record sound continuously, and sense bodily movements (Bushwick, 2023). While this hardware is impressive, the truly radical element is the newly developed AI that can identify patterns and structures within animal sounds. Recognizing repeated or consistent sounds and pairing them with the animal’s activity is the key to understanding its communication. However, various ethical concerns arise as we teeter on the edge of cross-species communication. Learning the languages of other species could help us protect them, since we could better understand the factors driving their decline, but the same technology could also allow society to manipulate animals more efficiently, through domestication, poaching, and other controlling exploits. While digital bioacoustics has allowed us to begin decoding the vernacular of bees, whales, and other creatures, it poses serious ethical dilemmas. This AI wields the potential to help biodiversity flourish, yet it could just as easily be used to interfere with nature.


Mechanics 

Digital bioacoustics is built on two main components: a recording device and a computational AI system, often a Large Language Model (LLM). Such a model can recognize, decode, summarize, translate, or generate new information based on the data it is fed. LLMs are composed of multiple neural networks, computational structures loosely modeled on the human brain’s framework. These neural networks each serve slightly different functions and operate in conjunction with one another to assess inputs and produce outputs. Researchers attach discreet, wireless microphones to various animals, including deer, bees, and deep-diving whales. The versatility of these microphones allows scientists to record a wide variety of animal vocalizations in an array of unorthodox environments. Once the audio has been acquired, scientists feed the recordings to the AI software, which applies language-processing algorithms to detect patterns or rhythms within the creature’s vocalizations. The software can then match behaviors to vocalizations, successfully decoding the non-human language of the designated organism.

Mad Creative Inc. (2018). Introducing Large Language Models (LLMs): Empowering AI with Language Understanding [Infographic]. https://www.mad.co/insights/introducing-large-language-models-llms
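The pattern-matching step described above can be sketched in miniature. The snippet below is a toy illustration, not any lab’s actual pipeline: it assumes recordings have already been reduced to labeled call units (the names like "toot" and "whoop" are hypothetical placeholders for clustered acoustic features) and simply counts how often each unit co-occurs with an observed behavior.

```python
from collections import Counter, defaultdict

# Toy data: each recording is a sequence of discretized call units paired
# with the behavior observed at the same time. Labels are hypothetical.
recordings = [
    (["toot", "waggle", "waggle"], "forage"),
    (["toot", "waggle", "hiss"],   "forage"),
    (["whoop", "hiss"],            "alarm"),
    (["whoop", "whoop", "hiss"],   "alarm"),
]

def call_behavior_associations(recordings):
    """Count how often each call unit co-occurs with each behavior."""
    assoc = defaultdict(Counter)
    for calls, behavior in recordings:
        for call in calls:
            assoc[call][behavior] += 1
    return assoc

def likely_behavior(assoc, call):
    """Return the behavior most strongly associated with a call unit."""
    return assoc[call].most_common(1)[0][0]

assoc = call_behavior_associations(recordings)
print(likely_behavior(assoc, "whoop"))   # alarm
print(likely_behavior(assoc, "waggle"))  # forage
```

Real systems work on raw audio and use statistical models rather than simple counts, but the underlying idea is the same: repeated co-occurrence of a sound and a behavior is evidence of meaning.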

Applying Digital Bioacoustics 

While robotic bee replicants sound suspiciously like something out of a futuristic horror film, digital bioacoustics is currently playing a critical role in the creation of a robotic bee; fortunately, one designed to have positive effects on society. Bees are detrimentally affected by climate change, habitat loss, pathogens, and many other factors. In response to these threats, researchers are using computational AI systems to fight back. Researcher Tim Landgraf of Freie Universität Berlin has designed an artificial bee replica that can interact with real bees through sound and movement. Bees famously communicate through a “waggle dance,” a particular combination of movement and “buzzing” that conveys information about food, predation, and other natural occurrences. Landgraf used AI-driven image and audio processing to match specific combinations of sounds and maneuvers to the behaviors they elicit. His analysis found that bees have a designated “waggle” that signals others to be silent, a “whoop” sound that signifies danger, and other distinct noises like “toot” and “hiss.” With these bioacoustics, Landgraf successfully programmed his robots to reproduce these transmissions, allowing us to deliver commands directly to entire communities of bees. Scientists can now relay warnings to bees about feeding sites that human activity or environmental pathogens may have contaminated.

Landgraf, T. (2016). BeesBook: Analysis of social networks in honeybee colonies [Photograph]. Website of Tim Landgraf. https://dcmlr.inf.fu-berlin.de/landgraf/index.html%3Fp=49.html#respond
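The waggle dance itself encodes concrete, decodable information: the dance’s angle relative to vertical corresponds to the food source’s bearing relative to the sun, and the duration of the waggle run corresponds roughly to distance. The sketch below illustrates that decoding; the calibration constant is a hypothetical round number, since real values vary by species and colony.

```python
# Assumed calibration: meters of flight per second of waggle run.
# Real bees' calibration differs by species; this value is illustrative.
METERS_PER_SECOND_OF_WAGGLE = 1000.0

def decode_waggle(dance_angle_deg, sun_azimuth_deg, waggle_seconds):
    """Translate an observed waggle run into a compass bearing and distance.

    The dance angle (measured from vertical on the comb) is added to the
    sun's current azimuth to recover the direction to the food source.
    """
    bearing = (sun_azimuth_deg + dance_angle_deg) % 360
    distance_m = waggle_seconds * METERS_PER_SECOND_OF_WAGGLE
    return bearing, distance_m

# A dance angled 30 degrees right of vertical with the sun at azimuth 180
# points foragers 30 degrees clockwise of the sun.
print(decode_waggle(30, 180, 0.5))  # (210, 500.0)
```

A robotic bee like Landgraf’s runs this mapping in reverse: given a target location, it performs the dance angle and duration that real foragers will decode.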

In addition to bees, whales offer another opportunity for bioacoustics to delve into an animal vernacular. Researcher Gašper Beguš, who works with the Project Cetacean Translation Initiative (CETI), is attempting to decode the clicks that sperm whales use to communicate. Beguš has devised a computational AI split into two parts: one that listens to microphone recordings and identifies whale clicks, and another that spontaneously generates random clicks. The first model gives feedback to the second based on how realistic its output sounds, and over time the second model learns to produce accurate whale sounds. This system has allowed Beguš to understand the timing and number of clicks needed to communicate proficiently with a whale. With this newfound knowledge, scientists can hear whales relay messages about migration patterns or potential environmental hazards in their habitats.
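This two-part setup resembles a generative adversarial arrangement: one model critiques, the other generates, and the feedback loop drives improvement. The toy below mimics that loop with a single tunable parameter (the mean gap between clicks) and simple hill-climbing in place of neural-network training; the target value and every detail are invented for illustration, not taken from Project CETI.

```python
import random
import statistics

# Hypothetical "real" mean inter-click interval in seconds.
REAL_MEAN_GAP = 0.2

def critic(gaps):
    """Score realism: the closer to the real mean gap, the better (max 0)."""
    return -abs(statistics.mean(gaps) - REAL_MEAN_GAP)

def generate(mean_gap, n=8):
    """Propose a click train: n gaps jittered around mean_gap."""
    return [max(0.01, random.gauss(mean_gap, 0.02)) for _ in range(n)]

random.seed(0)
mean_gap = 1.0  # the generator starts far from realistic timing
for _ in range(200):
    candidate = mean_gap + random.gauss(0, 0.05)
    # Keep a change only when the critic prefers its output.
    if critic(generate(candidate)) > critic(generate(mean_gap)):
        mean_gap = candidate

print(round(mean_gap, 2))  # converges near 0.2
```

The real system replaces both the one-number generator and the hand-written critic with neural networks, but the shape of the loop is the same: generate, score, adjust, repeat.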


Ethics

Interspecies communication is something humans have craved for centuries. Chatting with pets or striking up a conversation with a creature in the forest is a fundamental desire, one many people have dreamed of since childhood. Realistically, though, even if digital bioacoustics could open that pathway, would animals actually want to speak with us? What could they possibly gain from it? The driving factor behind digital bioacoustics is not simply to “talk to animals” but to better understand the organisms around us so that we can protect them. Bioacoustics opens a gateway to another dimension of animal conservation, one that would allow us to relay information to organisms about threats such as poachers and environmental dangers, or even about positive factors such as food and water sources. The limiting factor is that animal communication most often occurs only in extremely niche contexts: signaling danger, mating, defending territory, and so on. This restricts the circumstances in which we could interact and what we could say. Animals may not be able to comprehend anything beyond their contextual communication, preventing any further interspecies conversation. This predicament calls for a great deal of research, and for contemplation of whether we should even attempt to break the language barrier. Is it ethical to insert ourselves into an animal society, disrupt its carefully curated lifestyle, and engage in dialogue with it?



Conclusion 

Digital bioacoustics is making major headway in the AI world, breaking barriers between humans and non-humans and cultivating a world where interspecies communication could become an everyday standard. While speaking with animals about any ordinary topic will take years of extensive research, the ability to understand what organisms say to each other, and to mimic those sounds so that we can engage in their dialogue, is already in our hands. We now wield the power to assist animals through direct communication, and as time passes, the power of digital bioacoustics only seems to grow.

Bushwick, S. (2023, February 7). How Scientists Are Using AI To Talk To Animals. Scientific American. https://www.scientificamerican.com/article/how-scientists-are-using-ai-to-talk-to-animals/

Welz, A. (2019, November 5). Listening to Nature: The Emerging Field of Bioacoustics. Yale E360. https://e360.yale.edu/features/listening-to-nature-the-emerging-field-of-bioacoustics

Jones, N. (2022, November 1). How Digital Technology Is Helping Decode the Sounds of Nature. Yale E360. https://e360.yale.edu/features/bioacoustics-nature-sounds-digital-technology

Bushwick, S., Harper, K., & DelViscio, J. (2023, September 11). Scientists Are Beginning to Learn the Language of Bats and Bees Using AI. Scientific American. https://www.scientificamerican.com/podcast/episode/scientists-are-beginning-to-learn-the-language-of-bats-and-bees-using-ai/

McLoughlin, M., Stewart, R., & McElligott, A. (2019, June 19). Automated bioacoustics: methods in ecology and conservation and their potential for animal welfare monitoring. Journal of the Royal Society Interface. https://royalsocietypublishing.org/doi/10.1098/rsif.2019.0225

Bakker, K. (2022, November 2). Listening to the tree of life. Princeton University Press. https://press.princeton.edu/ideas/listening-to-the-tree-of-life 

Bakker, K. (2022, October 19). Nature’s Hidden Sounds. Terrain. https://www.terrain.org/2022/currents/natures-hidden-sounds/

Hawkins, C., Lucas, T., Mroz, K., & Collen, A. (2013, March). Challenges of Using Bioacoustics to Globally Monitor Bats. ResearchGate. https://www.researchgate.net/publication/267212641_Challenges_of_Using_Bioacoustics_to_Globally_Monitor_Bats

Roopalatha, H. (n.d.). Can Humans Really Talk to Animals through Artificial Intelligence? CIO Insider. https://www.cioinsiderindia.com/tech-buzz/can-humans-really-talk-to-animals-through-artificial-intelligence-tbid-4808.html

Willmer, G. (2023, February 24). Robotic bees and roots offer hope of healthier environment and sufficient food. Horizon, The EU Research and Innovation Magazine.  https://ec.europa.eu/research-and-innovation/en/horizon-magazine/robotic-bees-and-roots-offer-hope-healthier-environment-and-sufficient-food
