Animals communicate through sound with far more complexity and nuance than scientists had suspected.
Using new digital tools, researchers have begun to decipher auditory communications from many non-human life forms, such as elephants, birds, bees, bats, and even plants. A few examples:
• Strawberry rootlets will grow toward the sound of running water, even where there is no moisture gradient.
• There are distinctive patterns in elephant vocalizations that correspond to whether the elephant is in the presence of a friendly or unfriendly human, an adult or a child, or a man or a woman.
• Owl hoots contain markers identifying both the speaker and the listener.
• A researcher using a voice-recognition program on bat vocalizations found that adults lower the pitch of their squeaks when communicating with babies, the opposite of humans, whose baby talk is typically higher in pitch.
• Most of us have assumed that male peacock displays were sending primarily visual signals, but in fact there is also strong acoustic messaging in the infrasound range, to which females are keenly attuned. What precise information is contained in the feather-shaking display is only beginning to be deciphered.
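The bat finding above rests on comparing the fundamental pitch of adult-directed versus pup-directed calls. As a toy illustration only, here is a minimal sketch of how such a pitch comparison could work on clean signals, using a crude zero-crossing estimator; the frequencies, call names, and method are illustrative assumptions, not taken from the study:

```python
import math

def estimate_pitch_hz(samples, sample_rate):
    """Estimate fundamental frequency by counting zero crossings.

    Crude but adequate for clean, single-tone signals: each full
    period of a sine wave produces two zero crossings.
    """
    crossings = 0
    for prev, cur in zip(samples, samples[1:]):
        if (prev < 0) != (cur < 0):
            crossings += 1
    duration_s = len(samples) / sample_rate
    return crossings / (2 * duration_s)

def tone(freq_hz, duration_s=1.0, sample_rate=8000):
    """Generate a pure sine tone as a list of samples."""
    n = int(duration_s * sample_rate)
    return [math.sin(2 * math.pi * freq_hz * i / sample_rate)
            for i in range(n)]

# Hypothetical stand-ins for the two call types (frequencies invented):
adult_call = tone(440.0)  # higher-pitched "adult-directed" call
pup_call = tone(220.0)    # lower-pitched "pup-directed" call

print(estimate_pitch_hz(adult_call, 8000))
print(estimate_pitch_hz(pup_call, 8000))
```

Real bioacoustics work uses far more robust pitch trackers over noisy field recordings, but the comparison itself reduces to this kind of per-call frequency estimate.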
This work raises profound philosophical and ethical issues, because machine systems can generate sounds that animals clearly understand and respond to, even though we can't recognize those nuances with our own ears. Should we use machine-generated animal sounds to send specific messages to animals, such as warnings, greetings, or invitations?
----Google blog: "Separating Birdsong in the Wild for Classification"