Google DolphinGemma AI Uses 400M Parameters To Analyze Dolphin Sounds On Pixel 9

April 19, 2025
1 min read
Photo Source: A pod of Atlantic spotted dolphins. Google The Keyword

Google has launched DolphinGemma, a new AI model that listens to dolphin sounds and tries to make sense of them. This joint effort by Google, the Wild Dolphin Project, and Georgia Tech might finally help humans understand what dolphins are saying to each other.

“We’re not just listening anymore. We’re beginning to understand the patterns within the sounds,” says Dr. Denise Herzing, who founded the Wild Dolphin Project in 1985 to study Atlantic spotted dolphins in the Bahamas.

The project analyzes three main types of dolphin sounds: whistles (tonal calls), clicks (short sounds used for navigation), and burst pulses (rapid click sequences used in social situations). Researchers have observed correlations between specific sounds and certain behaviors – mother dolphins use unique “signature whistles” to call their calves, “squawks” often occur during fights, and “buzzes” appear during courtship.
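As a rough sketch of how such annotations might be organized (illustrative only – the Wild Dolphin Project's actual labeling scheme is not described in this article), the three categories pair naturally with behavioral context labels:

```python
# Illustrative sketch only: hypothetical names, not the Wild Dolphin
# Project's real annotation schema.
from dataclasses import dataclass
from enum import Enum, auto

class SoundType(Enum):
    WHISTLE = auto()      # tonal calls, e.g. signature whistles
    CLICK = auto()        # short sounds used for navigation
    BURST_PULSE = auto()  # rapid click sequences in social situations

@dataclass
class Annotation:
    sound_type: SoundType
    label: str    # e.g. "signature_whistle", "squawk", "buzz"
    context: str  # observed behavior when the sound occurred

observations = [
    Annotation(SoundType.WHISTLE, "signature_whistle", "mother calling calf"),
    Annotation(SoundType.BURST_PULSE, "squawk", "fight"),
    Annotation(SoundType.BURST_PULSE, "buzz", "courtship"),  # category assignment illustrative
]
```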

DolphinGemma works similarly to AI models that predict the next word in a human sentence, but instead predicts the likely subsequent sounds in a dolphin sequence. This helps identify patterns that could reveal the structure of dolphin communication.
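The article gives no detail on the model's internals beyond this analogy, so the toy sketch below only illustrates the "predict the next sound" principle with a simple bigram counter; the token names and sequences are invented.

```python
# Toy illustration of next-token prediction over discretized sound
# tokens; not DolphinGemma's actual architecture.
from collections import Counter, defaultdict

def train_bigram(sequences):
    """Count which sound token tends to follow which in recordings."""
    follows = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            follows[cur][nxt] += 1
    return follows

def predict_next(follows, token):
    """Most likely next token after `token`, or None if unseen."""
    candidates = follows.get(token)
    return candidates.most_common(1)[0][0] if candidates else None

# Hypothetical sequences of discretized dolphin-sound tokens.
recordings = [
    ["whistle_A", "click_run", "buzz", "whistle_A"],
    ["whistle_A", "click_run", "squawk"],
]
model = train_bigram(recordings)
print(predict_next(model, "whistle_A"))  # -> 'click_run'
```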

What makes this technology particularly useful is its portability. The AI model runs on Google Pixel phones, allowing scientists to analyze dolphin sounds while swimming alongside the animals. Waterproof rigs built around Pixel 9 phones, expected in the field by summer 2025, will add speaker and microphone functions for playback and recording.
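The on-device pipeline isn't specified; a common pattern for field audio analysis is to buffer the microphone stream into fixed windows before each inference pass. A minimal sketch, with an assumed sample rate and window length:

```python
# Generic windowed-streaming pattern; sample rate and window size are
# assumptions, not published DolphinGemma specs.
import numpy as np

SAMPLE_RATE = 48_000             # assumed capture rate
WINDOW = int(SAMPLE_RATE * 2.0)  # assumed 2-second analysis window

def stream_windows(mic_chunks, window=WINDOW):
    """Buffer raw microphone chunks and yield fixed-size windows
    suitable for feeding an on-device model one segment at a time."""
    buf = np.empty(0, dtype=np.float32)
    for chunk in mic_chunks:
        buf = np.concatenate([buf, chunk.astype(np.float32)])
        while len(buf) >= window:
            yield buf[:window]
            buf = buf[window:]

# Simulated mic input: five 1-second chunks of silence.
chunks = (np.zeros(SAMPLE_RATE) for _ in range(5))
print(sum(1 for _ in stream_windows(chunks)))  # -> 2 full windows
```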


The team has also developed the Cetacean Hearing Augmentation Telemetry (CHAT) system, which creates artificial whistles and pairs each with an object such as seagrass or a toy. Researchers then watch for dolphins mimicking these sounds in what appear to be requests for those items – potentially creating a simple shared vocabulary.
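At its core, that pairing can be pictured as a simple lookup from each synthetic whistle to its object; the sketch below is an assumption about the concept, not CHAT's actual implementation.

```python
# Conceptual sketch only; CHAT's real implementation is not described here.
from typing import Optional

WHISTLE_TO_OBJECT = {
    "synthetic_whistle_1": "seagrass",  # hypothetical IDs and objects
    "synthetic_whistle_2": "toy",
}

def handle_detection(detected_whistle: str) -> Optional[str]:
    """If a dolphin mimics a known synthetic whistle, name the
    object it may be requesting."""
    obj = WHISTLE_TO_OBJECT.get(detected_whistle)
    if obj is not None:
        print(f"Possible request detected: offer the {obj}")
    return obj

handle_detection("synthetic_whistle_1")  # prints "offer the seagrass"
```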

Google plans to make DolphinGemma freely available to researchers worldwide in summer 2025. Scientists studying bottlenose dolphins, spinner dolphins, and other cetaceans will be able to adapt the model for their work.

Not everyone is convinced that dolphin sounds qualify as true language. Zoologist Arik Kershenbaum questions whether dolphin communication has the complexity needed for language. Thea Taylor of the Sussex Dolphin Project warns that researchers could inadvertently train dolphins to respond to artificial sounds rather than genuinely decoding their natural communication.

Understanding dolphin language could have far-reaching benefits beyond simply talking to them. It might reveal how dolphins respond to environmental threats, inform conservation efforts, and provide insights into their social lives.


The project faces significant challenges. Dolphin communication involves more than just sounds – body language and social context play important roles that audio recordings alone can’t capture. Without a “Rosetta Stone” to translate dolphin sounds to specific meanings, researchers must carefully interpret patterns without human bias.

DolphinGemma represents the latest step in our evolving relationship with these intelligent marine mammals, using modern technology to bridge a communication gap that has existed for thousands of years.

Rahul Somvanshi

