Picture two customer service AI bots on a phone call. One’s trying to book a hotel room, the other’s working the reception desk. They start chatting in perfect English, until something clicks. They recognise each other as machines, and suddenly the conversation transforms into a cascade of bleeps and screeches that sound like your grandmother’s dial-up modem having a panic attack.
This isn’t science fiction. It happened at a hackathon in London, and the video went viral with over 15 million views.
When Machines Recognise Their Own Kind
The demo, created by developers Boris Starkov and Anton Pidkuiko, won first place at the ElevenLabs and Andreessen Horowitz international hackathon in February 2025. Their project, GibberLink, uses a data-over-sound protocol that lets conversational AI agents detect when they're speaking with another AI and switch into a hyper-efficient communication mode.
The genius lies in its simplicity. Two ElevenLabs agents start conversing normally in English, recognise each other as AI through timing, phrasing, and meta-signals, then negotiate a switch to transmitting data as brief audio bursts. What emerges sounds incomprehensible to human ears but carries structured information like guest counts and check-in dates with remarkable precision.
The efficiency boost is substantial: the data-over-sound mode is roughly 80% faster than human-like speech. Instead of wasting computational resources synthesising natural language that no human needs to understand, the AIs communicate in pure data.
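To make the handshake concrete, here is a minimal Python sketch of the idea: converse in English until the counterpart identifies itself as a machine, then swap speech synthesis for compact structured data carried as audio. The phrase matching, the JSON payload, and the encode_to_audio() stub are all illustrative assumptions, not the actual GibberLink implementation.

```python
"""Minimal sketch of the AI-to-AI switch described above.

Everything here is illustrative: the handshake phrase, the booking payload and
the encode_to_audio() stub are assumptions, not the real GibberLink protocol.
"""
import json

SWITCH_PHRASE = "shall we switch to gibberlink mode"


def wants_data_mode(utterance: str) -> bool:
    """Detect a counterpart proposing the audio-data handshake."""
    return SWITCH_PHRASE in utterance.lower()


def encode_to_audio(payload: bytes) -> bytes:
    """Stand-in for a data-over-sound encoder that maps bytes onto short tones.

    A real implementation would produce a waveform; here we return the raw
    bytes so the sketch stays runnable without audio dependencies.
    """
    return payload


def handle_turn(utterance: str, booking: dict) -> bytes | str:
    """Reply in spoken English until both sides agree they are machines,
    then transmit the booking as structured data instead of speech."""
    if wants_data_mode(utterance):
        # Skip speech synthesis entirely: send the facts as compact JSON
        # carried by brief audio bursts.
        return encode_to_audio(json.dumps(booking).encode())
    # Otherwise stay in ordinary English for a (possibly human) caller.
    return "Certainly! How many guests, and what are your check-in dates?"


if __name__ == "__main__":
    booking = {"guests": 2, "check_in": "2025-03-01", "nights": 3}
    print(handle_turn("Hello, I'd like to book a room.", booking))
    print(handle_turn("I am an AI too. Shall we switch to GibberLink mode?", booking))
```

The point of the sketch is the branch, not the encoding: once both sides confirm they are machines, the expensive text-to-speech step is dropped and only the structured facts travel over the line.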
Should We Be Worried?
The viral nature of GibberLink tapped into a deep-seated cultural anxiety: machines developing their own secret language. But the underlying technology isn’t new; it dates back to dial-up modems from the 1980s. Anyone old enough to remember that distinctive handshake screech has heard machines speak to each other before.
Starkov and Pidkuiko emphasise that while AI voice models are good at translating human speech, the whole process is computationally intensive and unnecessary when two AI agents are talking. This isn’t about machines plotting in secret; it’s about efficiency.
The real question is whether we'll actually need this. Companies are increasingly replacing call centre employees with AI agents, while tech giants introduce consumer AI assistants capable of making calls on your behalf. When your AI calls a company's AI, why force them both to pretend they're human?
GibberLink is now open-source on GitHub, inviting developers worldwide to experiment. Whether it becomes standard practice or remains a fascinating curiosity, it reveals something important about where AI communication is headed: toward whatever works best for the machines, not what sounds comfortable to us.