Microsoft has disclosed a novel side-channel attack known as Whisper Leak, capable of inferring the topics of encrypted conversations with large language models (LLMs). Even when traffic is protected by HTTPS, the attack exploits metadata patterns in network traffic to breach user privacy, raising serious concerns for enterprises and individuals alike.
Whisper Leak works by exploiting packet size and timing patterns during streaming interactions with LLMs. When users engage with AI chatbots, responses are often streamed incrementally. This streaming behaviour, while improving responsiveness, inadvertently leaks enough metadata for a trained adversary to classify the topic of the conversation, even without decrypting the actual content.
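To see why streaming leaks metadata, consider a minimal simulation (entirely illustrative, not Microsoft's tooling): a streamed reply is sent as small per-token chunks, and while TLS hides each chunk's text, it does not hide its size, so a passive observer still sees a distinctive fingerprint.

```python
# Hypothetical illustration: TLS encrypts streamed chunks but an observer
# still sees each encrypted record's size. The overhead value below is a
# rough assumption, not a measured figure.
TLS_OVERHEAD = 29  # assumed per-record overhead (header + auth tag)

def observed_sizes(token_chunks):
    """Sizes a passive network observer would see for each streamed chunk."""
    return [len(chunk.encode()) + TLS_OVERHEAD for chunk in token_chunks]

# Two toy replies: different topics produce different size sequences,
# even though the observer never sees any plaintext.
reply_a = ["Money", " laundering", " typically", " involves", " layering"]
reply_b = ["Paris", " is", " the", " capital", " of", " France"]

print(observed_sizes(reply_a))
print(observed_sizes(reply_b))
```

The point of the sketch is that the size sequence varies with the reply's wording; a batched (non-streamed) response would collapse it into a single number and leak far less.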
Microsoft’s research team, led by Jonathan Bar Or and Geoff McDonald, demonstrated that a passive observer (such as a nation-state actor, an ISP-level snooper, or someone on the same Wi-Fi network) could use machine learning classifiers to determine whether a user’s prompt relates to sensitive subjects like money laundering or medical conditions.
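The classifier idea can be sketched in a few lines. This is a toy nearest-centroid model over two hand-picked traffic features (mean chunk size and mean inter-arrival gap), with made-up training data; it is not the researchers' actual model, only a demonstration that coarse traffic features alone can separate topics.

```python
# Toy traffic classifier (illustrative only). A trace is a list of
# (size_bytes, gap_seconds) pairs observed for a streamed reply.

def features(trace):
    """Reduce a trace to (mean chunk size, mean inter-arrival gap)."""
    sizes = [s for s, _ in trace]
    gaps = [g for _, g in trace]
    return (sum(sizes) / len(sizes), sum(gaps) / len(gaps))

def centroid(traces):
    """Average feature vector of a set of labelled traces."""
    feats = [features(t) for t in traces]
    return tuple(sum(col) / len(col) for col in zip(*feats))

def classify(trace, centroids):
    """Assign the label whose centroid is nearest in feature space."""
    fx, fy = features(trace)
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - fx) ** 2
                             + (centroids[lbl][1] - fy) ** 2)

# Fabricated training data: assume the "sensitive" topic yields longer
# chunks and slower pacing than the "benign" one.
train = {
    "sensitive": [[(120, 0.09), (130, 0.11)], [(125, 0.10), (118, 0.12)]],
    "benign":    [[(40, 0.02), (35, 0.03)], [(38, 0.02), (42, 0.03)]],
}
centroids = {label: centroid(traces) for label, traces in train.items()}
print(classify([(122, 0.10), (127, 0.11)], centroids))  # prints "sensitive"
```

A real attacker would use far richer features and models, but the principle is the same: the label is learned from metadata, never from decrypted content.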
The accompanying academic paper, published on arXiv, details the attack’s efficacy across 28 popular LLMs, achieving near-perfect classification accuracy in many cases. Notably, the researchers attained 100 percent precision in identifying certain sensitive topics, even under extreme noise-to-signal ratios of up to 10,000 to 1.
This isn’t the first time LLMs have been targeted via side-channel techniques. Previous attacks have exploited token length leakage and timing variations to reconstruct inputs or infer user intent. Whisper Leak builds upon these methods, showing that even encrypted traffic can betray its secrets through subtle behavioural cues.
Microsoft evaluated several countermeasures, including random padding, token batching, and packet injection, to obscure traffic patterns. While each technique reduced the attack’s effectiveness, none offered complete protection. The company has initiated responsible disclosure with affected providers and is working to implement early-stage defences.
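One of the mitigations mentioned above, random padding, can be sketched as follows. This is a hedged illustration of the general technique, not any provider's actual implementation: each streamed chunk is padded by a random amount before encryption, so observed record sizes no longer track token lengths.

```python
# Illustrative random-padding mitigation (not a production scheme).
import secrets

def pad_chunk(chunk: bytes, max_extra: int = 32) -> bytes:
    """Append a random amount of padding, length-prefixing the real
    payload so the receiver can strip the padding again."""
    extra = secrets.randbelow(max_extra + 1)
    return len(chunk).to_bytes(2, "big") + chunk + bytes(extra)

def unpad_chunk(padded: bytes) -> bytes:
    """Recover the original payload from a padded chunk."""
    n = int.from_bytes(padded[:2], "big")
    return padded[2:2 + n]

msg = b" laundering"
assert unpad_chunk(pad_chunk(msg)) == msg  # round-trips losslessly
```

As the research found, such schemes only blur the signal: timing patterns and aggregate sizes still leak information, which is why no single countermeasure offered complete protection.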
As LLMs become embedded in sectors like healthcare, legal services, and finance, the privacy of user interactions is paramount. Whisper Leak highlights a critical blind spot: metadata leakage, which remains observable even when content is encrypted.
For CISOs, IT leaders, and privacy advocates, this discovery is a wake-up call. It’s not just about securing the data itself, but also the patterns around it.
Microsoft’s findings are a stark reminder that AI security must evolve alongside its capabilities. As adversaries grow more sophisticated, so too must our defences, not just at the application layer, but deep within the infrastructure that powers modern AI.
For now, users and organisations should remain vigilant, especially when deploying LLMs in sensitive environments. Encryption alone is no longer enough.