Microsoft this week detailed a previously undocumented backdoor called “SesameOp,” which abuses OpenAI’s Assistants API as a command-and-control (C2) channel to pass instructions between infected systems and the attackers pulling the strings. First spotted in July during a months-long breach, the campaign hid in plain sight by blending its network chatter with legitimate AI traffic, an ingenious way to stay invisible to anyone who assumed “api.openai.com” meant business as usual.
According to Microsoft’s Incident Response team, the attack chain starts with a loader that uses a trick known as “.NET AppDomainManager injection” to plant the backdoor. The malware never actually uses the AI itself; it simply hijacks OpenAI’s infrastructure as a data courier. Commands come in, results go out, all through the same channels that millions of users rely on every day.
By piggybacking on a legitimate cloud service, SesameOp avoids the usual giveaways: no sketchy domains, no dodgy IPs, and no obvious C2 infrastructure to block.
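Conceptually, this is a classic “dead drop”: operator and implant never talk to each other directly, and instead both read and write messages through a trusted third-party store. The sketch below simulates only the shape of that traffic with an in-memory dict standing in for the cloud service; no OpenAI API is involved, and the roles and message format are invented for illustration.

```python
# Illustrative sketch of a dead-drop C2 pattern: operator and implant
# exchange messages via a shared store they both trust. A plain dict
# stands in for the cloud service (no real API is contacted).

class DeadDrop:
    """Stand-in for a cloud message store, e.g. a thread of API messages."""
    def __init__(self):
        self.messages = []

    def post(self, role, body):
        self.messages.append({"role": role, "body": body})

    def latest(self, role):
        for msg in reversed(self.messages):
            if msg["role"] == role:
                return msg["body"]
        return None

# Operator side: leave a command in the shared store.
drop = DeadDrop()
drop.post("operator", "whoami")

# Implant side: poll the store, "execute" the command, post the result.
command = drop.latest("operator")
result = f"executed: {command}"   # real malware would run it locally
drop.post("implant", result)

# Operator side: collect the result on the next poll.
print(drop.latest("implant"))     # -> executed: whoami
```

To any network monitor, both halves of this conversation look like ordinary outbound requests to the same trusted service.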
“Rather than relying on more traditional methods, the threat actor behind this backdoor exploits OpenAI as a C2 channel as a way to covertly communicate and orchestrate malicious activity within the compromised environment,” Microsoft said. “This threat does not represent a vulnerability or misconfiguration, but rather a way to abuse the built-in capabilities of the OpenAI Assistants API.”
Microsoft’s analysis shows that the implant uses payload compression and layered encryption to hide both commands and exfiltrated results. The DLL is heavily obfuscated with Eazfuscator.NET and loaded at runtime via .NET AppDomainManager injection. Once running, the backdoor fetches encrypted commands from the Assistants API, decrypts and executes them locally, then returns the results, techniques that Microsoft describes as advanced and designed for stealth.
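Microsoft hasn’t published the exact scheme, so the snippet below is only a generic illustration of the compress-then-encrypt idea, using zlib and a simple hash-derived XOR keystream in place of whatever layered ciphers SesameOp actually employs. The key and payload are made up.

```python
import hashlib
import zlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a keystream from a key via SHA-256 (illustrative only)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def wrap(payload: bytes, key: bytes) -> bytes:
    """Compress, then encrypt, hiding both content and structure."""
    compressed = zlib.compress(payload)
    ks = keystream(key, len(compressed))
    return bytes(a ^ b for a, b in zip(compressed, ks))

def unwrap(blob: bytes, key: bytes) -> bytes:
    """Decrypt, then decompress, recovering the original payload."""
    ks = keystream(key, len(blob))
    return zlib.decompress(bytes(a ^ b for a, b in zip(blob, ks)))

secret = b"not-the-real-key"
command = b"collect system inventory"
assert unwrap(wrap(command, secret), secret) == command
```

The point of the wrapping isn’t cryptographic strength so much as making the relayed blobs look like opaque, unremarkable API payloads.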
This is where things get messy for defenders. Seeing a connection to OpenAI’s API on your network doesn’t exactly scream “compromise.” Microsoft has even published a hunting query to help analysts detect unusual connections to OpenAI endpoints based on process name – a first step toward distinguishing real chatbot activity from malicious use.
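Microsoft’s query is written for its own hunting tooling, but the underlying idea translates anywhere: flag processes reaching api.openai.com that aren’t on an allowlist of expected clients. The sketch below applies that logic to mock network-log records; the process names and allowlist are examples, not Microsoft’s published query.

```python
# Illustrative detection logic: flag processes connecting to api.openai.com
# that are not expected to talk to OpenAI. Records here are mock data.

ALLOWED_PROCESSES = {"chrome.exe", "msedge.exe", "python.exe"}  # example allowlist

def suspicious_openai_connections(events):
    """Return events where an unexpected process reached the OpenAI API."""
    return [
        e for e in events
        if e["remote_host"] == "api.openai.com"
        and e["process"].lower() not in ALLOWED_PROCESSES
    ]

events = [
    {"process": "chrome.exe",  "remote_host": "api.openai.com"},
    {"process": "svchost.exe", "remote_host": "api.openai.com"},  # odd client
    {"process": "outlook.exe", "remote_host": "example.com"},
]

for hit in suspicious_openai_connections(events):
    print("investigate:", hit["process"])   # -> investigate: svchost.exe
```

The catch, of course, is maintaining that allowlist in an environment where new AI-enabled tools appear weekly.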
The Assistants API itself is scheduled for retirement in August 2026, which could close this particular loophole. But the pattern holds: if it’s hosted and trusted in the cloud, it’s fair game.
Microsoft did not say who is behind the campaign, but noted that it shared its findings with OpenAI, which identified and disabled an API key and account believed to have been used by the attackers.
OpenAI did not respond to The Register‘s request for comment.
In an age where everything from HR chatbots to help desk scripts talks to an API, this won’t be the last time a threat actor turns your favorite cloud tool into their getaway car. ®


