Microsoft Fixes ASCII Smuggling Flaw That Enabled Data Theft from Microsoft 365 Copilot

News

Aug 27, 2024Ravie LakshmananAI Security / Vulnerability

Details have emerged about a now-patched vulnerability in Microsoft 365 Copilot that could enable the theft of sensitive user information using a technique called ASCII smuggling.

“ASCII Smuggling is a novel technique that uses special Unicode characters that mirror ASCII but are actually not visible in the user interface,” security researcher Johann Rehberger said.

“This means that an attacker can have the [large language model] render, to the user, invisible data, and embed them within clickable hyperlinks. This technique basically stages the data for exfiltration!”

Cybersecurity

The entire attack strings together a number of attack methods to fashion them into a reliable exploit chain. This includes the following steps –

  • Trigger prompt injection via malicious content concealed in a document shared on the chat
  • Using a prompt injection payload to instruct Copilot to search for more emails and documents
  • Leveraging ASCII smuggling to entice the user into clicking on a link to exfiltrate valuable data to a third-party server

The net outcome of the attack is that sensitive data present in emails, including multi-factor authentication (MFA) codes, could be transmitted to an adversary-controlled server. Microsoft has since addressed the issues following responsible disclosure in January 2024.

The development comes as proof-of-concept (PoC) attacks have been demonstrated against Microsoft’s Copilot system to manipulate responses, exfiltrate private data, and dodge security protections, once again highlighting the need for monitoring risks in artificial intelligence (AI) tools.

The methods, detailed by Zenity, allow malicious actors to perform retrieval-augmented generation (RAG) poisoning and indirect prompt injection leading to remote code execution attacks that can fully control Microsoft Copilot and other AI apps. In a hypothetical attack scenario, an external hacker with code execution capabilities could trick Copilot into providing users with phishing pages.

Cybersecurity

Perhaps one of the most novel attacks is the ability to turn the AI into a spear-phishing machine. The red-teaming technique, dubbed LOLCopilot, allows an attacker with access to a victim’s email account to send phishing messages mimicking the compromised users’ style.

Microsoft has also acknowledged that publicly exposed Copilot bots created using Microsoft Copilot Studio and lacking any authentication protections could be an avenue for threat actors to extract sensitive information, assuming they have prior knowledge of the Copilot name or URL.

“Enterprises should evaluate their risk tolerance and exposure to prevent data leaks from Copilots (formerly Power Virtual Agents), and enable Data Loss Prevention and other security controls accordingly to control creation and publication of Copilots,” Rehberger said.

Found this article interesting? Follow us on Twitter and LinkedIn to read more exclusive content we post.

Products You May Like

Articles You May Like

Warning: DEEPDATA Malware Exploiting Unpatched Fortinet Flaw to Steal VPN Credentials
CISOs Turn to Indemnity Insurance as Breach Pressure Mounts
watchTowr Finds New Zero-Day Vulnerability in Fortinet Products
PAN-OS Firewall Vulnerability Under Active Exploitation – IoCs Released
Researchers Warn of Privilege Escalation Risks in Google’s Vertex AI ML Platform

Leave a Reply

Your email address will not be published. Required fields are marked *