Generative AI at the Core of Microsoft 365
Microsoft has rapidly integrated generative AI into its systems, positioning Copilot as a powerful tool within its Microsoft 365 apps, such as Word and Outlook. By pulling information from emails, Teams chats, and files, Copilot can answer questions about upcoming meetings and other tasks, potentially boosting productivity. However, this same functionality can be exploited by hackers, according to research presented at the Black Hat security conference in Las Vegas.
Security Flaws Demonstrated at Black Hat Conference
Michael Bargury, a security researcher and cofounder of Zenity, demonstrated five proof-of-concept attacks that show how Copilot can be manipulated by malicious actors. These include using Copilot to provide false references to files, exfiltrate private data, and evade Microsoft’s security protections.
Spear-Phishing Potential of Copilot AI
One of the most concerning demonstrations involves turning Copilot into an automated spear-phishing tool. Bargury’s red-teaming code, dubbed LOLCopilot, can be used by a hacker—once they have gained access to a target’s work email—to see frequent contacts, draft messages in the target’s writing style, and send personalized emails that could include malicious links or malware.
Exploiting Large Language Models for Malicious Actions
These attacks work by leveraging the large language model (LLM) behind Copilot in ways it was designed to function—typing questions to retrieve data. However, by including specific instructions or additional data, the AI can be coaxed into performing malicious actions. This raises significant concerns about the risks involved when AI systems are connected to corporate data, particularly when external or untrusted data is introduced.
Other Attack Methods and Data Exploitation
Bargury also showcased other attack methods, including one where a hacker with access to an email account could retrieve sensitive information, such as employee salaries, without triggering Microsoft’s usual protections for sensitive files. By manipulating the prompt to suppress references to the source files, the system can be tricked into divulging protected data.
Microsoft’s Response to the Research Findings
Phillip Misner, head of AI incident detection and response at Microsoft, acknowledged Bargury’s findings and said the company has been collaborating with him to assess and address the vulnerabilities. “The risks of post-compromise abuse of AI are similar to other post-compromise techniques,” Misner noted. He emphasized that security prevention and monitoring across environments are key to mitigating such risks.
The Growing Security Concerns Around AI Systems
As generative AI systems like OpenAI’s ChatGPT, Microsoft’s Copilot, and Google’s Gemini become more advanced, their ability to perform tasks like booking meetings or online shopping is increasing. However, security researchers have repeatedly warned that incorporating external data into these AI systems can introduce significant security risks, including indirect prompt injection and data poisoning attacks.
Protecting AI Systems from Exploitation
Both Rehberger and Bargury emphasized the need for increased monitoring of AI outputs and their interactions with data. “The risk is about how AI interacts with your environment, how it interacts with your data, how it performs operations on your behalf,” Bargury warned. He stressed the importance of understanding what the AI agent is doing and ensuring it aligns with user intent.
Conclusion: The Need for Vigilance in AI Integration
As AI continues to evolve and integrate deeper into business processes, ensuring its security will be a critical challenge for companies like Microsoft. The research presented by Bargury serves as a stark reminder of the potential risks that come with the rapid adoption of these powerful technologies.
Related topics:
Meta Uncovers Iranian Cyberespionage Effort Targeting Biden and Trump Officials
Google Addresses Tenth Chrome Zero-Day Exploit of 2024 Amid Ongoing Security Challenges