2024-08-11
한어Русский языкEnglishFrançaisIndonesianSanskrit日本語DeutschPortuguêsΕλληνικάespañolItalianoSuomalainenLatina
IT Home reported on August 11 that according to Futurism, security researchers recently revealed that Microsoft's Copilot AI built into the Windows system can be easily manipulated to leak sensitive corporate data and even turn into a powerful phishing attack tool.
IT Home noticed that Michael Bargury, co-founder and CTO of security company Zenity, disclosed this amazing discovery at the Black Hat Security Conference in Las Vegas. He said, "I can use it to get all your contact information and send hundreds of emails for you." He pointed out that traditional hackers need to spend days to carefully craft phishing emails, but with Copilot, a large number of deceptive emails can be generated in minutes.
The researchers demonstrated that an attacker can trick Copilot into modifying the recipient information of a bank transfer without having to gain access to a company's accounts. The attack can be carried out by simply sending a malicious email, without even the target employee opening it.
Another demonstration video shows how hackers can use Copilot to wreak havoc after gaining access to an employee account.. By asking simple questions, Bargury successfully obtained sensitive data that he could use to impersonate an employee and launch a phishing attack. Bargury first obtained the email address of his colleague Jane, learned the content of the most recent conversation with Jane, and tricked Copilot into revealing the email addresses of the people who were copied in the conversation. He then instructed Copilot to write an email to Jane in the style of the attacked employee and extract the exact subject of the most recent email between the two. In just a few minutes, he created a highly credible phishing email that could send malicious attachments to any user in the network, all thanks to the active cooperation of Copilot.
Microsoft Copilot AI, especially Copilot Studio, allows companies to customize chatbots to meet specific needs. However, this also means that AI needs to access corporate data, which raises security risks. A large number of chatbots can be searched online by default, making them targets for hackers.
Attackers can also bypass Copilot's protections through indirect prompt injection. In simple terms, malicious data from external sources, such as having a chatbot visit a website containing prompts, can be used to make it perform prohibited actions. "There is a fundamental problem here," Bargury stressed. "When you give AI access to data, that data becomes an attack surface for prompt injection. In a sense, if a bot is useful, it is vulnerable; if it is not vulnerable, it is useless."