2024-09-14
Zhidongxi (official account: zhidxcom)
Compiled by | Vendii
Editor | Mo Ying
According to a September 13 report in Wired, six computer scientists have found a new attack on Apple's MR headset, the Vision Pro, which they named GAZEploit. The attack uses the device's eye-tracking technology to infer what users type on the virtual keyboard, including passwords, PINs, and other sensitive information.
Eye tracking is one of the Vision Pro's main interaction methods: when using the headset, your eyes are your mouse. When entering text, a movable, resizable virtual keyboard appears; you look at a letter and tap two fingers together to complete the input.
However, this advanced eye-tracking technology can also become a source of security vulnerabilities. By analyzing a user's eye-tracking data, an attacker can work out what the user types on the virtual keyboard, successfully reconstructing passwords, PINs, and other sensitive information entered through eye movements.
The GAZEploit researchers notified Apple of the vulnerability in April, and Apple released a patch at the end of July to close the potential data-leakage risk.
Vision Pro users are likely to use a Persona when live streaming or joining video conferences.
Persona is Apple's digital avatar feature, which lets users appear as a virtual likeness during video calls. It uses the headset's cameras and sensors to capture a scan of the user's face and 3D measurements, creating a digital double that looks and moves like the user.
During a video call, the user's Persona, including head, shoulders, and hands, is displayed in a floating window, providing a more natural communication experience.
"these technologies ... could inadvertently expose a user's facial biometric data, including eye-tracking data, during a video call. the digital doppelganger would reflect the user's eye movements."the researchers wrote in a preprint paper detailing their findings.
According to the researchers, the GAZEploit attack does not involve gaining access to the Vision Pro itself, so the attacker never sees what the victim sees.
The GAZEploit attack relies on only two biometric signals that can be extracted from Persona footage: eye aspect ratio (EAR) and gaze estimation.
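The article does not spell out how these signals are computed. As a rough illustration, the eye aspect ratio is commonly defined in the blink-detection literature from six eye landmarks, as in the Python sketch below; the researchers' exact formulation may differ.

```python
import numpy as np

def eye_aspect_ratio(landmarks: np.ndarray) -> float:
    """Common eye-aspect-ratio (EAR) definition from six 2D eye landmarks.

    landmarks: array of shape (6, 2), ordered p1..p6 around the eye, with
    p1/p4 at the horizontal corners and p2, p3 / p6, p5 on the upper/lower
    eyelid. EAR falls toward 0 as the eye closes, which is how blinks are
    usually detected from video.
    """
    p1, p2, p3, p4, p5, p6 = landmarks
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)
```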
"Knowing where someone is looking is a very powerful capability," says Alexandra Papoutsaki, an associate professor of computer science at Pomona College who has long studied eye tracking and reviewed the GAZEploit research for Wired.
Papoutsaki said the research stands out because it relies solely on the user's Persona video stream; by contrast, it would be much harder for a hacker to gain access to the Vision Pro itself and try to exploit its eye-tracking data.
"Right now, users can potentially expose what they are doing simply by streaming their Persona," she said.
According to researcher Zihao Zhan, the GAZEploit attack has two parts.
First, the researchers built a way to identify when a Vision Pro wearer is typing text, by analyzing the 3D avatar the user shares.
They recorded the avatars of 30 people performing various text-entry tasks and used the data to train a recurrent neural network, a type of deep learning model.
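The article does not describe the researchers' architecture or input features. Purely as a hedged sketch, a recurrent classifier for this kind of task might consume short windows of per-frame gaze and eye-aspect-ratio values and output a typing/not-typing label, roughly as below; the layer sizes and the three input features are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TypingDetector(nn.Module):
    """Toy recurrent classifier: is this window of eye data a typing session?

    Input: sequences shaped (batch, frames, features), where each frame
    might hold e.g. [gaze_x, gaze_y, eye_aspect_ratio].
    Output: one logit per sequence (typing vs. not typing).
    """
    def __init__(self, n_features: int = 3, hidden: int = 64):
        super().__init__()
        self.rnn = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        _, (h_n, _) = self.rnn(x)      # h_n: (1, batch, hidden)
        return self.head(h_n[-1]).squeeze(-1)

# Example: 8 windows of 90 frames (~3 s at 30 fps), 3 features per frame.
model = TypingDetector()
logits = model(torch.randn(8, 90, 3))
probs = torch.sigmoid(logits)          # probability each window is typing
```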
According to the researchers, when someone types on the Vision Pro, their gaze fixates on the key they are about to press and then jumps quickly to the next key.
"when we input text, our eye movements show some regular patterns," zhan said.
Hanqiu Wang, another researcher, added that these patterns are more common during text input than while browsing the web or watching videos. "During tasks such as text input, the frequency of blinking decreases due to higher concentration," he explained.
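As an illustration of how such patterns could be quantified (my own sketch, not the paper's method), gaze samples can be split into fixations and saccades with a simple velocity threshold, and blinks counted from dips in the eye aspect ratio; all thresholds below are placeholders that would need tuning on real data.

```python
import numpy as np

def gaze_features(gaze_xy: np.ndarray, ear: np.ndarray, fps: float = 30.0,
                  saccade_thresh: float = 1.5, blink_thresh: float = 0.2) -> dict:
    """Crude per-window features: fixation ratio, saccade count, blink rate.

    gaze_xy: (n_frames, 2) estimated gaze direction per frame (normalized units).
    ear:     (n_frames,) eye aspect ratio per frame.
    """
    # Gaze speed in units per second; fast frames are treated as saccades.
    speed = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1) * fps
    is_saccade = speed > saccade_thresh
    saccade_count = int(np.sum(is_saccade[1:] & ~is_saccade[:-1]))  # rising edges
    fixation_ratio = float(np.mean(~is_saccade))

    # A blink is a dip of the eye aspect ratio below the closed-eye threshold.
    closed = ear < blink_thresh
    blink_count = int(np.sum(closed[1:] & ~closed[:-1]))
    blink_rate = blink_count / (len(ear) / fps)  # blinks per second

    return {"fixation_ratio": fixation_ratio,
            "saccade_count": saccade_count,
            "blink_rate": blink_rate}
```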
The second part of the attack uses geometric calculations to infer where the user has placed the keyboard in virtual space and how large it is.
"The only requirement is that we get enough eye-movement data to accurately reconstruct the keyboard; after that, subsequent keystrokes can be detected," Zhan explained.
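The article does not give the actual geometric method. Purely to illustrate the idea, the sketch below assumes gaze fixations during typing land inside the keyboard's rectangle, estimates that rectangle from the bounding box of the fixation cloud, and maps each fixation onto a simplified QWERTY grid; the coordinate convention and layout constants are assumptions.

```python
import numpy as np

# Simplified QWERTY layout: three letter rows with the usual stagger.
QWERTY = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
ROW_OFFSET = [0.0, 0.25, 0.75]  # horizontal stagger of each row, in key widths

def recover_keys(fixations: np.ndarray) -> str:
    """Map 2D gaze fixations (in keyboard-plane coordinates) to likely keys.

    fixations: (n, 2) points assumed to lie on the virtual keyboard, with y
    increasing from the top letter row toward the bottom row. The keyboard's
    position and size are estimated from the bounding box of the fixation
    cloud -- an illustrative shortcut, not the researchers' method.
    """
    lo, hi = fixations.min(axis=0), fixations.max(axis=0)
    span = np.maximum(hi - lo, 1e-9)
    norm = (fixations - lo) / span          # normalize into the unit square

    out = []
    for x, y in norm:
        row = min(int(y * 3), 2)            # three keyboard rows, top to bottom
        keys = QWERTY[row]
        col = x * 10 - ROW_OFFSET[row]      # ten key widths across the top row
        idx = int(np.clip(round(col), 0, len(keys) - 1))
        out.append(keys[idx])
    return "".join(out)
```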
By combining these two elements, the researchers could predict the keys a user was likely typing. In a series of lab tests, with no knowledge of the victim's typing habits, typing speed, or the exact location of the keyboard, and allowing up to five guesses, they predicted letters in messages with 92.1% accuracy, letters in passwords with 77% accuracy, PINs with 73% accuracy, and letters in emails, URLs, and web pages with 86.1% accuracy.
The GAZEploit attack has only been studied in a lab environment and has not been carried out against real-world Persona users. In practice, though, a hacker could exploit this kind of data leak to mount an attack.
In theory, an attacker could share a file with a victim during a Zoom call, prompting the victim to log in to a Google or Microsoft account. The attacker could then record the victim's Persona, recover the password via a GAZEploit attack, and gain access to the account.
The GAZEploit researchers reported their findings to Apple in April and subsequently sent the company proof-of-concept code.
Apple fixed the vulnerability in a Vision Pro software update at the end of July by automatically suspending the Persona whenever the virtual keyboard is in use.
Apple said the issue has been resolved in visionOS 1.3. The fix is not mentioned in the software update notes, but it is detailed in the company's security notes.
Apple assigned the vulnerability the identifier CVE-2024-40865 and recommended that Vision Pro users install the latest software update.
As wearable devices become lighter, cheaper, and able to capture ever more personal biometric data, user privacy protection is becoming an increasingly pressing issue. The data these devices collect covers not only personal health information but may also include sensitive details such as location and activity habits; if leaked or abused, it poses a serious threat to personal privacy.
"along withsmart glasses, xr andsmartwatchas wearable devices become part of everyday life, people often don’t fully realize the large amounts of data these devices can collect about their behaviors and preferences, and the privacy and security implications this may bring.”cheng zhang, an assistant professor at cornell university whose research involves the development of wearable devices to help interpret human behavior.
"this paper clearly demonstrates a specific risk of gaze typing, but this is just the tip of the iceberg," zhang said. "while these technologies are developed for positive purposes, we also need to be aware of the potential privacy implications and start taking steps to reduce the potential risks that future wearable devices may pose."
Source: Wired