2024-10-04
Google Glass, unveiled at the Google I/O conference in 2012, is remembered as the most notorious flop in Google's history, yet it left its mark on the history of technology.
There were many reasons it failed; privacy risk was one of them.
It was easy to make Google Glass take a picture, with a voice command or by pressing and holding the button on top, but there was no conspicuous signal, such as an LED, to warn bystanders that it was recording.
So from the perspective of passers-by, Google Glass was an unethical hidden camera. Some users were even thrown out of movie theaters by security guards.
Today, similar things are still happening, and they are getting worse: AI glasses can identify your personal information simply because you glance at someone in a crowd a moment too long.
A face, a pair of glasses, and a meeting with a stranger
Is your name Lee? Did you graduate from Bergen County College? Is your Korean name Joo-oon? Do you live in Atlanta? Did we meet at the Cambridge Community Foundation? Are your parents John and Susan?
If a total stranger approached you on the street, seemed to know you, called your name warmly, and rattled off one or two pieces of your personal information, how would you react?
Two Harvard students, AnhPhu Nguyen and Caine Ardayfio, ran exactly this experiment.
Wearing Meta's Ray-Ban smart glasses, they identified dozens of strangers at random on campus, on the subway, and elsewhere. Take a photo of someone, and within seconds their information appears on your phone.
The smart glasses alone cannot pull this off, of course. The students made some technical modifications, but the principle is not complicated.
First, video is streamed to Instagram in real time through the glasses' livestream feature, and a computer program monitors the video stream and runs AI face recognition on it.
Next, more photos of the person are searched for on the internet, and their name, address, phone number, and even information about relatives are pulled from public databases.
Finally, the results are sent to a mobile app the students wrote, for easy viewing. Everything is ready except a random passer-by to startle.
In more detail, the two students chained together a number of existing, mature technologies, including generative AI:
Smart glasses: glasses with a built-in camera capture images of faces in public places.
Reverse facial recognition: face search engines such as PimEyes match a facial image against public images on the internet and return links to the pages where they appear.
Crawler tool: the Firecrawl crawler scrapes the required data from those web pages.
Large language models: an LLM infers details such as names and occupations from the messy scraped data.
Databases: names are fed into sites like FastPeopleSearch, which surface personal information such as home addresses, phone numbers, and relatives' names from public records and social media.
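The data flow of the steps above can be sketched in a few lines. The students never published their code, so everything here is an illustrative assumption: the real services (PimEyes, Firecrawl, FastPeopleSearch, an LLM API) are replaced by placeholder stubs, and all function names are made up for this sketch.

```python
# Minimal sketch of the identification pipeline. All external services
# are stubbed; function names and return values are illustrative only.

def face_search(face_image: bytes) -> list[str]:
    """Stub for a reverse face search engine (e.g. PimEyes):
    returns URLs of pages containing matching photos."""
    return ["https://example.com/alumni/spotlight"]

def crawl(url: str) -> str:
    """Stub for a crawler (e.g. Firecrawl): returns raw page text."""
    return "Alumni spotlight: J. Doe, class of 2019 ..."

def llm_extract(pages: list[str]) -> dict:
    """Stub for an LLM call that consolidates messy scraped text
    into structured fields (name, occupation, ...)."""
    return {"name": "J. Doe", "occupation": "student"}

def people_search(name: str) -> dict:
    """Stub for a public-records lookup (e.g. FastPeopleSearch)."""
    return {"address": "redacted", "phone": "redacted"}

def identify(face_image: bytes) -> dict:
    """Chain the steps: face -> page URLs -> text -> fields -> records."""
    urls = face_search(face_image)
    pages = [crawl(u) for u in urls]
    fields = llm_extract(pages)
    record = people_search(fields["name"])
    return {**fields, **record}

print(identify(b"<jpeg bytes>"))
```

The point of the sketch is how little glue is needed: each stage is an off-the-shelf service, and the output of one feeds directly into the next.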
Among these, the large language model plays a subtle but crucial role. It can understand, process, and consolidate large amounts of information from different sources: it can link the same name across different articles, infer someone's identity from contextual clues, and automate the data-extraction process.
On the reasoning ability of large language models, we have previously covered an interesting study; interested readers can revisit that article: "Chatting with GPT-4, a new way of privacy leakage".
Privacy leaks are an old story, facial recognition is nothing new, and covert photography did not appear yesterday. Meanwhile, over the past two years, large models have become a productivity tool many people cannot live without.
Yet the chemistry of this powerful alliance produces a genuinely frightening result: a chance encounter on the street is enough for someone so inclined to extract our personal information.
The two students did not disclose the technical details. Their stated purpose was to remind people to stay vigilant.
So how can we protect ourselves? The countermeasure they propose is deleting your data from sources such as face search engines, but it is hard to say how complete that protection is.
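One common way to automate that extraction step is to ask the model for strict JSON and parse the reply. The sketch below stubs the model call with a canned response; the prompt wording and function names are assumptions, not the students' actual code.

```python
# Sketch: LLM-driven extraction of structured fields from scraped text.
# The model call is a stub returning a canned JSON reply.
import json

PROMPT = (
    'From the scraped text below, return JSON with keys '
    '"name" and "occupation". Text:\n{text}'
)

def call_model(prompt: str) -> str:
    # Stub standing in for a real chat-completion API call.
    return '{"name": "J. Doe", "occupation": "student"}'

def extract(text: str) -> dict:
    reply = call_model(PROMPT.format(text=text))
    return json.loads(reply)

print(extract("Alumni spotlight: J. Doe, class of 2019 ..."))
```

Because the model tolerates messy, inconsistent input, the same function works on any scraped page without per-site parsing rules, which is exactly what makes the pipeline automatable.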
Even with a warning light, you may not know you are being photographed
Someone joked at the time that the greatest use of Google Glass was to help Prince Charles of the British royal family remember everyone's name.
Perhaps, through smart glasses that support face recognition, we will enter a world without strangers. The happiest person of all might be Light Yagami.
You may ask: with a database and facial recognition, a phone should be able to take covert photos just as well. Why did they choose the Meta Ray-Ban smart glasses?
The reason is simple. They look like ordinary sunglasses, nothing as cyberpunk as Google Glass, which makes them better suited to covert photography. And glasses are a natural fit for recording: hands-free, seeing exactly what you see.
The Meta Ray-Ban is not without safeguards. It has an LED indicator that turns on automatically when the user records video, to alert passers-by. But it is only a little better than nothing.
A previous review by The Verge found that the Meta Ray-Ban's LED and shutter sound were hard to notice outdoors in bright light. In crowded, noisy public places, many people simply never register such details.
The LED sits above the right eye. Did you notice it?
And when your fingers are on the buttons in the temples, others may think you are just adjusting your glasses.
So it is not hard to see why privacy has always dogged smart glasses. When Meta's first AR glasses, Orion, were unveiled recently, some people worried whether they would repeat Google Glass's mistakes.
Meta also emphasizes "how to wear smart glasses responsibly" in the Meta Ray-Ban privacy policy, complete with many friendly reminders.
But all of that is optional. Whether you respect others, whether you announce yourself by voice or gesture before filming or livestreaming, depends entirely on your own conscience.
And glasses are not merely a convenient form factor for covert photography. Technology companies could build facial recognition directly into smart glasses; the obstacle is not the technology itself.
In 2021, media reports said Meta had considered building facial recognition into its smart glasses. Meta CTO Andrew Bosworth offered an example at the time: such glasses could help users who are face-blind, or who simply cannot remember names, recognize someone at a dinner party.
The facial recognition startup Clearview AI has also developed its own AR glasses and applications, which it claims can tap a database of 30 billion faces, though they are not publicly sold.
To some extent, how facial recognition may be used, and how covert photography is prevented, is constrained by law and ethics.
For example, Facebook's facial recognition once let users tag friends in photos, and questions about privacy naturally followed. In a class action filed in 2015, Facebook ultimately paid US$650 million in compensation.
In 2021, Facebook announced it would stop using facial recognition to identify people in photos and videos and would delete the related data of more than one billion people.
Facial recognition: what a cliché. Yet it is precisely because this most mundane of technologies is so mature, widespread, and broadly applied that it feels like a formidable enemy.
In the face of AI, there are fewer and fewer secrets
In the Harvard students' experiment, the large language model's job was to help process data. But with today's generative AI products, we often hand over our own data voluntarily.
Often, paying with your privacy is the price of using a service, such as handing your face to an AI face-swapping photo editor.
And beyond faces, AI hardware and software increasingly revolve around the concept of personal data.
For example, you might use an AI recording product to seamlessly capture your entire day, reveal your routines and hobbies to an AI diary, or simply let ChatGPT remember who you are through its memory feature.
The wearable AI recording device Limitless
AI will gradually come to know you better, analyze you, organize the information around you, provide emotional value, and compensate for your limited memory.
At the same time, these products emphasize privacy and security. They promise that your data is yours and will not be used to train models, or they run on-device models locally, or use private clouds, to keep the risk of privacy leaks low.
Privacy and convenience are hard to have at the same time. As we enjoy the fun and personalization of AI products, risk travels with us.
It is like the cyberbrains in "Ghost in the Shell": people connect their brains directly to the internet and to one another, communicating instantly, but their brains can also be hacked, and even their memories forged.
Of course, privacy leakage, like facial recognition, may be a tired and unoriginal topic. You leak, I leak, he leaks too; it starts to feel like it doesn't matter, since everyone is in the same boat.
But if someone in "sunglasses" walked up to you and called you by name, that scene would still land with some force, wouldn't it?
Perhaps more worrying still is the invisible power over information that those who hold the technology and the tools exercise first, over everyone who suspects nothing.
Since the rise of smartphones, vertical short videos and livestreams have flourished, and we have grown used to shooting and being shot. We are innocent background figures; or rather, we don't care and cannot notice.
For now we are a drop of water in a vast ocean. But in the future, that drop may be brought into focus by the AI behind the lens, and refracted into a far more concrete image.