
south korea's nth room reappears: a large number of women whose faces were swapped by ai are asking for help on weibo, with about 500 schools involved

2024-09-01


in the late-20th-century work "ghost in the shell", the fully cyborgized motoko doubts whether she still exists: when the body, memory, and one's relationships with other people can all be copied, they can no longer serve as proof of a physical life.
when ai singers became popular, stefanie sun made a similar point in her response: you are not special, you are already predictable, and unfortunately you are also customizable.
we can add: anyone can be described and generated by ai, even doing things you have never done.
lu xun really did say that when you see short sleeves, you immediately think of white arms. human imagination is universal, so it is no surprise that every new development in technology sees one particular vertical mature faster and faster: pornography.
south korea, which once shocked the world with the nth room case, is now staging version 2.0.
violence of ordinary people against ordinary people
in the earlier nth room incident, the perpetrators set up multiple chat rooms on the encrypted instant messaging app telegram and posted sexually exploitative content there. this incident, too, took place mainly on telegram.
the main difference between the two lies in the means of the crime: one was covert filming, the other is deepfakes.
we are already familiar with deepfakes, which use ai to generate seemingly real videos, audio, or images that simulate things that never actually happened.
deepfakes are most often associated with the entertainment industry and politicians, but they are now in the hands of ordinary people and being used to harm other ordinary people.
the perpetrators of nth room 2.0 have reached for their own family members, colleagues, and classmates. many telegram chat rooms are organized by school or region, so group members share mutual acquaintances and common topics.
chat rooms by university
besides the women around them, they also target female celebrities. some chat rooms are even divided by occupation: teachers, nurses, soldiers, and so on.
the hankook ilbo reported that one telegram chat room with 227,000 participants could generate deepfake content in 5 to 7 seconds from nothing more than a photo of a woman.
what does 227,000 mean? in 2023, south korea recorded only about 230,000 newborns, and its total population is just over 50 million.
this chat room has a bot that can turn photos of women into fake nudes, even adjusting breast size. the moment a user enters, a message pops up in the chat window: "send the photos of women you like now."
screenshot of a chat room illustrating how deepfake is used
the outrageous number of participants may have to do with the low "entry barrier": links can be found simply by searching certain keywords on x (formerly twitter).
this chat room has also built a monetization model: the first two photos are free, and after that each photo costs 1 "diamond" (0.49 us dollars, about 3.47 rmb). payment is accepted only in cryptocurrency to ensure anonymity, and inviting friends earns extra free credits.
some chat rooms, however, demand a "pledge of allegiance": to join, you must first submit 10 photos of people you know and pass an interview.
avatars from the chat app kakaotalk and photos from instagram can all serve as "raw material."
what is even more terrifying is that a large proportion of both victims and perpetrators are teenagers.
volunteers have created a real-time map showing which schools the crimes have occurred in. even girls' schools have victims, because the perpetrators are not necessarily classmates.
there is no definitive count of how many schools have been affected, but one blogger claimed the figure exceeded 70% of schools.
on august 26, the korea joongang ilbo reported that at least 300 schools across the country were affected, including elementary schools. on august 28, the wall street journal put the number at about 500.
one netizen lamented in the comment section: "this is basically the whole of south korea..."
although there are no definitive investigation results for this incident yet, past data already illustrate how serious the situation is.
according to statistics from the korean women's human rights institute, from january to august this year, a total of 781 deepfake victims sought help, of which 288 were minors, accounting for 36.9%. the real number may be much higher than this.
separately, the national police agency of korea said that of the approximately 300 people accused of producing and distributing fake nude photos since the beginning of 2023, about 70% were teenagers.
many korean women took to weibo to ask for help. they do not speak chinese and could only machine-translate their posts, expressing their helplessness and fear. "nth room 2.0" at one point became a trending topic on weibo.
some netizens wondered why korean women turned to the chinese internet for help. in fact, it was not only chinese: korean women also spoke out in other languages, and beyond south korea, media in singapore, turkey, and other countries reported on the incident.
they believe that attention and criticism from abroad will push the media to report more actively and the authorities to investigate more seriously, rather than turning a deaf ear and keeping quiet.
as in the nth room case, some of the criminal evidence, and even the identities of the instigators, were tracked down by korean women themselves. fortunately, both the south korean president and the leader of the opposition party have spoken out. president yoon suk-yeol stated:
deepfakes are a clear digital sex crime and we will eradicate them.
deepfakes may be seen as a prank, but they are clearly criminal acts that use technology under the cover of anonymity, and anyone can become a victim.
telegram's servers are located overseas and its founder and ceo, pavel durov, was detained in paris, making the investigation difficult. the korea communications standards commission said it has sent a letter asking the french government to cooperate in investigating telegram.
after the public outcry, such activity abated, but follow-up reporting by the hankook ilbo found that some malicious users have simply moved to more private chat rooms behind stricter "identity verification" and continue making deepfakes there.
screenshot of a chat room where users discuss joining a more private chat room
false content, real harm
deepfakes are nothing new, but their harm is rarely taken seriously.
some south korean women have tried to protect themselves by setting their social media accounts to private or deleting photos they had posted online.
they are in pain and in doubt. on the one hand, they do not know where their photos have been shared or how far they have spread; on the other, they cannot understand why victims are told to be careful about uploading photos instead of the perpetrators being educated.
when female students used instagram stories to urge people to "take down all the photos you uploaded," boys from the same school could respond with remarks as absurd as "you are too ugly to be used for those things."
a perpetrator's remark mocking women for thinking too highly of themselves
there are also plenty of voices like this online: "i don't know why this kind of crime can cause so much harm." "if it was just a few people doing it among themselves, the damage should be small."
but what victims experience goes far beyond seeing their faces swapped. the perpetrators also insult them, spread personal information such as addresses, phone numbers, and student ids, spread rumors about their private lives, and approach and harass them.
even more terrifying is "revenge porn": perpetrators threaten to spread the deepfaked material in order to blackmail and harm women, causing even more serious secondary harm.
a korean youtuber claimed women were making a fuss, yet made sure to wear a mask himself to avoid being identified
the korea herald reported that song, a 17-year-old high school student in gyeonggi province, used to share dance photos and short videos online. one day, she received an anonymous message on instagram with three explicit photos attached: "do your friends and parents know about this side of your life?"
the photos were all deepfakes, yet almost indistinguishable from real images. and the nightmare did not end there: her replies only excited the other party further and prompted more demands.
screenshots of text messages between song and the perpetrator, edited and translated into english as requested by song
no one could share the pain. one victim even said: "the world i knew has collapsed."
the harm is wholly out of proportion to the price the perpetrators pay.
screenshot of a chat room showing obscene remarks, such as "you can set whatever pose you want for the photo, super cool"
the dust has not yet settled on this incident, but south korea has handed down deepfake verdicts before; one case received its first-instance ruling on august 28.
from july 2020 to april this year, a man surnamed park used facial photos of female victims, including college alumni, to create 419 deepfake pornographic videos and distributed 1,735 items. he was sentenced to five years in prison.
the victims began campaigning in july 2021 and finally succeeded in bringing the perpetrator to trial; park was prosecuted in may this year.
in the wake of this large-scale deepfake incident, the relevant korean authorities are considering raising the maximum sentence from 5 years to 7 years.
south korean women speak out against sex crimes
given that deepfake crimes by juveniles are common while the law leaves loopholes, south korea is also weighing how severely perpetrators who are still of compulsory-education age can be punished.
to this day, deepfakes remain a grey area in many places, with protections failing to keep pace with the threat.
in the united states, for example, if the victim is an adult, the laws vary from state to state, some criminalizing deepfake pornography and some allowing civil lawsuits, but there is currently no federal law prohibiting its production.
screenshot of a chat room where members chat about common acquaintances
one reason why legislation is difficult is that some people believe that even if the subject in a deepfake image looks like you, it is not actually you, so your privacy is not actually violated.
however, everyone knows that although the pictures are fake, the harm is real.
the law advances slowly; meanwhile, the perpetrators, who have never shown their faces, have merely gone quiet for the moment, waiting for a chance to make a comeback.
doing evil is so easy: deepfakes concern everyone
south korea is not an isolated case; deepfakes know no national borders.
in september 2023, in the small spanish town of almendralejo, a group of boys uploaded photos their female classmates had posted on social media to a "one-click undressing" ai tool. the town has five middle schools, and the fabricated "nude photos" circulated in at least four of them.
the tool, which can be used via a mobile app or telegram, had at least 30 victims, mainly female students aged 12 to 14.
most of the perpetrators knew the victims and were minors themselves, at least 10 of them; some were even under the age of 14 and could not face criminal charges.
a mother urges more victims to come forward
they created group chats on whatsapp and telegram to spread the "nude photos", and threatened victims through instagram, extorting "ransom" money and real nude photos.
a similar incident occurred at a high school in new jersey, usa, with about 30 victims whose male classmates fabricated "nude photos" of them over the summer vacation.
the principal assured everyone that all the pictures had been deleted and would circulate no further. the culprit was suspended from school for a few days, then returned to the "scene of the crime" as if nothing had happened.
deepfakes first emerged in 2017 on reddit, the "american version of tieba." the main early forms were swapping celebrities' faces onto the performers in pornographic videos or spoofing political figures.
from a technical perspective, there are two main paths: the encoder-decoder path, which compresses and reconstructs images to replace one face with another; and the generator-discriminator path (the generative adversarial network, or gan), which produces realistic images through adversarial training. a minimal sketch of the gan idea follows the figure below.
GAN
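to make the adversarial-training idea concrete, here is a minimal, purely illustrative pytorch sketch: a generator learns to map random noise to samples that a discriminator can no longer tell apart from "real" data. the toy one-dimensional data, layer sizes, and training schedule are all assumptions chosen for brevity, not the architecture of any actual deepfake tool; real systems use much larger convolutional or diffusion models.

```python
# minimal gan sketch (illustrative only): a generator and a discriminator
# trained adversarially on toy 1-d vectors standing in for images.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# generator: random noise -> fake sample
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, data_dim), nn.Tanh())
# discriminator: sample -> "realness" logit
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=32):
    # stand-in for a batch of real training images, flattened to vectors
    return 0.3 + 0.1 * torch.randn(n, data_dim)

for step in range(1000):
    # train the discriminator: label real samples 1, generated samples 0
    real = real_batch()
    fake = G(torch.randn(32, latent_dim)).detach()
    loss_d = (bce(D(real), torch.ones(32, 1)) +
              bce(D(fake), torch.zeros(32, 1)))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # train the generator: try to make the discriminator call its fakes real
    fake = G(torch.randn(32, latent_dim))
    loss_g = bce(D(fake), torch.ones(32, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

the encoder-decoder path works differently: two autoencoders share one encoder, each decoder is trained to reconstruct one person's face, and swapping decoders at inference time produces the face replacement.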
today, "deepfake" is a broader concept, no longer limited to the original face replacement; we use it to refer to any act of falsifying reality through technical means.
the complex technical principles stay hidden backstage, while what users see are "out-of-the-box" interfaces that let even teenagers fabricate false information with ease. deepfake has degenerated from a technology into a tool with almost no threshold.
"one-click undressing" apps need only a photo, an email address, and a few dollars to strip the "clothes" off celebrities, classmates, and strangers in batches. the pictures used for "undressing" are often taken from social media without the publisher's consent and then spread without their knowledge.
with an open-source diffusion model trained on huge numbers of images, users can generate fake explicit photos of celebrities simply by entering prompt words.
an ai model of a hollywood actress has been downloaded thousands of times
open-source github projects like deep-live-cam let anyone swap faces in a video chat with just a single photo.
it may be hard to fool young people, but not necessarily the elderly. a tragic example has already occurred: an 82-year-old american lost $690,000 of his retirement savings after trusting the scam videos of an ai "musk".
ai musk's live broadcast
in july 2023, deutsche telekom released an advertisement about children's data security, urging parents to share as little of their children's private lives on the internet as possible.
images, videos, and audio can all be deepfaked. we may know rationally that "seeing is believing" is a thing of the past, but our minds have not yet fully accepted it, and we lack the corresponding ability to tell real from fake. anyone may become a victim.
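on that last point, automated classifiers are one partial line of defense. below is a hedged sketch, assuming pytorch and torchvision, of fine-tuning an off-the-shelf resnet-18 as a binary real-vs-fake image classifier; the "data/real" and "data/fake" folder layout is a hypothetical example, and production deepfake detectors are considerably more sophisticated.

```python
# illustrative real-vs-fake classifier; the data/ directory layout
# (data/real/..., data/fake/...) is a hypothetical example dataset.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
# ImageFolder assigns labels from subdirectory names (fake=0, real=1 here)
ds = datasets.ImageFolder("data", transform=tfm)
loader = torch.utils.data.DataLoader(ds, batch_size=32, shuffle=True)

# start from an imagenet-pretrained backbone, replace the final layer
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train the new head only
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        opt.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        opt.step()
```

the catch, well documented in detection research, is that such classifiers tend to generalize poorly to images from generators they were not trained on, which is why provenance standards and platform governance matter as much as detection.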
technology may be neutral, but the information people produce with it is not just information; it is also a weapon used to humiliate, stigmatize, and harvest a sense of superiority. the more vulgar and bizarre the content, the more easily it spreads, and it has been so since ancient times.
what can ordinary people do? at the very least, we can decide how we use technology, choose what information we produce and spread, pay attention to the victims, scorn the perpetrators, and use our tiny share of power to push laws and social attitudes forward.
why must victims delete their photos when it is their faces that were swapped? why must victims feel ashamed when they are the ones who were secretly photographed? this seems to be a question that even technology cannot answer.