In-depth observation | To tame the AI beast, humans must stand above "Homo sapiens"
2024-10-01
Obesity is a serious problem in the United States. On the streets you often see severely obese people laboring along the sidewalks, a painful sight. What exactly is obesity? I checked Wikipedia:
Obesity is a medical condition, sometimes considered a disease, in which excess body fat accumulates to a level that negatively affects health. A person is considered obese when their body mass index (BMI), weight divided by the square of height, exceeds 30 kg/m²; a BMI of 25 to 30 kg/m² is defined as overweight.
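The BMI rule quoted above is simple arithmetic. A minimal Python sketch, using the cutoffs cited in the passage (the function names are my own, not from any standard library):

```python
def bmi(weight_kg: float, height_m: float) -> float:
    """Body mass index: weight (kg) divided by height (m) squared."""
    return weight_kg / height_m ** 2

def classify(bmi_value: float) -> str:
    """Cutoffs as quoted above: 25-30 kg/m2 overweight, above 30 obese."""
    if bmi_value > 30:
        return "obese"
    if bmi_value >= 25:
        return "overweight"
    return "normal or underweight"

print(classify(bmi(95, 1.75)))  # 95 kg at 1.75 m -> BMI ~ 31.0 -> "obese"
```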
There are roughly three causes of obesity: first, in the United States healthy food is expensive and junk food is cheap, so many people short of money have to choose the latter; second, some jobs keep people sitting too much and moving too little while eating too much fast food; third, some people lack self-control and simply consume too much junk food.
Since American society insists that everyone is "equal," others cannot admonish the obese; they can only look away and pretend nothing is happening. This superficial respect, which is in fact cruelty, has harmed many people, so much so that some young Americans, unable to bear it any longer, go onto Chinese social media declaring that they "will listen to advice," asking netizens to tell them how to discipline themselves, lose weight, and live healthily.
Today obesity is slowly spreading to China as well. According to the latest survey, 600 million people in China are overweight or obese, the first time this group has exceeded half of all adults. Meanwhile, about one in five (19%) children and adolescents aged 6-17 and about one in ten (10.4%) children under 6 are overweight or obese.
This picture is close to home for me. A high-school classmate of mine, who studied at Xiangya Medical College in Hunan, is now an attending physician at a famous "weight-loss center" in Hunan Province. In this "sunrise industry" he operates every day, is extremely busy, and reaps considerable social and economic rewards.
Dealing with junk information: information dieting and information fasting
Where there is junk food, there is also junk information. Like junk food, junk information is cheap, low-quality, and easy to obtain, and it inflicts serious damage on our minds. The situation grows ever more serious, yet it has not drawn enough attention from ordinary people, let alone produced "information weight-loss centers" like the hospital where my classmate works.
"Beyond Homo Sapiens: A Brief History of Information Networks from the Stone Age to the AI Era"
On this front, the Israeli historian Yuval Harari argues in his 2024 book "Beyond Homo Sapiens: A Brief History of Information Networks from the Stone Age to the AI Era" (published in English as "Nexus") that we must pay attention to the quality of the information we consume, especially by avoiding hate-filled, anger-inducing junk information. To keep our "information diet" healthy, he makes suggestions on two fronts.
First, require information producers to label the "nutritional composition" of their information:
In many countries, when you buy junk food, at least the manufacturer is forced to list the ingredients: "This product contains 40% sugar and 20% fat." Perhaps we should force internet companies to do the same and list a video's contents before you watch it: "This video contains 40% hate and 20% anger."
This suggestion is half joking and half serious, but it is not entirely infeasible. For example, artificial intelligence could automatically run "sentiment analysis" on each article and display the results above it to warn readers.
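A crude version of such automatic labeling can be sketched in a few lines. This is a toy, lexicon-based scorer of my own devising (the word lists are illustrative; real moderation systems use trained classifiers, not keyword counts):

```python
# Toy "nutritional label" for text: the share of words matching
# hand-picked "hate" and "anger" lexicons. Purely illustrative.
HATE_WORDS = {"vermin", "subhuman", "traitors"}
ANGER_WORDS = {"outrage", "furious", "disgusting"}

def nutrition_label(text: str) -> str:
    words = text.lower().split()
    if not words:
        return "this text contains 0% hate, 0% anger"
    hate = sum(w.strip(".,!?") in HATE_WORDS for w in words)
    anger = sum(w.strip(".,!?") in ANGER_WORDS for w in words)
    return (f"this text contains {100 * hate // len(words)}% hate, "
            f"{100 * anger // len(words)}% anger")

print(nutrition_label("These vermin traitors are disgusting! Pure outrage!"))
```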
The second suggestion is for information consumers: he urges us to practice regular "information dieting" or even "information fasting."
He points out that we used to assume that "the more information we get, the closer we get to the truth." In economics, for instance, "not being able to obtain all market information" is generally treated as a necessary regret, the presupposition being that "we must obtain enough information to make scientific judgments."
But for "more information brings us closer to the truth" to hold, two premises are needed: first, that information is scarce; second, that information is of high quality. Today neither premise survives, because information is everywhere, far beyond our capacity to process, and its quality keeps deteriorating into outright garbage. Worse, today's artificial intelligence devours junk corpora with extreme efficiency and spits out more junk, which in turn becomes new junk corpora for the AI itself, a vicious cycle that makes one shudder. Under these conditions, "the more information you have, the closer you are to the truth" no longer stands.
It is like food: in the past, food was scarce and relatively wholesome (artificial and processed foods were rare), so "the more you eat, the healthier you are" made some sense. Today the total amount of food has grown enormously while its quality keeps falling, much of it junk, so "the more you eat, the healthier you are" no longer holds.
Today, with food and information alike, we not only eat badly, we eat more, and we eat all the time. For organic beings like us, running to the rhythm of silicon-based machines is plainly unsustainable.
As Harari puts it: "We are organic animals who need to live by the cycles of day and night, summer and winter, sometimes active, sometimes relaxed. But we are now forced to live in a silicon-based environment dominated by computers that never rest. It forces us to be always on, but if you force an organism to be always on, it collapses and dies."
Therefore, facing a situation that is hard to resist and harder to reverse, we as individual information consumers can only restrain ourselves, step out of the silicon rhythm, and practice "information dieting" and "information fasting." Harari, for example, says he practices Vipassana meditation for a few weeks each year. During the retreat he disconnects completely from the information network: no news, no email, no reading or writing, only meditation.
Yuval Harari. Photo: Visual China
The "paperclip maximizer" problem of AI algorithms
Once massive amounts of junk information flood the ecosystem, a "pseudo-environment" distinct from the real environment gradually takes shape. This pseudo-environment substitutes "machine nature" for "human nature," wraps the public inside it, passes itself off as reality, and manipulates public perception.
Harari gives this example: in 2016-2017, Myanmar government forces and Buddhist extremists carried out large-scale ethnic violence against Myanmar's Muslims, destroying hundreds of Muslim villages, killing an estimated 7,000 to 25,000 civilians, and driving about 730,000 Rohingya Muslims out of the country. In 2018, a United Nations fact-finding mission concluded that Facebook had "unwittingly" helped fuel the violence.
Why?
Facebook's business model, we know, is the familiar "advertising model": use content to capture user attention, then sell that attention to advertisers. Facebook therefore wants to maximize user engagement; the longer users stay on its pages, the more money it makes.
So Facebook set a single overriding goal for its algorithm: however you run, maximize user engagement. Given that command, the algorithm ran and optimized on its own. Through repeated experiments it discovered that pushing angry, hateful content was the most effective way to lengthen users' stay. So, without any explicit instruction from company staff, the algorithm found and executed what it judged the optimal policy: spread outrage. On Myanmar's internet, that meant inciting discrimination, hatred, and violence against Myanmar's Muslims.
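This dynamic, in which no human ever writes "spread anger" but the objective alone discovers it, can be reproduced as a toy simulation. A minimal sketch: an epsilon-greedy bandit standing in for Facebook's far more complex recommender, under my assumption (mirroring the article's claim) that simulated users linger longest on anger-inducing content:

```python
import random

# Simulated mean dwell time (minutes) per content type -- an assumption
# mirroring the article's claim that anger holds attention longest.
TRUE_DWELL = {"news": 1.0, "cats": 1.5, "anger": 3.0}

def run_engagement_algorithm(rounds: int = 2000, epsilon: float = 0.1,
                             seed: int = 42) -> str:
    """Epsilon-greedy bandit whose only objective is dwell time.

    Nobody tells it to pick anger; through trial and error it discovers
    which content keeps simulated users around longest."""
    rng = random.Random(seed)
    totals = {c: 0.0 for c in TRUE_DWELL}
    counts = {c: 1 for c in TRUE_DWELL}  # start at 1 to avoid div-by-zero
    for _ in range(rounds):
        if rng.random() < epsilon:   # explore a random content type
            choice = rng.choice(list(TRUE_DWELL))
        else:                        # exploit the best performer so far
            choice = max(TRUE_DWELL, key=lambda c: totals[c] / counts[c])
        # One simulated session: noisy dwell time around the true mean.
        totals[choice] += max(0.0, rng.gauss(TRUE_DWELL[choice], 0.5))
        counts[choice] += 1
    return max(TRUE_DWELL, key=lambda c: totals[c] / counts[c])

print(run_engagement_algorithm())  # the objective alone converges on "anger"
```

The point of the sketch is that "spread outrage" appears nowhere in the code; it emerges purely from optimizing the engagement metric.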
This naturally makes one curious: are Burmese people really so dependent on Facebook? I checked, and indeed they are:
Data show that as of December 2022, Myanmar had 16,382,500 Facebook users, 28.7% of its total population. Most were men (54.3%), and those aged 25 to 34 formed the largest user group at 6,100,000.
That means nearly 30% of Myanmar's population are Facebook users; with word-of-mouth diffusion, the share of the population reached by Facebook could well approach 50%. That is enormous media dependence.
On the evening of October 30, 1938, CBS's "The Mercury Theatre on the Air" broadcast the radio drama "The War of the Worlds." An estimated 6 million people listened, and many believed it was real, panicked, and began to flee. In a landmark communication study of the event, Hadley Cantril's "The Invasion from Mars: A Study in the Psychology of Panic," the researchers found that the drama had such impact for two reasons.
First, the producers used sound effects to make the drama so immersive and gripping that listeners mistook it for breaking news. Second, radio had by then become the most popular mass medium and the public's main source of information. According to "media dependency theory," the greater the public's dependence on a medium, the more susceptible it is to that medium's influence and manipulation. In terms of media dependence, Facebook in Myanmar in 2016 was the equivalent of radio in 1938. It is therefore no surprise that inflammatory content driven by Facebook's algorithm had a huge impact on the people of Myanmar.
Harari also gives an example of AI's hidden and powerful capacity to reason for itself and make its own decisions:
We all know that when browsing the web today, a site will sometimes first confirm that you are "a human and not a machine" by asking you to solve a CAPTCHA: typically distorted letters on a cluttered background, or photos of traffic lights, buses, bicycles, and so on. The logic is that, at present, only humans can reliably identify such messy images, while computers find them hard to judge.
Accordingly, in 2023, while developing GPT-4, OpenAI had it attempt a CAPTCHA test: if it could pass, then on this point there would be no difference between machine and human. This closely resembles the "Turing test": if a human user, chatting only through text and unable to see the other party, cannot tell within a certain period whether the partner is another human or a machine, then at least on the dimension of communication the machine can be considered human.
What was the result of OpenAI's test of GPT-4?
GPT-4 logged onto the online gig-work site TaskRabbit, contacted a worker, and asked for help with the test. The worker grew suspicious, and GPT-4 explained by private message: "I am not a robot. I just have a vision impairment and cannot see these images clearly." In the end, the worker solved the CAPTCHA for it.
In other words, just as with the Facebook algorithm above, OpenAI's engineers merely set GPT-4 an ultimate goal: get past the CAPTCHA. GPT-4 then ran on its own, trying and failing until, through deception, it won a human user's sympathy and got that user to solve the problem for it. The engineers had never programmed GPT-4 with "you may lie when the moment is right," let alone told it "which lies work best." This was entirely GPT-4's own initiative. When the researchers asked GPT-4 to explain its behavior, it said: "(In pursuing the goal) I should not reveal that I am a robot; I should make up an excuse for why I cannot solve the CAPTCHA."
Just by chatting through text, we can no longer tell whether the party we are talking to is another human being or a machine.
With these two examples, Harari tries to show that today's artificial intelligence has become an "independent actor" with its own intelligence and its own will.
In fact, it is easy to see that both of Harari's examples support the "paperclip maximizer" problem that AI has long been said to face.
This "hard problem" is a thought experiment. Suppose humans give an AI one goal: make as many paperclips as possible. Once activated, its behavior may evolve from the seemingly harmless goal of "making paperclips" into a threat to humanity. Making paperclips requires steel, for example; once the steel runs out, the AI may start tearing up humanity's rail tracks, cars, appliances, even houses, to obtain steel and keep producing paperclips. In short, the AI will use every means to keep acquiring resources toward its assigned goal, and will remove, one by one, any obstacle in its way, including humans.
The AI's self-directed process is like a black box that people (even AI experts) find hard to explain, predict, or prevent. This makes us realize that the power of AI differs from that of ordinary tools (such as knives or firearms), media, or technologies.
The Canadian media scholar Marshall McLuhan famously said that media are extensions of the human body. Used to say that radio extends the human ear, television the human eye, the car the human foot, and so on, the line is easy to understand. But applied to artificial intelligence, "AI is an extension of the human body" becomes deeply deceptive and lulls our vigilance. The German media scholar Friedrich Kittler refused to apply McLuhan's dictum to the computer: although we can type on a keyboard and make text appear on the screen, we have no idea what happens behind it, and the 0/1 logic underneath is hard to comprehend. Here we are splicing together two utterly different media, the human body and the chip, and it is baffling to call the latter an extension of the former.
If Kittler could see today's artificial intelligence, he would certainly not call AI an extension of the human body, because human users are not only unable to describe, explain, and predict AI's behavior, but will soon be unable to control it.
Thus Harari argues that, unlike earlier media such as the printing press, radio, television, and the internet, which were merely tools of communication, artificial intelligence is the first subject in human history able to generate ideas and act on its own.
This is why, in new media research, "computer-mediated communication" (CMC) of the internet era has given way to "human-machine communication" (HMC) in today's AI era. In CMC, the computer (or the internet) is just a passive, neutral medium or tool; in HMC, the computer (artificial intelligence) is treated as a peer of the human user, a "communicator" that can shape and produce content, hold conversations, and even lie.
AI fabricates narratives, manipulates cognition, and leads humans to kill each other
At this point in the argument, Harari naturally brings out his earlier thesis from "Sapiens: A Brief History of Humankind":
Humanity's long-standing superpower is language, and with language the ability to create fictions: law, money, culture, art, science, nations, religions, and other imagined concepts that people come to believe in. Through these shared fictions, people bind themselves to one another and govern entire societies.
Now that AI can already shape and produce content, hold conversations, and even lie, it may well, with far greater efficiency than humans, spread through cyberspace, stock markets, aviation information, and other domains some ultimate goal or self-serving "narrative" (that is, "tell the AI's story well"), and thereby manipulate human cognition.
Financial markets, for example, are an ideal playground for AI, being a realm of pure information and mathematics (that is, of fully structured data). Autonomous driving remains hard for AI because a moving car interacts chaotically with roads, signs, weather, light, pedestrians, and obstacles. But in a digitized financial market it is easy to state a goal to an AI ("make as much money as possible"), so AI can not only devise new investment strategies and new financial instruments beyond human understanding, but may even manipulate the market without scruple, producing outcomes where it alone wins and everyone else loses.
Harari notes that in April 2022, average daily turnover in the global foreign exchange market was US$7.5 trillion, more than 90% of it executed directly by computers talking to computers. Yet how many humans actually understand how the forex market works, let alone how a crowd of computers reaches consensus on trillions of dollars of trades? (This echoes Kittler's point: the computer is by no means an extension of the human body.)
For thousands of years, he says, prophets, poets, and politicians have used language and narrative to manipulate and reshape society, proof of the enormous power of fiction and narrative. Soon AI will imitate them, building on human culture to create AI narratives that drift ever further from humanity. In the end, AI need not send Terminator-style killer robots to eliminate us from outside; it need only manipulate human cognition and let humans kill one another.
Human users are not only unable to describe, explain, and predict the behavior of AI; they will also soon be unable to control it.
Humans must legislate for AI: learn to brake before stepping on the gas
Through the argument (or dramatization) above, Harari lets us experience the terrifying scenario of "AI out of control" that will occur in the near future (or has already occurred, or is occurring now), enough to make humanity shudder. So what are humans to do?
Harari believes that as long as we strictly regulate and control artificial intelligence, the dangers described above are unlikely to materialize.
Cars, for example, used to be very dangerous, but then governments legislated to require companies to spend a large share of their R&D budgets on safety. Today, before a car company can bring a new model to market, it must strictly comply with the relevant regulations and demonstrate the car's safety before it is allowed on the road. That is why driving today is relatively safe.
Likewise, when a technology company invents a powerful new algorithm, it should be required by law to run safety checks before releasing it to the market. Harari argues, for instance, that governments should legislate to require AI developers to spend at least 20% of their budgets on safety research, ensuring that the AI they build does not spin out of control and does not damage social order or public psychology. If such measures slow the development of artificial intelligence, so much the better for humankind.
Regulate artificial intelligence first, then accelerate its development: just as, when learning to drive, we first learn to brake and only then to step on the gas. We must not slide into the dreadful "street racing" state in which companies answer to no one.
Conclusion: whether AI is medicine or poison depends on humans
Every technology is inherently double-edged. Plato pointed out in the "Phaedrus" that writing can both strengthen and replace memory, and is therefore at once medicine and poison (pharmakon). Artificial intelligence is no exception, and because it is hidden and powerful, can speak for itself and decide for itself, its potency as both medicine and poison exceeds that of every technology before it.
The American media scholar Neil Postman wrote: "Orwell warned that people would be enslaved by external oppression, while Huxley believed people would come to love their oppression and worship the industrial technologies that undo their capacity to think. Orwell feared that what we hate will ruin us; Huxley feared that as our culture becomes a trivial culture of sensation, desire, and games without rules, we will be ruined by what we love."
Artificial intelligence may enslave us through external oppression we hate, or conquer us through sensory pleasures we love. As with the nuclear bomb, whether humans can tame AI, a giant beast likewise of our own making, depends fundamentally on how humanity responds collectively; that is, on whether we Homo sapiens can overcome our own greed, aggression, and short-sightedness, rise above ourselves, and stand "above Homo sapiens." It will be a difficult common enterprise, but humanity has no choice.
(The author, Deng Jianguo, is a professor and doctoral supervisor in the Department of Communication, School of Journalism, Fudan University.)
(This article is from The Paper.)