
The singularity is getting closer: what should humans do?

2024-07-22





Will this round of technological unemployment, caused by AI replacing human workers, be overcome as easily as past waves through the market's spontaneous adjustment? I am not optimistic. Fundamentally, whether a society can weather a wave of technological unemployment depends on two things: first, how many people are employed in the occupations the new technology disrupts; second, whether, as the new technology eliminates old jobs, it creates new occupations that displaced workers can enter quickly.

——Chen Yongwei






The singularity is getting closer: what should humans do?

By Chen Yongwei



Singularity: From Science Fiction to Reality


After two years of delays, Ray Kurzweil's new book The Singularity Is Nearer was finally released at the end of June. As a fan of Kurzweil's books, I immediately found an electronic copy and read it in one sitting.


In the book, Kurzweil presents an empirical law he considers fundamental: information technology develops exponentially, with humanity's technical capacity to process information roughly doubling every year. Artificial intelligence (AI), the most representative information technology, is advancing even faster. On this trend, Kurzweil predicts, AI will surpass humans at all tasks before 2029, and artificial general intelligence (AGI) will be fully realized. Once AI breaks through, it will empower many other fields and accelerate their development. As a result, within five to ten years, humans are expected to reach "longevity escape velocity": although people will continue to age, improvements in medical technology will keep the risk of death from rising with age. With the help of nanorobots the size of red blood cells, people will be able to kill viruses and cancer cells directly at the molecular level, curing a large share of the diseases that plague humanity and significantly extending life expectancy. Beyond that, nanorobots are expected to enter the human brain non-invasively through the capillaries and, together with digital neurons hosted in the cloud, raise human intelligence to a new level. Human thinking, memory, and problem-solving will no longer be limited by the brain's capacity, and human intelligence will multiply thousands of times. Once all of this happens, many of today's problems will be solved: cheaper energy will be discovered and used, agricultural efficiency will rise sharply, public education will improve markedly, and violence will decline dramatically... In short, before 2045 mankind will pass through the "Singularity" and enter an era entirely different from anything that came before.


For longtime readers like me, none of these views are new. Kurzweil discussed almost all of them in detail in The Singularity Is Near, published in 2005; in that sense, the new book is old wine in a new bottle. Nevertheless, rereading these views now, my mood is very different. More than a decade ago, I read The Singularity Is Near essentially as science fiction. Although Kurzweil marshaled a great deal of data to show that the world's technology was growing at an exponential rate, many people, including me, were highly skeptical.


The Singularity Is Nearer: When We Merge with AI
Ray Kurzweil
Viking
June 2024

After all, viewed from that moment, Internet technology, though growing rapidly, seemed unlikely to fundamentally change how people lived beyond adding convenience. Meanwhile, the AI field, once so promising under the guidance of symbolicism, had run into a dead end, with no breakthrough in sight. Under such conditions, the claim that AI would surpass human intelligence by 2029 sounded like pure fantasy.


Miraculously, what happened next tracked Kurzweil's predictions surprisingly well. Just two years after the release of The Singularity Is Near, the "deep learning revolution" set off a new round of growth in the field of AI. Before long, AI could defeat the top human Go players, predict the structures of hundreds of millions of proteins, and help design computer chips with hundreds of thousands of components. And after ChatGPT (a conversational AI program) was released in November 2022, AI mastered skills once thought exclusively human, such as conversation, writing, painting, and video production, in little more than a year. According to recent research, the latest AI models have demonstrated superhuman capabilities on hundreds of tasks. Against this backdrop, the prediction that AI will surpass humans by 2029 is no longer radical; if anything, it looks slightly conservative. Indeed, many professionals believe AGI will arrive even earlier. Shane Legg, a co-founder of DeepMind (the AI company behind AlphaGo), believes AGI could be achieved before 2028; Tesla CEO Elon Musk is more radical still, predicting that AGI will arrive in 2025.


Beyond that, many technologies, including nanorobots and brain-computer interfaces, are developing rapidly, just as Kurzweil predicted. In January 2023, for example, the journal Nature Nanotechnology reported that researchers at the Barcelona Institute of Science and Technology had used nanorobots to carry drugs for treating bladder cancer; in the study, the treatment shrank tumors in experimental mice by 90%. This success suggests that Kurzweil's idea of using nanorobots to treat cancer, and thereby prolong human life, is entirely feasible. For another example, just days ago Musk announced that a second brain-computer interface surgery would be performed shortly, and predicted that within a few years thousands of patients would have such devices implanted in their brains. The technology still has many shortcomings, but at the current pace of development, interacting with computers through brain-computer interfaces in the near future should be no dream. And if the two "black technologies" of nanorobots and brain-computer interfaces are combined, Kurzweil's vision of human-machine fusion and multiplied intelligence becomes entirely possible. For these reasons, we have grounds to believe that reaching the "singularity" by 2045 is becoming more and more technically feasible.


But once people pass the "singularity", will they really enter the unprecedentedly good era Kurzweil predicts? In my view, the answer is genuinely uncertain. Technological optimists, Kurzweil among them, can cite plenty of historical evidence that technological development has ultimately improved human well-being. But simply extrapolating that pattern into the future carries enormous risk: no technology in human history has had the power of AI, and once it is used improperly, the consequences will be unimaginable.


Therefore, to ensure a good era after the "singularity", we need to think comprehensively, before the singularity arrives, about the relationships between humans and technology, between people, and between people and human nature, and find ways to keep technology developing in directions that benefit mankind.


When jobs start to die


By Kurzweil's prediction, AGI is still about five years away. Although AI has not yet surpassed human intelligence overall, it has already surpassed humans in many respects, and this has aroused unprecedented concern about AI-driven technological unemployment.


Historically, technological unemployment is not a new topic. From the invention of the steam engine to the application of electricity to the popularization of the Internet, each wave of technology has produced significant "creative destruction", wiping out large numbers of jobs based on old technologies and leaving many people in related professions unemployed. Most of these waves, however, proved temporary: as the new technologies spread, they created many new jobs.


It is true that AI's impact on the job market has so far been modest, but that does not mean the risk does not exist. When predicting the employment impact AI may cause, people often overlook an important condition: AI capabilities may improve exponentially. Taking the 2022 release of ChatGPT as a reference point, it is not hard to see that AI has developed much faster after that point than before it. Take interaction capability as an example: before ChatGPT, it took decades to get AI to converse freely with people; after ChatGPT, AI achieved multimodal interaction within little more than a year. Extrapolating the future growth of AI capabilities on purely linear logic is therefore likely to produce very serious misjudgments, as the sketch below illustrates. It is also important to note that as AI capabilities have improved dramatically, the cost of using them has fallen just as dramatically; the cost of calling AI models through APIs has dropped to nearly zero.
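A minimal numerical sketch of that misjudgment, assuming (purely for illustration) a capability index that doubles every year:

# Illustrative numbers only: compare linear and exponential extrapolation
# of a capability index that doubles yearly, as the text describes.
def extrapolate(c0: float, years: int) -> tuple[float, float]:
    linear = c0 + c0 * years       # add a fixed yearly increment
    exponential = c0 * 2 ** years  # double every year
    return linear, exponential

for years in (1, 5, 10):
    lin, exp_ = extrapolate(1.0, years)
    print(f"{years:>2} yr: linear ~{lin:.0f}x, exponential ~{exp_:.0f}x")
# After 10 years the linear forecast says 11x while the exponential one
# says 1024x -- a misjudgment of roughly a hundredfold.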


Improved performance and falling costs together make replacing humans with AI not only technically possible but economically attractive. In fact, anyone who follows technology news will find that AI has quietly replaced people in many occupations while we were not paying attention. Notably, only a decade ago the consensus was that AI would replace only highly procedural, repetitive jobs, and that jobs demanding creativity and communication would be hard to automate. Yet consider illustration, a profession once beloved by young people for its flexible hours and relatively high income. Today, completing illustrations with AI models costs only a few hundred yuan for an unlimited monthly subscription, with revisions available on demand. Faced with that comparison, most clients will choose AI over human painters, and many illustrators will lose their jobs as a result. Beyond illustrators, translators, programmers, graphic designers, and other professions are also being seriously affected by AI. It is just that these groups make up a relatively small share of the overall labor force, so the impact has not yet been widely felt.


So, will this round of technological unemployment, caused by AI replacing human workers, be overcome as easily as past waves through the market's spontaneous adjustment? I am not optimistic. Fundamentally, whether a society can weather a wave of technological unemployment depends on two things: first, how many people are employed in the occupations the new technology disrupts; second, whether, as the new technology eliminates old jobs, it creates new occupations that displaced workers can enter quickly.


This time, AI's impact on the job market is completely different. This round of impact is both comprehensive in scope and compressed in time. Comprehensive in scope means that many industries are hit simultaneously: unlike the special-purpose AI of the past, most newly released AI models are general-purpose, and in practice people can adapt them to many different tasks with just a little fine-tuning, so AI development can disrupt multiple professions at once. Compressed in time means that after AI disrupts one profession, it immediately disrupts another. Such dense, back-to-back shocks will greatly increase the difficulty of re-employment for the newly unemployed and seriously undermine their confidence in retraining. Imagine an illustrator who loses his job to Midjourney (an AI painting tool), painstakingly learns to drive and becomes a ride-hailing driver, and then soon loses that job to driverless cars. Will he still have the perseverance to learn yet another skill, confident that AI will not master it too within a short time?


This round of AI-driven technological unemployment may therefore be fundamentally different from every previous one. If AI technology keeps growing exponentially, society will find it difficult to restore full employment purely through the market's spontaneous adjustment. From a policy perspective, we certainly have many ways to cushion AI's impact on employment: governments can, for example, provide more job-placement services and retraining to help those displaced by AI find new work faster. But if AI keeps developing at high speed, all such efforts will at best buy time. The demise of human work may be a future we find difficult to accept, but one we have to face.


Rejecting the “final producer”


Given that brain-computer interfaces, nanorobots, and related technologies lag behind AI, using AI to directly enhance the brain is likely to remain at the conceptual level for at least the next decade. During this period, then, how should people deal with the social contradictions created by AI-driven technological unemployment?


Some scholars have proposed a solution: tax the users of AI and use the revenue to fund a Universal Basic Income (UBI). That way, even if those displaced by AI find it difficult to get new jobs, they can still obtain basic living security rather than fall into hardship.
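A back-of-the-envelope sketch of how such a scheme would work; every number below is a made-up assumption for illustration, not an estimate from any proposal:

# Toy arithmetic for the AI-tax-funded UBI idea above.
ai_value_added = 2_000_000_000_000  # annual value added attributed to AI, $ (assumed)
tax_rate       = 0.10               # levy on AI use (assumed)
population     = 300_000_000        # covered residents (assumed)

annual_ubi = ai_value_added * tax_rate / population
print(f"UBI per person per year: ${annual_ubi:,.0f}")  # -> $667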


The plan, however, has been controversial since it was proposed. Some scholars argue that taxing new technologies such as AI would greatly hinder their development; others argue that UBI might encourage people to get something for nothing.


In my view, the greater potential resistance to an AI tax and UBI actually comes from their impact on the distribution of benefits. As we can see, the development of AI has brought a surge in revenue and market value to many AI-related companies within a short period. Take OpenAI: a few years ago it was losing money year after year and strapped for cash; with the popularity of GPT and other models, it rapidly became a company with annual revenue in the billions of dollars and a valuation approaching 100 billion dollars. Giants such as Microsoft and Nvidia, riding the AI wave, have each added more than a trillion dollars in market value in little over a year. As AI advances further, this trend of enormous wealth concentrating in a small number of companies and individuals is likely to continue.


What consequences will this bring? One direct consequence is deepening social division and estrangement. When AI's cost-effectiveness is high enough, ordinary workers will, as in Hao Jingfang's novel "Folding Beijing", no longer even be worth exploiting. The rich who control AI and wealth will not even want to live in the same city as them, and social estrangement and confrontation will grow more severe.


And that is not the scariest part. If, as Kurzweil predicts, humans will soon be able to transform themselves at the molecular level through nanotechnology, then the wealthy will be the first to undergo such "mechanical evolution". Their advantage over the poor will then lie not only in wealth but also in intelligence, physical strength, and much else; they will simply overwhelm the latter. And that advantage will in turn let them concentrate wealth further... Liu Cixin imagined such a scenario in his novel "The Support of Humanity": under a similar trend, the wealth and power of an entire society end up monopolized by a single "final producer", who controls the fate of everyone else.


How to align AI?


If technological unemployment and distribution are old problems reappearing in the AI era, what we discuss next are brand-new problems that arise as the "singularity" approaches.


Among the new problems, the most prominent may be AI alignment. In short, AI alignment means ensuring that AI understands human norms and values, understands human will and intentions, and acts accordingly. On the surface this does not sound difficult: AI programs are, after all, set up by humans, and why would humans give AI a goal that runs against their own interests? In fact, the answer is not so simple, for two reasons:


On the one hand, when humans set behavioral goals and norms for AI, they usually find it difficult to express their interests and concerns fully and correctly, which leaves room for AI to act against them. The philosopher of science Nick Bostrom proposed a famous thought experiment along these lines, the "paperclip maximizer", in his book "Superintelligence". Imagine that humans create an AI whose goal is to maximize paperclip production. It will use every means to achieve that goal, even, in order to commandeer more resources for paperclips, at the cost of destroying humanity. In this thought experiment, producing paperclips is itself in mankind's interest, yet the final result may gravely damage human interests.


On the other hand, to make AI more efficient, humans usually give it considerable room for self-learning and self-improvement, which may cause it to drift from the values originally set. Many AI agents, for example, are now allowed to improve themselves continuously based on their interactions with the environment and with users. In the process, they may absorb all kinds of bad values, and their goals may deviate from humanity's fundamental interests.


In particular, with the arrival of AGI, AI will gradually change from a tool into an agent whose capabilities rival or exceed humans' in every respect. At that point, any inconsistency between AI's interests and humanity's will create enormous risks, and the dark futures portrayed in films and television series such as "Terminator" and "The Matrix" could actually arrive.


It is precisely to prevent this that AI alignment has become a prominent research subject in the AI field. At this stage, two methods are mainly used. One is reinforcement learning from human feedback, the so-called RLHF method; the other is "constitutional AI", the so-called CAI method. With RLHF, designers first manually train a smaller AI model: through trainers' continuous feedback on the model's behavior, reinforcement learning steers its values toward those the designers intend. This small model is then used as a "coach" to train a larger AI model by reinforcement learning. With CAI, designers first write a "constitution" that the AI model must follow and derive from it codes of conduct for various scenarios. The designers then judge the model's outputs against these criteria: outputs that conform to the "constitution" are rewarded, and outputs that violate it are punished.
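To make the two recipes concrete, here is a schematic toy sketch in Python. It is a simplified illustration under stated assumptions, not any lab's actual training pipeline; all names, numbers, and rules in it are hypothetical.

import math

# RLHF, step 1: fit a reward model from human preference pairs. Given a
# trainer's judgment "answer A is better than answer B", the model is
# trained so that reward(A) > reward(B), typically with a logistic
# (Bradley-Terry) loss on the reward gap:
def preference_loss(reward_chosen: float, reward_rejected: float) -> float:
    gap = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-gap)))  # small when gap is large

print(round(preference_loss(2.0, -1.0), 3))  # well-ordered pair -> 0.049
print(round(preference_loss(-1.0, 2.0), 3))  # mis-ordered pair  -> 3.049
# RLHF, step 2 (not shown): use the fitted reward model as the "coach"
# that scores a larger model's outputs during reinforcement learning.

# CAI: judge candidate outputs against a written "constitution",
# rewarding compliance and penalizing violations.
CONSTITUTION = ("do not help cause harm",
                "admit uncertainty rather than invent facts")

def constitutional_reward(output: str, violates) -> int:
    """+1 if `output` breaks no rule, else -1 per violated rule.
    `violates(output, rule)` stands in for a model-based rule check."""
    broken = [rule for rule in CONSTITUTION if violates(output, rule)]
    return 1 if not broken else -len(broken)

print(constitutional_reward("a harmless answer", lambda o, r: False))  # -> 1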


Both methods have achieved real results, but both still have serious problems. Geoffrey Hinton, the "father of deep learning", recently pointed out that such methods can only make AI's behavior appear consistent with human interests; they cannot guarantee that its values are genuinely consistent with ours. If so, people cannot guarantee that AI will not betray humans under certain circumstances. And once AGI arrives and AI's capabilities surpass humans', the likelihood of such betrayals, and the risks arising from them, will only grow.


How, then, should we further improve AI alignment work? In my view, what we need may be a change of thinking. At present, almost everyone equates AI alignment with value alignment, believing that AI's values must be made consistent with our own before it will reliably serve human interests; that is obviously very difficult. But is consistency of values really necessary? Put the question another way: in ordinary life, when we need someone to complete a task in accordance with our interests, must we first make his values consistent with ours? Of course not. More often, we simply design a good set of rules that guides people whose values differ from ours toward the outcome we want. Suppose we want two self-interested people to divide a cake fairly. If we tried to achieve this by first aligning their values, the job would be extremely difficult. But we need not do that: a simple mechanism, letting one person cut the cake and the other choose a piece, accomplishes it easily (see the toy simulation below). The lesson for AI alignment is that we may be able to bypass the hard-to-crack black box of values and approach these tasks directly through mechanism design. Fortunately, some researchers have recognized this idea and have already made considerable progress in this direction.
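A toy simulation of that cut-and-choose mechanism (the code and numbers are purely illustrative): two self-interested players, sharing no values at all, are driven by the rules alone to a fair split.

# Cut-and-choose: the cutter splits a cake of size 1 into (cut, 1 - cut);
# the selfish chooser then takes the larger piece, so the cutter is left
# with the smaller one.
def cutter_payoff(cut: float) -> float:
    pieces = (cut, 1.0 - cut)
    return min(pieces)  # the chooser grabs max(pieces)

# The cutter's best response under these rules is the even split:
best_cut = max((c / 100 for c in range(1, 100)), key=cutter_payoff)
print(best_cut, cutter_payoff(best_cut))  # -> 0.5 0.5

No value alignment is involved anywhere: the cutter cuts evenly purely out of self-interest, because any uneven cut would leave him the smaller piece.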


Who are you? Who am I?


Besides AI alignment, another major challenge as the "singularity" approaches is the identification and recognition of identity. The problem has two sides: how to understand AI's identity and our relationship with it, and how to re-understand our own identity.


Take the first question first. A few years ago, if you had asked people how to regard AI, most would have said without hesitation that it is merely our tool. The reason was simple: judging from its performance, it seemed unlikely to possess autonomous consciousness and could only carry out tasks under human control.


But since the emergence of large language models such as ChatGPT, the situation has changed greatly. AI's interactions with people have shed their former dullness: in conversation it answers fluently, and at times it even seems to actively guess at our psychology and predict our thoughts and behavior. This makes us wonder whether it has consciousness of its own. Computer experts may comfort us that it is merely answering mechanically according to a pre-designed model, essentially a heap of additions and subtractions over 0s and 1s. But as the saying goes, "If you are not a fish, how do you know the joy of fish?" Who can guarantee that no consciousness or thought lies behind that simple arithmetic? After all, if we dissect a human brain and examine it under a microscope, we see only a mass of neurons firing electrical signals, and not a single cell containing a soul. How, then, can we be sure that the AI conversing freely with us has not evolved a soul?


I think such problems will become more and more prominent after AGI arrives. Perhaps one day soon, bionic AI robots like those in "Westworld" will stand before us, behaving exactly as we do, their preset programs even telling them that they are human. When we meet such robots, can we still thump our chests and insist that what we see is merely a tool of our own creation?


Now the second question. Compared with the identity of AI, human self-identification and recognition may be the harder problem.


On the one hand, as noted earlier, the development of nanorobots and brain-computer interfaces will give humans the ability to modify their bodies substantially. People are expected to use nanorobots to repair dying cells and prolong life, and to rely on them to expand intelligence and physical strength. At first such modification may touch only a few cells, causing no trouble to identity, just as we do not think a person ceases to be himself after getting a prosthesis or dentures. But if the process continues, one day people will have replaced most or even all of the cells in their bodies. At that point the classic "Ship of Theseus" problem confronts us again: is the present "I" still the "I" of the past?


On the other hand, as AI technology develops, people will gradually acquire the ability to upload consciousness to the cloud; indeed, some, including Musk, have already begun such efforts. Suppose that one day the technology matures to the point where the uploaded consciousness can think just like the person himself. Can it be regarded as human consciousness? If so, what is its relationship to the original consciousness? And if we install this consciousness in a clone of its source, what is the relationship between the clone and the original person? Father and son? Brothers? Or something else?


It must be emphasized that identity recognition and identification is by no means a merely philosophical topic; in reality it involves many legal and ethical issues. How should labor relations between humans and AI be handled? Should AI enjoy the same rights as humans? Can a clone of my body and consciousness own my property? Until the identity question is settled, these questions will be hard to truly resolve.


So far, no definite answers to these questions have been found. To promote consensus on them, we still need open and in-depth discussion.






This article was first published in the Economic Observer (Observer section), July 22, 2024, pages 25 and 26.