2024-08-17
By Liu Cheng

Recently, industries such as driverless ride-hailing, driverless delivery vehicles, driverless cruise ships, and drones have accelerated in Beijing, Wuhan, and other cities, gaining first-mover advantages nationally and even globally. Yet the social governance problems they raise, above all the question of liability, remain unresolved.
Industry innovation should not be stifled by the absence of social rules, but neither can these technologies be allowed to run rampant in an "anarchic state". All sectors of society therefore need to deepen theoretical thinking and practical debate in order to reach some degree of social consensus on institutional rules.
Autonomous driving does not mean no one is responsible
Current laws and regulations mostly bind persons, whether natural persons or legal persons. Major accountability mechanisms such as criminal liability, in particular, usually must point to a specific natural person.
In other words, guarding the safety bottom line against major accidents and intentional crimes (which, though rare relative to ordinary safety incidents, matter greatly) is a necessary condition for maintaining traffic and social order, and it relies heavily on criminal law. Most road traffic offenses, in turn, target motor vehicle drivers.
Determining the responsible party is therefore the primary issue in regulating autonomous driving (and similar artificial intelligence products). Unlike ordinary driving, autonomous driving has no direct driver, and thus no direct responsible party, which looks like a "headless case" in practice and a paradox in theory.
Looking past the phenomenon to its essence, several candidate responsible parties can still be identified: the users of autonomous vehicles; the remote operators, technicians, or safety officers behind them; and the vehicle owners, operators, or beneficiaries. This remains largely theoretical speculation, supported by only a few foreign cases. Which of these should be the regulated responsible party still requires in-depth research and argument by academia, government, and legislative bodies.
Clearly, even if an unmanned vehicle (or ship, or aircraft) has no driver, it has an identifiable user. This resembles the "user-in-charge" proposed by the UK Law Commission; in Georgia, USA, it is "the person who causes the vehicle to move".
If a user chooses to start or ride an unmanned vehicle knowing it is unmanned, one can presume a willingness to bear the potential consequences. Others counter that the user is merely the object of the service, a consumer, and the general doctrine holds that, absent improper use, a consumer should not bear the adverse consequences of the consumption process. Whether the user should bear primary responsibility for unmanned driving is therefore contested worldwide.
A compromise is to have the user of the driverless vehicle still play the role of driver. Given the driver's pivotal position, some U.S. states that allow self-driving vehicles on the road require a human to sit in the driver's seat and remain alert, in effect retaining a backup driver. The consequence is that driverless vehicles still need a driver to bear responsibility and users become drivers; "self-driving" exists in name only, which constrains the industry's development.
The second candidate is the remote operator. Remote operators, technicians, and safety officers can be described as the "drivers" behind autonomous driving technology: under normal conditions their presence is imperceptible, but in an emergency they appear in time to provide safety assistance.
California issues autonomous driving test permits but, when no human backup driver is present, requires a remote operator holding the corresponding driver's license to "continuously supervise the vehicle's dynamic driving task execution". Arizona attributes penalties for traffic or vehicle violations to whoever "tests or operates fully autonomous vehicles".
In other words, this practice expands the concept of the driver to include remote operators, who are deemed in charge of the vehicle even though they are not sitting in it.
The third type of responsible party steps outside the driver framework altogether and, from the perspective of interests, looks to the vehicle owner, operator, or beneficiary.
The first two candidates, users and remote operators, are the ones who set the vehicle in motion or control it remotely. For regulatory purposes they are given the shadow identity of the driver and then held accountable as drivers under the existing legal framework. Once a "driver" of the driverless vehicle has been found, current regulations apply and the regulatory problems of driverless vehicles seem solved.
As noted above, however, neither category is a real driver, and such forced identification is highly controversial. Hence the emergence of the third category of responsible party.
If users and remote operators cannot be made to stand in for the driver, who else can be held responsible for potential accidents? The loudest call in society is for vehicle owners to take responsibility.
At present the vehicle owner, operator, and beneficiary are usually one and the same entity: the platform company. Platforms launched this new business model, provide the service, and earn the profits, so they should bear responsibility for it, just as a restaurant answers for its food.
But a platform is a company, not a natural person, and so can hardly bear criminal responsibility. Some propose holding the corporate legal person liable, but that exceeds the normal scope of product or service quality liability. Generally, when consumers use a product or service, the supplier answers for its quality, but only within a clearly defined scope; accountability cannot be stretched without limit.
The greater difficulty arises when the owner, operator, and beneficiary are separate parties: attributing and apportioning responsibility then becomes nearly impossible. If, say, individuals are one day allowed to lease driverless vehicles to third parties for operation, there will be multiple parties involved: owners, lessees, operators, and beneficiaries.
It should be noted that the above analysis covers only potential accident liability arising from the normal use of driverless vehicles. For intentional illegal acts during use, the responsible person must be identified case by case; if, for example, a user's improper operation causes a malicious collision, the user is the primary responsible party.
Do intelligent machines have legal personality, and how can they be held accountable?
Unmanned vehicles, ships, aircraft and other artificial intelligence machines have no souls or minds; they are not human.
But if these machines were granted personality traits, they would have independent legal personality and could answer for their own actions. This sounds like science fiction, but it already exists in theory and, to a degree, in practice.
There are two opposing views on whether machines, including self-driving cars, have personality.
One view holds that machines have no personality and cannot answer for their actions, so the principal responsibility of the humans behind them must be pursued.
Some scholars argue that although the "autonomy" of intelligent machines is often invoked to describe their ability to decide on their own, these machines have no desires or values. When an artificial intelligence system is described as autonomous, people usually do not mean that it decides "of its own accord". On this view, autonomy is not some mysterious quality inherent in artificial intelligence systems.
Even if machines can make decisions on their own, they are acting "unconsciously" under the command of human programs, essentially executing human decisions. Humans should therefore be responsible for the autonomous behavior of machines.
Of course, the process by which humans delegate automatic decisions to machines must be transparent; humans cannot be held responsible for a machine's wrongful behavior they could not have known about. In other words, whoever operates the machine answers for it, and ordinary people have the right to know how the machine makes decisions. Likewise, large companies such as platforms are responsible for their intelligent machines and must inform users of the machines' algorithms and the principles of their automated decisions.
Under the EU General Data Protection Regulation (GDPR), users have the right to be informed whether automated decision-making exists and to receive a reasonable explanation of "the logic involved in the automated decision, its significance for the user, and the envisaged consequences". Applied to unmanned driving, the platform should answer for the vehicle and inform users of the basics of its operation, such as speed and pedestrian avoidance.
The other view is that machines, including driverless vehicles and their artificial intelligence systems, possess "robotic autonomy" and will eventually acquire full legal personality.
Some scholars believe machines have autonomous consciousness and can answer for their own behavior. The function of road traffic law, and the point of punishing violations, is to improve road safety and maintain road order; fines, imprisonment, and the rehabilitation of drivers exist to force human behavior into compliance with safety standards.
By analogy, humans could impose rules directly on driverless vehicles and their intelligent systems, treating law enforcement as a feedback link that trains the system to further revise its algorithms and architectural design.
In more serious cases, revoking a driverless vehicle's authorization to drive on the road, delisting it, or retiring it altogether would be analogous to sentencing a person to death. In short, if machines have personality, they can be punished through reform or elimination.
A utopian vision is that driverless vehicles may so far surpass human drivers that criminal liability need not be considered at all, leaving only scientific and technological ethics and the order of market competition to be maintained. Social supervision of people would then become economic supervision of the market.
Social rules need to be profoundly reformed
Artificial intelligence systems, of which driverless cars are representative, are complex and efficient. They can drive the overall evolution of civilization: reshaping social mechanisms, the division of labor, job markets, and opportunity structures; affecting cultural practices and social interaction; and changing how humans produce, live, and survive, thereby possibly rewriting social norms.
Rapidly changing material technology always demands corresponding cultural adaptation, in institutions and in concepts, to drive the development and innovation of society as a whole. Today, however, human culture tends to lag, ill-suited to new technologies. When governance systems and ethical concepts cannot adapt quickly enough, will cultural lag end up obstructing technological change? These questions urgently need theoretical research and practical discussion in academia and industry.
In my view, driverless operation does not mean no one is responsible. Two broad approaches emerge: one is to concretize the responsible party, finding someone who is not the driver but can assume the driver's responsibilities; the other is to personify the machine and let it answer for itself.
(The author is an associate researcher at the Institute of Financial and Economic Strategy, Chinese Academy of Social Sciences)
Copyright Statement: The above content is the original work of the Economic Observer, and the copyright belongs to the Economic Observer. Reprinting or mirroring is strictly prohibited without the authorization of the Economic Observer, otherwise the legal liability of the relevant behavior subject will be pursued in accordance with the law. For copyright cooperation, please call: [010-60910566-1260].