Intelligent robots for good
2022-09-29 15:08:48
An intelligent robot shakes hands with audience members at the World Digital Economy Conference 2022 & The 12th Smart City and Intelligent Economy Expo on Sep. 2, 2022. Photo: CFP
Song Chunyan
As artificial intelligence grows ever better at imitating human cognitive abilities, the question of whether intelligent robots can become self-aware has drawn steadily closer to reality. On this question, the academic community holds a range of views, spanning optimism, pessimism, and neutrality.
At present, intelligent robots with the potential to generate self-awareness are a form of advanced artificial intelligence characterized by interactivity, autonomy, and self-adaptability. Deep concern over their self-awareness reflects a fear of losing control over intelligent robots. The best way to dispel this fear is to understand robotic principles and their essence in depth, open the black box of self-awareness in intelligent robots, and steer their development for good.
Different paths
In general, human self-awareness arises in two ways: through self-activation and through recognition by others. Self-awareness and self-identity are inseparable, and self-identity in turn depends on the approval of others in one’s group. Self-awareness therefore usually emerges in the course of seeking the approval of other members of society. In exploring the self-awareness of intelligent robots, both paths have been attempted, and each has made different progress. By technological path, intelligent robots divide mainly into data-derived intelligence and brain-like intelligence. The former’s “self-awareness” is generated through recognition by others, the latter’s through self-activation, and the two differ in nature.
First, data-intelligent robots gain their “self-awareness” through design, namely, through the path of recognition by others. This path is supported by algorithmic design and big data, which allow the robot to behave as if it were self-aware. Whether the machine is a digital computer or a quantum computer, this route to self-awareness remains a data-intelligence path. It carries, however, risks concerning data compatibility and data spillover.
It is clear that the “self-awareness” generated through this path is conferred on the intelligent robot by designers and users in a given scenario; the cognitive subject is the human, not the machine. This is particularly common in affective or social robots. Users of social robots often have the illusion, as they spend time with them, that the robot has emotions and self-awareness. Psychologists refer to this phenomenon as “the empathy effect.” In general, the stronger the empathy effect, the more popular the robot is with consumers.
However, an examination of the actual content and regularity of the emotion, or self-awareness, a robot exhibits makes clear that it is only the user who believes the robot has manifested self-awareness and emotion in a given situation. It does not follow that such robots can generate self-awareness or emotion on their own.
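To make this concrete, consider a minimal, purely hypothetical sketch of how such a “designed” emotional display might work. Every event name and reply below is invented for illustration; no real product’s design is implied:

```python
# Hypothetical sketch: a social robot whose "emotional" behavior is
# entirely authored by its designers. The robot merely looks up scripted
# replies, so the cognitive subject remains the human designer.

SCRIPTED_REPLIES = {
    "user_smiles": "Seeing you smile makes me happy!",
    "user_returns": "I missed you while you were away.",
    "user_sighs": "You sound tired. I'm here with you.",
}

def respond(perceived_event: str) -> str:
    """Return a designer-authored 'emotional' reply for a detected event.

    No inner feeling or self-model is consulted: the 'emotion' exists
    only in the script.
    """
    return SCRIPTED_REPLIES.get(perceived_event, "Tell me more.")

if __name__ == "__main__":
    print(respond("user_smiles"))  # -> "Seeing you smile makes me happy!"
```

However convincing such replies feel to a user, the “emotion” resides in the designer’s script, not in the machine.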
Second, brain-like, or deep-learning, intelligent robots “grow” self-awareness through the path of self-activation. This path is supported by brain-like technology, which gives robots a “body” that can sense and form an independent “self-awareness.” Although the “self-awareness” here originates in the machine, it is a machine carrying a biochip, so neither techno-pessimists nor optimists can rule out the possibility of self-awareness.
For example, building on experiments by cognitive neuroscientists, Chinese Academy of Sciences academician Zeng Yi and his team successfully passed the mirror test for self-awareness with self-developed brain-like robots, establishing a biologically inspired model of robot self-awareness. Although the mirror test offers only one perspective for verifying robot self-awareness, it lets us look ahead: creating intelligent robots with human-like, biologically grounded self-awareness may be possible. This path carries the risk of unmanageable biochip information. Most such research remains confined to the laboratory; only brain-machine interface technology for brain disorders has been used clinically, and its use requires training the patient in advance, i.e., human assistance for the machine to demonstrate intelligence.
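Zeng Yi’s model is biologically detailed; what follows is only a schematic sketch, under simplifying assumptions, of one logic commonly used in robot mirror tests: if the motion observed in the mirror reliably matches the robot’s own motor commands, the moving figure is classified as “self.” The function execute_and_observe is a hypothetical stand-in for real actuation and visual motion classification:

```python
import random

def execute_and_observe(command: str) -> str:
    """Hypothetical stand-in for actuating a motor command and classifying
    the motion seen in the mirror. A faithful mirror reflects the command."""
    return command

def mirror_self_test(trials: int = 100, threshold: float = 0.9) -> bool:
    """Classify the mirror image as 'self' if observed motion matches the
    robot's own commands often enough (a contingency-detection criterion)."""
    matches = 0
    for _ in range(trials):
        command = random.choice(["raise_arm", "turn_head", "stay_still"])
        if execute_and_observe(command) == command:
            matches += 1
    return matches / trials >= threshold

if __name__ == "__main__":
    print("Mirror image classified as self:", mirror_self_test())
```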
Human-machine integration
While the emergence of self-awareness in intelligent robots may heighten human fears of losing control, integrated human-machine development can reduce the likelihood of such harm. Integrated human-machine intelligence is a new form of intelligence that takes full advantage of the respective strengths of humans and machines. At present, its five main types are big data artificial intelligence, internet-based group intelligence, perceptual and cross-media intelligence, human-machine hybrid intelligence, and autonomous intelligent systems. To guide the development of intelligent robots for good and avoid adverse social impacts, humans must take the initiative to respond and adopt measures.
First, we should carry out forward-looking ethical design. The technological entity theory holds that technology is an entity loaded with values. In particular, intelligent robots, now increasingly autonomous, can adapt to their environment and make autonomous decisions based on their algorithmic design, and the results they produce may be uncontrollable. This algorithmic complexity and uncontrollability lends support to the technological entity theory. The governance of technological risk thus requires a forward-looking model.
Although research on brain-like intelligence, consciousness, and self-awareness is still in its infancy, many developers share the goal of producing self-awareness autonomously. It is therefore important, while such research is in its infancy, to propose an ethical framework: a combined model that draws on the respective strengths of the “top-down” and “bottom-up” approaches, with “do no harm, do no evil” as the bottom-line principle for value-sensitive design. Human behaviors that the robot learns are then filtered through this bottom-line principle in later learning. If the path of contextual learning were adopted exclusively, uncontrollable consequences could follow once such robots reach the market. The bottom-line principle must therefore be combined with contextual learning to ensure that intelligent robots are safe and controllable.
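A minimal sketch of this combined model might look as follows. The action names and the effect-prediction table are hypothetical placeholders; the point is the structure, in which a hard-coded “top-down” bottom line screens “bottom-up” learned proposals:

```python
# "Top-down" bottom line: effects that no action may produce.
FORBIDDEN_EFFECTS = {"physical_harm", "deception", "privacy_violation"}

def predicted_effects(action: str) -> set[str]:
    """Hypothetical stand-in for a model predicting an action's effects."""
    table = {
        "hand_object_to_user": {"assistance"},
        "push_user_aside": {"physical_harm"},
        "record_conversation": {"privacy_violation"},
    }
    return table.get(action, set())

def bottom_line_filter(learned_proposals: list[str]) -> list[str]:
    """Keep only 'bottom-up' learned actions whose predicted effects
    violate no bottom-line rule ('do no harm, do no evil')."""
    return [a for a in learned_proposals
            if not (predicted_effects(a) & FORBIDDEN_EFFECTS)]

if __name__ == "__main__":
    proposals = ["hand_object_to_user", "push_user_aside", "record_conversation"]
    print(bottom_line_filter(proposals))  # -> ['hand_object_to_user']
```

The design point is the division of labor: contextual learning proposes, while the designed bottom line disposes, so deployment never rests on learning alone.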
Second, we should establish a human-led human-machine co-evolutionary relationship. Deep concern about the self-awareness of intelligent robots essentially reflects people’s fear of losing control. A human-machine relationship formed on this basis is conducive neither to the healthy development of artificial intelligence nor to the free and comprehensive development of human beings. We therefore need to explore a healthy and sustainable human-machine relationship: a human-led human-machine co-evolutionary relationship. This ensures that human beings do not lose their subjectivity in the co-evolution. Only by maintaining such subjectivity can humans rationally decide what robots can and should do and ensure that AI products evolve according to human values, so that humans take the initiative in the co-evolution and realize the development of intelligent robots for good.
At the same time, only a human-led approach enables human beings to take the initiative and to bear responsibility for negligence within the co-evolution. If intelligent robots become self-aware, a debate over their moral status and moral capacity will follow. In fact, the purpose of allowing robots moral judgment is to better serve humans and to minimize or avoid negative impacts, not to oblige robots to hold rights or bear responsibilities; the two can be separated. People are the bearers of responsibility. Human responsibility includes not only value-sensitive design at the risk-prevention stage, but also the development of rules and regulations to reduce losses after a risk has materialized.
Third, we should establish institutional mechanisms for integrated and coordinated “social-technical” development. If intelligent robots become self-aware, the change will not only be a disruptive technological innovation but will also affect existing mechanisms of social operation, with results that are difficult to predict. The advent of self-aware intelligent robots will require not only early intervention at the R&D stage, but also adaptive innovation in social institutions. To ensure that new intelligent robotic products are oriented toward goodness, they must be regulated in both R&D and use. At the same time, the government should actively guide third-party organizations to conduct sensitivity assessments of the privacy, security, and social justice issues that may arise, and should require companies to establish compensation or recall mechanisms for problematic products.
Fourth, we should actively guide public participation. While technological progress and application bring convenience to people’s lives, they also, to a certain extent, create fear. This technophobia stems from the complexity and uncertainty of the technology itself, especially in the case of intelligent robots that may develop self-awareness.
The fundamental way to eliminate technophobia is to involve the public in the technological governance of intelligent robots. Public participation can improve the social acceptance of intelligent robots: the more deeply the public is involved in technology development, the better they understand the technical details, and the more likely they are to accept self-aware intelligent robots. Public participation can also improve the accuracy of social risk assessment for intelligent robots. Appropriate public participation leads to more comprehensive and accurate identification of social risk factors, and the resulting preventive measures will be more effective. In short, broader public participation helps achieve the goal of AI for good, which is also the original intention behind proposing integrated human-machine intelligence.
Song Chunyan is an associate research fellow from the Institute of Philosophy at Hunan Academy of Social Sciences.