In the 1940s, the science-fiction writer Isaac Asimov put forward the Three Laws of Robotics: a robot may not harm a human being, or through inaction allow a human being to come to harm; a robot must obey human orders, unless such orders conflict with the first law; and a robot must protect its own existence, unless doing so conflicts with the first two laws. Today, AI is gradually moving from science fiction into reality, and people are increasingly alert to the harm it could cause, hoping to formulate guidelines that steer the development of AI technology and industry in directions beneficial to society
1. Linking different AI guideline proposals in a harmonious and complementary way
AI is both a scientific field refined over decades and a disruptive technology that will shape the future. The research and development of artificial intelligence bears not only on a nation's science and technology, economic development, and social stability, but also on its international influence and standing in the scientific, technological, and industrial arena
AI brings opportunities, but also potential risks and hidden dangers. For example, when minimal noise is introduced into the input of a widely used deep neural network (such as changing a single key pixel of an input image), the network's recognition and prediction results can fail catastrophically (for example, recognizing a frog as a truck, or a turtle as a gun). Without adequate risk assessment, emerging technologies are likely to introduce unpredictable security hazards even as they bring opportunities for social development
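The fragility described above is the adversarial-example phenomenon. A minimal sketch in the spirit of the fast-gradient-sign method illustrates it on a toy linear classifier rather than a real deep network; the weights, input values, and perturbation budget below are all illustrative assumptions, not taken from any actual system:

```python
import numpy as np

# Toy "model": a linear classifier over a 4-pixel image, two classes.
# The weights are illustrative, not from a trained network.
W = np.array([[ 1.0, -1.0,  0.5, -0.5],
              [-1.0,  1.0, -0.5,  0.5]])

def predict(x):
    """Return the index of the class with the highest score."""
    return int(np.argmax(W @ x))

x = np.array([0.6, 0.4, 0.6, 0.4])  # original input, classified as class 0
print(predict(x))                    # -> 0

# FGSM-style attack: nudge every pixel by at most eps in the direction
# that raises the score of the wrong class (class 1) over class 0.
eps = 0.15
grad = W[1] - W[0]                   # gradient of (score_1 - score_0) w.r.t. x
x_adv = x + eps * np.sign(grad)

print(np.max(np.abs(x_adv - x)))     # perturbation is bounded by eps
print(predict(x_adv))                # -> 1: the prediction flips
```

Even though no pixel changed by more than 0.15, the classification flips; in high-dimensional deep networks the same mechanism allows perturbations far too small for a human to notice.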
For the development of artificial intelligence, the primary problem is choosing the right path. Innovation, value, and ethics form an iron triangle: while innovative technology brings potential value to society, it may also carry unpredictable risks and pose major challenges to social ethics. Therefore, while developing AI technology to benefit the economy and society, it is essential to attend to AI's social dimension and to safeguard the healthy development of AI science, technology, and industry from the perspectives of social risk, ethical norms, and governance
To ensure the development of beneficial artificial intelligence, governments, non-governmental organizations, scientific societies, research institutions, non-profit organizations, and enterprises around the world have put forward guidelines for AI development, including the British government, the Institute of Electrical and Electronics Engineers (IEEE), and the International Labour Organization. So far, nearly 40 AI guideline proposals are visible through public channels, touching on topics such as human-centricity, cooperation, sharing, fairness, transparency, privacy, safety, trust, rights, bias, education, and general AI. For example, the Asilomar AI Principles advocated by the Future of Life Institute in the United States and the AI code proposed by the British House of Lords both hope to lead AI development by taking the lead in setting AI ethics and norms
In fact, the AI guidelines currently proposed by any single country, institution, or organization cover only a fraction of the relevant topics (taken together, existing guideline proposals involve more than 50 main topics). Although many proposals have distinctive features and considerations not covered by other schemes, building a single unified, comprehensive, and perfect set of AI guidelines is both difficult and unnecessary. It is difficult because AI science and technology, AI norms, and the connotation and extension of ethics are all still evolving; it is unnecessary because each country's, organization's, and institution's proposal reflects its own circumstances, with special considerations tied to its goals, environment, culture, and ethical traditions
The author believes that the truly valuable approach is to recognize the significance of each country's and institution's proposal within its own scope. To better realize global AI governance, the focus should not be on unifying AI guidelines, but on linking the different guideline proposals in a more harmonious and complementary way, so that they are orderly and consistent within their local scope (a country, an organization, and so on), while at the global level different proposals can still interact and negotiate, ultimately achieving harmony, complementarity, optimization, and symbiosis
2. Reconciling the differences between governments and enterprises in formulating guidelines
Quantitative analysis of the different AI guideline proposals shows that, relatively speaking, governments attach great importance to the potential risks and safety of AI, while enterprises pay comparatively weak attention to them. This reflects an underestimation of potential risks and safety hazards in the process of AI innovation. For example, in assessing AI risks, some enterprises hold that if the overall expected benefits far exceed the foreseeable risks and adverse factors, the relevant exploration may proceed. From an academic perspective, however, deciding whether an enterprise should act based merely on the quantitative difference between potential benefits and risks is a dangerous stance; moreover, if predictions and judgments are made only from the enterprise's own perspective, without comprehensive research and analysis from the standpoint of society as a whole, such limited thinking and action may well cause great harm to society
Therefore, we should recognize the gap between the government's level of attention and enterprises' comparatively insufficient consideration, and take effective measures, such as guidance, supervision, and the establishment of a risk and safety assessment system for AI products and services, to bridge the gap between government expectations and enterprise practice
At the same time, some guidelines may underestimate the risks posed by particular technical approaches. For example, some proposals confine their risk discussion to artificial general intelligence (AGI, in which all cognitive functions reach the human level) and superintelligence (ASI, in which all cognitive functions exceed the human level)
Discussion and related research by academic institutions should provide strong support for government decision-making in this regard. For example, the Asilomar AI Principles propose that superintelligence should be developed only in the service of all humankind, and the University of Cambridge has recently proposed and is conducting a research project on the realization pathways and potential risks of artificial general intelligence
3. Balancing the development of specialized AI and general AI
In the top-level design of AI, should general AI be developed, or should development be limited to specialized AI for particular domains? This is a major point of divergence among the AI guideline proposals
For example, the Confederation of Laboratories for Artificial Intelligence Research in Europe (CLAIRE) has explicitly proposed limiting the development of human-level intelligence, general intelligence, and superintelligence, and Germany's AI program likewise focuses on developing specialized AI. However, the Asilomar AI Principles and the guidelines proposed by OpenAI and other institutions explicitly call for general AI to be fully considered and developed
In fact, developing only specialized intelligence may not avoid risk entirely, because specialized systems are likely to encounter unexpected scenarios in application, and a degree of general capability may improve a system's robustness and adaptability. We should therefore link the considerations of different countries, organizations, and institutions to make their top-level designs complementary
The diversity of human cognitive functions and the complexity of AI application scenarios make it extremely difficult to model the risks, safety, and ethics of AI perfectly, especially where AI products and services interact with human groups of different cultural backgrounds. Every country should formulate AI guidelines suited to its actual social, scientific, technological, economic, and cultural needs. More critical still, governments, academic organizations, and industry around the world need to interact and collaborate deeply, carry out strategic design for AI development that benefits society as a whole, systematically evaluate and anticipate its potential social risks and ethical challenges, and build development guidelines that are globally harmonious, complementary, optimizing, and symbiotic
(Author: Zeng Yi, researcher and deputy director of the Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences)