Xinyi Technology's Wang Chunping: Values-First Development of Artificial Intelligence Algorithms
On September 27th, the report launch and policy seminar on "Promoting Gender Equality in Artificial Intelligence Algorithms", sponsored by the Mana Data Foundation, was held in Shanghai. Guests including a Senior Project Officer of UN Women China, the Vice Chairman of the Shanghai Pudong New Area Women's Federation, the Secretary-General of the Mana Data Foundation, the foundation's research experts Zhou and Kuang Kun, the special convener for "Technology for Good" of the Xiaomi Group Science and Technology Committee, the chief algorithm scientist of Xinyi Technology Group, and a solution architect from the data intelligence team of Si Workshop (China) attended the meeting to discuss how to promote gender equality in artificial intelligence algorithms.

As a representative of the financial technology enterprise Xinyi Technology Group and a female practitioner with outstanding achievements in artificial intelligence, Dr. Wang Chunping set out her views and suggestions on data ethics and gender equality in the research and application of artificial intelligence.

The era of artificial intelligence is accelerating, and promoting gender equality is a precondition for its development.

Since the concept of artificial intelligence was put forward in 1956, the rapid development of this field has brought sweeping changes to human society in just over 60 years. An artificial intelligence algorithm is, in essence, an opinion expressed in mathematics or computer code, and its predictions are determined by the algorithm model and the data fed into it.

As the era of artificial intelligence accelerates, human beings will live in a world where algorithms are everywhere. As algorithmic decision-making begins to intervene in and even dominate human social affairs, algorithms will have an inestimable impact on human life and the future. For this reason, people have begun to pay attention to the ethical risks that algorithmic bias may cause. From the perspective of gender alone, because the field of artificial intelligence is still dominated by men, its everyday applications can easily and unconsciously give rise to disputes over "gender discrimination".

According to the report "Promoting Gender Equality in Artificial Intelligence Algorithms" released the same day (hereinafter, the "Report"), gender discrimination appears in many application scenarios of artificial intelligence algorithms. On one open AI platform, for example, a photo of a man holding a fruit basket was detected as a woman by the face recognition service, yet when the image was cropped to the head alone it was detected as a man. In employment, Amazon's automated recruitment algorithm was found in 2018 to give lower scores to resumes containing the keyword "female", reducing women's success rate in job applications; the company ultimately abandoned the algorithm.
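
The recruitment case above is usually detected with a simple group-level audit of the model's decisions. Below is a minimal sketch of such an audit, comparing selection rates by gender; the data and numbers are entirely hypothetical and not taken from the Report.

```python
# Minimal bias-audit sketch (hypothetical data): compare a screening model's
# selection rates across gender groups to surface the skew described above.
from collections import defaultdict

# Each record: (gender, model_decision) where 1 = "advance to interview".
decisions = [
    ("female", 0), ("female", 1), ("female", 0), ("female", 0),
    ("male", 1), ("male", 1), ("male", 0), ("male", 1),
]

totals, selected = defaultdict(int), defaultdict(int)
for gender, decision in decisions:
    totals[gender] += 1
    selected[gender] += decision

rates = {g: selected[g] / totals[g] for g in totals}
print("selection rates:", rates)
# Demographic-parity gap: a large absolute value flags the kind of bias the Report describes.
print("parity gap:", rates["male"] - rates["female"])
```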

Wang Chunping, chief scientist of Xinyi Technology, said: "The foundation of artificial intelligence algorithms is data, but in real life, for various reasons, the distribution of much of the accumulated data is biased, which can allow social prejudices to seep into the algorithm. As a female practitioner of artificial intelligence algorithms, I believe there are many ways to eliminate these prejudices."

Eliminating prejudice and discrimination is a new topic for data and algorithm practitioners.

In 2019, the National Professional Committee of New Generation Artificial Intelligence Governance issued "The Governance Principles of New Generation Artificial Intelligence: Developing Responsible Artificial Intelligence", which put forward a framework and action guide for artificial intelligence governance and explicitly called for eliminating prejudice and discrimination in data acquisition, algorithm design, technology development, and product development and application. Recently, the committee also issued the "Code of Ethics for a New Generation of Artificial Intelligence", whose Article 13 places special emphasis on avoiding prejudice and discrimination: in data collection and algorithm development, ethical review should be strengthened, differentiated needs fully considered, possible data and algorithm bias avoided, and the inclusiveness, fairness and non-discrimination of artificial intelligence systems pursued.

"The problem of gender discrimination exists in the algorithm, mainly because the data set of training AI reflects the gender bias existing in human society, and algorithm engineer lacks understanding of this problem and does not include solving gender bias in development requirements, thus making the algorithm magnify gender discrimination." Kuang Kun, an expert in the research group of Mana Data Foundation, said.

According to a special social survey cited in the Report, 58% of artificial intelligence algorithm practitioners are unaware that gender bias exists in algorithms, and 73% are unaware of malicious algorithms that specifically target women. Practitioners' awareness of, and ability to achieve, algorithmic gender equality urgently need to improve.

Dr. Wang Chunping said: "To promote gender equality as far as possible in artificial intelligence algorithms, we believe it is necessary to start with developers' awareness and the environment they work in. Within Xinyi Technology Group, for example, we have many company policies to guarantee and promote equal opportunities for employees of different genders. We also run fairly complete training courses on artificial intelligence and digital applications, covering business scenarios, algorithm learning and engineering implementation, which are open to employees in all positions; we help employees interested in artificial intelligence algorithms take part in experimental innovation projects as much as possible and provide them with equal job opportunities. We believe that ensuring female employees receive fair career opportunities, creating an environment for the algorithm development team without obvious gender differences, and consciously eliminating gender bias among algorithm engineers all help achieve gender equality, as far as possible, in the application of artificial intelligence algorithms."

Balancing business and ethics: a values-first future for artificial intelligence algorithms

The distinctive operating logic of artificial intelligence algorithms has brought great changes to the structure of social life and widened the "digital gap" between decision makers and those affected by their decisions. That logic is profoundly changing previous modes of production and ways of living, and has come to steer human behavior, which means human behavior can now be predicted, and even changed, by means of data and algorithms.

For enterprises, the accuracy of artificial intelligence algorithms provides precise predictions for business decisions; at the same time, the ethical problems and gender discrimination that can accompany algorithm development are issues enterprises must also consider.

Dr. Wang Chunping believes that, to date, artificial intelligence algorithms in application still capture correlation: the factors considered during development are merely correlated with the decision outcome, and in many applications gender can be just such a correlated, confounding term. How to remove prejudice and treat different genders fairly without hurting the accuracy of the final business judgment is a great challenge. The problem has drawn attention, and there are attempts to address it from different angles, such as using causal inference theory to find the factors that have a direct causal relationship with the predicted outcome, or constructing feature representation spaces in new ways so that biased factors such as gender differences are suppressed.
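
As one simple illustration in the spirit of the feature-level ideas mentioned above (and not Xinyi Technology's actual method), the sketch below removes the linearly gender-predictable component from each feature before model training; all data and variable names are hypothetical.

```python
# Minimal sketch: "suppress" a sensitive factor by residualizing each feature
# against it, i.e. removing the component linearly predictable from gender.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
gender = rng.integers(0, 2, size=n).astype(float)   # hypothetical sensitive attribute
income = 3.0 * gender + rng.normal(size=n)           # feature correlated with gender
tenure = rng.normal(size=n)                          # feature unrelated to gender
X = np.column_stack([income, tenure])

# Fit intercept + gender to each feature and subtract that linear prediction.
g = np.column_stack([np.ones(n), gender])
beta, *_ = np.linalg.lstsq(g, X, rcond=None)
X_debiased = X - g @ beta

# After residualization the features carry (almost) no linear gender signal,
# so a downstream model cannot pick gender back up through this channel.
print("correlation before:", np.corrcoef(gender, X[:, 0])[0, 1].round(3))
print("correlation after: ", np.corrcoef(gender, X_debiased[:, 0])[0, 1].round(3))
```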

"Although many times, based on business behavior, it is difficult for us to determine the boundary between preference and prejudice in some cases, in the process of algorithm innovation and development, if algorithm engineer, as an artificial intelligence, has the right values, he can examine and detect the existence of discrimination and prejudice as soon as possible, and use the continuous updating of the algorithm to adjust the problems it brings. This is our mission and responsibility as an algorithm developer and an important prerequisite for the development of artificial intelligence algorithms. " Wang Chunping said.

As one of China's leading financial technology enterprises, Xinyi Technology has long been committed to the mutual advancement of technology application and social development, especially in the field of artificial intelligence, and works to promote gender equality in algorithms by incorporating a gender-diverse perspective into the development process. Internally, the company promotes objective understanding of and rational reflection on technological development so as to better advance science and technology. In the future, Xinyi Technology will continue to uphold the right values through continuous technological innovation and help promote gender equality in the era of artificial intelligence.