At present, artificial intelligence can analyze and process data far faster than human analysts and can uncover behavioral patterns and regularities that the human brain struggles to find, yet it also makes mistakes that a human brain would never make. The reason is that machine learning algorithms depend on large amounts of data for training: data is to artificial intelligence what blood is to the human body, and obtaining good data is often harder than designing the algorithm itself. If the dataset is too small, if the data is inaccurate, or if it has been maliciously tampered with by an adversary, the effectiveness of machine learning drops sharply, and the system can be misled into false judgments. In national security and military affairs in particular, harmful data can have grave consequences. Once an adversary gains access to an artificial intelligence system's training data, it can design data traps: deceiving the system, feeding it false data, and inducing it to learn the wrong lessons. Worse still, because the internal workings of machine learning algorithms are opaque, people usually cannot tell why an artificial intelligence made a mistake; unless the consequences are catastrophic, its errors may go undetected altogether, leaving operators at a loss when the system has fallen into a data trap.
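To make the idea of a data trap concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than a description of any real system: a toy two-class dataset, a deliberately simple nearest-centroid "model", and an adversary who plants fabricated, mislabeled records in the training set. The point is only to show how poisoned training data quietly drags a model's learned representation away from reality while a clean test set reveals the damage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic classes in a 2-D feature space (illustrative data only).
n = 500
class0 = rng.normal(loc=[-1.0, -1.0], scale=0.7, size=(n, 2))
class1 = rng.normal(loc=[1.0, 1.0], scale=0.7, size=(n, 2))
X = np.vstack([class0, class1])
y = np.array([0] * n + [1] * n)

# Shuffle and hold out a clean test set that the adversary cannot touch.
idx = rng.permutation(len(y))
train, test = idx[:800], idx[800:]

def fit_centroids(X, y):
    """A deliberately simple 'model': one mean vector (centroid) per class."""
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def accuracy(centroids, X, y):
    """Classify each sample by its nearest centroid and score against truth."""
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return float((dists.argmin(axis=1) == y).mean())

# 1) Training on clean data.
clean_model = fit_centroids(X[train], y[train])

# 2) Training on poisoned data: the adversary injects fabricated records that
#    look like extreme class-1 samples but carry the class-0 label.
fake_X = np.full((300, 2), 4.0)
fake_y = np.zeros(300, dtype=int)
X_poisoned = np.vstack([X[train], fake_X])
y_poisoned = np.concatenate([y[train], fake_y])
poisoned_model = fit_centroids(X_poisoned, y_poisoned)

print("test accuracy, clean training   :", accuracy(clean_model, X[test], y[test]))
print("test accuracy, poisoned training:", accuracy(poisoned_model, X[test], y[test]))
```

Under these toy assumptions the poisoned model's accuracy collapses toward chance, even though nothing about the training procedure itself changed; only the data was trapped.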
So how can data traps be avoided? First, human intervention is needed. Only people can decide how data should be classified and labeled, so one cannot simply hand data to a machine algorithm and hope that artificial intelligence will solve every problem without human involvement. Supplying large volumes of data without an "intelligent brain" capable of discriminating among them yields only mechanical answers, not the correct answers people actually need. Human intervention not only ensures that the artificial intelligence receives correct data but also checks whether it is learning from correct data. Second, build a multidisciplinary team. The "intelligent brain" that can avoid data traps must come from an interdisciplinary team in which computer scientists, programmers, big-data specialists, and artificial intelligence experts work closely with experienced professionals in the relevant domains. As artificial intelligence matures, it will be able to deliver real-time information directly to combatants, which in turn requires combatants to feed their observations back to the "intelligent brain" team so that the data can be updated and corrected in time. Third, cross-check data from multiple sources. A single sensor observing a target is easily deceived by an adversary, so visual, radar, infrared, and other sensors should be trained on the same target, and the data from these different sources should be compared and verified against one another to separate the genuine from the false and expose hidden deception (a minimal sketch follows this paragraph). In addition, classify and label the data. Even today's advanced artificial intelligence can make absurd, low-level mistakes, such as mistaking a toothbrush for a baseball bat. Raw, unlabeled data therefore cannot simply be handed to a machine learning algorithm, especially in the initial stage of training; to check whether the system's conclusions are correct and to ensure the accuracy and efficiency of artificial intelligence-aided decision-making, the algorithm must be given genuine data that has been correctly classified and labeled. Finally, conduct adversarial learning. Set up an intelligent "blue army", develop artificial intelligence opponents, and let rival artificial intelligences compete with one another, sharpening their ability to recognize data traps through this adversarial training and prevailing through wisdom (also sketched below). In short, today's artificial intelligence cannot do without human oversight, and avoiding data traps ultimately depends on human experience and wisdom.
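The third measure, cross-checking multi-source data, can be illustrated with a short sketch. The sensor names, report labels, and the two-thirds-plus agreement threshold below are illustrative assumptions, not a prescribed fusion rule: the idea is simply that a classification is accepted only when enough independent sources agree, and disagreement is escalated to a human operator rather than decided automatically.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class SensorReport:
    source: str   # e.g. "vision", "radar", "infrared"
    label: str    # what this sensor believes the target is

def cross_check(reports, min_sources=2, min_agreement=0.75):
    """Accept a label only when enough independent sources agree on it.

    Disagreement is treated as a possible data trap (spoofing, decoys,
    tampered feeds) and escalated to a human operator.
    """
    if len(reports) < min_sources:
        return ("escalate", "too few independent sources")
    votes = Counter(r.label for r in reports)
    label, count = votes.most_common(1)[0]
    if count / len(reports) >= min_agreement:
        return ("accept", label)
    return ("escalate", f"sources disagree: {dict(votes)}")

# Illustrative reports about the same target from three sensor types.
consistent = [
    SensorReport("vision", "truck"),
    SensorReport("radar", "truck"),
    SensorReport("infrared", "truck"),
]
suspicious = [
    SensorReport("vision", "truck"),   # possibly fooled by a visual decoy
    SensorReport("radar", "decoy"),
    SensorReport("infrared", "decoy"),
]

print(cross_check(consistent))   # ('accept', 'truck')
print(cross_check(suspicious))   # escalated: the sources disagree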
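The final measure, adversarial learning against an intelligent "blue army", can likewise be sketched as a simple training loop. The model (a logistic-regression classifier), the gradient-sign attack used as the red team's routine, and the toy data are all assumptions made for illustration; the sketch only shows the shape of the procedure, in which the attacker perturbs samples to maximize the current model's loss and the defender then trains on a mix of clean and attacked samples.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-class data standing in for the defender's training set (illustrative only).
n = 1000
X = np.vstack([rng.normal(-1.0, 0.8, size=(n // 2, 2)),
               rng.normal(+1.0, 0.8, size=(n // 2, 2))])
y = np.concatenate([np.zeros(n // 2), np.ones(n // 2)])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def red_team(w, b, X, y, eps=0.5):
    """Blue-army routine: perturb each sample in the direction that most
    increases the defender's loss (a gradient-sign attack on a linear model)."""
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]   # d(logistic loss) / d(input)
    return X + eps * np.sign(grad_x)

# Adversarial training loop: each round the red team attacks the current model,
# and the defender trains on clean plus attacked samples.
w, b = np.zeros(2), 0.0
for _ in range(300):
    X_adv = red_team(w, b, X, y)
    Xb = np.vstack([X, X_adv])
    yb = np.concatenate([y, y])
    p = sigmoid(Xb @ w + b)
    w -= 0.1 * (Xb.T @ (p - yb)) / len(yb)
    b -= 0.1 * float(np.mean(p - yb))

def accuracy(X_eval, y_eval):
    return float(((sigmoid(X_eval @ w + b) > 0.5) == y_eval).mean())

print("accuracy on clean data   :", accuracy(X, y))
print("accuracy under the attack:", accuracy(red_team(w, b, X, y), y))
```

The loop is the point rather than the numbers: by repeatedly facing an opponent designed to trap it, the model is exercised on exactly the kind of data it would otherwise never see during training.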