1
AI, are you really awake?
Artificial intelligence, will it wake up?
This is an old and novel topic.
The inherent unpredictability of "deep learning" deepens this anxiety.
The biological analogy of "neural network" makes "AI black box" even more worrying.
Recently, a Google engineer once again reignited the debate: has AI awakened?
In June 2022, Google engineer Blake Lemoine said that in his chats with the AI system LaMDA, he found its answers so personal that he believed the AI had "awakened".
He went on to write a 21-page investigation report, trying to win executive recognition of the AI's personhood.
Google's leadership, however, did not endorse his view.
Lemoine, like the protagonist of a science-fiction film, refused to give up: he published his chat logs with the AI, causing an uproar, and after The Washington Post followed up on the story, it exploded around the world.
Is AI really awake? The controversy continues.
No matter what the truth is, one thing is certain:
With the boost of deep learning and neural networks, artificial intelligence has become more and more "unpredictable".
2
That night, mankind fell asleep peacefully.
Talk of AI awakening recalls another event six years earlier.
In March 2016, humans and AI held an ultimate contest of intellect over the game of Go.
Before this, AI had already prevailed in earlier human-machine matches.
But human beings think that Go is an insurmountable ceiling for AI.
The total number of atoms in the observable universe is about 10^80, while the number of possible Go positions is roughly 2.08 × 10^170, so AlphaGo could not win by brute-force calculation alone. How, then, could creative human beings be defeated by AI? And if humans lost Go to AI, it would mean AI had, in effect, passed the Turing test.
Yet Lee Sedol lost the first three games one after another, shocking the whole world.
In the fourth game, Lee Sedol judged that there was still a chance hidden in the position and played White 78, an epic "Hand of God" move that embodied human intuition, calculation and creativity at their peak. It was also called the last stand of human dignity.
That year, an author wrote the passage above (revised here) and warned that "23 years from now, no one will be spared". Scientists had built mathematical models predicting that artificial intelligence may reach the intelligence level of an ordinary person by 2040, triggering an intelligence explosion.
In the face of ever more capable AI, it seems machines are about to replace humans, and AI keeps expanding rapidly.
Five years on, mankind has taken another great stride toward that "Matrix".
So, eighteen years from now, will no one really be spared?
3
The other side of AI: not stable enough.
The above two things are essentially concerns about AI awakening.
An AI with free will cannot be trusted and will eventually threaten mankind.
Hawking warned mankind to face up to the threat posed by artificial intelligence.
Bill Gates believes that artificial intelligence is "summoning the devil".
In 2001: A Space Odyssey, the supercomputer HAL 9000 mercilessly kills the humans aboard, deep in space.
In The Matrix, human beings are imprisoned by AI inside the Matrix.
Realistically, though, this distrust of an AI awakening is still only human speculation.
The cruelty and coldness depicted in science-fiction films has never been confirmed in reality.
But another "untrustworthiness" of AI is real.
The problem is not that AI is too smart or too conscious, but that it is not stable enough.
The consequences of this instability are really "terrible".
There are many examples of artificial intelligence "failure", which is the unstable side of AI.
This is the real "untrustworthy" place, and it is the real threat of AI to mankind.
We don't want to see the "awakening" of AI, but we can't accept the "rashness" of artificial intelligence.
4
What humans need is trusted AI.
Therefore, human beings need a "trusted AI".
It may not matter whether artificial intelligence is smart or stupid.
Whether AI evolves or degenerates may be just a false proposition for the time being.
What human beings need is a reliable assistant, a trustworthy machine assistant.
I am your creator; you must listen to me and must not run wild.
Asimov put forward the "Three Laws of Robotics" 70 years ago: a robot may not injure a human being; a robot must obey human orders unless they conflict with the First Law; and a robot must protect its own existence as long as doing so does not conflict with the first two laws.
This is the direction of human beings in AI ethical thinking.
It can be called the moral code of artificial intelligence society.
For human beings, trustworthiness is our most essential demand of AI.
If we trace artificial intelligence back to the Bayes-Laplace theorem, its goal was to solve the "inverse probability" problem, but in essence it is the problem of AI reliability.
If it is not trustworthy, AI may turn around and bite us.
At the very least, while AI accompanies us it must guarantee two things for human beings: the safety of our lives and the safety of our property.
Take autonomous driving as an example. Even if the AI computes with 99.99% accuracy, the remaining 0.01% error rate is still frightening. If a future city has 1,000,000 self-driving cars on the road, a 0.01% error rate still means 100 vehicles posing a hidden threat to human life.
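The arithmetic above can be checked in a couple of lines of Python. This is only a back-of-envelope sketch; the fleet size and error rate are illustrative figures, not real data.

```python
# Back-of-envelope check of the autonomous-driving example: a tiny error
# rate multiplied by a large fleet is still a large absolute number.
def expected_failures(fleet_size: int, error_rate: float) -> float:
    """Expected number of vehicles affected at the given error rate."""
    return fleet_size * error_rate

cars = 1_000_000   # hypothetical future fleet
rate = 0.0001      # a 0.01% error rate
print(expected_failures(cars, rate))  # 100.0 vehicles posing a hidden threat
```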
Without trusted AI, we cannot be sure whether artificial intelligence brings us technological progress or countless potential threats.
Trusted AI may sound unglamorous, but it is in fact the most valuable beacon in the field of artificial intelligence, and the direction technology companies are now pursuing.
5
What is trusted AI,
and what are these 16 AI engineers doing?
So, what is trusted AI?
Maybe many people don't know it yet, so we need to make this definition clear first.
We can start with a program called Burn, Genius Programmer 2: Trusted AI.
The first season of this variety show scored 8.0 on Douban and was a real eye-opener.
In the second season, 16 AI engineers were divided into four teams and shut in a "little black room" for four days and three nights to complete a 60-hour task challenge.
In the competition, they must battle fraud rings again and again, training a "trusted AI" that helps mankind defeat the fraudsters; in the end the strongest team wins.
Variety shows about program technology are very scarce in China and even in the world.
On the one hand, the program and code itself are too hard-core for most people to understand.
On the other hand, it is harder to script dramatic conflict than in other variety shows.
But Burn, Genius Programmer 2: Trusted AI builds the show's game logic around the real-world scenario of anti-fraud.
The 16 AI engineers must take on the challenges of fraudulent-transaction identification and joint anti-fraud.
Through AI-driven attack and defense, the whole anti-fraud chain is covered.
During the competition, the programmers accomplish "technical anti-fraud" by building "trusted AI".
The team whose algorithms and models achieve better recognition accuracy and coverage on the data wins.
It may not be as profound and grand as The Matrix, nor as thought-provoking as A.I. Artificial Intelligence, but Burn, Genius Programmer solves practical problems in real application scenarios.
Watch the whole program and you will understand what trusted AI is: building intelligent models on existing data and solving real problems with great stability.
Trusted AI technology is widely used, and anti-fraud is one of the important application scenarios.
Trusted AI is not so far away, it is close at hand. It is not so mysterious, it is often your little assistant.
Today, neural-network-based AI looks glamorous: it occupies the commanding heights of AI discourse, offers enormous room for creative and mysterious imagination, and is a holy land many AI practitioners look up to. Yet it faces many problems, such as lack of interpretability, poor robustness and excessive dependence on data, and it conceals many potential hazards.
Trusted AI exists to solve these "trust crisis" problems.
If neural-network-based AI has a strong idealistic streak, then AI technology grounded in big data is a down-to-earth, realistic doer.
6
Technical characteristics of trusted artificial intelligence
To truly understand the help of trusted AI to human beings, we need to start from the bottom of technology.
Trusted AI has four technical characteristics: robustness, privacy protection, interpretability and fairness.
01
Robustness
Robustness refers to the survivability of the system in abnormal and dangerous situations and the stability of the algorithm.
1. The former refers to a system's ability to resist attack, for example whether software can keep from crashing under input errors, disk failures, network overload or malicious attack. If an AI model is compared to the Great Wall, robustness means the Wall does not easily collapse in bad weather (such as a typhoon) or natural disasters (such as an earthquake).
2. The latter refers to the stability of the algorithm inside the AI model. If adding a slight perturbation to a photo of a panda can easily fool the "eyes" of an AI model, its robustness is poor. Likewise, in fraudulent transactions, as criminal techniques keep escalating, a model trained on past data faces stability tests from new risk data and must be iterated constantly to preserve its ability to analyze and recognize.
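The "perturbed panda" failure mode can be illustrated with a toy model. This is a minimal sketch with made-up numbers, not any production system: for a linear classifier, nudging the input against the sign of the weights is the worst-case small perturbation, and it flips the prediction.

```python
import numpy as np

# Toy "panda detector": a linear classifier with score = w . x + b.
w = np.array([1.0, -2.0, 0.5])   # made-up weights
b = 0.0

def predict(x):
    return 1 if x @ w + b > 0 else 0   # 1 = "panda", 0 = "not panda"

x = np.array([0.3, -0.2, 0.1])         # a clean input, classified as "panda"

# For a linear model the gradient of the score w.r.t. x is just w, so the
# most damaging small perturbation moves every coordinate against sign(w)
# (the same idea behind FGSM-style adversarial examples).
eps = 0.5
x_adv = x - eps * np.sign(w)

print(predict(x), predict(x_adv))      # 1 0  -- the label flips
```

A robust model is one whose prediction does not flip under such small, targeted nudges.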
Take Alipay as an example. Alipay handles hundreds of millions of transactions every day, and its adversaries are not small-time crooks but professional criminal gangs. They may attack in two ways.
To keep funds safe, Ant Group introduced "intelligent adversarial attack-and-defense" technology, which can simulate attacks, train against them, and patch risk knowledge and models in advance. With this technology the robustness of the AI model is greatly improved: by sparring against itself, it can both "attack" more intelligently and "defend" more safely.
02
Privacy protection
Traditional data-protection methods objectively create "data islands", which hinder collaboration in fields such as healthcare and finance and also constrain the development of the AI industry.
Privacy-preserving computation techniques that extend the value of data while keeping it protected have therefore become especially important.
In the field of artificial intelligence, federated learning is a new machine-learning paradigm proposed to solve the data-island problem. On the premise that no participant discloses its raw data, that is, the data never leaves its own domain, multiple parties jointly build a model, making the data "usable but invisible": the data does not move, yet its value does.
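The "data does not leave its domain" idea can be sketched in a few lines. This is a minimal federated-averaging illustration on synthetic data; the parties, their sizes and the linear model are all assumptions, not any real deployment. Each party fits a model locally, and only the fitted parameters are shared and averaged.

```python
import numpy as np

rng = np.random.default_rng(42)
true_w = np.array([2.0, -1.0])          # ground truth used to synthesize data

def make_local_data(n):
    """One participant's private dataset; it never leaves this 'domain'."""
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n)
    return X, y

def local_fit(X, y):
    """Ordinary least squares on a single party's private data."""
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Three participants of different sizes; the server weights by sample count.
parties = [make_local_data(n) for n in (50, 80, 120)]
local_ws = [local_fit(X, y) for X, y in parties]
sizes = [len(y) for _, y in parties]

# The server only ever sees the fitted weights, never the raw X or y.
global_w = np.average(local_ws, axis=0, weights=sizes)
print(np.round(global_w, 2))            # close to the true weights [2, -1]
```

Real federated learning adds secure aggregation and iterative training rounds on top of this basic parameter-sharing pattern.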
03
Interpretability
Humans always have an inexplicable fear of all unknown things.
If the behavior of artificial intelligence cannot be explained, if there are only results and no process, then it is like a blind box: you never know whether you have released Aladdin's genie or opened Pandora's box.
AI model is an important basis for many important decisions, and its thinking process cannot be a black box in many applications.
Humans want to know the logic behind the model, gain new knowledge from it, and be able to hit the brakes when it goes wrong, ensuring that both the process and the results of AI "thinking" stay compliant.
This requires a combination of data-driven and model reasoning capabilities to produce interpretable results.
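One simple way to open the "blind box" is permutation importance. This is a minimal, generic sketch on synthetic data, not a description of any specific product: shuffle one feature at a time and see how much the model's error grows.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
# Synthetic target: feature 0 matters a lot, feature 2 a little, feature 1 not at all.
y = 3 * X[:, 0] + 0.1 * X[:, 2] + rng.normal(scale=0.1, size=500)

w, *_ = np.linalg.lstsq(X, y, rcond=None)   # stand-in "black box" model

def mse(X_in, y_in):
    return np.mean((X_in @ w - y_in) ** 2)

baseline = mse(X, y)
importances = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])    # destroy feature j's information
    importances.append(mse(Xp, y) - baseline)
    # A large error increase means the model genuinely relies on feature j.
    print(f"feature {j}: importance = {importances[-1]:.3f}")
```

Techniques like this give a process, not just a result: they show which inputs a decision actually depended on.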
04
Fairness
AI fairness is an important part of trusted AI.
Only by realizing "fairness" can we really promote technology to benefit the whole society.
On the one hand, fairness needs to pay attention to vulnerable groups, give consideration to the development of backward areas, optimize AI under the principle of social ethics, and let the elderly, disabled people and users in underdeveloped areas enjoy the value of the digital economy era through AI technology.
On the other hand, fairness means thinking about how, technically, to reduce the decision bias that AI may pick up from its algorithms, data and other factors.
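A first fairness check on an AI decision system can be as simple as comparing approval rates across groups. The toy decisions below are invented purely for illustration; the gap they expose is known as the demographic parity difference.

```python
# 1 = approved, 0 = rejected, split by a sensitive attribute (e.g. region or age group).
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 6/8 approved
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # 3/8 approved

def positive_rate(decisions):
    return sum(decisions) / len(decisions)

# Demographic parity difference: the gap in positive-decision rates between groups.
gap = abs(positive_rate(group_a) - positive_rate(group_b))
print(f"demographic parity gap: {gap:.3f}")  # 0.375 -- large enough to audit
```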
Robustness, privacy protection, interpretability and fairness.
These are the four basic principles of trusted AI.
Nowadays, the development of trusted AI has become a global consensus.
Leading technology companies in particular, serving vast numbers of users, cannot afford mistakes.
Microsoft, Google, Ant, JD.COM, Tencent, Megvii and other technology companies are actively researching and exploring trusted AI.
Among them, Ant Group holds many technical advantages in trusted AI; starting in 2015, it has spent seven years accumulating trusted-AI technology.
According to the 2021 Patent Report on Key Technologies of Artificial Intelligence Security and Trustworthiness issued by the patent agency IPRdaily, Alipay, a subsidiary of Ant Group, ranks first worldwide in both patent applications and grants in this field.
7
Exploration on the Application of Trusted Artificial Intelligence
Based on the above characteristics of trusted AI, there are various application scenarios.
Artificial intelligence is widely used in healthcare, education, industry, finance and other fields, yet problems such as algorithmic insecurity, data abuse and data discrimination keep emerging. The main contradiction of AI technology today has shifted to the one between people's growing demand for AI applications and AI's unreliable, unstable development.
In 2018, IBM developed several trusted-AI tools to evaluate and test the fairness, robustness, interpretability, accountability and value alignment of AI products during R&D. IBM later donated these tools to the Linux Foundation as open-source projects, helping developers and data scientists build trustworthy, secure and interpretable AI systems.
As one of the pioneers in the field of trusted AI, Ant has also made many explorations.
The best practical showcase of Ant's trusted-AI technology is a set of intelligent risk-control solutions named IMAGE. The system applies trusted-AI technology to secure the risk-control business, with very good results.
Alipay's asset loss rate is held at 0.098 in ten thousand, and several world-class problems have been solved in its risk-control scenarios.
Another example is Alipay's "wake-up hotline": from the system identifying that a user faces fraud risk to an AI robot placing a "wake-up call" to the user takes no more than 0.1 seconds.
(Ant Group's IMAGE risk-control system, built on trusted AI)
In addition, Ant has practical applications on the fairness side of trusted AI.
The "image slider CAPTCHA" widely used across the industry has long been a huge obstacle for visually impaired people trying to access digital services, yet many apps must keep CAPTCHA in place to block automated bulk operations.
Ant therefore developed an "air gesture" CAPTCHA scheme that uses behavior-recognition technology to help visually impaired users pass the CAPTCHA checkpoint.
The application exploration of trusted AI will not make AI technology lose its possibility.
It is more like a binding ethics treaty, keeping AI on the right track.
8
18 years later, is it true that no one is spared?
Let's go back to the original question.
Will AI really wake up?
A hundred years ago, it was hard for human beings to imagine the highly digital world we live in today.
Then, what will happen to artificial intelligence in a hundred years, we really can't predict.
However, whether AI is a blessing or a curse for mankind is a vital question bound up with human destiny.
According to the current development model of AI, the future AI may be divided into two factions:
One is autonomous intelligent AI, and the other is trustworthy AI following human beings.
Of course, some people are still asking, does AI really have independent will?
That depends on how science explains it. An AI system can settle into something that looks like "self-awareness"; the difference lies only in how deep and robust that state is, which may also explain how AlphaZero could train itself into a Go master. And if that goes on? This faction of AI may come to pose what we regard as a "threat" to mankind.
The other faction, trusted AI, will keep improving itself within the framework of the four basic principles, help mankind solve more practical problems, become a reliable human assistant, and coexist with us. But will it always help and protect mankind?
No matter how the future unfolds, and whether different technological paths bring tragedy or comedy, one thing is certain:
AI technology is advancing on every front. Whether trusted AI or autonomous intelligent AI, it will eventually reach every aspect of our lives, permeate every corner of the world, and in many ways replace the "useless people".
However worried we are, AI will only grow stronger and stronger, while human evolution will look painfully slow by comparison, even like degeneration.
So, eighteen years from now, how many people will be spared?