Interview with Michael Saunders, the "father of algorithms": the future breakthrough point for artificial intelligence may be autonomous driving.
According to AI Technology Review, on April 25th, Michael Saunders, an honorary professor at Stanford University, an honorary member of the Royal Society of New Zealand, and a world-class algorithm expert, delivered a speech titled "Algorithms Based on Constrained Optimization: The Benefits of General-Purpose Software" at the Global Smart Business Summit, jointly organized by Tuya Smart, the financial media outlet New Fortune, and well-known media in the artificial intelligence field.

Michael Saunders is an honorary professor of management science and engineering at Stanford University, a mathematician, a world-class algorithm expert, a Fellow of the Society for Industrial and Applied Mathematics (SIAM), an honorary member of the Royal Society of New Zealand, and a member of the Stanford University Invention Hall of Fame.

Professor Michael Saunders studied under Gene Golub, the father of scientific computing, and received his doctorate in computer science from Stanford University in 1972. A leading figure in the computing field, he has won the Orchard-Hays Prize from the Mathematical Programming Society and the SIAM Linear Algebra Prize from the Society for Industrial and Applied Mathematics. His matrix-equation and optimization algorithms are widely used around the world, and he has provided consulting services for General Electric and Boeing.

Professor Michael Saunders' research fields include artificial intelligence, large-scale scientific computing, big data analysis, systems optimization, sparse matrix methods, software engineering, AIoT, and more.

In his view, interconnection in the AIoT industry has always been an optimization problem. Tuya Smart, the organizer of this conference, has likewise introduced such technology to address the problem of data silos, and Professor Saunders has made outstanding contributions in this field.

The following is a summary of Professor Michael Saunders' speech and interview, edited by AI Technology Review without changing the original meaning:

Hello everyone! Thank you for coming today. I am very happy to come to China. Sorry, I'm from New Zealand. I can speak a little French, Spanish and English, but Chinese is much more difficult.

Today I want to talk to you about "constrained optimization". Before that, I want to talk about why I went to Stanford University to participate in computer-related research, and then talk about the history of constraint optimization.

From New Zealand to Stanford, focusing on "constrained optimization"

In 1972, I received my doctorate from Stanford University. When I returned to New Zealand, I thought I would stay there forever. But George Dantzig, the father of linear programming and a professor at Stanford University, started the Systems Optimization Laboratory (SOL) project and invited me back to Stanford.

When I joined the Systems Optimization Laboratory, Professor Dantzig was responsible for building economic and energy models, while I focused on nonlinear objective functions and developed the initial version of the MINOS optimization software to solve these models.

At that time, Professor Dantzig proposed a new kind of algorithmic optimization: constrained optimization. This is a very difficult research topic: finding a set of parameter values, subject to a series of constraints, that optimizes the value of a function or a group of functions. Constrained optimization is essentially a linear algebra problem, and the optimization analysis is carried out by software.
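To make the definition concrete, here is a minimal sketch of such a problem, written with SciPy's general-purpose solver rather than with Professor Saunders' own software: a simple quadratic objective is minimized subject to one inequality constraint and nonnegativity bounds.

```python
# A minimal sketch of constrained optimization (using SciPy, not MINOS):
# minimize (x - 1)^2 + (y - 2)^2 subject to x + y <= 2 and x, y >= 0.
from scipy.optimize import minimize

objective = lambda v: (v[0] - 1.0) ** 2 + (v[1] - 2.0) ** 2
constraints = [{"type": "ineq", "fun": lambda v: 2.0 - (v[0] + v[1])}]  # x + y <= 2
bounds = [(0.0, None), (0.0, None)]                                     # x, y >= 0

result = minimize(objective, x0=[0.0, 0.0], bounds=bounds, constraints=constraints)
print(result.x)  # the constrained optimum, approximately [0.5, 1.5]
```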

In the 1980s, I extended MINOS to handle some nonlinear constraints, and we also developed other constrained optimization software for General Electric and NASA.

In 1990, our software was being used in greenhouse-effect models and in space optimization problems, such as trajectory optimization for aircraft and spacecraft.

I have a twin brother, David, who is an airplane pilot. He has been working at NASA's Ames Research Center since 1975. He used our optimization software to design supersonic planes, new space shuttles, and capsules, although some of these projects were later cancelled.

Of course, our algorithmic optimization has applications in many other fields as well. For example, it is used to control the trajectories of robots; in the medical field, it can aim X-ray beams to help doctors deliver radiotherapy.

Optimization is very important for aviation applications.

Our software has been used in many NASA aviation projects.

Problems like these are inseparable from optimization.

In 2010, I participated in the design of the Orion spacecraft, which has been called Apollo 2.0. Orion is similar to Apollo in appearance, but much larger. David optimized the curvature of Orion's heat shield and found that the shape chosen by the Apollo designers 50 years ago was already an optimal shape.

Recently, our optimization has been applied to Stratolaunch, the world's largest aircraft, which completed its first flight in California on April 13, 2019. Stratolaunch has two fuselages and six Boeing 747 engines, and its wingspan is longer than a football field. It can carry rockets or small spacecraft to a height of 11,000 meters and launch them into orbit. David's improved optimization suggests that Stratolaunch may have begun its landing procedure a little early, at a distance of 2,500 kilometers.

Optimization software and applications complement each other.

Algorithmic optimization has helped us build many solutions.

Twenty years ago, we used the PDCO software for signal analysis (basis pursuit denoising, BPDN). Now we use the same software for a different application: analyzing low-frequency nuclear magnetic resonance signals to determine the composition of a substance, such as olive oil or biodiesel. Our existing software has found a new use.
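As a generic illustration of the BPDN idea (a sparse-recovery sketch, not the PDCO solver itself), a sparse signal can be recovered from noisy linear measurements by solving an L1-regularized least-squares problem:

```python
# A generic sparse-recovery (BPDN-style) sketch, not PDCO itself: recover a
# sparse signal x from noisy measurements b = A @ x + noise by solving an
# L1-regularized least-squares problem.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 300))            # wide measurement matrix
x_true = np.zeros(300)
x_true[[5, 40, 200]] = [1.5, -2.0, 0.8]        # only three nonzero components
b = A @ x_true + 0.01 * rng.standard_normal(100)

model = Lasso(alpha=0.01).fit(A, b)            # alpha trades data fit against sparsity
print(np.flatnonzero(np.abs(model.coef_) > 0.1))  # indices of the recovered spikes
```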

Sometimes, new applications lead us to create new algorithms. For example, the multiscale model problems in systems biology could not be solved by existing software, so we used double-precision and quadruple-precision versions of MINOS to develop the DQQ procedure.

We also developed the NCL algorithm to solve tax policy models that could not be solved by existing software. NCL works by solving a sequence of large-scale but easier optimization problems. Surprisingly, we found how to "warm start" each of these large subproblems with an interior method, even though warm starts are usually not possible for interior methods. So new, difficult applications prompt us to create new general-purpose software, which is a very interesting process.
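The warm-start idea can be illustrated generically (this is only a sketch of warm starting, not the NCL algorithm itself): solve a sequence of related subproblems and start each one from the previous solution, so that the later, harder subproblems converge quickly.

```python
# A generic warm-start sketch (not the NCL algorithm): solve a sequence of
# penalized subproblems that enforce the constraint x0 + x1 = 1 more and more
# strictly, reusing each solution as the starting point for the next solve.
from scipy.optimize import minimize

x = [0.0, 0.0]                                  # starting point for the first subproblem
for rho in [1.0, 10.0, 100.0, 1000.0]:          # increasing penalty weight
    def penalized(v, r=rho):
        return (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2 + r * (v[0] + v[1] - 1.0) ** 2
    x = minimize(penalized, x0=x).x             # warm start from the previous solution
print(x)  # approaches the constrained optimum [2.5, -1.5] as the penalty grows
```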

To sum up the theme of my speech: when we design optimization software, we always want to create "general-purpose" software so that it gets the widest possible use. But to be honest, we never know who will end up using our software. Sometimes the software helps scientists find optimal solutions for emerging applications, which gives us an immediate sense of accomplishment. But sometimes the opposite is true: emerging applications force us to design new algorithms by combining existing software in new ways.

In the future, we will see many applications like autonomous vehicles, where the importance of safety is comparable to the launch and landing of a spacecraft. Optimization systems will also shine in medicine: they can help make precision medicine a reality and make radiotherapy more accurate and faster.

After the speech, AI Technology Review had an exclusive interview with Professor Michael Saunders.

AI Technology Review: I am glad to have this opportunity to interview you today! First question: can you talk about how you combine research with industrial applications, and what specific cases you have been involved in?

Michael Saunders: I mentioned many application cases in my speech, including some very important ones, such as drug therapy, manufacturing, aerospace, systems biology, and nuclear magnetic resonance. As I said before, we don't know who will use our software, but general-purpose software encourages the birth of more emerging applications. My favorite thing is when someone knocks on my door and says, "Professor, I have an optimization problem. Can you help me?" I hope everyone will knock on my door.

AI Technology Review: How do you view the relationship among artificial intelligence, Internet of Things and system optimization?

Michael Saunders: Artificial intelligence covers many aspects, including mathematics and computer science. Minimizing a function of a very large number of variables is a representative problem in the field of optimization.

The classical SVM method handles more complicated problems, and we have shown that our PDCO solver can be applied at a larger scale than existing methods.

The Internet of Things involves sensors. We have studied wireless sensor networks, using optimization methods to determine where the sensors are. Each sensor can independently measure its distance to other nearby sensors. For example, we can drop sensors from a helicopter into a forest and let them automatically sense whether there is a forest fire. Only a few of the sensors need to know their exact location.

AI Technology Review: Is it the interconnection between thousands of sensors?

Michael Saunders: My doctoral student Holly Jin showed in her doctoral thesis how to accurately locate thousands of sensors, which is very important for large forests. Similarly, if firefighters or miners wear sensors on their bodies, the same optimization method can be used to find their positions in a forest fire or a collapsed mine.
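As a toy version of this idea (a least-squares sketch, not the method described in the thesis), a single sensor's position can be estimated from noisy distance measurements to a few anchors with known positions; the real problem does the same for thousands of sensors at once, mostly from sensor-to-sensor distances.

```python
# A toy sensor-localization sketch (not the method from the thesis): estimate
# one sensor's position from noisy distances to three known anchors by
# least-squares fitting of the distance residuals.
import numpy as np
from scipy.optimize import least_squares

anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])     # known positions
true_pos = np.array([0.6, 0.4])                               # unknown sensor position
measured = np.linalg.norm(anchors - true_pos, axis=1) + 0.01  # noisy distance readings

def residuals(p):
    return np.linalg.norm(anchors - p, axis=1) - measured

fit = least_squares(residuals, x0=[0.5, 0.5])
print(fit.x)  # estimated position, close to [0.6, 0.4]
```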

AI Technology Review: Artificial intelligence technology is particularly hot in China right now. As an expert in this field, where do you think the future breakthrough point of artificial intelligence will be, and what are the trends for this technology?

Michael Saunders: That's a good question. Artificial intelligence has been developing for a long time. In 1967, when I was still a PhD student at Stanford University, artificial intelligence was already a research topic in computer science. If AI were a bubble, the bubble would have burst long ago.

Self-driving cars are a big challenge in the future of artificial intelligence research. Tesla's Elon Musk predicts that Tesla's self-driving cars will drive themselves by the end of this year, and that the cars will be able to pick up other passengers when not in use, earning money for their owners. We don't know whether this vision can be realized. Tesla claims that the computing speed of its new chip is 21 times that of other chips. This is great progress, and it brings us one step closer to that AI future.

AI Technology Review: Mainly through chip optimization?

Michael Saunders: As the previous question suggested, a major future direction for AI applications is autonomous driving. It is a very big direction and will completely change our way of life. I am optimistic about the future of autonomous driving.

Audience question: There are currently two main approaches to machine learning, supervised and unsupervised. Which do you think has more development potential?

Michael Saunders: There are three forms of machine learning: supervised learning, unsupervised learning, and reinforcement learning. I think both supervised and unsupervised learning are very important, and researchers have been trying to improve the methods they use. I think both forms of learning will continue to evolve in the future.
