To understand what grid computing is, we must first understand distributed computing.
Distributed computing is a branch of computer science that studies how to divide a problem requiring enormous computing power into many small parts, distribute those parts to many computers for processing, and finally combine the partial results into a final answer. Recent distributed computing projects have harnessed the idle computing power of thousands of volunteer computers around the world. Over the Internet, you can analyze radio signals from outer space, looking for hidden black holes and possible extraterrestrial intelligence; you can search for Mersenne primes with more than 100,000 digits; you can also search for more effective anti-HIV drugs. These projects are enormous and demand a staggering amount of computation, far more than a single computer or individual could ever complete in an acceptable time.
Grid computing is built on exactly this idea: combine thousands of idle computers around the world to solve such problems!
First of all, we need a problem that takes a great deal of computing power to solve. Such problems are usually interdisciplinary, challenging, and urgent topics of scientific research. Some of the more famous ones are:
1. Solving complex mathematical problems, such as GIMPS, the Great Internet Mersenne Prime Search (see the sketch after this list).
2. Researching the security of cryptographic systems, such as RC5-72 (a brute-force cipher-cracking challenge).
3. Biological and pathological research, such as Folding@home (protein folding, misfolding, aggregation, and the diseases they cause).
4. Drug research for various diseases, such as United Devices (searching for effective drugs against cancer).
5. Signal processing, such as SETI@home (searching for extraterrestrial civilizations from home).
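As a taste of the mathematics behind a project like GIMPS, here is a minimal sketch of the Lucas-Lehmer test, the classical primality test for Mersenne numbers; the code below is a plain illustration, not GIMPS's own highly optimized implementation:

```python
def lucas_lehmer(p: int) -> bool:
    """Return True if the Mersenne number 2**p - 1 is prime (p an odd prime)."""
    m = 2 ** p - 1          # the Mersenne number under test
    s = 4                   # Lucas-Lehmer starting value
    for _ in range(p - 2):  # p - 2 squaring steps
        s = (s * s - 2) % m
    return s == 0

# 2**13 - 1 = 8191 is prime; 2**11 - 1 = 2047 = 23 * 89 is not.
print(lucas_lehmer(13))  # True
print(lucas_lehmer(11))  # False
```

GIMPS assigns different exponents p to different volunteer machines; a single test on a 100,000-digit-plus Mersenne number can keep one computer busy for weeks, which is why so many participants are needed.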
From these practical examples we can see that such projects are enormous and demand a staggering amount of computation, beyond what a single computer or individual could complete in an acceptable time. In the past, such problems were handled by supercomputers, but supercomputers are extremely expensive to buy and maintain, beyond the means of an ordinary research institution. As science developed, a cheap, efficient, and easy-to-maintain computing method was born: distributed computing!
As personal computers became popular and entered millions of households, a new problem arose: how to make full use of them. More and more computers sit idle, and even when they are switched on, their CPUs' potential is far from fully exploited. A home computer spends most of its time "waiting"; even while the user is actively working, the processor still quietly burns through countless wait cycles (waiting for input while actually doing nothing). The emergence of the Internet made it possible to connect all of these under-used computer systems and put their spare computing resources to work.
The recipe, then, is this: take a problem that is very complicated but well suited to being divided into a large number of smaller computing pieces, and have a research institution put in the effort to develop a computing server and a client. The server divides the problem into many small work units and distributes them to many networked computers for parallel processing; finally it collects the partial results and combines them into the final answer.
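Here is a minimal local sketch of this divide-distribute-combine pattern, using a pool of worker processes on one machine as stand-ins for volunteer computers and a toy computation (counting primes) as the work unit; all names are illustrative:

```python
from multiprocessing import Pool

def work_unit(bounds):
    """Toy work unit: count the primes in the half-open range [lo, hi)."""
    lo, hi = bounds
    count = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    # "Server" side: divide the problem into small pieces...
    chunks = [(i, i + 10_000) for i in range(0, 100_000, 10_000)]
    # ...distribute them to workers for parallel processing...
    with Pool() as pool:
        partial_results = pool.map(work_unit, chunks)
    # ...and combine the partial results into the final answer.
    print("primes below 100,000:", sum(partial_results))  # 9592
```

A real project works the same way, except the "pool" is thousands of volunteer machines, and the server must also cope with workers that disappear or return bad results.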
Of course, this may look primitive and cumbersome, but as the number of participants and computers involved grows, the computing scheme becomes very fast, and practice has proved it feasible. The aggregate processing power of some large distributed computing projects now matches, or even exceeds, that of the fastest supercomputers in the world.
You can also choose a project to which to donate your CPU time, and you will find that the processing time you contribute shows up in the project's statistics. You can compete with other participants for a place in the contribution rankings, and you can join an existing computing team or form your own. All of this helps keep participants motivated.
As grassroots teams have gradually multiplied, many large organizations (companies, schools, and websites of all kinds) have begun to form teams of their own. At the same time, a large number of communities devoted to distributed computing technology and project discussion have sprung up. Most of these communities translate and produce tutorials on using distributed computing projects, publish relevant technical articles, and provide the necessary technical support.
So who can take part in these projects? Anyone, of course! If you have joined a project and are considering joining a computing team, you will find a home at the China Distributed Computing Portal and Forum; anyone may join any of the computing teams formed on our site. We hope you find it enjoyable here.
Participating in distributed computing, perhaps the most meaningful way to get full value out of your personal computer, only requires downloading the relevant program. The program then runs at the lowest priority on your computer and has almost no effect on normal use. If you want your machine's spare time to do something useful, why hesitate? Act now: your seemingly insignificant contribution may leave a big mark on the history of human science!
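To make "runs at the lowest priority" concrete, here is a minimal sketch using only Python's standard library (Unix-only; a real client such as BOINC manages its own priority, so this is purely an illustration):

```python
import os

# Raise this process's "nice" value to 19, the lowest scheduling
# priority on Unix: the kernel gives us CPU time only when
# higher-priority (interactive) work doesn't need it.
os.nice(19)

# Any heavy computation from here on yields to normal programs.
total = sum(i * i for i in range(10_000_000))
print(total)
```

This is why a volunteer-computing client can run around the clock without making the machine feel slow.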
BOINC stands for Berkeley Open Infrastructure for Network Computing.
BOINC is a distributed computing platform that allows many different distributed computing projects to run on a single piece of platform software. Unlike traditional distributed computing projects (such as SETI@home Classic and Folding@home), each of which has its own standalone core and distribution program, BOINC makes it very convenient to coordinate how system resources are allocated among different projects.
BOINC was developed by the University of California, Berkeley, in 2003. After years of testing and real projects, the platform is now fairly mature. Berkeley had already run the SETI@home project successfully for more than six years, a great success that attracted more than 5 million users and completed roughly 2 million years of CPU time. An important reason for developing the BOINC platform was to attract more of these users to further practical distributed computing projects, such as climate research and drug development.
You just need to go to the BOINC website (http://boinc.berkeley.edu/) and download the latest BOINC client. BOINC is cross-platform, supporting Win32, UNIX/Linux, and Mac OS X.
The following are some projects of great scientific significance:
1. Einstein@Home: sponsored by the American Physical Society and the University of Wisconsin-Milwaukee. The existence of gravitational waves is one of Einstein's most important predictions, and the project searches for evidence of them by analyzing and processing the data collected by gravitational-wave detectors.
2. LHC@home: initiated by CERN. Its SixTrack program simulates the motion of particles in the Large Hadron Collider in order to study the stability of their orbits.
3. Predictor@home: initiated by The Scripps Research Institute in the USA. The project studies protein sequences in order to predict protein structure.
4. SIMAP (Similarity Matrix of Proteins): initiated by the Technical University of Munich. The project uses the FASTA algorithm to compute similarities between protein sequences (a toy illustration of sequence-similarity scoring follows this list).
5. FightAIDS@Home: run on World Community Grid, whose purpose is to create the world's largest public grid computing platform. At present it mainly computes protein folding and AIDS-related research.
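FASTA itself is an involved heuristic, but the underlying idea of scoring how alike two protein sequences are can be sketched very simply. The following toy example (our own illustration, not any project's code) compares the sets of overlapping 3-letter fragments of two sequences:

```python
def kmer_set(seq: str, k: int = 3) -> set:
    """All overlapping k-mers (length-k substrings) of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a: str, b: str, k: int = 3) -> float:
    """Jaccard similarity of the two sequences' k-mer sets (0.0 to 1.0)."""
    ka, kb = kmer_set(a, k), kmer_set(b, k)
    return len(ka & kb) / len(ka | kb) if (ka | kb) else 0.0

# Two short made-up peptides in one-letter amino-acid code.
print(similarity("MKTAYIAKQR", "MKTAYIGKQR"))  # fairly high: one residue differs
print(similarity("MKTAYIAKQR", "GGGGGGGGGG"))  # 0.0: unrelated
```

Scaling such comparisons to every pair among millions of known sequences is what makes a distributed platform worthwhile.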
The principle of distributed computing is simply many computers working together; the most obvious example is UC Berkeley's SETI@home, which the original poster mentioned. Something similar exists in Linux, where one of the most tiresome jobs is compilation. For example, my own Athlon64 2800+ with 512MB of RAM took longer to compile KOffice than it takes to play through an NBA game, and compiling OpenOffice on this machine would take far longer still. Gentoo, which I use, has a setting for distributed compilation: several computers with similar hardware jointly compile the source code of a program by sharing the work.

This is quite different from ordinary distributed computing like SETI@home, which only asks you to contribute your idle resources to a calculation. Distributed compilation turns source code into binaries, and different hardware produces very different binaries; a mismatched binary cannot be installed on the target machine, so at the very least the CPU model and the compilation flags must be consistent across the machines.