Recommending a server with good application performance, suitable for deep learning and AI scenarios
Deep learning is a branch of machine learning: a family of algorithms that learns representations of data using artificial neural networks. It has produced outstanding results in search, data mining, machine learning, machine translation, natural language processing, and other fields, which shows its importance.

Anyone familiar with deep learning knows that models have to be trained. Training is, in essence, a search for the best values of thousands of variables: the final values are not numbers a person sets by hand, but the result of repeated trial computation driven by a mathematical objective. Through this fine-grained, pixel-level learning and constant summarizing of patterns, the computer gradually learns to handle data the way a person would. Because this workload is massively parallel and bandwidth-hungry, GPUs, which are better at parallel computing and offer high memory bandwidth, have become the focus of attention.
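As a toy illustration of training as repeated adjustment, the following minimal Python sketch fits a single parameter by gradient descent; the data, learning rate, and step count are made-up values chosen only for demonstration.

```python
# Minimal sketch of "training": repeatedly nudge a parameter until the error
# stops improving. A real network does this for millions of parameters at
# once, which is why parallel GPU hardware matters.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=1000)
y = 3.0 * x + rng.normal(scale=0.1, size=1000)   # synthetic data, true slope = 3.0

w = 0.0      # the parameter we are trying to learn
lr = 0.1     # learning rate (step size)
for step in range(200):
    y_pred = w * x
    grad = 2 * np.mean((y_pred - y) * x)   # gradient of mean squared error w.r.t. w
    w -= lr * grad                         # step against the gradient
print(f"learned w = {w:.3f} (true value 3.0)")
```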

Many people assume that the configuration of a deep learning GPU server is somehow special, just as many people assume that a machine for design work must be very expensive. In fact, as long as the graphics card or CPU meets the needs of the deep learning workload, deep learning can be carried out on it. Because a CPU's core count and architecture handle this workload far less efficiently than a GPU, most deep learning servers rely on high-end graphics cards.
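A rough way to see the CPU-versus-GPU gap on this kind of workload is to time a large matrix multiplication on each device. The sketch below assumes TensorFlow (mentioned in the software list later in this article) is installed with GPU support; the absolute numbers will depend entirely on the hardware.

```python
import time
import tensorflow as tf

def time_matmul(device, n=2048, repeats=10):
    """Average time of an n x n matrix multiplication on the given device."""
    with tf.device(device):
        a = tf.random.uniform((n, n))
        b = tf.random.uniform((n, n))
        tf.matmul(a, b)                      # warm-up
        start = time.perf_counter()
        for _ in range(repeats):
            c = tf.matmul(a, b)
        _ = c.numpy()                        # force the computation to finish
    return (time.perf_counter() - start) / repeats

print(f"CPU: {time_matmul('/CPU:0'):.3f} s per matmul")
if tf.config.list_physical_devices("GPU"):
    print(f"GPU: {time_matmul('/GPU:0'):.3f} s per matmul")
else:
    print("No GPU visible to TensorFlow on this machine.")
```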

Below are some principles and suggestions on how to choose a deep learning GPU server:

1. Power supply: guaranteed quality, sufficient wattage, and 30–40% redundancy.

Stability, stability, and more stability. A good power supply ensures that the host can run for long stretches without halting or restarting. Imagine a sudden restart in the middle of a computation: everything has to start over, which not only hurts efficiency but also your mood. Some power supplies show no problems under light load yet fail under sustained heavy load. When choosing a power supply, pick one with redundancy and excellent build quality, and don't buy one that only just exceeds the required wattage.
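As a back-of-the-envelope sizing example using the 30–40% headroom suggested above, the component wattages below are illustrative estimates rather than measured values.

```python
# Rough power-supply sizing with 30-40% redundancy on top of estimated load.
components = {
    "CPU":           270,      # high-core-count CPU under load (estimate)
    "GPUs (2x)":     2 * 350,  # two ~350 W graphics cards (estimate)
    "motherboard":    80,
    "memory":         40,
    "drives + fans":  60,
}
load = sum(components.values())
for headroom in (0.30, 0.40):
    print(f"estimated load {load} W -> PSU of about {load * (1 + headroom):.0f} W "
          f"({headroom:.0%} redundancy)")
```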

2. Graphics card: the RTX 3090 is currently mainstream, and the newest RTX 4090 is also reaching the market.

Graphics cards play the central role in deep learning and also take up a large part of the budget. With a limited budget, you can choose an RTX 3080 / RTX 3090 / RTX 4090 (just released last month and listed on the 12th of this month). If the budget is sufficient, you can choose a professional deep learning card such as the Titan RTX, Tesla V100, A6000, A100, or H100 (supply is currently suspended), and so on.
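When comparing cards, a quick sanity check is whether the model even fits in the card's memory. The sketch below uses a rough rule of thumb (weights, gradients, and two Adam optimizer states at 4 bytes each) and ignores activations, so treat the result as a lower bound; the example model sizes are only illustrative.

```python
def min_vram_gb(num_params, bytes_per_param=4, multiplier=4):
    """Very rough lower bound on GPU memory needed to train a model."""
    return num_params * bytes_per_param * multiplier / 1024**3

for name, params in [("ResNet-50 (~25M params)", 25e6),
                     ("1.3B-parameter transformer", 1.3e9)]:
    print(f"{name}: at least ~{min_vram_gb(params):.1f} GB of GPU memory")
```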

3. CPU: two vendors dominate. What I want to talk about here is the positioning of desktop-class versus server-class processors.

Intel's processor lines are Xeon, Core, Celeron, Pentium, and Atom; Xeon is the server line, and Core is currently the most common on the market. The current generation is the 3rd Gen Xeon Scalable series, which is tiered into Platinum, Gold, and Silver.

AMD's processor lines are Ryzen, Ryzen Pro, Ryzen Threadripper, and EPYC; EPYC is the server-side CPU, and Ryzen is the most common. The current generation is the 3rd Gen EPYC, and AMD's 3rd Gen EPYC 7003 series goes up to 64 cores.

Whether to choose a single-socket or dual-socket configuration depends on the software. If the GPU does the computing, the CPU actually carries little load; but considering other uses, the CPU should not be too weak either. A mainstream high-performance multi-core, multi-threaded CPU is enough.
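In practice the CPU mostly feeds the GPU with data (loading, decoding, augmentation), so one common, if rough, heuristic is to size the number of data-loading worker processes to the available cores; the "minus two" margin below is only an illustrative default.

```python
import os

logical_cores = os.cpu_count() or 1
workers = max(1, logical_cores - 2)   # leave a couple of cores for the OS and the training process
print(f"{logical_cores} logical cores -> using {workers} data-loading workers")
```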

4. Memory: single modules of 16 GB / 32 GB / 64 GB are available. An important difference is that server-grade memory has ECC while PC-grade memory does not.

Start with at least 32 GB. Memory can be expanded, so buy just enough; if it turns out to be insufficient, more can be added later, and buying too much up front is simply a waste.
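On a Linux server, the installed memory is easy to confirm from /proc/meminfo; a small sketch:

```python
def total_ram_gb():
    """Read total installed memory (Linux only) from /proc/meminfo."""
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("MemTotal:"):
                return int(line.split()[1]) / 1024**2   # value is reported in kB
    return None

print(f"Installed RAM: {total_ram_gb():.1f} GB")
```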

5. Hard disks: solid-state drives and mechanical hard disks. Usually the system disk is an SSD for speed, and the data disk is a mechanical drive for bulk storage.

For the SSD, stick to major brands and enterprise-grade drives; within that class, the choice between the NVMe and SATA protocols makes little practical difference for this use. Don't consider off-brand SSDs: it would be very bad if one suddenly failed in the middle of use.
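If you want to compare drives on the machine itself, a very rough sequential-write test can be done as sketched below; a proper benchmark tool such as fio is more accurate, and the target directory here is a placeholder you should point at the drive being tested (note that /tmp may be RAM-backed on some systems).

```python
import os, time, tempfile

def write_speed_mib_s(target_dir, size_mib=1024, chunk_mib=64):
    """Write a temporary file sequentially and report MiB/s (rough estimate)."""
    chunk = os.urandom(chunk_mib * 1024 * 1024)
    fd, tmp_path = tempfile.mkstemp(dir=target_dir)
    try:
        start = time.perf_counter()
        with os.fdopen(fd, "wb") as f:
            for _ in range(size_mib // chunk_mib):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())
        return size_mib / (time.perf_counter() - start)
    finally:
        os.remove(tmp_path)

print(f"~{write_speed_mib_s('/data'):.0f} MiB/s sequential write")  # '/data' is a placeholder path
```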

6. Chassis and platform: at the server level, a Supermicro motherboard platform is recommended; stability and reliability are the first requirements.

Reserve enough space for easy upgrades, for example running a single graphics card now and adding another later. The internal layout should be sensible: reasonable spacing helps air circulate, and it is best to add several chassis fans with good airflow to assist cooling. Temperature is also a factor that leads to instability.
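Since temperature directly affects stability, it is worth keeping an eye on GPU temperatures under load; nvidia-smi, which ships with the NVIDIA driver, can report them, as in this small sketch:

```python
import subprocess

out = subprocess.run(
    ["nvidia-smi", "--query-gpu=name,temperature.gpu", "--format=csv,noheader"],
    capture_output=True, text=True, check=True,
)
for line in out.stdout.strip().splitlines():
    print(line)   # e.g. "NVIDIA GeForce RTX 3090, 64"
```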

7. Software and hardware support / solutions: available.

Application directions: deep learning, quantitative computation, molecular dynamics, bioinformatics, radar signal processing, seismic data processing, adaptive optics, video encoding and decoding, medical imaging, image processing, password cracking, numerical analysis, computational fluid dynamics, computer-aided design, and many other scientific research fields.

Software: installation of Caffe, TensorFlow, ABINIT, Amber, GROMACS, LAMMPS, NAMD, VMD, Materials Studio, WIEN2k, Gaussian, VASP, CFX, OpenFOAM, ABAQUS, ANSYS, LS-DYNA, Maple, MATLAB, BLAST, FFTW, and NASTRAN.
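Before installing the frameworks and packages above, a quick check that the NVIDIA driver and CUDA toolkit are reachable can save time; the two commands in the sketch below ship with the driver and the CUDA toolkit respectively.

```python
import subprocess

for cmd in (["nvidia-smi", "-L"], ["nvcc", "--version"]):
    try:
        result = subprocess.run(cmd, capture_output=True, text=True, check=True)
        print(result.stdout.strip())
    except (FileNotFoundError, subprocess.CalledProcessError) as err:
        print(f"{' '.join(cmd)} failed: {err}")
```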

————————————————

Copyright statement: this article is an original article by CSDN blogger "AI17316391579", licensed under the CC 4.0 BY-SA agreement. Please include the original source link and this statement when reposting.

Original link:/ai17316391579/article/details/127536 17.