I. Development history
The evolution of computing tools has passed through many stages, from simple to complex and from primitive to advanced: from knotted ropes used to keep records, to counting rods, the abacus, the slide rule, mechanical calculators, and so on. Each played its historical role in its own period, and each inspired the design ideas behind modern electronic computers.
In 1889, the American scientist Herman Hollerith developed an electrically driven tabulating machine to store and process calculated data.
In 1930, the American scientist Vannevar Bush built the world's first large-scale analog computer.
On February 14, 1946, the world's first electronic computer, ENIAC (Electronic Numerical Integrator And Computer), commissioned by the US military, was unveiled at the University of Pennsylvania. ENIAC was developed to meet the Aberdeen Proving Ground's need for ballistic trajectory calculations. It used 17,840 vacuum tubes, measured 80 ft × 8 ft, weighed 28 t, consumed 170 kW of power, performed 5,000 operations per second, and cost about $487,000. The advent of ENIAC was of epoch-making significance, marking the arrival of the electronic computer era. Over the following 60 years, computer technology developed at an astonishing speed: no other technology has improved its price-performance ratio by six orders of magnitude within 30 years.
First generation: vacuum-tube digital computers (1946—1958)
In hardware, the logic elements were vacuum tubes, and main memory used mercury delay lines, cathode-ray-tube electrostatic storage, magnetic drums, and magnetic cores; external storage used magnetic tape. Software consisted of machine language and assembly language. Applications were mainly military and scientific computing.
Their drawbacks were large size, high power consumption, and poor reliability. They were slow (usually thousands to tens of thousands of operations per second) and expensive, but they laid the foundation for the future development of computers.
Second generation: transistor digital computers (1958—1964)
In software, operating systems, high-level languages, and their compilers appeared. Applications centered on scientific computing and transaction processing, and computers began to enter industrial control. These machines were smaller, consumed less energy, were more reliable, and ran faster (generally hundreds of thousands of operations per second, up to 3 million), far outperforming first-generation computers.
Third generation: integrated-circuit digital computers (1964—1970)
In hardware, small- and medium-scale integrated circuits (SSI, MSI) served as logic elements, while magnetic cores were still used for main memory. In software, time-sharing operating systems and structured, large-scale programming methods appeared. These machines were faster (usually millions to tens of millions of operations per second), significantly more reliable, and cheaper, and products moved toward general-purpose, serialized, and standardized designs. Applications began to extend into word processing and graphics and image processing.
Fourth generation: large-scale integrated circuit computers (1970 to present)
In hardware, the logic elements are large-scale and very-large-scale integrated circuits (LSI and VLSI). In software, database management systems, network management systems, and object-oriented languages appeared. In 1971, the world's first microprocessor was born in Silicon Valley, opening a new era of microcomputers. Applications have gradually expanded from scientific computing, transaction management, and process control into the home.
With advances in integration technology, semiconductor chips became far denser: a single chip could hold tens of thousands or even millions of transistors, allowing the arithmetic unit and controller to be concentrated on one chip. The result was the microprocessor, from which LSI and VLSI circuits could be assembled into a microcomputer, that is, a micro or personal computer (PC). Microcomputers are small, cheap, and easy to use, yet their functionality and speed match or even surpass the large computers of the past. At the other end of the scale, logic chips built from LSI and VLSI circuits are used in supercomputers, which are not especially large physically but can perform hundreds of millions or even billions of operations per second. In China, the Galaxy-I supercomputer, capable of 100 million operations per second, was completed in 1983, followed in 1993 by the Galaxy-II general-purpose parallel supercomputer, capable of 1 billion operations per second. This period also produced a new generation of programming languages, database management systems, and network software.
As physical components and devices changed, not only did the host evolve; the peripherals changed constantly as well. External storage, for example, progressed from early cathode-ray tubes to magnetic cores and drums, then to the widely used disk, and now to disks that are smaller, larger in capacity, and faster.
II. Main features
Fast operation speed: a computer's internal circuits can perform arithmetic operations quickly and accurately. Computer systems now reach trillions of operations per second, and even microcomputers exceed 100 million operations per second, enough to solve large, complex scientific computing problems. For example, satellite-orbit calculations, dam stress analysis, and 24-hour weather forecasts, which once took years or even decades, can now be completed by computers in a few minutes.
High calculation accuracy: the development of science and technology, especially of cutting-edge fields, demands very high calculation accuracy. A computer-guided missile hits its intended target precisely only because of the computer's accurate calculations. An ordinary computer can carry a dozen or even dozens of significant (binary) digits, with relative errors ranging from thousandths down to millionths, a precision beyond the reach of any earlier calculating tool.
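As a quick illustration of the "dozens of significant binary digits" claim, the short Python sketch below inspects the precision of a standard double-precision float and then uses software arithmetic to go well beyond it (the choice of 50 decimal digits is arbitrary, for demonstration only):

```python
import sys
from decimal import Decimal, getcontext

# A standard double-precision float carries 53 significant binary digits.
print(sys.float_info.mant_dig)  # 53

# Software arithmetic can extend precision much further: here we ask
# the decimal module for 50 significant decimal digits of sqrt(2).
getcontext().prec = 50
print(Decimal(2).sqrt())
```

The first line shows the hardware-level precision; the second shows that, with software support, a computer's precision is limited mainly by time and memory rather than by the machine itself.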
Strong logical operation ability: a computer can not only calculate accurately but also perform logical operations, comparing and judging information. It can store data, programs, intermediate results, and final results, automatically execute the next instruction according to the result of a judgment, and keep stored items available to the user at any time.
Large storage capacity: a computer's internal memory can store a large amount of information, including not only data of many kinds but also the programs that process those data.
High degree of automation: because computers have memory and logical-judgment capabilities, a pre-written program can be loaded into the computer's memory; under the program's control, the computer works continuously and automatically without human intervention.
Good cost-performance ratio: computers are becoming more and more popular, and almost every household has one. In the 21st century the computer, whether desktop or notebook, is becoming one of the essential appliances in every home.
III. Main categories
Supercomputer
Usually refers to a computer composed of hundreds of thousands or more processors, able to handle large, complex problems that ordinary PCs and servers cannot. The supercomputer is the computer with the greatest capability, the fastest operation speed, and the largest storage capacity, and it is an important symbol of a nation's level of scientific and technological development and comprehensive national strength. Supercomputers have the strongest parallel computing ability and are used mainly for scientific computing, undertaking large-scale, high-speed computing tasks in fields such as meteorology, the military, energy, aerospace, and prospecting. Structurally, both supercomputers and servers may be multiprocessor systems, and there is no substantial difference between them; however, modern supercomputers use cluster architectures and emphasize floating-point performance. They can be seen as very expensive high-performance servers focused on scientific computing.
Network computers
1. Server
Refers specifically to certain high-performance computers that provide services to the outside through a network. Compared with an ordinary computer, a server demands higher stability, security, and performance, so it differs from ordinary computers in its CPU, chipset, memory, disk system, network hardware, and so on. The server is a node of the network; it stores and processes 80% of the data and information on the network and plays an important role in it. Servers are high-performance computers that provide services to client computers, and their high performance mainly means high-speed computing ability, long-term reliable operation, and strong external data throughput. A server's composition is similar to an ordinary computer's, including a processor, hard disk, memory, system bus, and so on; however, because servers are specially designed for specific network applications, they differ greatly from microcomputers in processing capacity, stability, reliability, security, scalability, and manageability. Servers mainly include network servers (DNS, DHCP), print servers, terminal servers, disk servers, mail servers, file servers, and so on.
2. Workstation
A workstation is a high-performance computer, based on the personal computer and distributed network computing, that is oriented toward professional application fields and has strong data processing and graphics and image processing capabilities. It is designed and developed to meet the needs of professional fields such as engineering design, animation production, scientific research, software development, financial management, information services, and simulation. The workstation's most prominent feature is its powerful graphics capability, so it was quickly adopted in graphics and image work, especially computer-aided design. A typical product is the Sun workstation series from the US company Sun Microsystems.
A diskless workstation is a computer connected to a local area network that has no floppy disk, hard disk, or CD-ROM drive of its own. In such a network system, the operating system and application software used by the workstation reside on the server. The system administrator completes management and maintenance on the server, so software upgrades and installation need to be configured only once, after which all computers on the network can use the new software. Diskless workstations therefore save cost, offer high system security, and are easy to manage and maintain, which makes them very attractive to network administrators.
The working principle of a diskless workstation is as follows: the network card's Boot ROM sends a boot request to the server in a particular form; after receiving it, the server sends boot data back to the workstation according to the corresponding mechanism; once the workstation has downloaded the boot data, control of the system passes from the Boot ROM to a specific area of memory, and the operating system is booted.
According to their startup mechanism, common diskless workstations fall into RPL and PXE types. RPL stands for remote initial program load and was commonly used with Windows 95. PXE, short for Preboot eXecution Environment, is an upgraded successor to RPL; the difference between them is that RPL uses static routing while PXE uses dynamic routing, with TCP/IP as its communication protocol, giving an efficient and reliable connection to the Internet. PXE is commonly used with Windows 98, Windows NT, Windows 2000, and Windows XP.
3. Hub
A hub is a shared-medium network device. Its function can be understood simply as connecting several machines into a local area network. The hub itself does not recognize destination addresses: all of its ports compete for the bandwidth of one shared channel, so as the number of network nodes and the volume of data transmission grow, the bandwidth available to each node shrinks. In addition, the hub transmits data by broadcast, that is, it repeats data to all ports. For example, when a host A on a hub-based LAN sends data to a host B, the packet is broadcast: the same information is sent to every node on the network at the same time, and each terminal decides whether to accept it by checking the address in the packet header. In practice, usually only one terminal actually wants the data, yet it is sent to every node. This easily causes network congestion, most of the traffic is wasted, and the data transmission efficiency of the whole network is quite low. Moreover, because every node can monitor the packets being sent, hubs easily introduce security risks.
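The broadcast behavior described above can be sketched in a few lines of Python. This is a toy model for illustration only; the class and field names (`Hub`, `Node`, `dst`) are invented, and real Ethernet frames and hubs work at the electrical level, not as method calls:

```python
# Toy model of a shared-medium hub: every frame is repeated to all
# attached nodes, and each node checks the destination address itself.

class Node:
    def __init__(self, mac):
        self.mac = mac

    def receive(self, frame):
        # Each node inspects the header and keeps only frames addressed to it.
        return frame["dst"] == self.mac

class Hub:
    def __init__(self):
        self.nodes = []          # all attached nodes share one medium

    def attach(self, node):
        self.nodes.append(node)

    def send(self, frame):
        # The hub cannot read addresses: it floods the frame everywhere.
        accepted = 0
        for node in self.nodes:
            if node.receive(frame):
                accepted += 1
        return accepted          # copies actually wanted by a node

hub = Hub()
for mac in ("A", "B", "C"):
    hub.attach(Node(mac))

# A sends to B: the frame reaches all three nodes, but only B accepts it.
useful = hub.send({"src": "A", "dst": "B", "data": "hello"})
print(useful)  # 1
```

The point of the sketch is the ratio: one useful delivery per N transmissions, which is exactly why hub-based networks waste bandwidth and let every node observe every frame.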
4. Switch
In a broad sense, a switch is a device that, according to the needs of the two communicating ends, sends the information to be transmitted onto a route that meets the requirements, by manual or automatic means; in a communication system it is the equipment that performs the information-exchange function. The switch is an upgraded successor to the hub, very similar in appearance and roughly the same in role, but the two differ in performance: a hub uses shared bandwidth, while a switch provides exclusive bandwidth. In other words, every port of a switch has its own channel bandwidth, ensuring fast and effective data transmission on each port. A switch gives users a dedicated point-to-point connection: packets are sent only to the destination port rather than to all ports, so other nodes can hardly eavesdrop on the traffic. As a result, congestion is unlikely even with many machines or heavy traffic, the security of data transmission is assured, and overall transmission efficiency is greatly improved. The difference between the two is obvious.
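The key mechanism behind this point-to-point delivery is that a switch learns which port each address lives on. The hypothetical sketch below illustrates that learning behavior; the class name, the fixed four-port layout, and the frame fields are all invented for demonstration and do not reflect any vendor's implementation:

```python
# Toy model of a learning switch: it records which port each source
# address arrives on, then forwards later frames only to that port
# instead of flooding them to every port.

class Switch:
    PORTS = 4

    def __init__(self):
        self.mac_table = {}      # MAC address -> port number

    def handle(self, in_port, frame):
        # Learn: remember which port the sender is reachable on.
        self.mac_table[frame["src"]] = in_port
        dst = frame["dst"]
        if dst in self.mac_table:
            return [self.mac_table[dst]]          # point-to-point delivery
        # Unknown destination: fall back to flooding, like a hub.
        return [p for p in range(self.PORTS) if p != in_port]

sw = Switch()
# First frame from A (port 0) to B: B is unknown, so the switch floods.
print(sw.handle(0, {"src": "A", "dst": "B"}))   # [1, 2, 3]
# B replies from port 1; the switch now knows both addresses.
print(sw.handle(1, {"src": "B", "dst": "A"}))   # [0]
# From now on A -> B goes straight to port 1 with no flooding.
print(sw.handle(0, {"src": "A", "dst": "B"}))   # [1]
```

After the first exchange, traffic between A and B never appears on ports 2 and 3, which is the source of both the bandwidth and the security advantages described above.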
5. Router
A router is a network device responsible for path selection. It finds the least-congested path among the many paths through an internetwork and provides it for communication. Routers connect multiple logically separate networks and choose the best communication path for users by means of a routing table, which contains a list of network addresses and the distances between them; with it, the router finds the correct path from a packet's current location to its destination address, adjusting the transmission path with a minimum-time or optimal-path algorithm. Routers appeared after switches, just as switches appeared after hubs, so routers and switches are related rather than completely independent devices. Routers mainly overcome the switch's inability to forward packets between networks.
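The "optimal path algorithm" mentioned above is typically a shortest-path computation over link costs. The sketch below uses Dijkstra's algorithm on a made-up four-router topology; the router names and link costs are hypothetical, and real routing protocols add much more (neighbor discovery, updates, failure handling):

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's algorithm: cheapest total cost from source to each node."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue                       # stale heap entry, skip it
        for neighbor, cost in graph[node]:
            nd = d + cost
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Hypothetical network: routers R1..R4 with symmetric link costs.
graph = {
    "R1": [("R2", 1), ("R3", 4)],
    "R2": [("R1", 1), ("R3", 2), ("R4", 6)],
    "R3": [("R1", 4), ("R2", 2), ("R4", 3)],
    "R4": [("R2", 6), ("R3", 3)],
}
print(shortest_paths(graph, "R1"))  # {'R1': 0, 'R2': 1, 'R3': 3, 'R4': 6}
```

Note that the cheapest route from R1 to R4 (cost 6, via R2 and R3) is not the route with the fewest hops; this is exactly the kind of trade-off a routing table encodes.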
Switches and routers are specialized network computers: their hardware consists of a CPU, memory, and interfaces, and their software is the internetwork operating system, IOS.
Like a PC, every switch and router has a central processing unit (CPU), though different models generally use different CPUs. The CPU is the processing center of the device.
Memory is where switches and routers store information and data. Cisco switches and routers contain the following memory components:
ROM (read-only memory) stores the power-on self-test (POST), the bootstrap program, and part or all of the IOS image. The ROM in switches and routers is erasable, so the IOS can be upgraded.
RAM (random-access memory) is similar to the RAM in a PC: it provides temporary storage and holds the current routing table and configuration information.
NVRAM (non-volatile random-access memory) stores the startup configuration file of the switch or router. NVRAM is erasable, and the device's configuration can be copied into it.
Flash memory is erasable and programmable; it stores one or more versions of Cisco IOS and is used when upgrading the IOS of a switch or router.
Interfaces connect switches and routers to the network and are divided into LAN interfaces and WAN interfaces. The number and types of interfaces vary with the model. Common interfaces include the following:
A high-speed synchronous serial port, which can connect to DDN, Frame Relay, X.25, and PSTN services.
A synchronous/asynchronous serial port, which can be set to synchronous working mode in software.
An AUI port, that is, a thick-coax Ethernet port; an external transceiver (AUI-to-RJ45) is usually needed to connect 10/100Base-T Ethernet.
An ISDN port, which connects to an ISDN (2B+D) line and can give a local area network access to the Internet.
An AUX port, an asynchronous port used mainly for remote configuration, dial-up backup, and connecting a modem; it supports hardware flow control.
A console port, an asynchronous port connected to a terminal or to a computer running a terminal-emulation program, used for local configuration of the switch or router; it does not support hardware flow control.