What are the applications of virtualization?
In recent years, cloud native has been one of the hottest concepts in the IT industry, and many Internet giants have begun to embrace it actively. To talk about cloud native, we have to introduce the protagonist of this article: the container. Container technology can be said to underpin half of the cloud native ecosystem. As an advanced virtualization technology, the container has become standard infrastructure for software development and operations in the cloud native era. Before diving into containers, we might as well start with virtualization technology itself.

What is virtualization technology?

1961-IBM 709 implements time-sharing.

The first virtualization technology in computer history appeared in 1961. The IBM 709 divided CPU time into extremely short (on the order of 1/100 of a second) time slices, each of which was used to perform a different task. By polling these time slices, one CPU could be virtualized, or disguised, as multiple CPUs, with each virtual CPU appearing to run at the same time. This is the prototype of the virtual machine.

The function of the container is similar to that of the virtual machine. Both virtualize the computer at different levels, that is, they represent resources logically, freeing them from physical constraints and improving the utilization of physical resources. Virtualization is a broad, abstract concept that carries different meanings in different fields and at different levels.

Let's first look at the layered structure of a computer. For most software developers, a computer system can be divided into the following layers:

Application layer

Function library layer

Operating system layer

Hardware layer

The layers are stacked from bottom to top, and each layer provides an interface to the layer above it. At the same time, each layer only needs to know the interface of the layer below in order to call the lower layer's functionality and carry out its own operations, without knowing how the lower layer works internally.

However, because early computer manufacturers built hardware to their own standards and specifications, operating systems were poorly compatible across different hardware; likewise, software was poorly compatible across different operating systems. Therefore, developers artificially inserted abstraction layers between the existing layers:

Application layer

Function library layer

API abstraction layer

Operating system layer

Hardware abstraction layer

Hardware layer

For our purposes, virtualization means artificially creating a new abstraction layer between an upper and a lower layer, so that the software above can run directly in the new virtual environment. Simply put, virtualization uses the lower layer's existing functionality to fabricate the interfaces the upper layer expects, "fooling" the upper layer and thereby achieving cross-platform operation.

With this idea in mind, we can revisit several well-known virtualization technologies:

Virtual machine: virtualization between the hardware layer and the operating system layer.

By forging a hardware abstraction interface, the virtual machine grafts an operating system, and everything above it, onto the underlying hardware, providing almost the same functionality as a real physical machine. For example, by running an Android virtual machine on a Windows computer, we can use that computer to run applications built for Android.

Container: Virtualization technology that exists between the operating system layer and the function library layer.

By forging the operating system's interface, the container grafts the function library layer and everything above it onto the operating system. Take Docker as an example: it uses the Namespace and Cgroup features of the Linux kernel to build isolated containers that present each application with what looks like its own operating system. Simply put, if the virtual machine achieves cross-platform use by encapsulating and isolating an entire operating system, the container achieves it by encapsulating and isolating each application individually. As a result, containers are much smaller than virtual machines and, in principle, consume fewer resources.
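
To make this concrete, here is a minimal Go sketch of the namespace half of that pair. It is only an illustration of the kernel feature, not how Docker itself is implemented; it assumes a Linux host and root privileges, and starts a shell in new UTS, PID and mount namespaces.

    package main

    import (
        "log"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Start a shell in new UTS, PID and mount namespaces. Inside it,
        // the hostname can be changed independently of the host, and child
        // processes get their own PID numbering, with no virtual machine involved.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        cmd.SysProcAttr = &syscall.SysProcAttr{
            Cloneflags: syscall.CLONE_NEWUTS | syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
        }
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }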

JVM: virtualization between the function library layer and the application layer.

The Java virtual machine is also known for being cross-platform, and that cross-platform ability is, in fact, the work of virtualization. We know that Java programs ultimately call the operating system's function libraries. The JVM establishes an abstraction layer between the application layer and the function library layer: different JVM builds adapt to different operating systems' libraries, providing a unified runtime environment for programs and developers, so that the same program can call the function libraries of different operating systems.

With a general understanding of virtualization in hand, we can now trace the birth of the container. Although the concept only became popular worldwide after Docker appeared, countless pioneers had explored this forward-looking virtualization technology long before Docker.

The container's predecessor: the "jail"

1979-Bell Labs invents chroot.

One of the main characteristics of containers is process isolation. As early as 1979, Bell Labs found that when a piece of system software was compiled and installed, it changed variables across the whole test environment; the next build-install-test cycle required rebuilding and reconfiguring the environment from scratch. Bear in mind that in those days a 64 KB memory chip cost $419, so the cost of "quickly destroying and rebuilding infrastructure" was simply too high.

Developers began to wonder: could an independent environment be carved out within the existing operating system for rebuilding and testing software? Thus a system call named chroot (change root) was born.

chroot redirects the root directory of a process and its children to a new location in the file system. This separates each process's file access: the process can no longer reach files outside, so the isolated environment came to be called a chroot jail. From then on, as long as the required system files were copied into the chroot jail, software could be rebuilt and tested there. This advance opened the door to process isolation and gave Unix a simple form of system isolation; in particular, the jail idea laid the groundwork for the development of container technology. At this stage, however, chroot's isolation was limited to the file system; processes and the network were not yet isolated accordingly.
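
As an illustration of the chroot idea, here is a minimal Go sketch. It assumes a Unix-like host, root privileges, and a hypothetical directory /srv/jail that has already been populated with the binaries and libraries the shell needs.

    package main

    import (
        "log"
        "os"
        "os/exec"
        "syscall"
    )

    func main() {
        // Hypothetical jail directory prepared in advance.
        newRoot := "/srv/jail"

        // Redirect this process's root directory; files outside become unreachable.
        if err := syscall.Chroot(newRoot); err != nil {
            log.Fatal("chroot: ", err)
        }
        if err := os.Chdir("/"); err != nil {
            log.Fatal("chdir: ", err)
        }

        // Any program started now resolves all paths inside the jail only.
        cmd := exec.Command("/bin/sh")
        cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal("run: ", err)
        }
    }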

By the 21st century, virtual machine (VM) technology had become relatively mature, and people could develop across operating systems using VMs. However, because a VM must encapsulate and isolate an entire operating system, it consumes a great deal of resources and is cumbersome in production environments. People therefore began to pursue lighter-weight virtualization, and a series of process isolation technologies extending chroot appeared one after another.

2000-FreeBSD launches FreeBSD Jail.

Twenty-one years after chroot was born, FreeBSD 4.0 introduced FreeBSD Jail, a mini host environment sharing system that extended the existing chroot mechanism. In a FreeBSD Jail, programs have their own file system, independent processes and network space; processes inside the jail can neither access nor even see files, processes or network resources outside it.

2001-Linux VServer is born.

In 2001, Linux VServer (virtual server) brought virtualization to Linux systems. Linux VServer also adopts a jail-style mechanism: it can partition the file system, network addresses and memory of a computer system and allow multiple virtual units to run at the same time.

2004-Sun releases Solaris Containers.

The chroot idea was developed further here. In February 2004, Sun released the beta of Solaris 10, a Unix-like system, which added an operating-system-level virtualization feature called containers and refined it in the later official Solaris 10 release. Solaris Containers support x86 and SPARC systems, and Sun created a "zone" feature to work with the container: a zone is a completely isolated virtual server within a single operating system instance, and process isolation is achieved through the system resource controls and boundary separation that zones provide.

In 2005, OpenVZ was born.

Similar to Solaris Containers, OpenVZ provides virtualization, isolation, resource management and checkpointing by patching the Linux kernel. Each OpenVZ container has its own independent file system, users and user groups, process tree, network, devices and IPC objects.

Most of the process isolation technologies of this period followed the jail model and, by and large, achieved isolation of process-related resources. However, for lack of corresponding use cases in production and development at the time, these technologies remained confined to a small, niche world.

At that very moment, a new technology called the "cloud" was quietly sprouting...

The birth of "cloud"

From 2003 to 2006, Google published three system design papers, covering computation and storage, that defined a distributed computing architecture and laid the foundation for big data technology. Building on this, Google boldly put forward the "Google 101" program, which formally coined the concept of the "cloud". For a time, new terms such as "cloud computing" and "cloud storage" caused a sensation around the world. Amazon, IBM and other industry giants subsequently announced their own "cloud" plans, heralding the arrival of the cloud era.

It was also during this period that process isolation technology advanced to a new stage. In the cloud computing framework Google proposed, an isolated process is no longer just a static jail cut off from the outside world; like a portable shipping container, it also needs to be controlled and deployed, so as to deliver cross-platform operation, high availability and scalability in distributed application scenarios.

In 2006, Google released Process Containers, which were later renamed cgroups.

Process Containers were Google engineers' embryonic form of "container" technology, designed to limit, account for and isolate the resources (CPU, memory, disk I/O, network, and so on) of a group of processes. This is consistent with the goal of the process isolation technologies described above. With the technology now more mature, Process Containers were officially launched in 2006, merged into the Linux kernel mainline the following year and renamed cgroups, marking the moment the Linux camp began to re-examine and recognize the concept of the "container".
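
To show what "limit, account for and isolate" looks like in practice, here is a minimal Go sketch of cgroup-style resource limiting. It assumes a cgroup v2 hierarchy mounted at /sys/fs/cgroup and root privileges; the group name "demo" and the limits are arbitrary examples.

    package main

    import (
        "log"
        "os"
        "path/filepath"
        "strconv"
    )

    func main() {
        // Create a new cgroup under the v2 unified hierarchy.
        group := "/sys/fs/cgroup/demo"
        if err := os.MkdirAll(group, 0o755); err != nil {
            log.Fatal(err)
        }

        // Cap memory at 100 MB and the number of processes at 64.
        if err := os.WriteFile(filepath.Join(group, "memory.max"), []byte("104857600"), 0o644); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile(filepath.Join(group, "pids.max"), []byte("64"), 0o644); err != nil {
            log.Fatal(err)
        }

        // Move the current process into the group; the kernel now enforces
        // the limits on it and on everything it spawns.
        pid := strconv.Itoa(os.Getpid())
        if err := os.WriteFile(filepath.Join(group, "cgroup.procs"), []byte(pid), 0o644); err != nil {
            log.Fatal(err)
        }
        log.Println("process", pid, "now runs under the demo cgroup limits")
    }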

2008-The Linux container tool LXC is born.

In 2008, by combining the resource management capability of cgroups with the view isolation capability of Linux namespaces, a complete container technology, LXC (Linux Containers), appeared on the Linux kernel; it is the foundation on which today's widely used container technology is built. We know that a process can normally consume all the resources of its physical machine and crowd out the resources available to other processes. To curb this, Linux kernel developers provide two complementary features: namespaces restrict what a process can see, while cgroups limit the resources it can use. Although the capabilities LXC offers users are very similar to the earlier Linux sandbox technologies mentioned above, such as Jails and OpenVZ, the rapid takeover of the commercial server market by Linux distributions allowed many cloud computing pioneers, including Google, to make full use of this early container technology, giving LXC far more room to grow in cloud computing than its predecessors ever had.

In the same year, Google launched GAE (Google App Engine), the first application hosting platform based on LXC, offering a development platform as a service for the first time. GAE is a distributed platform service: through virtualization, Google provides users with a development environment, server platform and hardware resources. Users can develop their own applications on the platform and distribute them through Google's servers and network resources, greatly reducing their own hardware requirements.

It is worth mentioning that in GAE Google used Borg (the predecessor of Kubernetes), a tool for orchestrating and scheduling LXC containers. Borg is the large-scale cluster management system used inside Google: it runs hundreds of thousands of tasks from thousands of different applications while managing tens of thousands of machines at the same time. Borg achieves high resource utilization through permission management, resource sharing and performance isolation; it supports highly available applications, reduces the probability of failure through its scheduling strategy, and provides a job description language, real-time job monitoring and analysis tools. If the isolated environments are the shipping containers, Borg can be called the earliest dockyard, and LXC plus Borg the earliest container orchestration framework. At this point, the container was no longer a simple process isolation feature, but a flexible, lightweight way of packaging programs.

2011-Cloud Foundry launches Warden.

Cloud Foundry is a cloud platform launched in 2009 by the well-known virtualization vendor VMware, and the first project in the industry to formally define the PaaS (Platform as a Service) model. Ideas such as "a PaaS project lets developers focus on business logic rather than infrastructure by directly managing, orchestrating and scheduling applications" and "a PaaS project packages and starts applications through container technology" all originate from Cloud Foundry. Warden is the resource management container at the core of Cloud Foundry: at first it was a wrapper around LXC, and it was later rearchitected to run directly on cgroups and Linux namespaces.

As the cloud services market kept growing, PaaS projects appeared one after another, container technology entered an era of explosive growth, and a wave of container-focused startups emerged. Many people know the rest of the story: a startup named Docker was born, and Docker became almost synonymous with "container".

The birth of Docker

2013-Docker is born.

Docker began as an internal project of a PaaS company named dotCloud, which later renamed itself Docker. Like Warden, Docker used LXC at first, and then replaced LXC with its own libcontainer. Unlike other pure container projects, Docker introduced a whole ecosystem for managing containers, including an efficient, layered container image model, global and local container registries, a clear REST API, a command line tool and more.

Docker itself started out essentially as a wrapper around LXC that provides an easy-to-use container interface. Its biggest innovation is the container image: Docker packages an application, together with the environment it needs to run, into a single image, and running that image produces a virtual container.

More importantly, the Docker project borrowed ideas from Git and introduced the concept of "layers" into container image builds. By stacking different layers, an image can accumulate changes and be versioned, copied, shared and modified just like ordinary code. By building Docker images, developers can distribute software directly through an image hosting registry such as Docker Hub.
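
The layering idea can be sketched in a few lines of Go. This is only a toy model (real images are stored as filesystem diffs and assembled by a union filesystem such as overlayfs): an image is an ordered stack of layers, and resolving a path walks from the top layer down, so later layers override earlier ones.

    package main

    import "fmt"

    // layer maps file paths to file contents.
    type layer map[string]string

    // image is a stack of layers, bottom layer first.
    type image []layer

    // resolve returns the content of path as seen through the whole stack:
    // the topmost layer that contains the path wins.
    func (img image) resolve(path string) (string, bool) {
        for i := len(img) - 1; i >= 0; i-- {
            if content, ok := img[i][path]; ok {
                return content, true
            }
        }
        return "", false
    }

    func main() {
        base := layer{"/etc/os-release": "debian"} // base layer
        app := layer{"/app/server": "v1 binary"}   // application layer
        patch := layer{"/app/server": "v2 binary"} // patch layer overrides the one below
        img := image{base, app, patch}

        content, _ := img.resolve("/app/server")
        fmt.Println(content) // prints "v2 binary"
    }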

In other words, the birth of Docker solved not only the containerization of software development but also the problem of software distribution, providing a complete solution for the software life cycle in the cloud era.

Docker soon became famous across the industry and was chosen by many companies as the standard for building cloud computing infrastructure. Containerization became the hottest frontier technology, and the ecosystem around containers grew in full swing.

The battle for the container world

The rise of a new technology brought a new market with it, and a fight over the blue ocean of containers was inevitable.

The release of CoreOS

After Docker took off, CoreOS appeared at the end of the same year. CoreOS is a lightweight operating system based on the Linux kernel, designed specifically for building computer cluster infrastructure in the cloud computing era, with automation, easy deployment, security, reliability and scalability as its hallmarks. At the time it carried a very conspicuous label: the operating system designed for containers.

With the help of Docker, CoreOS quickly became popular in cloud computing. For a time, Docker plus CoreOS was the industry's golden pairing for container deployment. CoreOS, in turn, contributed greatly to Docker's promotion and community building.

However, the fast-growing Docker seemed to have bigger "ambitions". Unwilling to remain just "a simple basic unit", Docker developed a series of related container components, acquired several container technology companies and started building its own container platform ecosystem. Obviously, this put it in direct competition with CoreOS.

2014-CoreOS releases the open source container engine rkt.

At the end of 2014, CoreOS launched its own container engine, rkt (Rocket), in an attempt to compete with Docker. Like Docker, rkt helps developers package applications and their dependencies into portable containers, simplifying deployment tasks such as environment setup. The difference is that rkt omits the "friendly" extras Docker offers enterprise users, such as cloud service acceleration tools and clustering systems; what rkt aimed to be, instead, was a purer industry standard.

2014-Google launches an open source container orchestration engine, Kubernetes.

To address container deployment and management for large-scale clusters in hybrid cloud scenarios, Google launched the container cluster management system Kubernetes (K8s for short) in June 2014. K8s descends from the Borg system mentioned earlier and manages and orchestrates containers in hybrid cloud production environments. On top of containers, Kubernetes introduces the Pod, which lets different containers communicate with each other and be grouped and deployed together.
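
To make the Pod idea concrete, here is a sketch that declares a Pod grouping two containers using the Kubernetes Go API types. It assumes the k8s.io/api and k8s.io/apimachinery modules are available, and the names and images are placeholders; containers in the same Pod share a network namespace, so they can reach each other over localhost.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // A Pod that groups a web server with a logging sidecar; Kubernetes
        // schedules and deploys the two containers together as one unit.
        pod := corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{Name: "web-with-sidecar"},
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{
                    {Name: "web", Image: "nginx:1.25"},
                    {Name: "log-agent", Image: "busybox:1.36",
                        Command: []string{"sh", "-c", "tail -f /dev/null"}},
                },
            },
        }
        fmt.Printf("pod %s groups %d containers\n", pod.Name, len(pod.Spec.Containers))
    }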

Thanks to Google's deep experience in building large-scale cluster infrastructure, K8s, born out of Borg, quickly became an industry standard and an essential tool for container orchestration. As a pivotal member of the container ecosystem, Google sided with CoreOS in the Docker-versus-rkt dispute and treated K8s support for rkt as an important milestone.

2015-Docker launches the container cluster management tool Docker Swarm.

In response, Docker released its own container cluster management tool, Docker Swarm, and later built swarm mode directly into the Docker engine starting with Docker 1.12.

Then, in April 2015, Google invested 12 million US dollars in CoreOS and worked with it to release Tectonic, the first enterprise distribution of Kubernetes. From then on, the container world split into two camps: the Google camp and the Docker camp.

The competition between the two camps intensified and gradually expanded into a fight over who would set industry standards.

June 2015-Docker takes the lead in establishing the OCI.

Docker and the Linux Foundation set up the OCI (Open Container Initiative), which aims to "create and maintain formal specifications for the container image format and container runtime" and to establish open industry standards around container formats and runtimes.

July 2015-Google takes the lead in establishing the CNCF.

In July of the same year, Google, with its focus on the "cloud", joined with the Linux Foundation to establish the CNCF (Cloud Native Computing Foundation), with Kubernetes as the first open source project brought under CNCF governance, aiming to build cloud native computing, that is, infrastructure built around microservices, containers and dynamically scheduled applications, and to promote its widespread adoption.

These two foundations, built around container-related open source projects, have played an important role in the later development of the cloud. They complement each other and have produced a series of de facto industry standards, becoming among the most active open source organizations today.

The Kubernetes ecosystem unifies the field

Although Docker held off rkt for years and remained the undisputed leader among container engines, as an ecosystem it lost to Google's Kubernetes in the container orchestration battle that followed.

As more and more developers used Docker to deploy containers, orchestration platforms grew ever more important. After Docker's rise, a large number of open source projects and proprietary platforms appeared to tackle container orchestration: Mesos, Docker Swarm and Kubernetes each offered different abstractions for managing containers. For software developers, choosing an orchestration platform felt like a gamble: if the chosen platform lost the coming competition, later development would lose its market too, much as in the early mobile OS contest among Android, iOS and Windows Phone, where only the winner gained a larger market and the losers all but disappeared. The battle of the container orchestration platforms had begun.

2016-CRI-O is born.

In 2016, the Kubernetes project introduced the CRI (Container Runtime Interface), which lets the kubelet (the cluster node agent that creates Pods and starts containers) use any OCI-compliant container runtime without recompiling Kubernetes. Built on the CRI, an open source project named CRI-O was born to provide Kubernetes with a lightweight runtime.

CRI-O lets developers run containers directly from Kubernetes, meaning Kubernetes can manage containerized workloads without relying on a traditional container engine such as Docker. On the Kubernetes platform, any container that conforms to the OCI standard (not necessarily a Docker one) can be run by CRI-O, which returns the container to its most basic role: packaging and running cloud native programs.

At the same time, the appearance of CRI-O led people who use containers for software management and operations to find that the Kubernetes technology stack (the scheduler, the CRI, CRI-O and so on) is better suited to managing complex production environments than Docker's own container engine. It can be said that CRI-O placed the container orchestrator at the center of the container technology stack and thereby reduced the importance of the container engine.

While K8s was pulling ahead, Docker stumbled in promoting its own orchestration platform, Docker Swarm. At the end of 2016, rumors circulated in the industry that Docker might change the Docker standard to better suit Swarm, which pushed many developers toward Kubernetes, the more broadly compatible choice, when picking a platform.

From 2017 on, more vendors were willing to bet on K8s and invest in building the K8s ecosystem. The container orchestration dispute ended in victory for the Google camp. Meanwhile, the CNCF, with K8s at its core, began to grow rapidly into the hottest open source foundation of the moment. In recent years, Chinese technology companies including Alibaba Cloud, Tencent and Baidu have joined the CNCF, embracing container technology and cloud native across the board.

Conclusion

From laboratory explorations of process isolation decades ago to the cloud native infrastructure now spread across production environments, container technology has grown, so to speak, from a single small container into a large modern port, embodying the efforts of several generations of developers. It is safe to predict that container technology will remain an important piece of infrastructure for software development and operations for a long time to come.