First of all, Kubernetes is a leading new distributed-architecture solution built on container technology. Although the project is still young, it distills more than ten years of Google's experience running container technology at scale. To be precise, Kubernetes is an open-source version of Borg, a secret weapon that Google kept strictly confidential for over a decade. Borg is the famous large-scale cluster management system used inside Google; built on container technology, it automates resource management and maximizes resource utilization across Google's data centers. For more than ten years Google managed vast application clusters with Borg, but because Google employees sign confidentiality agreements that remain binding even after they leave, the outside world could learn little about Borg's internal design. Only in April 2015, amid the high-profile promotion of Kubernetes, did Google finally publish the long-rumored Borg paper, and everyone was able to learn more about it. It is precisely because Kubernetes stands on Borg's shoulders, absorbing its experience and lessons from the preceding decade, that it became a blockbuster and quickly came to dominate the container field.

Secondly, if we design our system following the ideas of Kubernetes, the low-level code and functional modules that have little to do with business logic in a traditional system architecture immediately disappear from our sight. We no longer have to worry about selecting, deploying, and implementing a load balancer, introducing or developing a complex service-governance framework ourselves, or building service monitoring and fault-handling modules.
In a word, by adopting the solution Kubernetes provides, we can save no less than 30% of development cost and focus more on the business itself. Moreover, thanks to the powerful automation mechanisms Kubernetes provides, the difficulty and cost of the system's later operation and maintenance are greatly reduced.
Then, Kubernetes is an open development platform. Unlike J2EE, it is not tied to any particular language and does not impose a programming interface, so services written in Java, Go, C++, or Python can all be mapped to Kubernetes Services and interact through the standard TCP protocol. In addition, the Kubernetes platform does not interfere with existing programming languages, frameworks, or middleware, so existing systems can be easily upgraded and migrated to it.
Finally, Kubernetes is a complete distributed system support platform. It has comprehensive cluster management capabilities, including multi-level security protection and admission mechanisms, multi-tenant application support, transparent service registration and service discovery, a built-in intelligent load balancer, powerful fault discovery and self-healing, rolling upgrades and online scaling of services, an extensible automatic resource-scheduling mechanism, and multi-granularity resource quota management. At the same time, Kubernetes provides a complete set of management tools covering development, deployment, testing, and operations monitoring. Kubernetes is therefore a brand-new container-based distributed architecture solution and a one-stop, complete platform for developing and supporting distributed systems.
Before starting the Hello World tour in this chapter, we should first learn some basic Kubernetes concepts so that we can understand the solutions it provides.
In Kubernetes, the Service is the core of the distributed cluster architecture, and a Service object has the following key characteristics.
It has a unique name (such as mysql-server).
It has a virtual IP (cluster IP, Service IP, or VIP) and a port number.
It provides some kind of remote service capability.
It is mapped to the set of container applications that provide this service capability.
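Taken together, these characteristics correspond directly to the fields of a Service manifest. As a hedged illustration, the sketch below expresses a minimal Service as a plain Python dict mirroring the YAML manifest structure; the name mysql-server, port 3306, and the name=mysql selector are assumed values for the example, not taken from a real cluster.

```python
# A minimal, illustrative Kubernetes Service definition expressed as a
# Python dict that mirrors the YAML manifest. Name, port, and selector
# values are assumptions for the example.
mysql_service = {
    "apiVersion": "v1",
    "kind": "Service",
    "metadata": {"name": "mysql-server"},   # the Service's unique name
    "spec": {
        "ports": [{"port": 3306}],          # port exposed on the virtual cluster IP
        "selector": {"name": "mysql"},      # maps the Service to matching Pods
    },
}

# The cluster assigns the virtual IP (cluster IP) itself; clients then
# reach the backing Pods through <cluster IP>:3306.
print(mysql_service["metadata"]["name"])
```

Note that the manifest never states the virtual IP: it is allocated by the cluster when the Service is created, which is exactly why clients can rely on it staying stable.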
At present, a Service's processes provide their capabilities to the outside world over Socket communication (for example Redis, Memcached, MySQL, a web server, or a TCP server process implementing some specific business function). Although a Service is usually backed by multiple related service processes, each with its own endpoint (IP + port), Kubernetes lets us connect to a specified Service through a single virtual endpoint (cluster IP + Service port). Thanks to the transparent load balancing and failure recovery built into Kubernetes, no matter how many service processes run in the backend, or whether one of them is redeployed to another machine after a failure, callers of the Service are unaffected. More importantly, once the Service itself is created it does not change, which means we no longer need to worry about service IP addresses changing inside a Kubernetes cluster.
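The transparent load balancing described above can be pictured with a toy round-robin router: the client always dials the Service's fixed virtual endpoint, while calls are spread over whatever backend endpoints currently exist. All addresses below are invented for illustration; in a real cluster this routing is performed by kube-proxy, not by application code.

```python
# Toy model of a Service's transparent load balancing: a fixed virtual
# endpoint in front of a rotating set of backend endpoints. The IP
# addresses are hypothetical.
import itertools

endpoints = ["10.0.1.5:3306", "10.0.2.7:3306", "10.0.3.9:3306"]
rr = itertools.cycle(endpoints)  # round-robin over current backends

def route(_virtual_endpoint: str) -> str:
    """Pick the next backend for a call made to the Service's cluster IP."""
    return next(rr)

# Four calls to the same virtual endpoint land on rotating backends,
# wrapping around after the third call.
picks = [route("10.254.0.10:3306") for _ in range(4)]
print(picks)
```

If a backend were replaced after a failure, only the `endpoints` list would change; callers would keep dialing the same virtual endpoint.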
Containers provide powerful isolation, so it makes sense to isolate the group of processes backing a Service inside containers. To this end, Kubernetes designed the Pod object: each service process is packaged into a corresponding Pod and becomes a container running inside it. To establish the relationship between Services and Pods, Kubernetes first attaches a label to each Pod, for example name=mysql on the Pod running MySQL and name=php on the Pod running PHP, and then defines a label selector on the corresponding Service. For instance, the MySQL Service's label selector is name=mysql, meaning that the Service applies to every Pod carrying the name=mysql label. This neatly solves the problem of associating Services with Pods.
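The label-selector association can be sketched in a few lines: an equality-based selector matches any Pod whose labels contain every key/value pair in the selector. The Pod names and labels below are hypothetical.

```python
# Sketch of equality-based label-selector matching, as used to associate
# a Service with its Pods. Pod names and labels are made up.
def matches(selector: dict, pod_labels: dict) -> bool:
    """Return True if every selector key/value pair appears in the Pod's labels."""
    return all(pod_labels.get(k) == v for k, v in selector.items())

pods = [
    {"name": "mysql-0", "labels": {"name": "mysql"}},
    {"name": "php-0",   "labels": {"name": "php"}},
]

mysql_selector = {"name": "mysql"}
backends = [p["name"] for p in pods if matches(mysql_selector, p["labels"])]
print(backends)  # → ['mysql-0']
```

A Pod may carry extra labels beyond those the selector asks for; only the selector's own pairs must match, which is what makes labels a loose, flexible grouping mechanism.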
Let's briefly introduce the concept of the Pod. First, Pods run in an environment called a Node, which can be a physical machine or a virtual machine in a private or public cloud; usually hundreds of Pods run on a single Node. Second, every Pod runs a special container called Pause, while the other containers are business containers. The business containers share the Pause container's network stack and mounted volumes, so communication and data exchange among them is more efficient; we can take full advantage of this when designing a system by placing a group of closely related service processes in the same Pod. Finally, note that not every Pod and the containers running in it can be mapped to a Service: only the Pods that actually provide a service (whether to callers inside or outside the cluster) are mapped to one.
For cluster management, Kubernetes divides the machines in a cluster into one Master and a number of Nodes. On the Master runs a group of cluster-management processes: kube-apiserver, kube-controller-manager, and kube-scheduler. These processes implement management functions for the whole cluster, such as resource management, Pod scheduling, elastic scaling, security control, system monitoring, and error correction, all of which are performed automatically. A Node is a worker machine in the cluster that runs the real applications; the smallest unit of execution Kubernetes manages on a Node is the Pod. Each Node runs the kubelet and kube-proxy service processes of Kubernetes, which are responsible for creating, starting, monitoring, restarting, and destroying Pods, and for implementing a software-mode load balancer.
Finally, consider two major headaches in traditional IT systems, service scaling and service upgrades, and the new solution Kubernetes provides. Scaling a service involves resource allocation (choosing which Node to scale onto), instance deployment, and startup. In a complex business system, these two problems are basically handled by manual, step-by-step operation, which is time-consuming and laborious, and makes quality hard to guarantee.
In a Kubernetes cluster, you only need to create an RC (Replication Controller) for the Pods backing the Service that needs to scale, and headaches such as service scaling and service upgrades are solved. An RC definition file contains three key pieces of information.
The definition of the target Pod.
The number of replicas the target Pod needs to run.
The label of the target Pod to be monitored.
After the RC is created (its Pods are created automatically), Kubernetes continuously monitors the status and number of Pod instances carrying the label defined in the RC. If the number of instances falls below the defined replica count, a new Pod is created from the Pod template in the RC and scheduled onto a suitable Node to run, until the number of Pod instances reaches the target. This process is completely automatic and requires no human intervention. With an RC, scaling a service becomes a purely simple numbers game: just modify the replica count in the RC. Subsequent service upgrades are likewise completed automatically by modifying the RC.
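The RC's behavior is an instance of a reconciliation loop: compare the observed state with the desired replica count and act to close the gap. The toy sketch below models only the counting logic with made-up Pod names; the real controller creates Pods from the RC's template and leaves scheduling onto Nodes to the scheduler.

```python
# Toy sketch of the reconciliation idea behind a Replication Controller:
# compare observed Pods with the desired replica count and create or
# delete Pods to close the gap. A simplified model, not the real
# controller-manager logic; Pod names are invented.
def reconcile(desired: int, running: list) -> list:
    """Return the running Pod list adjusted toward the desired replica count."""
    running = list(running)
    while len(running) < desired:        # too few: create from the Pod template
        running.append(f"mysql-{len(running)}")
    while len(running) > desired:        # too many: delete the excess
        running.pop()
    return running

print(reconcile(3, ["mysql-0"]))              # scale up to 3 replicas
print(reconcile(1, ["mysql-0", "mysql-1"]))   # scale down to 1 replica
```

Changing the replica count and re-running the loop is all that "scaling" means here, which is why the text calls it a simple numbers game.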