
Kubernetes is the absolute easiest and simplest way to meet the needs of complex web applications.

-- Scott McCarty (Author)

In the late 1990s and early 2000s, it was fun to work on large websites. I'm reminded of my time at American Greetings Interactive, where, on Valentine's Day, we had one of the top 10 websites on the internet (measured by web traffic). We provided e-cards for AmericanGreetings.com, BlueMountain.com, and others, as well as e-cards for partners such as MSN and AOL. Veteran employees of the organization still remember the epic battles with other e-card sites like Hallmark. As a side note, I also ran large websites for Holly Hobbie, Care Bears, and Strawberry Shortcake.

I remember it like it was yesterday: the first time we had a real problem. Normally, our front doors (routers, firewalls, and load balancers) carried about 200 Mbps of inbound traffic. But suddenly, the Multi Router Traffic Grapher (MRTG) graphs spiked to 2 Gbps within minutes. I ran around like crazy. I understood our entire technology stack, from the routers, switches, firewalls, and load balancers, to the Linux/Apache web servers, to our Python stack (a meta-version of FastCGI), and on to the network file system (NFS) servers. I knew where all the configuration files were, I had access to all the administration interfaces, and I was a seasoned, hard-working sysadmin with years of experience troubleshooting complex problems.

However, I couldn't figure out what was going on...

Five minutes feels like an eternity when you are frantically typing commands across a thousand Linux servers. I knew the site could come crashing down at any second, because it's fairly easy to overwhelm a thousand-node cluster when it's divided up into smaller clusters.

I quickly ran over to my boss's desk and explained the situation. He barely looked up from his email, which frustrated me. Then he glanced up, smiled, and said, "Yeah, marketing is probably running an ad campaign. This happens sometimes." He told me to set a special flag in the application that would offload traffic to Akamai. I ran back to my desk, set the flag on a thousand web servers, and within minutes the site was back to normal. Disaster averted.

I could share 50 stories similar to this one, but you're probably asking yourself: "Where is this operations story going?"

The point is that we had a business problem. Technical problems become business problems when they stop you from doing business. Stated another way, you can't process customer transactions if your website is down.

So, what does any of this have to do with Kubernetes? Everything! The world has changed. Back in the late 1990s and early 2000s, only large websites had large-scale problems. Now, with microservices and digital transformation, every enterprise faces a large-scale problem, and likely multiple large-scale problems.

Your business needs to be able to manage a complex, web-scale property built from many different, often sophisticated services created by many different people. Your websites need to handle traffic dynamically, and they need to be secure. These properties need to be API-driven at all layers, from the infrastructure up to the application layer. Enter Kubernetes.

Kubernetes isn't complicated; your business problems are. When you want to run applications in production, there is a minimum level of complexity required to meet performance (scalability, jitter, etc.) and security requirements. Things like high availability (HA), capacity requirements (N+1, N+2, N+100), and eventually consistent data technologies become a requirement. These are the production requirements of every company undergoing digital transformation, not just the large websites like Google, Facebook, and Twitter.

In the old days, back at American Greetings, every time we onboarded a new service, it looked something like this. All of it was handled by the web operations team, and none of it was offloaded to other teams through ticketing systems. This was DevOps before DevOps existed:

  1. Configure DNS (often an internal service layer and a public-facing external one)
  2. Configure load balancers (often internal services and public-facing ones)
  3. Configure shared access to files (large NFS servers, clustered file systems, etc.)
  4. Configure clustered software (databases, service layers, etc.)
  5. Configure the web server cluster (could be 10 or 50 servers)

Most of this configuration was automated with configuration management, but it was still complex because every one of these systems and services had different configuration files with completely different formats. We investigated tools like Augeas to simplify this, but we determined that using translators to try to standardize a bunch of different configuration files was an anti-pattern.

Today, with Kubernetes, onboarding a new service essentially looks like this:

  1. Configure a Kubernetes YAML/JSON file.
  2. Submit it to the Kubernetes API (kubectl create -f service.yaml).

Kubernetes vastly simplifies the onboarding and management of services. The service owner, be it a sysadmin, developer, or architect, can create a YAML/JSON file in the Kubernetes format. With Kubernetes, every system and every user speaks the same language. All users can commit these files to the same Git repository, enabling GitOps.
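As a sketch of what step 1 above might look like, here is a minimal Service definition; the name, label, and ports are hypothetical, not taken from the article:

```yaml
# service.yaml -- hypothetical example of onboarding a new service
apiVersion: v1
kind: Service
metadata:
  name: ecard-web          # hypothetical service name
spec:
  selector:
    app: ecard-web         # routes traffic to pods carrying this label
  ports:
  - protocol: TCP
    port: 80               # port the Service exposes
    targetPort: 8080       # port the pods actually listen on
```

Step 2 is then a single command: `kubectl create -f service.yaml` (or `kubectl apply -f service.yaml` for declarative, repeatable updates).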

Moreover, services can be deprecated and deleted. Historically, it was terrifying to remove DNS entries, load-balancer entries, web-server configurations, and so on, because you would almost certainly break something. With Kubernetes, everything is namespaced, so an entire service can be removed with a single command (for example, deleting its namespace). You still need to be sure other applications don't use it (a downside of microservices and functions-as-a-service [FaaS]), but you can be much more confident that removing a service won't break the infrastructure environment.

Build, manage, and use Kubernetes

Too many people focus on building and managing Kubernetes rather than using it (see Kubernetes is a dump truck for details).

Building a simple Kubernetes environment on a single node is no more complicated than installing a LAMP stack, yet we endlessly debate the build-versus-buy question. It's not that Kubernetes is hard; it's that running applications at scale with high availability is hard. Building a complex, highly available Kubernetes cluster is difficult because building any cluster at this scale is difficult. It takes planning and a lot of software. Building a simple dump truck isn't that complicated, but building one that can reliably carry 10 tons of garbage at 200 mph is.

Managing Kubernetes can be complicated because managing large, web-scale clusters is complicated. Sometimes it makes sense to manage this infrastructure yourself; sometimes it doesn't. Since Kubernetes is a community-driven, open source project, it gives the industry the ability to manage it in many different ways. Vendors can sell hosted versions, and users can manage it themselves if they need to. (But you should question whether you actually need to.)

Using Kubernetes is by far the easiest way to run a large-scale website. Kubernetes is democratizing the ability to run large, complex sets of web services, just as Linux did with Web 1.0.

Since time and money are a zero-sum game, I recommend focusing on using Kubernetes. Spend your time and money mastering Kubernetes primitives, or on the best ways to handle liveness and readiness probes (another example of how large, complex services are hard). Don't focus on building and managing Kubernetes; plenty of vendors can help you with that.
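As a rough sketch of the probes mentioned above: liveness and readiness probes are declared per container in a Pod spec. The image, paths, and timings below are hypothetical illustrations, not values from the article:

```yaml
# pod.yaml -- hypothetical Pod showing liveness and readiness probes
apiVersion: v1
kind: Pod
metadata:
  name: web                        # hypothetical name
spec:
  containers:
  - name: web
    image: example.com/web:1.0     # hypothetical image
    livenessProbe:                 # kubelet restarts the container if this fails
      httpGet:
        path: /healthz
        port: 8080
      initialDelaySeconds: 10
      periodSeconds: 5
    readinessProbe:                # pod receives no traffic until this passes
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
```

Getting the delays, periods, and failure thresholds right for a real service is exactly the kind of hard-won operational tuning the author is describing.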

Conclusion

I remember troubleshooting countless problems like the one described at the beginning of this article: NFS in the Linux kernel of that era, our homegrown CFEngine, redirect problems that only showed up on certain web servers, and so on. There was no way a developer could help me with any of these problems. In fact, it was rare that a developer could even get into the system and help as a second set of eyes unless they had the skills of a senior sysadmin. There was no console with graphics or "observability" – observability lived in my brain and the brains of the other sysadmins. Today, with Kubernetes, Prometheus, Grafana, and the rest, all of that has changed.

The key takeaways are:

  1. Times are different. All web applications are now large, distributed systems. Every website now has the scalability and HA requirements that a site as complex as AmericanGreetings.com had back then.
  2. Running large, distributed systems is hard. Period. This is a business requirement, not a Kubernetes problem. Using a simpler orchestrator is not the answer.

Kubernetes is the absolute easiest and simplest way to meet the needs of complex web applications. This is the era we live in, and it's where Kubernetes excels. You can debate whether you should build or manage Kubernetes yourself. There are plenty of vendors that can help you build and manage it, but it's hard to argue that it's not the easiest way to run complex web applications at scale.


via: https://opensource.com/article/19/10/kubernetes-complex-business-problem

Author: Scott McCarty Topic: lujun9972 Translator: laingke Proofreading: wxy

This article was translated by LCTT and first published by Linux China.
