The Kubernetes project just turned six years old. But what is Kubernetes? Where does it come from? And what problem is it trying to solve? 

When I first started working at Dynatrace, nine years ago, our customers were using our solution to get deep end-to-end visibility into environments we would now refer to as mostly monolithic. The bold ones were building distributed architectures using SOA and trying to implement ESBs, which all looked good on paper but ended up being difficult to implement. 

But a perfect storm was brewing on the horizon. As John Arundel and Justin Domingus observe in their book Cloud Native DevOps with Kubernetes, three revolutions have been feeding on each other: 

  • Cloud: Automation of infrastructure and services on-demand, pay as you use model 
  • DevOps and Continuous Delivery: Revolution in process, in the way people and organizations that deliver software work 
  • Containers and Microservices: Revolution in the architecture of distributed systems 

Cloud-native refers to cloud-based, containerized, distributed systems, made up of cooperating microservices, dynamically managed by automated infrastructure as code. 

Our product leadership at Dynatrace saw the coming of this cloud-native era and the challenges it would bring to organizations, so they made the decision to re-invent our solution and transform the way we operate. 

A bit over three years ago, I was showing a customer this new solution we were starting to offer, built for hybrid enterprises with cloud-native in mind. Explaining the need to think about digital transformation and future projects, as competition is ferocious and things are evolving fast, I told the customer they should be looking at implementing CI/CD and running their apps in containers. They responded that, as they were in the fashion retail business, they weren't doing containers and didn't think they'd ever become cloud-native. Two years later, I came across that same customer at the AWS re:Invent conference, and they told me they were in the midst of a big project moving their eCommerce to AWS, targeting running a big chunk of it on Kubernetes the following year. 

A change was happening, and it was happening fast; more organizations were adopting containerized deployment methods (such as Docker), along with DevOps practices and CI/CD pipelines, to confidently and quickly deliver business-differentiating features in an increasingly competitive market. At the start, Docker was mainly used by developers for testing purposes; the real challenge was managing containers at scale in real-world production environments. 

Why are containers so hot?  

Containers provide a lightweight mechanism to package application code and its dependencies, creating a small, self-contained, but fully functional environment to run a workload (an app or service), isolated from the other applications running on the same machine. These packages, known as container images, are immutable, and because they abstract the environment on which they run, they're portable and can be moved from one environment to another, regardless of the underlying platform: physical or virtual machine, on-premises, in a data center, or in the public cloud.  

Container runtime engines (such as Docker) leverage OS-level virtualization capabilities offered by the kernel to create those isolated spaces. Because the concern of environmental conflicts is removed, you can run multiple containers on the same machine and achieve higher resource utilization, driving infrastructure costs down. 
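To make the packaging idea concrete, here is a minimal Dockerfile sketch for a hypothetical Python service (the file names and base image are illustrative, not from any specific project):

```dockerfile
# Build a small, self-contained image for a hypothetical Python service.
FROM python:3.9-slim        # base layer: minimal userland plus the Python runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # dependencies are baked into the image
COPY app.py .
CMD ["python", "app.py"]    # the single workload this container runs
```

Once built with `docker build -t my-service .`, the resulting image is immutable: the same artifact runs identically on a laptop, an on-premises VM, or a cloud instance.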

But on its own, it’s not sufficient.  

What’s missing here? Well, many things can happen with containers.  

As containers are the vehicle of choice for microservices, you’re not expecting to run a full-fledged enterprise application in a single container; instead, you’ll have multiple containers running on different machines making up a distributed system. But how will you set up the communication? Who manages the networking aspects? How do you make this system resilient and fault-tolerant? How do you make it scalable?  

Containers cannot reach their full potential on their own. Enter the orchestration platform.  

Think of it as a classical orchestra: replace the composer with the software architect, the conductor with the container platform, the score with the workload, the musicians with containers, hand gestures with API-based messages, the performance with the current system state, and the vision with the desired system state.  
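In Kubernetes terms, the "score" is a declarative manifest describing the desired state, and the platform continuously reconciles the current state against it. A minimal sketch of such a manifest (the application name and image reference are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-frontend            # hypothetical service name
spec:
  replicas: 3                    # desired state: keep 3 copies running at all times
  selector:
    matchLabels:
      app: shop-frontend
  template:
    metadata:
      labels:
        app: shop-frontend
    spec:
      containers:
      - name: frontend
        image: example.com/shop-frontend:1.0   # illustrative image reference
        ports:
        - containerPort: 8080
```

If a container crashes or a machine disappears, the platform notices the divergence between current and desired state and schedules replacements; no hand gestures required.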

If there was any company that was positioned to understand those problems and limitations of containers before anyone else, it was Google. 

Google has been running production workloads in containers for longer than any other organization. In the beginning, to operate their infrastructure at high utilization, they moved their most intensive services into containers. To overcome the challenges of efficiently managing such deployments at massive scale, they invented a container orchestration platform known as Borg, which was Google's secret weapon for a decade until 2014, when they announced Kubernetes, an open-source project based on the experience and lessons learned from Borg and Omega.   

Since then, Kubernetes has taken the container world by storm, becoming the de facto standard for container orchestration and leaving Swarm and Mesos far behind. According to the CNCF Survey 2019, conducted across organizations of all sizes, 8 out of the top 10 most used container management tools were distributions of Kubernetes. Google eventually donated the project to the Cloud Native Computing Foundation (CNCF), while remaining its largest contributor, but giants such as Microsoft, Intel, and Red Hat are also on board.  

Some of the most popular Kubernetes distributions include: 

Kubernetes managed by cloud service providers 

  • GKE (Google Cloud Platform) 
  • EKS (Amazon Web Services) 
  • AKS (Microsoft Azure) 
  • IKS (IBM Cloud) 

Kubernetes enterprise distributions 

  • Red Hat OpenShift Container Platform 
  • Rancher Kubernetes Engine 
  • Mirantis Docker Kubernetes Service (fka Docker EE)  
  • VMware Tanzu Kubernetes Grid (fka PKS) 
  • D2iQ Mesosphere Kubernetes Engine 

Conclusion 

Kubernetes is hard: implementation is complex, and operating it at the enterprise level is not a walk in the park, requiring adequate monitoring and a different approach than with classic stacks. In our next article, we'll look at the concepts behind the architecture of a Kubernetes platform.  

Further your Kubernetes knowledge 

This syndicated content is provided by Dynatrace and was originally posted at https://www.dynatrace.com/news/blog/what-is-kubernetes-orchestrating-the-world-at-the-age-of-a-first-grader/