Kubernetes Succinctly®
by Rahul Rai and Tarun Pabbi

CHAPTER 9

Next Steps



Kubernetes is one of the fastest-growing technologies in the DevOps world. In the world of containers, Kubernetes has become as significant as Docker. As covered in Chapter 2, Docker has introduced built-in support for Kubernetes. However, like any other technology, it is still evolving and under continuous development.

In this book, we have covered the primary components needed to deploy applications in Kubernetes. If you are interested in learning more, or want to know what new features are on the horizon, you can explore some of the topics presented in the following sections.

Advanced Kubernetes networking

At the core of Kubernetes is its networking model. In Chapter 2, we saw that before adding nodes to our cluster, we need a pod-network add-on deployed in the cluster, which extends Kubernetes' built-in networking to provide pod-to-pod communication. Containers within the same pod can communicate with each other over localhost. Kubernetes also has the concept of services, which gives you the capability to expose pods to the rest of the cluster and to the external world.

Kubernetes follows a different networking model than Docker: every pod has its own IP address, through which it can communicate with any other component. This simplifies communication between components and makes load balancing easier.
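As an illustration, a minimal Service manifest can give a group of pods a stable virtual IP and DNS name; the label and port values here are hypothetical examples, not from the book:

```yaml
# Sketch: expose pods labeled app=web behind a stable Service IP.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  selector:
    app: web            # traffic is routed to any pod carrying this label
  ports:
    - protocol: TCP
      port: 80          # port exposed by the Service
      targetPort: 8080  # port the container actually listens on
  type: ClusterIP       # internal virtual IP; use NodePort or LoadBalancer
                        # to expose the Service to the external world
```

Other pods in the cluster can then reach these pods at `web-service:80`, regardless of which node the backing pods land on.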

To learn more, you can start with the official Kubernetes documentation.

Kubernetes and serverless

Serverless is the next big thing in compute: you don't worry about the executing machine or OS at all, and instead just focus on running your code and paying only for the resources you consume. Compared to traditional cloud models, it falls under the PaaS (Platform-as-a-Service) umbrella, but provides even more abstraction over VMs and operating systems, letting you focus on your application (the original idea of the cloud). Now that applications are moving from traditional deployment to container-based deployment, serverless container services are emerging, such as Amazon Elastic Container Service with AWS Fargate, and Azure Container Instances, which provide serverless infrastructure to run your containers.

For a simple application, this is good enough, but when you have complex containers and services, and they need constant interaction and monitoring, you need an orchestrator to manage them—enter Kubernetes.

Due to its immense popularity and maturity (after all, it builds on Google's long experience running containers at scale), Kubernetes is becoming the de facto orchestrator for managing containers and running applications smoothly at large scale.

Kubernetes automation

We saw in Chapter 3 that we can easily deploy our application to a Kubernetes cluster using the kubectl tool, and perform updates and manage different components of the cluster with the same command. However, when you have a complex cluster with many services and components, doing all these tasks manually becomes cumbersome and error-prone. As with any other DevOps technology, you can automate Kubernetes cluster deployment using open-source tools and scripts. For example, you can use good old Jenkins (a popular open-source automation server) together with Helm as a package manager to automate your cluster deployments.
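As a sketch of that Jenkins-plus-Helm idea, a declarative pipeline might build and push an image, then deploy it with Helm. The registry, chart path, and release name below are hypothetical:

```groovy
// Sketch of a Jenkins declarative pipeline (names are examples, not a
// definitive setup): build an image, push it, then deploy via Helm.
pipeline {
    agent any
    stages {
        stage('Build and push image') {
            steps {
                sh "docker build -t registry.example.com/myapp:${BUILD_NUMBER} ."
                sh "docker push registry.example.com/myapp:${BUILD_NUMBER}"
            }
        }
        stage('Deploy with Helm') {
            steps {
                // "helm upgrade --install" is idempotent: it installs the
                // release if it doesn't exist, and upgrades it otherwise.
                sh "helm upgrade --install myapp ./charts/myapp " +
                   "--set image.tag=${BUILD_NUMBER}"
            }
        }
    }
}
```

Every commit then flows from source to cluster without anyone running kubectl by hand.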

There are many open-source and paid tools available that help automate Kubernetes clusters. Two other popular Kubernetes automation tools are Keel and Buddy.

Keel

Keel is an open-source Kubernetes automation tool that automates deployment updates. It is deployed inside the Kubernetes cluster and manages subsequent updates automatically by monitoring version changes in your image repository. It is a self-hosted solution with no external dependencies and doesn't require any dedicated storage. It relies on configuration carried in the application manifests or Helm charts to decide how to update the cluster.
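For example, Keel's update policy can be declared directly on a Deployment; the policy label below follows Keel's documented conventions, and the image and names are hypothetical:

```yaml
# Sketch: a Deployment labeled so that Keel watches its image repository
# and rolls out new minor-version tags automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
  labels:
    keel.sh/policy: minor   # auto-update on new minor versions (1.2.x -> 1.3.0)
spec:
  replicas: 2
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: webapp
          image: registry.example.com/webapp:1.2.0  # Keel tracks this repository
```

When a matching new tag appears in the registry, Keel updates the Deployment for you instead of requiring a manual `kubectl set image`.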

Buddy

Buddy is another continuous-delivery tool that can be used to automate cluster updates. You can perform the following tasks using Buddy:

  • Managing configuration changes of a K8s deployment.
  • Deploying code changes.
  • Managing Dockerfile updates.
  • Building Docker images and pushing them to a Docker registry.
  • Applying new images to your K8s cluster.

Kubernetes and Windows

We all know that open source is the future of technology, and many companies, including Microsoft, have started embracing it wholeheartedly. However, many enterprises have invested heavily in .NET and Windows, and it might not be immediately possible for them to migrate to other technologies. These enterprises can gain many advantages by moving their deployments to containers, even if the core technology stack is legacy.

To make Kubernetes available to all, the Kubernetes team is working tirelessly to support Windows Server containers. As of this writing, the Kubernetes 1.9 release has introduced beta support for Windows Server containers, which allows Windows-based containers to be deployed on Kubernetes.

General availability of Windows Server containers may happen very soon, and we expect it will be a popular and exciting feature.

Kubernetes Kustomize

Kubernetes Kustomize is a template-free configuration customization tool for Kubernetes.

In a Kubernetes deployment, YAML files contain all the cluster configuration. To deploy to different environments, say testing, staging, or production, a copy of each configuration file is usually maintained per environment. Since each environment has its own set of labels and resource limits, it quickly becomes difficult to maintain all these files, and a change made in one file must be manually propagated across all the YAML files.

You can always maintain a standard copy and replace environment-specific values via scripts or tools, but that involves custom processing and the effort of writing scripts and understanding the existing configuration. Kustomize solves this problem by reading a customization file and the Kubernetes API resource files it references, and then emitting entirely new, customized resources to standard output. This output can then be used directly to build or update the cluster.
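A minimal customization file might look like the following sketch; the resource file names and label values are hypothetical:

```yaml
# Sketch of a kustomization.yaml. Running `kustomize build .` (or
# `kubectl kustomize .` in newer kubectl versions) prints the customized
# resources to standard output, ready to pipe into `kubectl apply -f -`.
resources:
  - deployment.yaml     # plain, template-free Kubernetes manifests
  - service.yaml
commonLabels:
  app: myapp            # added to every generated resource
namePrefix: staging-    # e.g., Deployment "myapp" becomes "staging-myapp"
```

Each environment then keeps only a small overlay of what differs, instead of a full copy of every manifest.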

Kubernetes security

Security is one of the critical aspects of any software deployment, and Kubernetes is no exception. While Kubernetes by default has many inbuilt security mechanisms, there are many best practices you need to follow to make sure your cluster and your application running inside that cluster are both secure and free from malicious access. Some of these are explained in the following sections.

Restricting access to the Kubernetes API

First and foremost, all your cluster APIs should be exposed over TLS only. With free certificate authorities like Let's Encrypt, you can easily procure a certificate for your cluster, or you can buy one from a commercial certificate provider. Moreover, Kubernetes supports authentication for its APIs, and you can use any supported authentication mechanism to make sure your APIs are accessed by authorized personnel only.
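Beyond TLS and authentication, Kubernetes role-based access control (RBAC) lets you grant each user or service account only the API operations it needs. A sketch, with hypothetical namespace and user names:

```yaml
# Grant read-only access to pods in the "dev" namespace.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: dev
  name: pod-reader
rules:
  - apiGroups: [""]          # "" selects the core API group
    resources: ["pods"]
    verbs: ["get", "list", "watch"]
---
# Bind the role to a specific (hypothetical) user.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: dev
  name: read-pods
subjects:
  - kind: User
    name: jane
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```

With this in place, the user can list and watch pods in `dev` but cannot modify them or touch other namespaces.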

Restricting kubelet

You can control both nodes and containers using the kubelet HTTPS endpoint, which by default is exposed without authentication. However, you can easily disable anonymous access by passing --anonymous-auth=false when starting the kubelet.
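Equivalently, if the kubelet is started with a configuration file (via --config), anonymous access can be disabled there; a sketch:

```yaml
# KubeletConfiguration sketch: reject anonymous requests and delegate
# authorization decisions to the API server instead of allowing everything.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false     # same effect as --anonymous-auth=false
  webhook:
    enabled: true      # authenticate requests against the API server
authorization:
  mode: Webhook        # authorize each request rather than AlwaysAllow
```

This closes off the kubelet endpoint to anyone who cannot authenticate to the cluster.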

Controlling resource usage and access

Kubernetes gives you the flexibility to limit resource consumption in a cluster using resource quotas and limit ranges.

Resource quotas limit aggregate resource consumption per Kubernetes namespace. They restrict the number of objects and the amount of compute resources that can be created in a namespace, preventing an uneven distribution of resources and helping maintain the stability of the cluster.
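A ResourceQuota is itself just another manifest applied to a namespace; the limits below are arbitrary example values:

```yaml
# Sketch: cap aggregate CPU, memory, and pod count in one namespace.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota
  namespace: team-a
spec:
  hard:
    pods: "10"            # at most 10 pods may exist in the namespace
    requests.cpu: "4"     # total CPU requested across all pods
    requests.memory: 8Gi  # total memory requested across all pods
    limits.cpu: "8"       # total CPU limit across all pods
    limits.memory: 16Gi   # total memory limit across all pods
```

Once applied, any pod creation that would push the namespace past these totals is rejected by the API server.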

Kubernetes in the cloud

In the previous chapter, we saw how we could package our application using Helm, which helps automate repeated deployment tasks. Along the way, we have seen many critical phases of a Kubernetes deployment, from installation through monitoring and logging. In the world of cloud and Platform-as-a-Service, it seems redundant to do all these tasks manually and risk running into bottlenecks as the application grows. As with other services, Kubernetes offerings are available from many cloud providers, including Microsoft, Google, and Amazon. They provide managed Kubernetes services, which take care of most cluster management operations and leave you to focus only on your application.

Kubernetes and AWS

AWS provides support for both managed and unmanaged Kubernetes services. You can use EC2 instances to run your own Kubernetes cluster, or choose its managed service, Amazon Elastic Container Service for Kubernetes. You can use the kops tool to deploy a Kubernetes cluster on AWS EC2 automatically.

Amazon Elastic Container Service for Kubernetes (EKS) is a fully managed offering from Amazon to deploy, manage, and scale containerized applications. Amazon EKS provides a managed control plane that runs across multiple Availability Zones to ensure high availability and scalability. You can think of it as a managed Kubernetes master: it ensures that all the components and resources inside your cluster are running as expected.

Amazon EKS also integrates with existing AWS services, such as IAM (AWS Identity and Access Management) and VPC (Amazon Virtual Private Cloud), to ensure better security and accessibility. It provides built-in audit logging through integration with AWS CloudTrail, which also helps with cluster monitoring. Amazon EKS supports all kubectl commands.

Kubernetes and Azure

Microsoft Azure also supports both managed and unmanaged Kubernetes services. You can use Azure Virtual Machines with accelerated networking to create your own Kubernetes cluster, or choose its managed service, called Azure Kubernetes Service.

Microsoft’s Azure Kubernetes Service (AKS) is a managed Kubernetes service that handles most of the complexity of operating a cluster. Many Kubernetes components are abstracted away by AKS, including the most essential one: the Kubernetes master. It also takes care of monitoring and central logging for you. Since the master node is abstracted, you are not charged for it; you pay only for the worker nodes in your cluster. AKS also supports most kubectl commands, so you don’t have to learn a new tool. Setting up AKS is relatively easy: you can use the CLI or the Azure portal, and even automate it like other Azure services.

Kubernetes and GCP

Like AWS and Azure, Google Cloud Platform (GCP) also supports setting up Kubernetes clusters on Compute Engine instances. It also has a managed service, Google Kubernetes Engine. Since Kubernetes originated at Google, and Google is still its major contributor, GCP has slightly tighter integration with Kubernetes than other cloud providers.

Google Kubernetes Engine (GKE) is a fully managed service for running your containers in a Kubernetes cluster without worrying about operational maintenance. It provides built-in logging, security, monitoring, autoscaling, and dashboard capabilities.

Kubernetes communities

Although introduced by Google, Kubernetes is now backed by the Cloud Native Computing Foundation (CNCF). Being open source has helped it become one of the most popular orchestration systems, and like other open-source systems, it has a wide community base that keeps growing day by day. CNCF ensures that Kubernetes remains vendor-neutral and compatible with other OSS products like Prometheus, so that the developer community gets the most benefit from it. From its website: “Cloud native computing uses an open source software stack to deploy applications as microservices, packaging each part into its container, and dynamically orchestrating those containers to optimize resource utilization.”

A comprehensive list of Kubernetes working groups and communities can be found on GitHub.

Summary

In this chapter, we covered upcoming Kubernetes features such as Kustomize and Windows Server containers, along with Kubernetes automation tools. We also briefly covered managed Kubernetes offerings in the cloud and touched on various Kubernetes security concepts. As the leader in container orchestration, Kubernetes is growing every day, and it has become a default skill for developers and DevOps engineers.

Kubernetes is a vast platform, but we have tried to cover it in a precise and approachable manner. In this short title, we discussed some of the core components of Kubernetes. It’s essential that we keep ourselves up to date with the latest developments in the wonderful container world, as it is very likely to replace the traditional deployment model soon. What containers are doing to traditional deployment models today is what the cloud did to bare-metal deployments a decade ago, and as developers, it’s our responsibility to stay on top of modern technology trends. We wish you success on your learning journey.
