I reviewed the basic setup for building applications in Kubernetes in part 1 of this blog series, and discussed processes as pods and controllers in part 2. In part 3, I explained how to configure networking services in Kubernetes to allow pods to communicate reliably with each other. In this installment, I’ll explain how to identify and manage the environment-specific configurations expected by your application to ensure its portability between environments.
Factoring out Configuration
One of the core design principles of any containerized app must be portability. We absolutely do not want to reengineer our containers, or even the controllers that manage them, for every environment. One of the most common reasons an application works in one place but not another is a mismatch in the environment-specific configuration that app expects.
A well-designed application should treat configuration like an independent object, separate from the containers themselves, that’s provisioned to them at runtime. That way, when you move your app from one environment to another, you don’t need to rewrite any of your containers or controllers; you simply provide a configuration object appropriate to this new environment, leaving everything else untouched.
When we design applications, we need to identify what configurations we want to make pluggable in this way. Typically, these will be environment variables or config files that change from environment to environment, such as access tokens for different services used in staging versus production or different port configurations.
Decision #4: What application configurations will need to change from environment to environment?
From our web app example, a typical set of configs would include the access credentials for our database and API (of course, you’d never use the same ones for development and production environments), or a proxy config file if we chose to include a containerized proxy in front of our web frontend.
Once we’ve identified the configs in our application that should be pluggable, we can enable the behavior we want by using Kubernetes’ system of volumes and configMaps.
In Kubernetes, a volume can be thought of as a filesystem fragment. Volumes are provisioned to a pod and owned by that pod. The file contents of a volume can be mounted into any filesystem path we like in the pod’s containers.
I like to think of the volume declaration as the interface between the environment-specific config object and the portable, universal application definition. Your volume declaration will contain the instructions to map a set of external configs onto the appropriate places in your containers.
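To make this concrete, here's a minimal sketch of what that interface looks like in a pod spec. The names here (`webapp`, `webapp-config`, the mount path) are illustrative, not from our running example; the point is that the volume declaration, not the container image, is what binds the external configMap to a filesystem path inside the container.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: frontend
    image: example/webapp:1.0
    volumeMounts:
    - name: config-volume
      mountPath: /etc/webapp    # files from the configMap appear here
  volumes:
  - name: config-volume
    configMap:
      name: webapp-config       # the environment-specific object, swapped per environment
```

The pod definition above stays identical across environments; only the configMap named `webapp-config` changes.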
ConfigMaps contain the actual contents you're going to use to populate a pod's volumes or environment variables. They contain key-value pairs describing either files and their contents, or environment variables and their values. ConfigMaps typically differ from environment to environment: for example, you will probably have one configMap for your development environment and another for production, each with the correct variables and config files for its environment.
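A hypothetical development-environment configMap might look like the following; the keys and values are invented for illustration. Note that a single configMap can mix simple values (consumed as environment variables) with whole files (consumed via a volume mount):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: webapp-config
data:
  # Simple key-value pairs, suitable for environment variables
  API_ENDPOINT: "https://api.dev.example.com"
  LOG_LEVEL: "debug"
  # A whole config file; when mounted as a volume, this key becomes
  # a file named proxy.conf in the mount path
  proxy.conf: |
    server {
      listen 8080;
      proxy_pass http://frontend:3000;
    }
```

A production configMap would carry the same keys with production-appropriate values, letting you swap one object without touching the pod spec.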
Checkpoint #4: Create a configMap appropriate to each environment.
Your development environment’s configMap objects should capture the environment-specific configuration you identified above, with values appropriate for your development environment. Be sure to include a volume in your pod definitions that uses that configMap to populate the appropriate config files in your containers as necessary. Once you have the above set up for your development environment, it’s simple to create a new configMap object for each downstream environment and swap it in, leaving the rest of your application unchanged.
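For configs that are environment variables rather than files, the same pattern applies without a volume: the container references the configMap directly. This sketch assumes a configMap named `webapp-config` exists in each environment (with, say, `webapp-config-prod` swapped in downstream):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: webapp
spec:
  containers:
  - name: frontend
    image: example/webapp:1.0
    envFrom:
    - configMapRef:
        name: webapp-config   # every key in the configMap becomes an env var
    env:
    - name: API_ENDPOINT      # or pull a single key explicitly
      valueFrom:
        configMapKeyRef:
          name: webapp-config
          key: API_ENDPOINT
```

Either way, moving to production means changing only the configMap reference or the configMap contents, never the container.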
Basic configMaps are a powerful tool for modularizing configuration, but some situations require a slightly different approach.
- Secrets in Kubernetes are like configMaps in that they package up a set of files or key/value pairs to be provisioned to a pod. However, secrets offer added security guarantees around encryption and data management, making them the more appropriate choice for any sensitive information, like passwords, access tokens, or other key-like objects.
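A secret is declared much like a configMap; the credentials below are placeholders for illustration. Using the `stringData` field lets you write plain-text values in the manifest, which Kubernetes base64-encodes for you on creation:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: webapp-credentials
type: Opaque
stringData:                    # plain-text convenience field; stored base64-encoded
  DB_PASSWORD: "dev-only-password"
  API_TOKEN: "dev-only-token"
```

Pods consume secrets the same two ways as configMaps: mounted as a volume or injected as environment variables via `secretKeyRef`.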
From here, we'll wrap up the series with a post about storage configuration for Kubernetes applications.
To learn more about configuring Kubernetes and related topics:
We will also be offering training on Kubernetes starting in early 2020. In the training, we'll provide more specific examples and hands-on exercises. To get notified when the training is available, sign up here:
This syndicated content is provided by Docker and was originally posted at https://www.docker.com/blog/designing-your-first-application-kubernetes-configuration-part4/