As developers working across multiple Kubernetes environments, effectively managing access to each cluster is critical for our productivity and sanity. This is where kubectl contexts come into play…
Kubernetes Contexts Explained
First, what exactly is a context in Kubernetes? A context provides the configuration needed to connect to a given Kubernetes cluster by bundling together a few key access parameters:
- Cluster: The Kubernetes API server endpoint used to connect to the cluster.
…(content continues)….
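Under the hood, these parameters live in a kubeconfig file. Here is a minimal sketch of how a context bundles them together (the cluster, user, and namespace names are hypothetical):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: minikube
  cluster:
    server: https://192.168.49.2:8443   # API server endpoint
users:
- name: minikube
  user: {}                              # credentials omitted for brevity
contexts:
- name: dev                             # the context ties the pieces together
  context:
    cluster: minikube
    user: minikube
    namespace: rover-dev                # default namespace for this context
current-context: dev
```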
Comparing Contexts and Kubeconfig Files
While contexts wrap key cluster access config under a single name, the other approach is to work directly with raw kubeconfig files. What are the pros and cons of each?
Contexts
- Pros:
- Streamlined access configuration
- Rapid switching between clusters
- Built into kubectl, no extra tooling required
- Cons:
- Added abstraction on top of kubeconfig
- Still requires underlying kubeconfig setup
Kubeconfig Files
- Pros:
- Direct YAML-based API access configuration
- Works across all Kubernetes tooling
- Cons:
- Credentials and endpoints duplicated across files
- Manual swapping of files to switch clusters
The choice depends on your workflows. For developers actively alternating between multiple environments, contexts provide the simplicity and convenience needed for everyday use with kubectl.
Meanwhile, directly using kubeconfig files allows for precise control when configuring advanced policies or integrating customized tooling…
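To make the trade-off concrete, here is how the same cluster switch looks with each approach (the file path is hypothetical):

```shell
# Raw kubeconfig files: point kubectl at a different file per cluster
$ export KUBECONFIG=~/.kube/staging.yaml
$ kubectl get pods

# Contexts: one merged kubeconfig, switch by name
$ kubectl config use-context staging
$ kubectl get pods
```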
…(content continues)…
Authenticating Multiple Clusters
Let's walk through a real example of setting up access for local and cloud-based Kubernetes using contexts.
Our hypothetical startup Rover offers a pet photo sharing app deployed across multiple environments. Here is the planned architecture:
- Local Dev: Minikube clusters for feature development
- Test Environments: GKE staging clusters per service
- Production: Regional GKE clusters across 4 geos
We need unified Kubernetes access and sane context setup across our team. Let's get started!
First, access credentials. For secured cloud environments like GKE, EKS, and AKS, we leverage built-in integrations to simplify things…
(GKE setup details)
With cloud authentication configured, we pull the kubeconfig files down to our local machines. Here we have credentials for staging and production:
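For GKE, for example, the kubeconfig entries can be pulled down via gcloud, which merges each cluster's credentials into the local `~/.kube/config` (cluster names and locations here are hypothetical):

```shell
$ gcloud container clusters get-credentials staging --zone us-central1-a
$ gcloud container clusters get-credentials us-prod --region us-central1
$ gcloud container clusters get-credentials eu-prod --region europe-west1
```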
Now we incorporate our local Minikube cluster that developers use for day-to-day feature work:
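Minikube handles this step for us: starting the cluster writes its credentials and a `minikube` context into the local kubeconfig automatically:

```shell
$ minikube start
$ kubectl config get-contexts   # a "minikube" entry now appears
```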
With credentials in place, we now create contexts to tie our access together (pass --user as well if you want a context to pin a specific credential entry):
$ kubectl config set-context dev --cluster=minikube --namespace=rover-dev
$ kubectl config set-context staging --cluster=gke_staging --namespace=rover-staging
$ kubectl config set-context prod-us --cluster=gke_us-prod --namespace=rover
$ kubectl config set-context prod-eu --cluster=gke_eu-prod --namespace=rover
...
Viewing our resulting context setup:
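Running `kubectl config get-contexts` lists every context along with its cluster and default namespace; the output would look roughly like this (cluster and authinfo names are illustrative):

```shell
$ kubectl config get-contexts
CURRENT   NAME      CLUSTER       AUTHINFO   NAMESPACE
*         dev       minikube      minikube   rover-dev
          staging   gke_staging   ...        rover-staging
          prod-us   gke_us-prod   ...        rover
          prod-eu   gke_eu-prod   ...        rover
```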
Now a developer can seamlessly flow between environments as needed:
$ kubectl config use-context dev
$ kubectl get pods
$ kubectl config use-context staging
$ kubectl get services
$ kubectl config use-context prod-us
$ kubectl get deployments
Context switching saves precious time and eliminates confusion when moving between clusters!
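For one-off commands, you can also target a context without switching to it, using kubectl's --context flag (the pod name below is hypothetical):

```shell
# Query production without changing the active context
$ kubectl --context=prod-eu get deployments
$ kubectl --context=dev logs rover-api-0
```

This keeps your active context untouched, which is handy when a script or teammate's terminal shares the same kubeconfig.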
…(content continues)…