Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications – in other words, a container orchestration platform.
A Kubernetes cluster is built out of a control plane and worker nodes. The control plane manages the worker nodes and pods in the cluster, and the worker nodes are machines that host the pods that are the components of the application workload (for more information - Kubernetes Components). It can be challenging to configure, deploy, and manage a cluster on your own, and with the increasing use of Kubernetes, many cloud providers now offer a managed Kubernetes service. This allows companies to have easy access to a Kubernetes cluster without having to set it up or maintain it by themselves.
In AWS this service is called EKS (Elastic Kubernetes Service). Each managed Kubernetes service works a bit differently and varies in the amount of “management” for which the provider is responsible. In AWS EKS, you have no access to the master nodes (the nodes that construct the control plane), but you can still access the API server and use tools such as kubectl to manage the workloads in your worker nodes.
Kubectl is a command-line tool that lets you control Kubernetes clusters. It allows you to perform almost any operation on Kubernetes resources, such as creating, deleting, and updating them. Kubectl crafts the HTTP request that matches the command you run and sends it to the Kubernetes API server. For example, if we run the “kubectl get pods” command, Kubectl will send the HTTP request “GET /api/v1/namespaces/default/pods” to the Kubernetes API server (pods belong to the core API group, so the path starts with /api/v1 rather than /apis/<group>/<version>).
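To make the command-to-request mapping concrete, here is a small illustrative sketch (not kubectl's actual source code): core resources such as pods live under /api/v1, while resources in named API groups, such as deployments, live under /apis/<group>/<version>. The function and resource set below are simplified for illustration.

```python
# Illustrative only: how a kubectl-style command maps to an API server path.
# Core resources (no API group) live under /api/v1; grouped resources
# (e.g. deployments in the "apps" group) live under /apis/<group>/<version>.

CORE_RESOURCES = {"pods", "services", "configmaps", "secrets"}

def api_path(resource: str, namespace: str = "default",
             group: str = "apps", version: str = "v1") -> str:
    """Build the URL path a GET on a namespaced resource would use."""
    if resource in CORE_RESOURCES:
        return f"/api/v1/namespaces/{namespace}/{resource}"
    return f"/apis/{group}/{version}/namespaces/{namespace}/{resource}"

print(api_path("pods"))         # /api/v1/namespaces/default/pods
print(api_path("deployments"))  # /apis/apps/v1/namespaces/default/deployments
```

You can watch the real requests kubectl sends by raising its verbosity, for example with kubectl get pods -v=8.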
Kubectl gets its configuration from the kubeconfig file, which is usually located at $HOME/.kube/config.
Here is a kubeconfig file example from the Kubernetes documentation: (https://kubernetes.io/docs/tasks/access-application-cluster/configure-access-multiple-clusters/#define-clusters-users-and-contexts)
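A trimmed-down version of that example looks roughly like this (the fake-ca-file, fake-cert-file, and fake-key-file values are the documentation's placeholders):

```yaml
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority: fake-ca-file
    server: https://1.2.3.4
  name: development
contexts:
- context:
    cluster: development
    namespace: frontend
    user: developer
  name: dev-frontend
current-context: dev-frontend
users:
- name: developer
  user:
    client-certificate: fake-cert-file
    client-key: fake-key-file
```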
The kubeconfig file has three sections: clusters, contexts, users.
To be able to communicate with your cluster, your kubeconfig file must be configured correctly, and this is another point where a managed Kubernetes service can assist, as we will soon see.
As previously mentioned, EKS is AWS’s managed Kubernetes service. As such, it takes the responsibility of deploying and managing the Kubernetes cluster off your hands. In addition, EKS is integrated with other AWS services, among them Identity and Access Management (IAM), which is the main subject of this research. EKS uses IAM for cluster authentication, but authorization still happens in native Kubernetes using RBAC (Role-Based Access Control).
In this section, we will take a close look at EKS’s authentication, and examine every step of the way, from creating a cluster to using Kubectl to run commands on your cluster, including what happens behind the scenes.
First, if you don’t have an EKS cluster, you can create one using one of AWS’s guides, or Panoptica's secure EKS deployment guide. Once our cluster is created, we want to be able to connect to it from our computer. In their guide, AWS instructs us to run the following command:
aws eks update-kubeconfig --region REGION --name CLUSTER-NAME
The command we ran changed our kubeconfig file automatically and added all of the relevant information about the new cluster so that we can immediately use Kubectl to communicate with our new cluster.
We can test it by running the command:
kubectl get svc
Let’s take a closer look at the users section in our new kubeconfig file:
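The users entry that update-kubeconfig generates looks roughly like the sketch below (the region and cluster name match the command shown later in this post; the account ID is a placeholder):

```yaml
users:
- name: arn:aws:eks:us-east-2:111122223333:cluster/eks-demo
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      command: aws
      args:
      - --region
      - us-east-2
      - eks
      - get-token
      - --cluster-name
      - eks-demo
```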
We can see that the user authentication differs from the usual authentication methods (bearer token, username and password, etc.). This exec section specifies a command to run, and arguments to pass to it, in order to authenticate the user. The command that will be run is:
aws --region us-east-2 eks get-token --cluster-name eks-demo
The AWS CLI responds with an access token. This access token will be added to the HTTP requests that Kubectl sends to the API server.
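The command prints an ExecCredential object, which the exec credential plugin mechanism expects; its shape is roughly the following (the timestamp is a placeholder and the token is truncated):

```json
{
  "kind": "ExecCredential",
  "apiVersion": "client.authentication.k8s.io/v1beta1",
  "spec": {},
  "status": {
    "expirationTimestamp": "2022-01-01T00:14:00Z",
    "token": "k8s-aws-v1.aHR0cHM6Ly9zdHMudXMtZWFzdC0yLmFtYXpvbmF3cy5jb20v..."
  }
}
```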
We can test it by running kubectl get svc and passing the request through a proxy:
Now, we will run the command aws --region us-east-2 eks get-token --cluster-name eks-demo and switch the authorization token in the request to the new token that we just generated:
Voila! We got the same answer with our token.
Let’s explore this token for a bit.
The token is simply a base64-encoded string (URL-safe, without padding). If we decode it (after removing the k8s-aws-v1. prefix), we get the resulting string below:
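The decoding step can be sketched in a few lines of Python. The token value below is constructed in the script purely for illustration; a real token carries a much longer, signed URL:

```python
import base64

def decode_eks_token(token: str) -> str:
    """Strip the k8s-aws-v1. prefix and decode the URL-safe base64 payload."""
    payload = token.removeprefix("k8s-aws-v1.")
    # The payload is base64url without padding; re-pad to a multiple of 4.
    payload += "=" * (-len(payload) % 4)
    return base64.urlsafe_b64decode(payload).decode()

# Hypothetical token, built here only to demonstrate the round trip:
url = "https://sts.us-east-2.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15"
token = "k8s-aws-v1." + base64.urlsafe_b64encode(url.encode()).decode().rstrip("=")
print(decode_eks_token(token))  # prints the STS URL above
```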
Interesting! This is an HTTP request to AWS STS (Security Token Service), and to be specific, it is a “GetCallerIdentity” request (GetCallerIdentity request documentation).
This request is sent to the EKS cluster encoded as a token used for authentication. The EKS cluster receives the request, extracts the token, decodes it, and sends the same STS “GetCallerIdentity” request to the AWS STS service. The AWS STS response details provide our EKS cluster with the exact identity which is trying to perform the action. If the cluster gets a successful response for this request, it knows that AWS authenticated the user, and it gets the user’s identity.
So, what exactly is the response that the cluster gets?
Since the cluster accesses the AWS STS “GetCallerIdentity” API, we can run a similar command in our local CLI:
aws sts get-caller-identity
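The response has the following shape (all values here are placeholders):

```json
{
  "UserId": "AIDAEXAMPLEUSERID",
  "Account": "111122223333",
  "Arn": "arn:aws:iam::111122223333:user/eks-admin"
}
```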
However, we didn’t pass our credentials (AWS access key) to the cluster, so how can it get the same response, one that includes the identity of the connected user?
Let’s take the base64 decoded token and try to send it to STS ourselves:
We got the error “Signature Does Not Match”.
Why is that?
Most requests to AWS need to be signed, to verify the identity of the requester and to protect the data from being altered in transit. The signing process is complex, and we are not going to examine it in detail in this post (for more information about signing AWS requests, see https://docs.aws.amazon.com/general/latest/gr/signature-version-4.html).
There are two main things that we need to understand in this case:
The token embeds a request that was already signed (presigned) by the AWS CLI, so we cannot alter any signed part of it without invalidating the signature.
If we want to be able to get a response to the request, we must add the x-k8s-aws-id header with our cluster name as the value, since that header is one of the signed headers.
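The decoded token is a SigV4 presigned URL, so all the signing material travels in the query string. The hypothetical URL below is shaped like a real presigned GetCallerIdentity request (the credential and signature values are made up); parsing it with the standard library shows that x-k8s-aws-id is listed among the signed headers:

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical presigned URL, shaped like the decoded EKS token:
presigned = (
    "https://sts.us-east-2.amazonaws.com/"
    "?Action=GetCallerIdentity&Version=2011-06-15"
    "&X-Amz-Algorithm=AWS4-HMAC-SHA256"
    "&X-Amz-Credential=AKIAEXAMPLE%2F20220101%2Fus-east-2%2Fsts%2Faws4_request"
    "&X-Amz-Date=20220101T000000Z"
    "&X-Amz-Expires=60"
    "&X-Amz-SignedHeaders=host%3Bx-k8s-aws-id"
    "&X-Amz-Signature=deadbeef"
)

params = parse_qs(urlsplit(presigned).query)
# Because x-k8s-aws-id is a signed header, the request only validates when
# that header is actually sent, with the cluster name as its value.
print(params["X-Amz-SignedHeaders"][0])  # host;x-k8s-aws-id
```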
Basically, we got the same answer as we did when we ran the “sts get-caller-identity” from the CLI. Once our cluster gets this response, there is one last step to complete the authentication process, and that is to translate the AWS identity into a Kubernetes identity.
In general, a ConfigMap in Kubernetes is an API object that lets you store data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume. The main point of integration between Kubernetes and AWS IAM is the “aws-auth” ConfigMap.
This ConfigMap object maps AWS identities to Kubernetes identities. It is not enough that the "GetCallerIdentity" request is successful, because the AWS identity has no meaning in native Kubernetes. You cannot assign access permissions without having a parallel Kubernetes identity that works with RBAC.
Here is an example of an “aws-auth” ConfigMap from AWS's documentation:
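An abridged version of that example looks like this (account IDs and role/user names are placeholders):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/eks-node-role
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/admin-user
      username: admin-user
      groups:
        - system:masters
```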
There are two main parts in the ”aws-auth” ConfigMap: mapRoles, which maps AWS IAM roles to Kubernetes usernames and groups, and mapUsers, which does the same for AWS IAM users.
Users that have permission to edit the “aws-auth” ConfigMap can become cluster admins by mapping themselves to a group that is already bound to a cluster-admin role (the “system:masters” group, for instance).
Note that if you created your cluster using the AWS CLI, the ”aws-auth” ConfigMap is not created automatically. Follow this guide to apply it to your cluster.
You can view the “aws-auth” ConfigMap content in your cluster by running this command:
kubectl get configmap aws-auth -n kube-system -o yaml
If an AWS identity is mapped in your “aws-auth” ConfigMap to a Kubernetes identity, this identity will be able to access your cluster. The scope of access will be determined by the roles/cluster roles that are bound to this identity.
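As a sketch of that last step, here is a hypothetical RBAC binding: any AWS identity that aws-auth maps into the “pod-readers” group would receive only the read permissions granted by this ClusterRole (both names are invented for this example):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: pod-reader
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: pod-readers-binding
subjects:
- kind: Group
  name: pod-readers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```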
Let’s review a successful flow of running a kubectl command with the help of a schema from AWS’s documentation:
It is important to understand that managed Kubernetes services, though very fast and easy to use, come at a cost. You will not have complete control over your cluster, and parts of the deployment might be hidden. Before using EKS, or any other managed service, try wrapping your head around what AWS (or other cloud providers) is doing for you. The more knowledge you have on how things work behind the scenes, the more secure your cluster will be, especially when it comes to authentication. In this post, we looked into EKS's authentication process, hoping to help you use it more wisely and with more awareness.
In the next part of our EKS series, we will deep-dive into the implementation of the aws-iam-authenticator tool, which the cluster uses to authenticate AWS IAM credentials.