When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is automatically added to the Kubernetes RBAC authorization table as the administrator. Initially, only that IAM entity can make calls to the Kubernetes API server using kubectl (eks-docs).
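Before adding anyone else, it is worth confirming that the creator's identity actually works against the API server. A minimal sketch, assuming the AWS CLI is configured with the creating IAM entity's credentials; the cluster name my-cluster and region us-east-1 are placeholders, not values from this article:

# Write a kubeconfig entry for the cluster (name and region are placeholders)
aws eks update-kubeconfig --name my-cluster --region us-east-1
# Any successful call confirms that this IAM entity can reach the API server
kubectl get svc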
To grant access to other AWS users, you first have to add the IAM user or role to the aws-auth ConfigMap of the Amazon EKS cluster. You can edit the ConfigMap by executing:
kubectl edit -n kube-system configmap/aws-auth
after which an editor opens in which you can map the new users:
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters
  mapAccounts: |
    - "111122223333"
Note the mapUsers section, where ops-user is added, together with the mapAccounts key, which maps IAM identities from the listed AWS account to usernames on the Kubernetes cluster.
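As an aside, the same user mapping can be added without hand-editing the ConfigMap. A sketch using eksctl, assuming eksctl is installed and the cluster is named my-cluster (a placeholder name):

# Append a mapUsers entry to aws-auth for ops-user
eksctl create iamidentitymapping \
  --cluster my-cluster \
  --arn arn:aws:iam::111122223333:user/ops-user \
  --username ops-user \
  --group system:masters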
However, no permissions are provided in RBAC by this action alone; you must still create role bindings in your cluster to provide these entities permissions.
As the Amazon documentation (iam-docs) states, you need to create a role binding on the Kubernetes cluster for the user specified in the ConfigMap. You can do that by executing the following command (kub-docs):
kubectl create clusterrolebinding ops-user-cluster-admin-binding --clusterrole=cluster-admin --user=ops-user
which grants the cluster-admin ClusterRole
to a user named ops-user across the entire cluster.
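A quick way to confirm the binding took effect (an illustrative check, not part of the original steps) is kubectl's impersonation support, which the cluster creator can use without ops-user's credentials:

# Ask the API server whether ops-user may perform any verb on any resource
kubectl auth can-i '*' '*' --as=ops-user
# Alternatively, ops-user can configure kubectl with their own AWS credentials and try a call
aws eks update-kubeconfig --name my-cluster --region us-east-1
kubectl get nodes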
He who fights with dragons for too long becomes a dragon himself; gaze long into the abyss, and the abyss gazes back into you…