I created an EC2 instance and an EKS cluster in the same AWS account.
In order to use the EKS cluster from the EC2 instance, I have to grant it the necessary permissions.
I added an instance profile role with some EKS operation permissions. On the IAM dashboard its role ARN is arn:aws:iam::11111111:role/ec2-instance-profile-role (A), but from inside the EC2 instance it shows up as arn:aws:sts::11111111:assumed-role/ec2-instance-profile-role/i-00000000 (B):
$ aws sts get-caller-identity
{
"Account": "11111111",
"UserId": "AAAAAAAAAAAAAAA:i-000000000000",
"Arn": "arn:aws:sts::11111111:assumed-role/ec2-instance-profile-role/i-00000000"
}
I also created an aws-auth ConfigMap in the EKS cluster so that the EC2 instance profile role is registered and allowed access. I tried setting both A and B in mapRoles, and both hit the same issue. When I run kubectl commands on the EC2 instance:
$ aws eks --region aws-region update-kubeconfig --name eks-cluster-name
$ kubectl config view --minify
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: DATA+OMITTED
server: https://xxxxxxxxxxxxxxxxxxxxxxxxxxxx.aw1.aws-region.eks.amazonaws.com
name: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name
contexts:
- context:
cluster: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name
user: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name
name: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name
current-context: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name
kind: Config
preferences: {}
users:
- name: arn:aws:eks:aws-region:11111111:cluster/eks-cluster-name
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- --region
- aws-region
- eks
- get-token
- --cluster-name
- eks-cluster-name
- --role
- arn:aws:sts::11111111:assumed-role/ec2-instance-profile-role/i-00000000
command: aws
env: null
provideClusterInfo: false
$ kubectl get svc
error: You must be logged in to the server (Unauthorized)
I also checked the principal type in the assumed role's trust policy. It is Service, not AWS, and it seems the AWS type shown below is necessary:
{
"Version": "2012-10-17",
"Statement": {
"Effect": "Allow",
"Principal": { "AWS": "arn:aws:iam:: 333333333333:root" },
"Action": "sts:AssumeRole"
}
}
Terraform aws assume role
But I tried creating a new role with the AWS principal type and setting it in the aws-auth ConfigMap, and I still get the same issue.
How should this role be used? Do I need to create a new IAM user instead?
- name: external-staging
user:
exec:
apiVersion: client.authentication.k8s.io/v1alpha1
args:
- exec
- test-dev
- --
- aws
- eks
- get-token
- --cluster-name
- eksCluster-1234
- --role-arn
- arn:aws:iam::3456789002:role/eks-cluster-admin-role-e65f32f
command: aws-vault
env: null
This config file works for me. The key points are that the flag should be --role-arn and the command should be aws-vault.
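For reference, the aws-auth mapping the question describes has to use the IAM role ARN (form A), never the STS assumed-role ARN (form B), and if the instance profile itself is the mapped identity you can simply drop the --role flag from update-kubeconfig / get-token. A minimal sketch of the mapping (the username and group are placeholders; system:masters grants full cluster admin):
kubectl -n kube-system edit configmap aws-auth
# then add an entry like the following under data.mapRoles,
# keeping the existing node-role entry intact:
#   - rolearn: arn:aws:iam::11111111:role/ec2-instance-profile-role
#     username: ec2-instance
#     groups:
#       - system:masters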
I have been trying to run an external-dns pod using the guide provided by the k8s-sig group. I have followed every step of the guide and am getting the below error.
time="2021-02-27T13:27:20Z" level=error msg="records retrieval failed: failed to list hosted zones: WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: 87a3ca86-ceb0-47be-8f90-25d0c2de9f48"
I created the AWS IAM policy using Terraform, and it was created successfully. Everything else has also been spun up via Terraform, except the IAM role for the service account, for which I used eksctl.
But then I came across this article, which says creating the AWS IAM policy using the awscli would eliminate this error. So I deleted the policy created with Terraform and recreated it with the awscli, yet it still throws the same error.
Below is my external dns yaml file.
apiVersion: v1
kind: ServiceAccount
metadata:
name: external-dns
# If you're using Amazon EKS with IAM Roles for Service Accounts, specify the following annotation.
# Otherwise, you may safely omit it.
annotations:
# Substitute your account ID and IAM service role name below.
eks.amazonaws.com/role-arn: arn:aws:iam::268xxxxxxx:role/eksctl-ats-Eks1-addon-iamserviceaccoun-Role1-WMLL93xxxx
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: external-dns
rules:
- apiGroups: [""]
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns
namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
spec:
strategy:
type: Recreate
selector:
matchLabels:
app: external-dns
template:
metadata:
labels:
app: external-dns
spec:
serviceAccountName: external-dns
containers:
- name: external-dns
image: k8s.gcr.io/external-dns/external-dns:v0.7.6
args:
- --source=service
- --source=ingress
- --domain-filter=xyz.com # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
- --provider=aws
- --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
- --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
- --registry=txt
- --txt-owner-id=Z0471542U7WSPZxxxx
securityContext:
fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes and AWS token files
I am scratching my head, as there is no proper solution to this error anywhere on the net. Hoping to find a solution to this issue in this forum.
The end result should show something like the below and fill records into the hosted zone.
time="2020-05-05T02:57:31Z" level=info msg="All records are already up to date"
I also struggled with this error.
The problem was in the definition of the trust relationship.
You can see the following setup in some official AWS tutorials (like this one):
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"${OIDC_PROVIDER}:sub": "system:serviceaccount:<my-namespace>:<my-service-account>"
}
}
}
]
}
Option 1 for failure
My problem was that I passed a wrong value for my-service-account at the end of ${OIDC_PROVIDER}:sub in the Condition part.
Option 2 for failure
After the previous fix I still faced the same error. It was solved by following this AWS tutorial, which shows the output of using eksctl with the command below:
eksctl create iamserviceaccount \
--name my-serviceaccount \
--namespace <your-ns> \
--cluster <your-cluster-name> \
--attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
--approve
When you look at the trust relationship tab in the AWS web console, you can see that an additional condition was added with the postfix :aud and the value sts.amazonaws.com.
So this needs to be added alongside the "${OIDC_PROVIDER}:sub" condition.
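A minimal sketch of applying both conditions with the AWS CLI (assuming AWS_ACCOUNT_ID and OIDC_PROVIDER are already set as shell variables, and the role name is a placeholder):
# Write a trust policy carrying both the :sub and the :aud condition,
# then attach it to the IRSA role.
cat > trust.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:<my-namespace>:<my-service-account>",
          "${OIDC_PROVIDER}:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
EOF
aws iam update-assume-role-policy --role-name <my-irsa-role> --policy-document file://trust.json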
I was able to get help from the Kubernetes Slack (shout out to #Rob Del), and this is what we came up with. There's nothing wrong with the k8s RBAC from the article; the issue is the way the IAM role is written. I am using Terraform v0.12.24, but I believe something similar to the following .tf should work for Terraform v0.14:
data "aws_caller_identity" "current" {}
resource "aws_iam_role" "external_dns_role" {
name = "external-dns"
assume_role_policy = jsonencode({
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": format(
"arn:aws:iam::${data.aws_caller_identity.current.account_id}:%s",
replace(
"${aws_eks_cluster.<YOUR_CLUSTER_NAME>.identity[0].oidc[0].issuer}",
"https://",
"oidc-provider/"
)
)
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
format(
"%s:sub",
trimprefix(
"${aws_eks_cluster.<YOUR_CLUSTER_NAME>.identity[0].oidc[0].issuer}",
"https://"
)
) : "system:serviceaccount:default:external-dns"
}
}
}
]
})
}
The above .tf assumes you created your EKS cluster using Terraform and that you use the RBAC manifest from the external-dns tutorial.
I have a few possibilities here.
Before anything else, does your cluster have an OIDC provider associated with it? IRSA won't work without it.
You can check that in the AWS console, or via the CLI with:
aws eks describe-cluster --name {name} --query "cluster.identity.oidc.issuer"
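If that returns nothing, the cluster has no OIDC provider yet; assuming you use eksctl, you can associate one with:
eksctl utils associate-iam-oidc-provider --cluster {name} --approve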
First
Delete the iamserviceaccount, recreate it, remove the ServiceAccount definition from your ExternalDNS manifest (the entire first section) and re-apply it.
eksctl delete iamserviceaccount --name {name} --namespace {namespace} --cluster {cluster}
eksctl create iamserviceaccount --name {name} --namespace {namespace} --cluster {cluster} \
  --attach-policy-arn {policy-arn} --approve --override-existing-serviceaccounts
kubectl apply -n {namespace} -f {your-externaldns-manifest.yaml}
It may be that there is some conflict going on, as you have overwritten what you created with eksctl create iamserviceaccount by also specifying a ServiceAccount in your ExternalDNS manifest.
Second
Upgrade your cluster to v1.19 (if it's not there already):
eksctl upgrade cluster --name {name} will show you what will be done;
eksctl upgrade cluster --name {name} --approve will do it
Third
Some documentation suggests that in addition to setting securityContext.fsGroup: 65534, you also need to set securityContext.runAsUser: 0.
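As a sketch of that third suggestion (assuming the Deployment is named external-dns in the default namespace, as in the tutorial manifest):
# Merge both securityContext fields into the pod template of the Deployment.
kubectl -n default patch deployment external-dns --type merge \
  -p '{"spec":{"template":{"spec":{"securityContext":{"fsGroup":65534,"runAsUser":0}}}}}'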
I've been struggling with a similar issue after following the setup suggested here.
I ended up with the errors below in the deployment logs.
time="2021-05-10T06:40:17Z" level=error msg="records retrieval failed: failed to list hosted zones: WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: 3fda6c69-2a0a-4bc9-b478-521b5131af9b"
time="2021-05-10T06:41:20Z" level=error msg="records retrieval failed: failed to list hosted zones: WebIdentityErr: failed to retrieve credentials\ncaused by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus code: 403, request id: 7d3e07a2-c514-44fa-8e79-d49314d9adb6"
In my case, it was an issue with the wrong service account name being mapped to the newly created role.
Here is a step-by-step approach to get this done without many hiccups.
Create the IAM Policy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"route53:ChangeResourceRecordSets"
],
"Resource": [
"arn:aws:route53:::hostedzone/*"
]
},
{
"Effect": "Allow",
"Action": [
"route53:ListHostedZones",
"route53:ListResourceRecordSets"
],
"Resource": [
"*"
]
}
]
}
Create the IAM role and the service account for your EKS cluster.
eksctl create iamserviceaccount \
--name external-dns-sa-eks \
--namespace default \
--cluster aecops-grpc-test \
--attach-policy-arn arn:aws:iam::xxxxxxxx:policy/external-dns-policy-eks \
--approve \
--override-existing-serviceaccounts
Create a new hosted zone.
aws route53 create-hosted-zone --name "hosted.domain.com." --caller-reference "grpc-endpoint-external-dns-test-$(date +%s)"
Deploy ExternalDNS after creating the ClusterRole and ClusterRoleBinding bound to the previously created service account.
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
name: external-dns
rules:
- apiGroups: [""]
resources: ["services","endpoints","pods"]
verbs: ["get","watch","list"]
- apiGroups: ["extensions","networking.k8s.io"]
resources: ["ingresses"]
verbs: ["get","watch","list"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["list","watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: external-dns-viewer
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: external-dns
subjects:
- kind: ServiceAccount
name: external-dns-sa-eks
namespace: default
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: external-dns
spec:
strategy:
type: Recreate
selector:
matchLabels:
app: external-dns
template:
metadata:
labels:
app: external-dns
# If you're using kiam or kube2iam, specify the following annotation.
# Otherwise, you may safely omit it.
annotations:
iam.amazonaws.com/role: arn:aws:iam::***********:role/eksctl-eks-cluster-name-addon-iamserviceacco-Role1-156KP94SN7D7
spec:
serviceAccountName: external-dns-sa-eks
containers:
- name: external-dns
image: k8s.gcr.io/external-dns/external-dns:v0.7.6
args:
- --source=service
- --source=ingress
- --domain-filter=hosted.domain.com. # will make ExternalDNS see only the hosted zones matching provided domain, omit to process all available hosted zones
- --provider=aws
- --policy=upsert-only # would prevent ExternalDNS from deleting any records, omit to enable full synchronization
- --aws-zone-type=public # only look at public hosted zones (valid values are public, private or no value for both)
- --registry=txt
- --txt-owner-id=my-hostedzone-identifier
securityContext:
fsGroup: 65534 # For ExternalDNS to be able to read Kubernetes and AWS token files
Update Ingress resource with the domain name and reapply the manifest.
For ingress objects, ExternalDNS will create a DNS record based on the host specified for the ingress object.
- host: myapp.hosted.domain.com
Validate that the new records were created.
BASH-3.2$ aws route53 list-resource-record-sets --output json \
  --hosted-zone-id "/hostedzone/Z065*********" \
  --query "ResourceRecordSets[?Name == 'hosted.domain.com..']|[?Type == 'A']"
[
{
"Name": "myapp.hosted.domain.com..",
"Type": "A",
"AliasTarget": {
"HostedZoneId": "ZCT6F*******",
"DNSName": "****************.elb.ap-southeast-2.amazonaws.com.",
"EvaluateTargetHealth": true
}
} ]
In our case this issue occurred when using the Terraform module to create the EKS cluster and eksctl to create the iamserviceaccount for the aws-load-balancer-controller. It all works fine the first go-round, but if you do a terraform destroy you need to do some cleanup, such as deleting the CloudFormation stack created by eksctl. Somehow things got crossed, and the service account annotation was pointing at a role that was no longer valid. So check the annotation of the service account to ensure it's valid, and update it if necessary. Then, in my case, I deleted and redeployed the aws-load-balancer-controller.
%> kubectl describe serviceaccount aws-load-balancer-controller -n kube-system
Name: aws-load-balancer-controller
Namespace: kube-system
Labels: app.kubernetes.io/managed-by=eksctl
Annotations: eks.amazonaws.com/role-arn: arn:aws:iam::212222224610:role/eksctl-ch-test-addon-iamserviceaccou-Role1-JQL4R3JM7I1A
Image pull secrets: <none>
Mountable secrets: aws-load-balancer-controller-token-b8hw7
Tokens: aws-load-balancer-controller-token-b8hw7
Events: <none>
%>
%> kubectl annotate --overwrite serviceaccount aws-load-balancer-controller eks.amazonaws.com/role-arn='arn:aws:iam::212222224610:role/eksctl-ch-test-addon-iamserviceaccou-Role1-17A92GGXZRY6O' -n kube-system
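After fixing the annotation, the running pods still hold credentials for the old role, so they need to be restarted to pick up the new one. A sketch, assuming the controller runs as the usual Deployment in kube-system:
kubectl -n kube-system rollout restart deployment aws-load-balancer-controller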
In my case, I was able to attach the OIDC role with a Route 53 permissions policy, and that resolved the error.
https://medium.com/swlh/amazon-eks-setup-external-dns-with-oidc-provider-and-kube2iam-f2487c77b2a1
Then the external-dns service account used that role instead of the cluster role.
annotations:
# Substitute your account ID and IAM service role name below.
eks.amazonaws.com/role-arn: arn:aws:iam::<account>:role/external-dns-service-account-oidc-role
For me the issue was that the trust relationship was (correctly) set up using one partition, whereas the ServiceAccount was annotated with a different partition, like so:
...
"Principal": {
"Federated": "arn:aws-us-gov:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
},
...
kind: ServiceAccount
metadata:
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::{{ .Values.aws.account }}:role/{{ .Values.aws.roleName }}
Notice arn:aws:iam vs arn:aws-us-gov:iam
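A quick way to fix the mismatch is to re-annotate the ServiceAccount so both sides use the same partition (a sketch; the namespace, account ID and role name are placeholders):
kubectl -n <namespace> annotate --overwrite serviceaccount <service-account-name> \
  eks.amazonaws.com/role-arn=arn:aws-us-gov:iam::<account-id>:role/<role-name>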
I am trying my hand at configuring Endpoints on Cloud Functions by following the article.
Have performed the following steps:
1) Created a Google Cloud Platform (GCP) project and deployed the following Cloud Function.
export const TestPost = (async (request: any, response: any) => {
response.send('Record created.');
});
using the following command:
gcloud functions deploy TestPost --runtime nodejs10 --trigger-http --region=asia-east2
Function is working fine till here.
2) Deploy the ESP container to Cloud Run using the following command:
gcloud config set run/region us-central1
gcloud beta run deploy CLOUD_RUN_SERVICE_NAME \
--image="gcr.io/endpoints-release/endpoints-runtime-serverless:1.30.0" \
--allow-unauthenticated \
--project=ESP_PROJECT_ID
ESP container is successfully deployed as well.
3) Create an OpenAPI document that describes the API, and configure the routes to the Cloud Functions.
swagger: '2.0'
info:
title: Cloud Endpoints + GCF
description: Sample API on Cloud Endpoints with a Google Cloud Functions backend
version: 1.0.0
host: HOST
schemes:
- https
produces:
- application/json
paths:
/Test:
get:
summary: Do something
operationId: Test
x-google-backend:
address: https://REGION-FUNCTIONS_PROJECT_ID.cloudfunctions.net/Test
responses:
'200':
description: A successful response
schema:
type: string
4) Deploy the OpenAPI document using the following command:
gcloud endpoints services deploy swagger.yaml
5) Configure ESP so it can find the configuration for the Endpoints service.
gcloud beta run configurations update \
--service CLOUD_RUN_SERVICE_NAME \
--set-env-vars ENDPOINTS_SERVICE_NAME=YOUR_SERVICE_NAME \
--project ESP_PROJECT_ID
gcloud alpha functions add-iam-policy-binding FUNCTION_NAME \
--member "serviceAccount:ESP_PROJECT_NUMBER-compute#developer.gserviceaccount.com" \
--role "roles/iam.cloudfunctions.invoker" \
--project FUNCTIONS_PROJECT_ID
This is done successfully
6) Sending requests to the API
Works absolutely fine.
Now I wanted to implement authentication, so I made the following changes to the OpenAPI document:
swagger: '2.0'
info:
title: Cloud Endpoints + GCF
description: Sample API on Cloud Endpoints with a Google Cloud Functions backend
version: 1.0.0
host: HOST
schemes:
- https
produces:
- application/json
security:
- client-App-1: [read, write]
paths:
/Test:
get:
summary: Do something
operationId: Test
x-google-backend:
address: https://REGION-FUNCTIONS_PROJECT_ID.cloudfunctions.net/Test
responses:
'200':
description: A successful response
schema:
type: string
securityDefinitions:
client-App-1:
authorizationUrl: ""
flow: "implicit"
type: "oauth2"
scopes:
read: Grants read access
write: Grants write access
x-google-issuer: SERVICE_ACCOUNT@PROJECT.iam.gserviceaccount.com
x-google-jwks_uri: https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT@PROJECT.iam.gserviceaccount.com
I created a service account using the following command:
gcloud iam service-accounts create SERVICE_ACCOUNT_NAME --display-name DISPLAY_NAME
I granted the Token Creator role to the service account using the following:
gcloud projects add-iam-policy-binding PROJECT_ID --member serviceAccount:SERVICE_ACCOUNT_EMAIL --role roles/iam.serviceAccountTokenCreator
Redeploy the OpenAPI document
gcloud endpoints services deploy swagger.yaml
Now when I test the API, I get the following error:
{
"code": 16,
"message": "JWT validation failed: BAD_FORMAT",
"details": [
{
"#type": "type.googleapis.com/google.rpc.DebugInfo",
"stackEntries": [],
"detail": "auth"
}
] }
I am passing the access token generated via gcloud in the request as a Bearer token.
The command for generating the access token is gcloud auth application-default print-access-token.
Can someone point out what the issue is here? Thanks...
Edit#1:
I am using Postman to connect to my APIs.
After using the following command I am getting a different error.
Command:
gcloud auth print-identity-token SERVICE_ACCOUNT_EMAIL
Error:
{
"code": 16,
"message": "JWT validation failed: Issuer not allowed",
"details": [
{
"#type": "type.googleapis.com/google.rpc.DebugInfo",
"stackEntries": [],
"detail": "auth"
}
]
}
For ESP, you should use a JWT (an identity token), not an access token. Please check this out.
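For example, a minimal sketch of calling the endpoint with an identity token instead of an access token (HOST and the path are placeholders from the OpenAPI doc above):
# Mint an identity token for the service account and pass it as the Bearer token.
TOKEN=$(gcloud auth print-identity-token SERVICE_ACCOUNT_EMAIL)
curl -H "Authorization: Bearer ${TOKEN}" "https://HOST/Test"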
Finally I managed to solve the issue.
Two things were wrong.
1) The format of the raw JWT token; it should be as follows:
{
"iss": SERVICE_ACCOUNT_EMAIL,
"iat": 1560497345,
"aud": ANYTHING_WHICH_IS_SAME_AS_IN_OPENAPI_YAML_FILE,
"exp": 1560500945,
"sub": SERVICE_ACCOUNT_EMAIL
}
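For reference, a minimal sketch of writing that payload to raw-jwt.json (iat/exp are Unix-epoch seconds, and the aud placeholder must match the x-google-audiences value below):
# Build the raw JWT payload with a one-hour expiry.
NOW=$(date +%s)
cat > raw-jwt.json <<EOF
{
  "iss": "SERVICE_ACCOUNT_EMAIL",
  "iat": ${NOW},
  "aud": "YOUR_AUDIENCE",
  "exp": $((NOW + 3600)),
  "sub": "SERVICE_ACCOUNT_EMAIL"
}
EOF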
Then we need to generate a signed JWT using the following command:
gcloud beta iam service-accounts sign-jwt --iam-account SERVICE_ACCOUNT_EMAIL raw-jwt.json signed-jwt.json
2) The security definition in the YAML file should be like the following:
securityDefinitions:
client-App-1:
authorizationUrl: ""
flow: "implicit"
type: "oauth2"
scopes:
read: Grants read access
write: Grants write access
x-google-issuer: SERVICE_ACCOUNT_EMAIL
x-google-jwks_uri: "https://www.googleapis.com/robot/v1/metadata/x509/SERVICE_ACCOUNT_EMAIL"
x-google-audiences: ANYTHING_BUT_MUST_BE_SAME_AS_AUD_IN_RAW_JWT_TOKEN
I am trying to deploy Hashicorp Vault with Kubernetes Auth Method on AWS EKS.
Hashicorp Auth Method:
https://www.vaultproject.io/docs/auth/kubernetes.html
The procedure used is derived from the CoreOS Vault Operator docs, though I am not actually using their operator:
https://github.com/coreos/vault-operator/blob/master/doc/user/kubernetes-auth-backend.md
Below is a summary of the procedure used, with some additional content. Essentially, I am getting a certificate error when attempting to actually log in to Vault after following the needed steps. Any help is appreciated.
Create the service account and clusterrolebinding for tokenreview:
$kubectl -n default create serviceaccount vault-tokenreview
$kubectl -n default create -f example/k8s_auth/vault-tokenreview-binding.yaml
Contents of vault-tokenreview-binding.yaml file
=========================================
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
name: vault-tokenreview-binding
namespace: default
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: system:auth-delegator
subjects:
- kind: ServiceAccount
name: vault-tokenreview
namespace: default
Enable vault auth and add Kubernetes cluster to vault:
$SECRET_NAME=$(kubectl -n default get serviceaccount vault-tokenreview -o jsonpath='{.secrets[0].name}')
$TR_ACCOUNT_TOKEN=$(kubectl -n default get secret ${SECRET_NAME} -o jsonpath='{.data.token}' | base64 --decode)
$vault auth-enable kubernetes
$vault write auth/kubernetes/config kubernetes_host=XXXXXXXXXX kubernetes_ca_cert=@ca.crt token_reviewer_jwt=$TR_ACCOUNT_TOKEN
Contents of ca.crt file
NOTE: I retrieved the certificate from the AWS EKS console, where it is shown
in the "certificate authority" field in base64 format. I base64-decoded
it and placed it here
=================
-----BEGIN CERTIFICATE-----
* encoded entry *
-----END CERTIFICATE-----
Create the vault policy and role:
$vault write sys/policy/demo-policy policy=@example/k8s_auth/policy.hcl
Contents of policy.hcl file
=====================
path "secret/demo/*" {
capabilities = ["create", "read", "update", "delete", "list"]
}
$vault write auth/kubernetes/role/demo-role \
bound_service_account_names=default \
bound_service_account_namespaces=default \
policies=demo-policy \
ttl=1h
Attempt to log in to Vault using the service account created in the last step:
$SECRET_NAME=$(kubectl -n default get serviceaccount default -o jsonpath='{.secrets[0].name}')
$DEFAULT_ACCOUNT_TOKEN=$(kubectl -n default get secret ${SECRET_NAME} -o jsonpath='{.data.token}' | base64 --decode)
$vault write auth/kubernetes/login role=demo-role jwt=${DEFAULT_ACCOUNT_TOKEN}
Error writing data to auth/kubernetes/login: Error making API request.
URL: PUT http://localhost:8200/v1/auth/kubernetes/login
Code: 500. Errors:
* Post https://XXXXXXXXX.sk1.us-west-2.eks.amazonaws.com/apis/authentication.k8s.io/v1/tokenreviews: x509: certificate signed by unknown authority
Your Kubernetes URL https://XXXXXXXXX.sk1.us-west-2.eks.amazonaws.com presents a certificate that Vault does not trust; try adding -tls-skip-verify:
vault write -tls-skip-verify auth/kubernetes/login .......
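If you would rather keep TLS verification, another option is to fetch the cluster CA from EKS and pass it to the auth config instead (a sketch, assuming the AWS CLI and your cluster name):
# Pull the cluster CA and re-write the Kubernetes auth config with it.
aws eks describe-cluster --name <cluster-name> \
  --query "cluster.certificateAuthority.data" --output text | base64 --decode > ca.crt
vault write auth/kubernetes/config \
  kubernetes_host=https://XXXXXXXXX.sk1.us-west-2.eks.amazonaws.com \
  kubernetes_ca_cert=@ca.crt \
  token_reviewer_jwt=$TR_ACCOUNT_TOKEN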
I tested a Kubernetes deployment with EBS volume mounting on an AWS cluster provisioned by kops. This is the deployment yml file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: helloworld-deployment-volume
spec:
replicas: 1
template:
metadata:
labels:
app: helloworld
spec:
containers:
- name: k8s-demo
image: wardviaene/k8s-demo
ports:
- name: nodejs-port
containerPort: 3000
volumeMounts:
- mountPath: /myvol
name: myvolume
volumes:
- name: myvolume
awsElasticBlockStore:
volumeID: <volume_id>
After kubectl create -f <path_to_this_yml>, I got the following message in the pod description:
Attach failed for volume "myvolume" : Error attaching EBS volume "XXX" to instance "YYY": "UnauthorizedOperation: You are not authorized to perform this operation. status code: 403
Looks like this is just a permission issue. OK, I checked the policy for the node role (IAM -> Roles -> nodes.<my_domain>) and found that there were no actions allowing volume manipulation; there was only the ec2:DescribeInstances action by default. So I added the AttachVolume and DetachVolume actions:
{
"Sid": "kopsK8sEC2NodePerms",
"Effect": "Allow",
"Action": [
"ec2:DescribeInstances",
"ec2:AttachVolume",
"ec2:DetachVolume"
],
"Resource": [
"*"
]
},
And this didn't help. I'm still getting that error:
Attach failed for volume "myvolume" : Error attaching EBS volume "XXX" to instance "YYY": "UnauthorizedOperation: You are not authorized to perform this operation.
Am I missing something?
I found a solution; it's described here.
As of kops 1.8.0-beta.1, the master node requires you to tag the AWS volume with:
KubernetesCluster: <clustername-here>
So it's necessary to create the EBS volume with that tag using the awscli:
aws ec2 create-volume --size 10 --region eu-central-1 --availability-zone eu-central-1a --volume-type gp2 --tag-specifications 'ResourceType=volume,Tags=[{Key=KubernetesCluster,Value=<clustername-here>}]'
or you can tag it manually in EC2 -> Volumes -> Your volume -> Tags.
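The CLI equivalent for tagging an existing volume (the volume ID is a placeholder) is:
aws ec2 create-tags --resources vol-0123456789abcdef0 \
  --tags Key=KubernetesCluster,Value=<clustername-here>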
That's it.
EDIT:
The right cluster name can be found within the tags of the EC2 instances that are part of the cluster. The key is the same: KubernetesCluster.
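For example, you can read it off one of the cluster's instances like this (the instance ID is a placeholder):
aws ec2 describe-tags \
  --filters "Name=resource-id,Values=i-0123456789abcdef0" "Name=key,Values=KubernetesCluster" \
  --query "Tags[0].Value" --output text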