I'm using Kubeless on AWS EKS. The Kubeless installation manifest contains some CRDs and a Deployment object, and that Deployment already has a Service Account attached. I have created another Service Account in the kube-system namespace, which has some AWS IAM roles attached. I also want to attach this newly created Service Account (the one used for the IAM roles) to the Kubeless Deployment object by modifying the Kubeless manifest file.
I want to have two Service Accounts attached to the Deployment object: the one that comes with Kubeless and another for AWS IAM. Any help would be appreciated. Thanks.
This is not possible. If you look at the API documentation for PodSpec v1 core, you can see that serviceAccountName expects a string, not an array or object. This is because a ServiceAccount resource creates a 1:1 relationship between your pod and its authentication against the API server.
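For illustration, this is the relevant fragment of a Deployment's pod spec; note that the field takes a single string (the account name here is a placeholder):

```yaml
spec:
  template:
    spec:
      # serviceAccountName accepts exactly one string value;
      # there is no field that takes a list of service accounts.
      serviceAccountName: kubeless-controller
```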
You will either need to:
Split your workload into multiple pods, each of which can use a different service account (serviceAccountName is a pod-level field, so the containers within one pod always share it).
Combine your service account capabilities into a single account and apply it exclusively to this pod.
I recommend #2.
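A minimal sketch of option 2 on EKS, assuming you use the eks.amazonaws.com/role-arn annotation (IRSA) to carry the IAM capability on the merged account; all names and the role ARN are placeholders, and the Deployment is abbreviated:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubeless-combined            # hypothetical merged account
  namespace: kubeless
  annotations:
    # IRSA: pods using this account can assume the IAM role (placeholder ARN)
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/my-iam-role
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubeless-controller-manager
  namespace: kubeless
spec:
  selector:
    matchLabels:
      kubeless: controller
  template:
    metadata:
      labels:
        kubeless: controller
    spec:
      serviceAccountName: kubeless-combined   # the single merged account
      containers:
        - name: kubeless-controller-manager
          image: bitnami/kubeless-controller-manager:latest   # placeholder tag
```

You would also need to update any RoleBinding/ClusterRoleBinding in the Kubeless manifest that references the original service account so it points at the merged one; otherwise the controller loses its API-server permissions.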
Service account "abcdefc-compute#developer.gserviceaccount.com" does not exist.
I am trying to create a Kubernetes cluster, but GCP gives me the error above.
I checked for the account name under service accounts but could not find it; instead I have
'ayushaccount@abcdef.iam.gserviceaccount.com'.
I tried to create another service account with the email "abcdefc-compute@developer.gserviceaccount.com", but it does not allow me to create it.
I am new to GCP and I do not know how to solve this problem. All I am looking to do is create a Kubernetes cluster in GCP.
Looks like you are missing the default service account for your GCP project.
You have two options:
(re)create the default service account
when creating your GKE cluster, under NODE POOLS, go to default-pool -> Security and, for Service account, select the one which exists.
If you want to (re)create the default service account, you can disable and then re-enable the Google Compute Engine API via the console, or run gcloud services enable compute.googleapis.com from Cloud Shell or from the command line on your workstation.
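A sketch of that disable/enable cycle from the command line; re-enabling the Compute Engine API recreates the default service account (the project ID is a placeholder):

```sh
# CAUTION: disabling the API can affect running Compute Engine resources.
gcloud services disable compute.googleapis.com --project my-project-id
# Re-enabling recreates the PROJECT_NUMBER-compute@developer.gserviceaccount.com account
gcloud services enable compute.googleapis.com --project my-project-id
```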
I am having some trouble doing CodeDeploy with my AWS Educate account. Initially, when I was setting things up, I was following this article:
https://hackernoon.com/deploy-to-ec2-with-aws-codedeploy-from-bitbucket-pipelines-4f403e96d50c?fbclid=IwAR3rezVMGpuQxTJ3AneOeTL2oMHjCKbQB5C5ouTLhJQ5gRp3JeL4GK0f53o
In it, the author talks about setting up an IAM service account. The problem is that AWS Educate allows you to create the account, but it won't generate keys. In order for me to deploy my Spring Boot (and VueJS) apps to my S3 buckets and EC2s from my Bitbucket repo, I need a key, a secret key, and a CodeDeploy deployment group.
Fine, I was able to click the Account Details button on the labs.vocareum page and get my keys; however, when I am attempting to set up a CodeDeploy deployment group, it asks for a service role and I am unsure where to get this.
Why is the service role necessary?
The service role is used by the CodeDeploy service in order to perform actions outside CodeDeploy (e.g. on another service such as S3).
AWS has a special approach to integrating services. Basically, you have to give each service you are using explicit permission to use another service (even if the access stays within the bounds of the same account). There is no inherent permission given to the CodeDeploy service to change things in S3. In fact, CodeDeploy is not even allowed to read files from S3 without you explicitly allowing it.
Here is the official explanation from the docs [1]:
In AWS, service roles are used to grant permissions to an AWS service so it can access AWS resources. The policies that you attach to the service role determine which AWS resources the service can access and what it can do with those resources.
What you are actually doing according to the hackernoon article
You need a user account with programmatic access to your AWS account.
The user account needs to have a policy attached which grants permission to upload files into S3 and trigger a CodeDeploy deployment. You provide the access key and secret access key of this user to Bitbucket so it can upload the artifacts into S3 and trigger a deployment on behalf of your user identity.
Unrelated to steps 1 and 2: create a role in AWS IAM [2] which will be used by both services (NOT Bitbucket): CodeDeploy and EC2. Strictly speaking, the author of the hackernoon article is merging two steps into one here: you are creating one role which is used by both services, as specified by the two different principals in the trust relationship (ec2.amazonaws.com and codedeploy.us-west-2.amazonaws.com; see the sketch after the next list). Usually this is not how IAM policies should be configured, because it violates the principle of granting least privilege [4]: the EC2 instance receives permissions from the AWSCodeDeployRole policy which it probably does not need, as far as I can see. But that is just a philosophical note here. All the steps mentioned in the hackernoon article should technically work.
So, what you actually do is:
granting CodeDeploy permission to perform various actions inside your account, such as viewing which EC2 instances you have started etc. (this is specified inside the policy AWSCodeDeployRole [3])
granting EC2 permission to read the revision which was uploaded to S3 (this is specified inside the policy AmazonS3FullAccess)
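For illustration, a trust policy covering both principals from the article might look like this (a sketch; the regional CodeDeploy principal follows the article's region):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "ec2.amazonaws.com",
          "codedeploy.us-west-2.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Splitting this into two separate roles, one per principal, would be the least-privilege variant discussed above.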
To get back to your question...
However, when I am attempting to set up a CodeDeploy deployment group, it asks for a service role and I am unsure where to get this?
You need to create the service role yourself inside the IAM service (see [2]). I do not know if this is supported by AWS Educate, but I guess it should be. After creating the service role, you MUST assign it to the CodeDeploy deployment group (that is the point where you are stuck right now). Moreover, you must assign that same service role to your EC2 instance profile.
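If your AWS Educate account allows CLI access to IAM, creating the role could look roughly like this (a sketch; the role name is a placeholder, and trust-policy.json is the document sketched above):

```sh
# Create the service role using the trust policy saved as trust-policy.json
aws iam create-role \
  --role-name CodeDeployServiceRole \
  --assume-role-policy-document file://trust-policy.json

# Attach the AWS managed policy that grants CodeDeploy its permissions [3]
aws iam attach-role-policy \
  --role-name CodeDeployServiceRole \
  --policy-arn arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole
```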
References
[1] https://docs.aws.amazon.com/codedeploy/latest/userguide/getting-started-create-service-role.html
[2] https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_create_for-service.html#roles-creatingrole-service-console
[3] https://github.com/SummitRoute/aws_managed_policies/blob/master/policies/AWSCodeDeployRole
[4] https://docs.aws.amazon.com/IAM/latest/UserGuide/best-practices.html#grant-least-privilege
I have set up an IAM OIDC provider in my EKS cluster and have used it to manually assign IAM roles to Kubernetes pods.
For my Kubernetes pod's pipeline, however, I want to have it automatically create and update the IAM role for the pod as part of the normal application pipeline. This makes it easy to update the IAM permissions as your application needs to interact with more services over time.
Does anyone know of a way to create an IAM role with a service operator and associate it with a pod in a manifest file? I've searched all day and it doesn't look like it can be done.
The only alternative I can think of is creating the IAM role in a different pipeline/workflow, which would work but would make updating the IAM role with new permissions frustrating, as you would have to coordinate the ordering of deployment between the two pipelines.
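For context, the manual association I am doing today is just an annotation on the pod's service account, roughly like this (names and the ARN are placeholders):

```sh
# The IAM role is created out-of-band; the pipeline only knows the final ARN
kubectl annotate serviceaccount my-app \
  eks.amazonaws.com/role-arn=arn:aws:iam::111122223333:role/my-app-role \
  --namespace default
```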
I would like to move a pod from an AWS-hosted K8s cluster to GKE (Google). The problem is that on a GKE instance I don't have the AWS metadata available to assume an IAM role (obviously).
But I guess I can do something similar to kube2iam in order to allow the pods to assume roles as if they were running inside AWS. Meaning, run a daemonset that would simulate access to the metadata for the pods.
I already have a VPN set up between the clouds.
Has anyone done this already?
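For reference, this is the kind of kube2iam pod annotation I would want to keep working on GKE (names are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app                     # placeholder
  annotations:
    # kube2iam intercepts calls to the EC2 metadata endpoint and serves
    # temporary credentials for this role instead
    iam.amazonaws.com/role: my-app-role
spec:
  containers:
    - name: app
      image: example/app:latest    # placeholder
```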
I haven't tried that yet. But keep in mind that in GKE, IAM roles are associated with accounts (user accounts/service accounts) and not with resources (pods/nodes).
Also, kube2iam looks more like a security solution than a compatibility solution. Once you have the credentials from the kube2iam node, you still have the compatibility issues.
I think a better solution would be to use API calls and deal with the authentication.
A newer and possibly better option for your use case is the GKE Workload Identity feature that Google announced in June of this year: https://www.google.com/amp/s/cloudblog.withgoogle.com/products/containers-kubernetes/introducing-workload-identity-better-authentication-for-your-gke-applications/amp/
It lets you bind GCP IAM service accounts to K8s service accounts per namespace. Then, any pod created with that K8s SA in that namespace will automatically have temporary credentials for the bound IAM SA mounted, and the gcloud SDK auto-authenticates when executing gcloud commands from the pod.
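A sketch of the two-step binding (project, namespace, and account names are placeholders):

```sh
# 1) Allow the K8s SA to impersonate the GCP SA through Workload Identity
gcloud iam service-accounts add-iam-policy-binding \
  gcp-sa@my-project.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:my-project.svc.id.goog[my-namespace/my-ksa]"

# 2) Annotate the K8s SA so GKE knows which GCP SA it maps to
kubectl annotate serviceaccount my-ksa \
  iam.gke.io/gcp-service-account=gcp-sa@my-project.iam.gserviceaccount.com \
  --namespace my-namespace
```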
I'm logging into a role through SSO and I'm trying to create a new Elastic Beanstalk environment (newest Tomcat, if it matters), and I am getting the following error, which is preventing me from even getting the environment to start building:
(Namespace: 'aws:elasticbeanstalk:environment', OptionName: 'ServiceRole'): Invalid service role
This is happening even when I am trying to clone an existing environment. I've tried to auto-generate a service role and to manually create one. Both give the error. This error does not happen when I log in as a user with the same permissions.
When you clone an environment using the Elastic Beanstalk console, you have the option to choose a new platform and a service role. Service role is a newer concept in Beanstalk, documented here. A service role is not required if you are using basic health monitoring, but it is required if you choose to use enhanced health monitoring.
When creating an environment you can choose to pass an IamInstanceProfile (typically named aws-elasticbeanstalk-ec2-role) and a service role (typically named aws-elasticbeanstalk-service-role). These two roles are required when using Enhanced Application Health Monitoring.
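For example, with the AWS CLI both roles are passed as option settings when creating an environment (a sketch; application, environment, and solution stack names are placeholders):

```sh
# Solution stack name is illustrative; list valid ones with:
#   aws elasticbeanstalk list-available-solution-stacks
aws elasticbeanstalk create-environment \
  --application-name my-app \
  --environment-name my-env \
  --solution-stack-name "64bit Amazon Linux 2018.03 v3.0.6 running Tomcat 8.5 Java 8" \
  --option-settings \
    Namespace=aws:autoscaling:launchconfiguration,OptionName=IamInstanceProfile,Value=aws-elasticbeanstalk-ec2-role \
    Namespace=aws:elasticbeanstalk:environment,OptionName=ServiceRole,Value=aws-elasticbeanstalk-service-role
```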
Please note that these two roles require completely different sets of permissions, and you should use a different role for each of them. You can find the list of permissions required for the service role and the instance profile documented here.
When creating/cloning/modifying environments using the AWS console, you will be shown an option to choose a service role. If you have never used a service role before, you will be presented with an option to "Create a new role". The console allows you to create the service role required by Beanstalk with a single button click. You can view the permissions before creating the role.
After the first creation, the console will present you with a dropdown containing the role you created previously (typically named aws-elasticbeanstalk-service-role), and you can reuse this service role.
From the documentation: "A service role is the IAM role that Elastic Beanstalk assumes when calling other services on your behalf. Elastic Beanstalk uses the service role that you specify when creating an Elastic Beanstalk environment when it calls Amazon Elastic Compute Cloud (Amazon EC2), Elastic Load Balancing, and Auto Scaling APIs to gather information about the health of its AWS resources."
When creating/using a role, you need to make sure the IAM user has pass-role permission for the role you created. If you are not using the root account, make sure you have the correct policies attached to the IAM user.
Note that the iam:PassRole permission allows your IAM user to pass the role to the Beanstalk service.
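A sketch of the policy statement the IAM user needs (the account ID is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iam:PassRole",
      "Resource": "arn:aws:iam::111122223333:role/aws-elasticbeanstalk-service-role"
    }
  ]
}
```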
Update
There was an issue with Single Sign On that has now been resolved. Please update here or in the AWS forum thread below if you are still seeing issues.
AWS forum thread: https://forums.aws.amazon.com/thread.jspa?threadID=171369
I got the same error yesterday, and a different one today using the same stack: "Unable to assign role. Please verify that you have permission to pass this role: XXXXXX."
And I solved it by assigning the policy AWSElasticBeanstalkFullAccess to my user.
Here you can read more:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/concepts-roles.html#concepts-roles-user
There seems to be a thread on the AWS support forum here: https://forums.aws.amazon.com/thread.jspa?messageID=670359
I am having the same issue when trying to access a Beanstalk environment via a cross-account IAM policy.
I think that logging into the console with an IAM account that belongs to that particular AWS account will resolve the issue. I'm certain AWS folks are working on it.