I followed this link: Enabling cross-account access to Amazon EKS cluster resources
I can make a pod in an Amazon EKS cluster hosted in the CI account interact with and manage the AWS resources in a target account.
This is my AWS config file:
[profile ci-env]
role_arn = arn:aws-cn:iam::CICD_ACCOUNT:role/eksctl-jenkins-cicd-demo-addon-iamserviceacc-Role1-1AQZO394370HE
web_identity_token_file = /var/run/secrets/eks.amazonaws.com/serviceaccount/token
region = cn-north-1
[profile target-env]
role_arn = arn:aws-cn:iam::TARGET_ACCOUNT:role/target-account-iam-role
source_profile = ci-env
role_session_name = xactarget
region = cn-north-1
When I run aws s3 ls --profile target-env, it works and lists the S3 buckets in my target account.
Then I want to deploy a CDK app from the CI account that creates an S3 bucket in the target account.
But when I run cdk deploy --profile target-env, I get:
Need to perform AWS calls for account TARGET_ACCOUNT, but no credentials have been configured.
I am very confused and don't know how to solve this.
I am a beginner with AWS services, thanks in advance for helping me!
You need to bootstrap all of your (target) accounts to trust the CICD account.
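A minimal sketch of what that bootstrap could look like, assuming the modern bootstrap stack and reusing the account placeholders, region, and profile names from your config above (exact flags depend on your CDK version, so treat this as an assumption rather than a drop-in command):
# Run once against the target account, authenticated with a profile that can deploy there.
# --trust tells the target account's bootstrap resources to trust the CICD account.
cdk bootstrap aws://TARGET_ACCOUNT/cn-north-1 \
  --trust CICD_ACCOUNT \
  --cloudformation-execution-policies arn:aws-cn:iam::aws:policy/AdministratorAccess \
  --profile target-env
After the target account is bootstrapped this way (and the CICD account is bootstrapped as well), cdk deploy --profile target-env should be able to find the roles it needs in the target account.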
Otherwise, you would have to create and manage the cross-account access yourself:
IAM Roles + Policies (in all accounts)
S3 Bucket for artifacts + bucket policies (in CICD account)
Key Management Service -> Customer Managed Key + Policies to allow the target accounts
You can find an example architecture here that applies this approach:
If it's possible for you, you might switch to CDK Pipelines.
That guide also covers the bootstrapping (including trusting), and every step/resource mentioned above is created and properly configured.
It has a few drawbacks as of now, but it's in developer preview, has quite decent usability, and already makes your life a lot easier.
Related
I have an Amazon EKS cluster in which I am deploying ExternalSecrets. I deploy our cluster using IaC (Terraform). Now I want to be able to use
this methodology to allow my new service pods to assume any AWS role for my AWS account and pull the secrets. I create IAM roles in Terraform files using kube2iam. Any help on how I can achieve this? I have added the documentation link as well: Documentation
I have an AWS account created under an Organization, say Account ID: 12345. It is the parent account. Now I have a new role created, say in Account ID: 67890. I have switched my role from the parent account to the new one. But when I execute the CloudFormation template from the AWS CLI, it still tries to create the env in my parent account (i.e., 12345) instead of the new account.
My question is: how can I execute/create the env using CFT from the AWS CLI in my new account (i.e., 67890)? Or is there a way to specify the account ID in which the env should be created?
You most likely forgot to configure your AWS CLI to use credentials from the linked account. You may create a new profile and specify it when you run the CLI command. Example:
aws configure --profile=account2
aws --profile=account2 cloudformation create-stack ...
If you are unable to set up an IAM credential on Account2, you may try to set up the CLI to use the cross-account role you already have. You'll need to manually add the following block to your ~/.aws/config file:
[profile account2]
role_arn = arn:aws:iam::123456789012:role/account2role
source_profile = account1
Replace 123456789012 and account2role with their corresponding values.
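To double-check which account a profile resolves to before creating the stack, something like this should work (the profile name follows the example above):
aws sts get-caller-identity --profile account2
The Account field in the output should show the linked account's ID (67890 in your case), not the parent account.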
I want an AWS master account where I can manage other AWS accounts/IAM users. Is this achievable? I tried AWS Organizations, but it does not apply to IAM users (only at the account level). Please help.
You could create a custom role in any account that you have, and then use the AWS API to assume this role from a script.
For example, you create the role custom_role in every account that you own.
Then you use the AWS SDK or CLI to assume the role.
Configure the role in a credentials profile:
[profile custom_role]
role_arn = arn:aws:iam::123456789012:role/custom_role
source_profile = default
Use the AWS CLI to create a user in the other account:
aws iam create-user --user-name user_test --profile custom_role
You could do the same thing through the AWS SDK (like boto3 in Python). If you want to manage all accounts, you could develop a script that automates that work.
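A minimal sketch of such a script, assuming you have defined one profile per account (the profile names custom_role_111111111111 and custom_role_222222222222 below are hypothetical, each pointing at that account's custom_role):
# Hypothetical profile names; each profile's role_arn points at custom_role in the corresponding account.
for account in 111111111111 222222222222; do
  aws iam create-user --user-name user_test --profile "custom_role_${account}"
done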
I am trying to configure jboss to use AWS IAM Roles for accessing S3 and SQS. All of the documentation I've seen uses static access and secret keys rather than the dynamic keys that roles allow for.
Is there any documentation on doing this?
Create an EC2 instance with that role assigned. Any app you run on that instance will be able to access the AWS resources.
This way you don't need to write any security-related code within the application.
Also, when you assign the role to the EC2 instance, you don't need to supply any credentials in your code.
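For example, launching the instance with an instance profile from the CLI could look roughly like this (the AMI ID and the jboss-s3-sqs-role instance profile name are placeholders; an instance profile wrapping your role must already exist):
# Launch an instance with the IAM role attached via its instance profile (names are hypothetical).
aws ec2 run-instances \
  --image-id ami-0abcdef1234567890 \
  --instance-type t3.medium \
  --iam-instance-profile Name=jboss-s3-sqs-role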
In AWS, there are two approaches to granting your code permission, via AWS IAM, to access AWS resources such as S3 and SQS.
If your code runs in Amazon compute services such as EC2 or Lambda, it is recommended to create an IAM role with the required policies to access S3 and SQS, and to allow the compute service (EC2, Lambda) to assume that role (using trust relationships). After attaching this role, either to EC2 or Lambda, you can directly use the AWS SDK to access S3 and SQS without needing to configure any credentials or access tokens for the SDK.
For more information, see Using an IAM Role to Grant Permissions to
Applications Running on Amazon EC2 Instances.
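As a quick check of this first approach (assuming the AWS CLI is installed on the instance), you can confirm that the SDK/CLI picks up the role credentials without any configured keys:
# Run on the EC2 instance; it should report the assumed role ARN rather than an IAM user.
aws sts get-caller-identity
# The instance metadata service also exposes the temporary credentials the SDK uses.
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/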
If your code runs on premises or external to the Amazon infrastructure, you need to create an IAM user with the required policies, create access keys (Access Key ID & Secret Key), and initialize the SDK to allow access to S3 or SQS as shown below.
// Replace 'akid' and 'secret' with the IAM user's access key ID and secret access key.
var AWS = require('aws-sdk');
AWS.config.credentials = new AWS.Credentials({
  accessKeyId: 'akid', secretAccessKey: 'secret'
});
I am trying to use the aws cookbook with IAM roles, but when I try to omit aws_access_key and aws_secret_access_key in the aws_ebs_volume block, Chef keeps showing an error: RightAws::AwsError: AWS access keys are required to operate on EC2.
I assume that when the cookbook says to omit the resource parameters aws_secret_access_key and aws_access_key, I should just delete them from the block.
aws_ebs_volume "userhome_volume" do
provider "aws_ebs_volume"
volume_id node['myusers']['usershome_ebs_volid']
availability_zone node['myusers']['usershome_ebs_zone']
device node['myusers']['usershome_ebs_dev_id']
action :attach
end
Does anyone have an example of the aws cookbook with IAM roles, please?
Update:
Do I still need to define the aws creds data bag if I already have the proper IAM role attached to the instance?
When I use an IAM role with the aws cookbook, what does the aws_ebs_volume block look like?
In order to manage AWS components, you need to provide authentication credentials to the node in one of two ways:
explicitly pass credentials parameter to the resource
or let the resource pick up credentials from the IAM role assigned to the instance
When you provision the instance, you should assign it the appropriate role in "Step 3. Configure Instance Details" (when using the console). The setting "IAM role" for EC2 automatically deploys and rotates AWS credentials for you, eliminating the need to store your AWS access keys with your application. On an instance provisioned this way, you no longer need to include aws_access_key and aws_secret_access_key in the aws_ebs_volume block.
Here are code examples on how to launch an instance with an IAM role using the IAM and Amazon EC2 CLIs:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html
and here are some examples of automating IAM credentials with Ruby and Chef:
http://www.getchef.com/blog/2013/12/19/automating-iam-credentials-with-ruby-and-chef/
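For reference, the core IAM CLI steps from the first link look roughly like this (the chef-client-role name and trust-policy.json file are placeholders you would adapt):
# Create the role, wrap it in an instance profile, and link the two (names are hypothetical).
aws iam create-role --role-name chef-client-role --assume-role-policy-document file://trust-policy.json
aws iam create-instance-profile --instance-profile-name chef-client-role
aws iam add-role-to-instance-profile --instance-profile-name chef-client-role --role-name chef-client-role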
When you assign the appropriate IAM role during instance provisioning, your code should work without aws_access_key and aws_secret_access_key.
Here are the steps:
Set up your S3, Chef server, and IAM role as described here:
https://securosis.com/blog/using-amazon-iam-roles-to-distribute-security-credentials-for-chef
Execute "knife configure client ./" to create client.rb and validation.pem, then transfer them from your Chef server into your bucket.
Launch a new instance with the appropriate IAM Role you set up for Chef and your S3 bucket.
Specify your customized cloud-init script (a rough sketch follows after these steps) in the User Data field or command-line argument as described here:
https://securosis.com/blog/using-cloud-init-and-s3cmd-to-automatically-download-chef-credentials
You can also host the script as a file and load it from a central repository using an include.
Execute chef-client.
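A minimal sketch of what such a user-data script might look like, assuming the instance role can read the bucket; the bucket name my-chef-credentials-bucket is a placeholder, and the AWS CLI stands in here for the s3cmd approach used in the linked post:
#!/bin/bash
# Hypothetical bucket name; the instance's IAM role must allow s3:GetObject on it.
mkdir -p /etc/chef
aws s3 cp s3://my-chef-credentials-bucket/client.rb /etc/chef/client.rb
aws s3 cp s3://my-chef-credentials-bucket/validation.pem /etc/chef/validation.pem
chef-client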