Currently we are running 4 commands:
The first two are AWS CLI commands run in the Jenkins Docker container:
sh 'aws cloudformation package ...'
s3Upload()
The other two are AWS CLI commands run in a Docker container:
aws s3 cp source dest
aws cloudformation deploy
When these 4 commands run in a Docker container, the AWS CLI derives its access permissions from the Docker host (EC2), which assumes a role whose policy grants the required permissions (access to S3 and create/update CloudFormation stacks).
The problem with this solution is that we have to assign this role (say xrole) to every EC2 instance running in each test environment, and there are 3-4 test environments.
Internally, AWS creates an ad hoc identity such as aws::sts::{account Id}::assumerole/xrole/i-112223344, and the above 4 commands run on behalf of this identity.
A better solution would be to create a user, assign it the same role (xrole), and run the above 4 commands as this user.
But:
1) What is the process to create such a user? It has to assume xrole...
2) How do we run the above 4 commands as this user?
Best practice is to use roles, not users, when working with EC2 instances. Users are necessary only when you need to grant permissions to applications running on computers outside of the AWS environment (on premises). And even then, it is still best practice to grant such a user permission only to assume a role that grants the necessary permissions.
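A minimal sketch of that setup, reusing the 123456789012 account id style from the examples below and the role name xrole from the question; the user name ci-deployer and the policy name assume-xrole are hypothetical. xrole's trust policy must also list this user (or the account) as a principal allowed to assume it.
# Create the user that will only be allowed to assume xrole (name is hypothetical)
aws iam create-user --user-name ci-deployer

# Allow this user to call sts:AssumeRole on xrole and nothing else
aws iam put-user-policy --user-name ci-deployer --policy-name assume-xrole \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::123456789012:role/xrole"
    }]
  }'

# Assume the role, export the temporary credentials, then run the 4 commands
creds=$(aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/xrole \
  --role-session-name ci-deploy \
  --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]' \
  --output text)
export AWS_ACCESS_KEY_ID=$(echo "$creds" | cut -f1)
export AWS_SECRET_ACCESS_KEY=$(echo "$creds" | cut -f2)
export AWS_SESSION_TOKEN=$(echo "$creds" | cut -f3)
aws cloudformation package ...   # the remaining commands now run as xrole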
If you are running all your commands from within containers and you want to grant permissions to the containers instead of the whole EC2 instance, you can use the ECS service instead of plain EC2 instances.
When using the EC2 launch type with ECS, you have the same control over the EC2 instance, but the difference is that you can attach a role to a particular task (container) instead of the whole EC2 instance. By doing this, you can have several different tasks (containers) running on the same EC2 instance while each of them has only the permissions it needs. So if one of your containers needs to upload data to S3, you can create the necessary role, specify it in the task definition, and only that particular task will have those permissions. Neither the other tasks nor the EC2 instance itself will be able to upload objects to S3.
Moreover, if you specify the awsvpc networking mode for your tasks, each task gets its own ENI, which means you can specify a Security Group for each task separately even if they are running on the same EC2 instance.
Here is an example of a task definition using a Docker image stored in ECR and a role called AmazonECSTaskS3BucketRole.
{
  "containerDefinitions": [
    {
      "name": "sample-app",
      "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/aws-nodejs-sample:v1",
      "memory": 200,
      "cpu": 10,
      "essential": true
    }
  ],
  "family": "example_task_3",
  "taskRoleArn": "arn:aws:iam::123456789012:role/AmazonECSTaskS3BucketRole"
}
Here is the documentation for task definitions.
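To tie the two points above together, here is a sketch of registering that definition and running it with its own Security Group; it assumes "networkMode": "awsvpc" has been added to the definition, and the file name, cluster name, subnet, and security group IDs are placeholders.
# Register the task definition above (saved locally as task-def.json, a placeholder name)
aws ecs register-task-definition --cli-input-json file://task-def.json

# Run it with awsvpc networking so the task gets its own ENI and Security Group
aws ecs run-task \
  --cluster my-cluster \
  --task-definition example_task_3 \
  --launch-type EC2 \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0abc1234],securityGroups=[sg-0abc1234]}'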
Applications running on the same host share the permissions assigned to the host through the instance profile. If you would like to segregate different applications running on the same instance due to security requirements, it is best to launch them on separate instances.
Using access keys per application is not a recommended approach as access keys are long-term credentials and they can easily be retrieved when the host is shared.
It is possible to assign IAM roles to ECS tasks as suggested by the previous answer. However, containers that are running on your container instances are not prevented from accessing the credentials that are supplied through the instance profile. It is therefore recommended to assign minimal permissions to the container instance roles.
If you run your tasks in awsvpc network mode, then you can configure the ECS agent to prevent a task from accessing the instance metadata. You just need to set the agent configuration variable ECS_AWSVPC_BLOCK_IMDS=true and restart the agent.
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task-iam-roles.html
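A minimal sketch of that agent setting on a container instance, assuming Amazon Linux 2 where the ECS agent runs as the ecs systemd service:
# Block tasks running in awsvpc mode from reaching the EC2 instance metadata service
echo "ECS_AWSVPC_BLOCK_IMDS=true" | sudo tee -a /etc/ecs/ecs.config

# Restart the ECS agent so it picks up the new configuration
sudo systemctl restart ecs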
I'm trying to use AWS cli commands inside the container.
I have attached a policy to the ECS cluster instance, but the container comes up with an error: it calls an AWS CLI command as its entrypoint when it boots, and that call fails.
My IAM role with an instance profile allows KMS get and decrypt, which is what I need for the AWS CLI operations.
Is there a way to pass credentials, like an instance profile, into an ECS task container?
To pass a role to your container(s) in a task you can use IAM Roles for Tasks:
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances.
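Once a task role is attached, the AWS CLI and SDKs inside the container pick up its credentials automatically from the ECS credentials endpoint. A quick sanity check from inside a running container (a sketch, assuming curl is present in the image):
# The ECS agent injects this variable into containers of tasks that have a task role
echo "$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"

# Fetch the temporary credentials provided by the task role
curl "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"

# Or simply confirm which principal the CLI is using
aws sts get-caller-identity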
I have an ECS service which requires AWS credentials. I use ECR to store Docker images and a Jenkins instance, visible only to VPN connections, to build the images.
I see two possibilities for providing AWS credentials to the service:
Store them as a Jenkins secret and insert them into the Docker image during the build
Make them a part of the environment when creating ECS Task definition
What is more secure? Are there other possibilities?
First of all, you should not use long-term AWS credentials while working inside AWS; you should assign a role to the task definition or service instead of passing credentials to docker build or to the task definition.
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances.
That said, sometimes the underlying application is not designed in a way that can use a role. In that case I recommend storing the value as an environment variable in the task definition. But then, where should the value of that environment variable come from?
Task definitions support two methods for supplying environment variables:
Plain text as a direct value
The 'valueFrom' attribute of the ECS task definition
The following is a snippet of a task definition showing the format when referencing a Systems Manager Parameter Store parameter.
{
  "containerDefinitions": [{
    "secrets": [{
      "name": "environment_variable_name",
      "valueFrom": "arn:aws:ssm:region:aws_account_id:parameter/parameter_name"
    }]
  }]
}
This is the most secure method and the one recommended by the AWS documentation, so it is preferable to plain-text environment variables in the task definition or in the Dockerfile.
You can read more here and in systems-manager-parameter-store.
But to use this, you must grant the task permission to access the Systems Manager Parameter Store, as sketched below.
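A sketch of that permission; it assumes secrets are resolved by the task's execution role, here named ecsTaskExecutionRole, and the policy name, parameter ARN, and KMS key ARN are placeholders.
# Allow the execution role to read the parameter (and decrypt it, if it is a SecureString)
aws iam put-role-policy --role-name ecsTaskExecutionRole --policy-name read-ssm-parameter \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": ["ssm:GetParameters", "kms:Decrypt"],
      "Resource": [
        "arn:aws:ssm:region:aws_account_id:parameter/parameter_name",
        "arn:aws:kms:region:aws_account_id:key/key_id"
      ]
    }]
  }'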
I am attempting to launch a Docker container stored in ECR as an AWS batch job. The entrypoint python script of this container attempts to connect to S3 and download a file.
I have attached a role with AmazonS3FullAccess as the AWSBatchServiceRole in the compute environment, and I have also attached a role with AmazonS3FullAccess to the compute resources.
This is the error being logged: botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://s3.amazonaws.com/"
There is a chance that these instances are being launched in a custom VPC, not the default VPC. I'm not sure this makes a difference, but maybe that is part of the problem. I do not have appropriate access to check. I have tested this Docker image on an EC2 instance launched in the same VPC and everything works as expected.
You mentioned compute environment and compute resources. Did you add this S3 policy to the Job Role as mentioned here?
After you have created a role and attached a policy to that role, you can run tasks that assume the role. You have several options to do this:
Specify an IAM role for your tasks in the task definition. You can create a new task definition or a new revision of an existing task definition and specify the role you created previously. If you use the console to create your task definition, choose your IAM role in the Task Role field. If you use the AWS CLI or SDKs, specify your task role ARN using the taskRoleArn parameter. For more information, see Creating a Task Definition.
Specify an IAM task role override when running a task. You can specify an IAM task role override when running a task. If you use the console to run your task, choose Advanced Options and then choose your IAM role in the Task Role field. If you use the AWS CLI or SDKs, specify your task role ARN using the taskRoleArn parameter in the overrides JSON object. For more information, see Running Tasks.
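For the second option, a sketch of overriding the task role at run time (the cluster name, task definition, and role ARN are placeholders; the role must trust ecs-tasks.amazonaws.com):
# Override the task role for this run only
aws ecs run-task \
  --cluster my-cluster \
  --task-definition example_task_3 \
  --overrides '{"taskRoleArn": "arn:aws:iam::123456789012:role/AmazonECSTaskS3BucketRole"}'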
I am deploying my first batch job on AWS. When I run my docker image in an EC2 instance, the script called by the job runs fine. I have assigned an IAM role to this instance to allow S3 access.
But when I run the same script as a job on AWS Batch, it fails due to Access Denied errors on S3 access. This is despite the fact that in the Job Definition, I assign an IAM role (created for Elastic Container Service Task) that has full S3 access.
If I launch my batch job with a command that does not access S3, it runs fine.
Since using an IAM role for the job definition seems not to be sufficient, how then do I grant S3 permissions within a Batch Job on AWS?
EDIT
So if I just run aws s3 ls interlinked as my job, that also runs properly. What does not work is running the R script:
library(aws.s3)
get_bucket("mybucket")[[1]]
Which fails with Access Denied.
So it seems the issue is either with the aws.s3 package or, more likely, my use of it.
The problem turned out to be that I had IAM Roles specified for both my compute environment (more restrictive) and my jobs (less restrictive).
In this scenario (where role-based credentials are desired), the aws.s3 R package uses aws.signature and aws.ec2metadata to pull temporary credentials from the role. It pulls the compute environment role (which is an EC2 role), but not the job role.
My solution was just to grant the required S3 permissions to my compute environment's role.
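For reference, a sketch of granting those S3 permissions to the compute environment's instance role; the role name ecsInstanceRole is an assumption, as a Batch compute environment may use a differently named instance role, and the managed policy is just one option:
# Attach an AWS managed S3 policy to the compute environment's EC2 instance role
aws iam attach-role-policy \
  --role-name ecsInstanceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess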
I'm fairly new to AWS. I'm setting up an EC2 instance (an Ubuntu 18.04 LAMP server).
I've installed the aws CLI on the instance, so I can automate EBS snapshots for backup.
I've also created an IAM role with the needed permissions to run aws ec2 create-snapshot, and I've assigned this role to my EC2 instance.
My question: is there any need to run aws configure on the EC2 instance, in order to set the AWS Access Key ID and AWS Secret Access Key? I'm still wrapping my head around AWS IAM roles – but (since the EC2 instance has a role), it sounds like the instance will acquire the needed keys from IAM automagically. Therefore, I assume that there's never any need to run aws configure. (In fact, it seems like this would be counterproductive, since the keys set via aws configure would override the keys acquired automatically via the role.)
Is all of that accurate?
No, there is no need to run aws configure. The AWS CLI works through a list of credential providers, and the instance metadata service is eventually reached even if you have not configured the CLI:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#config-settings-and-precedence
And yes, if you add keys to the AWS CLI config file, they will be used with higher priority than those obtained from the instance metadata service.
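A quick way to confirm, on the instance itself, that the role is being picked up without ever running aws configure (a sketch):
# Shows the assumed-role identity the CLI resolved from the instance profile
aws sts get-caller-identity

# The underlying temporary credentials can also be inspected via instance metadata
curl http://169.254.169.254/latest/meta-data/iam/security-credentials/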