Passing an IAM role to a Docker container on EC2 - amazon-web-services

What is the suggested way to pass IAM role to a Docker container on EC2?
I have a mlflow project running in a docker environment on EC2.
The python code needs to read and write from S3.
The following is the error (sometimes I get other errors that also indicate no S3 access from the container, for example an S3 resource not found error):
botocore.exceptions.ProfileNotFound: The config profile (xxx) could not be found
To solve the s3 access issue, I already created an IAM role that allows access to the bucket.
What are the best ways to give this role to the Docker container?
Is it possible to define the role name in Dockerfile?
Thanks

If you are using ECS to run containers on your EC2 instances, you can set the taskRoleArn in the Task Definition. If you are running Docker on EC2 without ECS, you can give the instance the role (via an instance profile) and run the container with --net host so that it can reach the instance metadata service and use the EC2 instance's role.
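For the ECS route, here is a minimal sketch of setting taskRoleArn when registering a task definition with boto3; the family name, image and role ARN below are placeholders, not values from the question:

import boto3

# Minimal sketch, assuming an IAM role with the needed S3 permissions already exists.
# The family, image and role ARN are placeholders.
ecs = boto3.client("ecs", region_name="us-east-1")

ecs.register_task_definition(
    family="mlflow-s3-task",
    taskRoleArn="arn:aws:iam::123456789012:role/my-s3-task-role",  # role assumed by the containers
    containerDefinitions=[
        {
            "name": "mlflow",
            "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/mlflow:latest",
            "memory": 512,
            "essential": True,
        }
    ],
)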

I'm using docker on EC2 and I just created an Instance Profile.
If you use Console:
Instances -> CHOOSE YOUR INSTANCE -> Actions -> Instance Settings -> Attach/Replace IAM Role
If you use CloudFormation:
EC2Instance:
  Type: AWS::EC2::Instance
  Properties:
    IamInstanceProfile: !Ref EC2InstanceIAMInstanceProfile
    ...
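Once the instance profile is attached, code inside the container can rely on boto3's default credential chain, so nothing needs to be defined in the Dockerfile. A quick hedged check (the bucket name is a placeholder):

import boto3

# Sketch of a quick check that the instance role is visible from inside the container;
# no keys are configured, so boto3 falls back to the instance metadata service.
print(boto3.client("sts").get_caller_identity()["Arn"])  # should print the assumed-role ARN for the instance role
boto3.client("s3").list_objects_v2(Bucket="my-bucket", MaxKeys=1)  # placeholder bucket name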

Related

ECS task: How to use AWS CLI within container

I'm trying to use AWS CLI commands inside the container.
I have attached a policy to the ECS cluster instance, but the container fails on startup because its entrypoint calls an AWS CLI command that errors out.
My IAM role with an instance profile allows KMS get and decrypt, which is what I need for the AWS CLI operations.
Is there a way to pass credentials, such as an instance profile, into the ECS task container?
To pass a role to your container(s) in a task you can use IAM Roles for Tasks:
With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task. Applications must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances.
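A minimal sketch of creating such a task role with boto3; the role name and the KMS actions are placeholders chosen to match the question, and the role can then be referenced via taskRoleArn in the task definition:

import json
import boto3

iam = boto3.client("iam")

# Sketch only: the role name is a placeholder. The trust policy lets ECS tasks assume the role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ecs-tasks.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="my-ecs-task-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Inline policy granting the KMS permissions the container's AWS CLI calls need.
iam.put_role_policy(
    RoleName="my-ecs-task-role",
    PolicyName="kms-read",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{"Effect": "Allow", "Action": ["kms:Decrypt", "kms:Get*"], "Resource": "*"}],
    }),
)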

The actual EC2 instance does not get the IAM instance profile that Elastic Beanstalk creates and says the environment has. How do I make this happen?

I am using boto3 to create Elastic Beanstalk applications and environments remotely. I want one of these environments to call other AWS services using boto3. My understanding is that Elastic Beanstalk "creates a default instance profile, called aws-elasticbeanstalk-ec2-role, and assigns managed policies with default permissions to it." (from https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html)
That page also states "An instance profile is a container for an AWS Identity and Access Management (IAM) role that you can use to pass role information to an Amazon EC2 instance when the instance starts."
When I inspect the Elastic Beanstalk Environment's configuration on the web console I see IAM instance profile: aws-elasticbeanstalk-ec2-role under the Security heading.
However, my instance cannot call boto3 functions without an error botocore.exceptions.NoCredentialsError: Unable to locate credentials.
When I inspect the ec2 instance on the console I see nothing under IAM role. If I set the IAM role from here the instance is then able to call boto3 functions.
How do I get the EC2 instance to automatically inherit the IAM role, or how do I specify that this role (or another custom role) should be set?
If we take a look at the boto3 create_environment() function definition, we have the option to specify OptionSettings:
OptionSettings=[
    {
        'ResourceName': 'string',
        'Namespace': 'string',
        'OptionName': 'string',
        'Value': 'string'
    },
]
We can use this to explicitly specify the IAM instance profile to be attached to the EC2 instances launched as a part of your Beanstalk environment.
The namespace to use is aws:autoscaling:launchconfiguration, with option name IamInstanceProfile, whose default value is NONE. Specify the instance profile name or ARN as the value.
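A hedged sketch of what that call could look like; the application, environment and solution stack names are placeholders:

import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Sketch only: application, environment and solution stack names are placeholders.
eb.create_environment(
    ApplicationName="my-app",
    EnvironmentName="my-app-env",
    SolutionStackName="64bit Amazon Linux 2 v3.3.13 running Python 3.8",  # placeholder stack
    OptionSettings=[
        {
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "IamInstanceProfile",
            "Value": "aws-elasticbeanstalk-ec2-role",  # or your custom instance profile name/ARN
        },
    ],
)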
You can verify the attached instance profile from the instance metadata: curl -s 169.254.169.254/latest/meta-data/iam/info
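The same check from Python, as a sketch (this assumes IMDSv1 is reachable; IMDSv2 would additionally require a session token):

import json
import urllib.request

# Sketch: equivalent of the curl command above, run from the instance (or a container on it).
with urllib.request.urlopen("http://169.254.169.254/latest/meta-data/iam/info", timeout=2) as resp:
    print(json.load(resp)["InstanceProfileArn"])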

boto3 can't connect to S3 from Docker container running in AWS batch

I am attempting to launch a Docker container stored in ECR as an AWS batch job. The entrypoint python script of this container attempts to connect to S3 and download a file.
I have attached a role with AmazonS3FullAccess to the AWSBatchServiceRole in the compute environment, and I have also attached a role with AmazonS3FullAccess to the compute resources.
This is the error being logged: botocore.exceptions.ConnectTimeoutError: Connect timeout on endpoint URL: "https://s3.amazonaws.com/"
There is a chance that these instances are being launched in a custom VPC, not the default VPC. I'm not sure this makes a difference, but maybe that is part of the problem. I do not have appropriate access to check. I have tested this Docker image on an EC2 instance launched in the same VPC and everything works as expected.
You mentioned compute environment and compute resources. Did you add this S3 policy to the Job Role as mentioned here?
After you have created a role and attached a policy to that role, you can run tasks that assume the role. You have several options to do this:
Specify an IAM role for your tasks in the task definition. You can create a new task definition or a new revision of an existing task definition and specify the role you created previously. If you use the console to create your task definition, choose your IAM role in the Task Role field. If you use the AWS CLI or SDKs, specify your task role ARN using the taskRoleArn parameter. For more information, see Creating a Task Definition.
Specify an IAM task role override when running a task. If you use the console to run your task, choose Advanced Options and then choose your IAM role in the Task Role field. If you use the AWS CLI or SDKs, specify your task role ARN using the taskRoleArn parameter in the overrides JSON object. For more information, see Running Tasks.
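For an AWS Batch job specifically, the equivalent of the task role is the job role set on the job definition; a hedged boto3 sketch, where the job definition name, image and role ARN are placeholders:

import boto3

batch = boto3.client("batch", region_name="us-east-1")

# Sketch only: names, image and role ARN are placeholders.
batch.register_job_definition(
    jobDefinitionName="s3-download-job",
    type="container",
    containerProperties={
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/my-batch-image:latest",
        "vcpus": 1,
        "memory": 1024,
        # Role assumed by the container itself (the "Job Role"), separate from the
        # AWSBatchServiceRole and from the compute resources' instance role.
        "jobRoleArn": "arn:aws:iam::123456789012:role/my-batch-s3-role",
    },
)

Note that a ConnectTimeoutError (rather than AccessDenied) can also point to the networking issue you suspect: a subnet in a custom VPC needs a route to S3, for example through a NAT gateway or an S3 VPC endpoint.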

AWS CodeDeploy Agent Access Denied from EC2 instance to S3

I have set up the CodeDeploy agent; however, when I run it, I get the error:
Error: HEALTH_CONSTRAINTS
Digging further, this is the entry in the CodeDeploy log on the EC2 instance:
InstanceAgent::Plugins::CodeDeployPlugin::CommandPoller: Cannot reach InstanceService: Aws::S3::Errors::AccessDenied - Access Denied
I have done a simple wget against the bucket and it results in:
Connecting to s3-us-west-2.amazonaws.com (s3-us-west-2.amazonaws.com)|xxxxxxxxx|:443... connected.
HTTP request sent, awaiting response... 403 Forbidden
In contrast, if I use the AWS CLI I can reach the S3 bucket correctly.
The EC2 instance is in a VPC, it has a role associated with full permissions on S3, and the inbound and outbound firewall settings seem correct. So it is evidently something related to permissions when accessing over HTTPS.
The questions:
Under which credentials does the CodeDeploy agent run?
What permissions or roles have to be set on the S3 bucket?
The EC2 instance's credentials (the instance role) will be used when pulling from S3.
To be clear, the service role that CodeDeploy needs does not require S3 permissions; it allows CodeDeploy to call Auto Scaling and EC2 APIs to describe the instances so CodeDeploy knows how to deploy to them.
That being said, for your S3 AccessDenied issue, there are two things you need to check:
The role attached to the EC2 instance(s) has s3:Get* and s3:List* (or more specific) permissions.
The S3 bucket you want to deploy from has a policy attached that allows the EC2 instance role to get the objects (a sketch follows the documentation link below).
Documentation for permissions: http://docs.aws.amazon.com/codedeploy/latest/userguide/instances-ec2-configure.html#instances-ec2-configure-2-verify-instance-profile-permissions
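For the second check, a hedged sketch of attaching such a bucket policy with boto3; the bucket name, account ID and role name are placeholders:

import json
import boto3

s3 = boto3.client("s3")

# Sketch only: bucket name, account ID and instance role name are placeholders.
bucket_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::123456789012:role/my-instance-role"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-deploy-bucket",
            "arn:aws:s3:::my-deploy-bucket/*",
        ],
    }],
}

s3.put_bucket_policy(Bucket="my-deploy-bucket", Policy=json.dumps(bucket_policy))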
CodeDeploy uses "Service Roles" to access AWS resources. In the AWS console for CodeDeploy, look for "Service role". Assign the IAM role that you created for CodeDeploy in your application settings.
If you have not created an IAM role for CodeDeploy, do so and then assign it to your CodeDeploy application.
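If you prefer to script it, a hedged sketch of creating the service role with boto3; the role name is a placeholder, while AWSCodeDeployRole is the AWS managed policy intended for the CodeDeploy service role:

import json
import boto3

iam = boto3.client("iam")

# Sketch only: the role name is a placeholder.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "codedeploy.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="my-codedeploy-service-role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# AWS managed policy for the CodeDeploy service role.
iam.attach_role_policy(
    RoleName="my-codedeploy-service-role",
    PolicyArn="arn:aws:iam::aws:policy/service-role/AWSCodeDeployRole",
)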

Ansible module to attach an IAM role to existing EC2 instances

I am trying to attach an IAM role to multiple EC2 instances based on tags. Is there a module already available that I can use? I have been searching for a bit but couldn't find anything specific.
Attaching an IAM role to existing EC2 instances is a relatively new feature (announced in Feb 2017). There is no support for it in Ansible currently. If you have AWS CLI 1.11.46 or higher installed, you can use the shell module to invoke the AWS CLI and achieve the desired result.
See: New! Attach an AWS IAM Role to an Existing Amazon EC2 Instance by Using the AWS CLI
I submitted a PR last year to add 2 AWS modules: boto3 and boto3_wait.
These 2 modules allow you to interact with AWS API using boto3.
For instance, you could attach a role to an existing EC2 instance by calling the associate_iam_instance_profile method on the EC2 service:
- name: Attach role MyRole
  boto3:
    service: ec2
    region: us-east-1
    operation: associate_iam_instance_profile
    parameters:
      IamInstanceProfile:
        Name: MyRole
      InstanceId: i-xxxxxxxxxx
Feel free to give the PR a thumbs-up if you like it! ;)
In addition to this, you can use AWS dynamic inventory to target instances by tag.
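If you would rather do it directly from Python instead of Ansible, a hedged sketch using boto3 to find instances by tag and attach an instance profile; the tag key/value, profile name and region are placeholders:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Sketch only: the tag key/value and the instance profile name are placeholders.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:Environment", "Values": ["prod"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        ec2.associate_iam_instance_profile(
            IamInstanceProfile={"Name": "MyRole"},
            InstanceId=instance["InstanceId"],
        )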