I'm new to AWS and I'm trying to deploy a multicontainer Docker application to Elastic Beanstalk.
My Dockerrun.aws.json file is very simple, and it's the only thing that's uploaded to EB:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "mycontainer",
      "image": "somethingsomething.eu-central-1.amazonaws.com/myimage",
      "essential": true,
      "memory": 128
    }
  ]
}
In http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html it says that when using a Docker image uploaded to Amazon ECR:
You do, however, need to provide your instances with permission to
access the images in your Amazon ECR repository by adding permissions
to your environment's instance profile. You can attach the
AmazonEC2ContainerRegistryReadOnly managed policy to the instance
profile to provide read-only access to all Amazon ECR repositories in
your account
When deploying the application, it raises the following error:
ECS task stopped due to: Essential container in task exited.
(myimage: CannotPullContainerError: AccessDeniedException: User:
arn:aws:sts::xxx:assumed-role/aws-elasticbeanstalk-ec2-role/i-xyz
is not authorized to perform: ecr:GetAuthorizationToken on resource: *
status code: 400, request id: 4143c35d-)
I added the AWSElasticBeanstalkReadOnlyAccess to the aws-elasticbeanstalk-ec2-role, but it doesn't change anything...
Help?!
I'm not sure where it's written, but I needed to actually add the AmazonEC2ContainerRegistryReadOnly policy to aws-elasticbeanstalk-ec2-role. AmazonEC2ContainerRegistryReadOnly contains the GetAuthorizationToken action.
Per https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html#iam-instanceprofile-addperms:
1. Open https://console.aws.amazon.com/iam/home#roles
2. Choose aws-elasticbeanstalk-ec2-role.
3. On the Permissions tab, choose Attach policies.
4. Select AmazonEC2ContainerRegistryReadOnly.
5. Choose Attach policy.
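If you prefer the CLI, the same attachment can be done in one command (assuming the default instance profile role name from the error message):
aws iam attach-role-policy \
    --role-name aws-elasticbeanstalk-ec2-role \
    --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly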
Related
I've created a docker image using AWS SageMaker and am now trying to push said image to ECR. When I do docker push ${fullname} it retries a couple of times and then errors.
In CloudTrail I can see that I'm getting an access denied error with message:
"User: arn:aws:sts::xxxxxxxxxx:assumed-role/AmazonSageMaker-ExecutionRole-xxxxxxxxxxxx/SageMaker is not authorized to perform: ecr:InitiateLayerUpload on resource: arn:aws:ecr:us-east-x:xxxxxxxxxx:repository/image because no identity-based policy allows the ecr:InitiateLayerUpload action"
I have full permissions, but from the error message above it thinks the user is SageMaker and not me.
How do I change the user? I'm guessing that's the problem.
When you run commands from SageMaker, you execute them as the SageMaker execution role, not as your own user. There are two options:
1. [Straightforward solution] Add the ecr:InitiateLayerUpload permission (and the other ECR push permissions) to the AmazonSageMaker-ExecutionRole-xxxxxxxxxxxx role; see the policy sketch after this list.
2. Assume a different role using sts (in that case, AmazonSageMaker-ExecutionRole-xxxxxxxxxxxx needs permission to assume your admin role) and then run the docker push command.
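For option 1, a minimal sketch of an identity-based policy to attach to the execution role; the exact action list is an assumption covering what a typical docker push needs, and the repository ARN is the redacted one from your error message:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "ecr:GetAuthorizationToken",
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "ecr:BatchCheckLayerAvailability",
                "ecr:InitiateLayerUpload",
                "ecr:UploadLayerPart",
                "ecr:CompleteLayerUpload",
                "ecr:PutImage"
            ],
            "Resource": "arn:aws:ecr:us-east-x:xxxxxxxxxx:repository/image"
        }
    ]
}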
I ran into a problem with an AWS instance when trying to import a self-signed SSL certificate into IAM, following this tutorial -> https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-ssl.html
Basically, the tutorial shows how to self-sign a certificate and upload it to IAM so the application can serve HTTPS for testing purposes.
I SSHed into my instance and ran all those commands, but at the end, when I need to import the certificate, I get an error that my account is not authorized...
An error occurred (AccessDenied) when calling the
UploadServerCertificate operation: User:
arn:aws:sts::xxxxxxxxx:assumed-role/aws-elasticbeanstalk-ec2-role/xxxxxxx
is not authorized to perform: iam:UploadServerCertificate on resource:
arn:aws:iam::xxxxxxxxx:server-certificate/elastic-beanstalk-x509
I'm logged in to the instance as ec2-user because I didn't find a way to log in as any other user...
I tried running the command with sudo and nothing changes. On a similar post I saw that I need to create a specific IAM user with a group policy granting "IAMFullAccess". But I don't understand how I can run the command as that user when I'm logged in over SSH as ec2-user...
You need to do some reading: https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-ec2.html
1. Create an IAM role with upload permission (iam:UploadServerCertificate).
2. Add a trust policy to the role that allows it to be assumed by your EC2 instance.
3. Attach the role to the EC2 instance.
From your error it seems that you are using Elastic Beanstalk. This means that you already have a role that is assumed by your EC2 instance. Find this role (aws-elasticbeanstalk-ec2-role in the error message) and add the appropriate permissions.
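As a sketch, the permissions statement could look like the following (the resource ARN is the redacted one from your error message):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "iam:UploadServerCertificate",
            "Resource": "arn:aws:iam::xxxxxxxxx:server-certificate/elastic-beanstalk-x509"
        }
    ]
}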
Okay I have managed to add the certificate to the instance...
aws iam list-server-certificates
{
    "ServerCertificateMetadataList": [
        {
            "ServerCertificateId": "id",
            "ServerCertificateName": "elastic-beanstalk-x509",
            "Expiration": "2022-10-21T13:07:11Z",
            "Path": "/",
            "Arn": "arn",
            "UploadDate": "2021-10-21T13:42:39Z"
        }
    ]
}
I also added a listener and process on "Modify Application Load Balancer", but the site is still not responding to HTTPS requests... Any idea?
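Two things worth double checking: the HTTPS listener must reference the uploaded certificate, and the environment's security group must allow inbound traffic on port 443. As a sketch, the listener can also be configured through an .ebextensions option-settings file (the file name is arbitrary, and the certificate ARN follows the pattern from the earlier error message):
{
    "option_settings": [
        {
            "namespace": "aws:elbv2:listener:443",
            "option_name": "ListenerEnabled",
            "value": "true"
        },
        {
            "namespace": "aws:elbv2:listener:443",
            "option_name": "Protocol",
            "value": "HTTPS"
        },
        {
            "namespace": "aws:elbv2:listener:443",
            "option_name": "SSLCertificateArns",
            "value": "arn:aws:iam::xxxxxxxxx:server-certificate/elastic-beanstalk-x509"
        }
    ]
}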
I am trying to configure AWS ECS using awsvpc mode with an IAM role to use specifically for tasks. Our ECS services use the Fargate launch type. After specifying a Task IAM role in the task configuration, we SSH into our task, try to run awscli commands, and get the following error:
Unable to locate credentials. You can configure credentials by running "aws configure".
In order to troubleshoot, we ran the same Docker image in a container with the EC2 launch type, and when we ran the same awscli command it errored, saying the assumed role did not have sufficient permissions. We noticed that this was because it was assuming the container instance IAM role rather than the Task IAM role.
Based on the documentation here, it is clear that when using awsvpc networking mode, we need to set the ECS_AWSVPC_BLOCK_IMDS agent configuration variable to true in the agent configuration file and restart the agent in order for our instances to assume the Task IAM role rather than the container instance IAM role.
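For reference, on the EC2 launch type that setting lives in /etc/ecs/ecs.config on each container instance (followed by an agent restart):
ECS_AWSVPC_BLOCK_IMDS=true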
For the time being, for performance testing purposes, we need to deploy with the Fargate launch type and according to the docs, the container agent should be installed automatically for Fargate:
The Amazon ECS container agent is installed on the AWS managed infrastructure used for tasks using the Fargate launch type. If you are only using tasks with the Fargate launch type no additional configuration is needed and the content in this topic does not apply.
However, we still need to be able to assume our task IAM role. Is there a way to update the necessary environment variable in the AWS-managed agent configuration file so as to allow the assuming of the task IAM role? Or is there another way to allow this?
When creating the task definition for your Fargate task, are you assigning a Task Role ARN? There are two IAM ARNs needed. The Execution Role ARN is the IAM role used to start the container in your Fargate cluster; its permissions are used to set up CloudWatch logs and possibly pull an image from ECR. The Task Role ARN is the IAM role that the container itself has. Make sure the Task Role ARN has the ECS trust relationship:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
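For illustration, a trimmed task definition showing where the two role ARNs go (the role names here are hypothetical):
{
    "family": "my-task",
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "256",
    "memory": "512",
    "executionRoleArn": "arn:aws:iam::xxxxxxxxxx:role/ecsTaskExecutionRole",
    "taskRoleArn": "arn:aws:iam::xxxxxxxxxx:role/my-task-role",
    "containerDefinitions": [ ... ]
}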
Code that uses the AWS Node SDK doesn't seem to be able to gain the role permissions of the ECS task.
If I run the code on an EC2 ECS instance, the code seems to inherit the role of the instance, not the role of the task.
If I run the code on Fargate, the code doesn't get any permission.
By contrast, any bash scripts that run within the instance seem to have the proper permissions.
Indeed, the documentation doesn't mention this as an option for the Node SDK, just:
Loaded from IAM roles for Amazon EC2 (if running on EC2),
Loaded from the shared credentials file (~/.aws/credentials),
Loaded from environment variables,
Loaded from a JSON file on disk,
Hardcoded in your application
Is there any way to have your node code gain the permissions of the ECS task?
This seems to be the logical way to pass permissions to your code. It works beautifully with code running on an instance.
The only workaround I can think of is to create one IAM user per ECS service and pass the API key/secret as environment variables in the task definition. However, that doesn't seem very secure, since it would be visible in plain text to anyone with access to the task definition.
Your question is missing a lot of details on how you set up your ECS cluster, plus I am not sure if the question is about ECS in general or Fargate specifically.
Make sure that you are using the latest version of the SDK. JavaScript supports ECS and Fargate task credentials.
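A quick way to confirm that the task role credentials are actually being vended is to query the credentials endpoint from inside the container (assuming curl is available):
curl 169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI
If that returns a JSON document with AccessKeyId, SecretAccessKey and Token, the task role is reachable and the problem is on the SDK side.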
Often there is confusion about credentials on ECS. There is the IAM role that is assigned to the Cluster EC2 instances and the IAM role that is assigned to ECS tasks.
The most common problem is the "Trust Relationship" has not been setup on the ECS Task Role. Select your IAM role and then the "Trust Relationships" tab and make sure that it looks like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "",
            "Effect": "Allow",
            "Principal": {
                "Service": "ecs-tasks.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}
In addition to the standard Amazon ECS permissions required to run tasks and services, IAM users also require iam:PassRole permissions to use IAM roles for tasks.
Next verify that you are using the IAM role in the task definition. Specify the correct IAM role ARN in the Task Role field. Note that this is different from the Task Execution Role (which allows containers to pull images and publish logs).
Next make sure that your ECS Instances are using the latest version of the ECS Agent. The agent version is listed on the "ECS Instances" tab under the right hand side column "Agent version". The current version is 1.20.3.
Are you using an ECS optimized AMI? If not, add --net=host to your docker run command that starts the agent. Review this link for more information.
I figured it out. This was a weird one.
A colleague thought it would be "safer" if we called Object.freeze on process.env. This was somehow interfering with the SDK's ability to access the credentials.
Removed that "improvement" and all is fine again. I think the lesson is "do not mess with process.env".
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html#docker-singlecontainer-dockerrun-privaterepo
Following the instructions here to connect to a private Docker Hub container from Elastic Beanstalk, but it stubbornly refuses to work. It seems that when calling docker login in Docker 1.12 the resulting file has no email property, but it sounds like AWS expects it, so I created a file called dockercfg.json that looks like this:
{
    "https://index.docker.io/v1/": {
        "auth": "Y2...Fz",
        "email": "c...n@gmail.com"
    }
}
The relevant piece of my Dockerrun.aws.json file looks like this:
"Authentication": {
"Bucket": "elasticbeanstalk-us-west-2-9...4",
"Key": "dockercfg.json"
},
And I have the file uploaded at the root of the S3 bucket. Why do I still get errors that say Error: image c...6/w...t:23 not found. Check snapshot logs for details. I am sure the names are right and that this would work if it was a public repository. The full error is below. I am deploying from GitHub with Circle CI if it makes a difference, happy to provide any other information needed.
INFO: Deploying new version to instance(s).
WARN: Failed to pull Docker image c...6/w...t:23, retrying...
ERROR: Failed to pull Docker image c...6/w...t:23: Pulling repository docker.io/c...6/w...t
Error: image c...6/w...t:23 not found. Check snapshot logs for details.
ERROR: [Instance: i-06b66f5121d8d23c3] Command failed on instance. Return code: 1 Output: (TRUNCATED)...b-project
Error: image c...6/w...t:23 not found
Failed to pull Docker image c...6/w...t:23: Pulling repository docker.io/c...6/w...t
Error: image c...6/w...t:23 not found. Check snapshot logs for details.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
ERROR: Unsuccessful command execution on instance id(s) 'i-06b66f5121d8d23c3'. Aborting the operation.
ERROR: Failed to deploy application.
ERROR: Failed to deploy application.
EDIT: Here's the full Dockerrun file. Note that %BUILD_NUM% is just an int, I can verify that works.
{
    "AWSEBDockerrunVersion": "1",
    "Authentication": {
        "Bucket": "elasticbeanstalk-us-west-2-9...4",
        "Key": "dockercfg.json"
    },
    "Image": {
        "Name": "c...6/w...t:%BUILD_NUM%",
        "Update": "true"
    },
    "Ports": [
        {
            "ContainerPort": "8080"
        }
    ]
}
EDIT: Also, I have verified that this works if I make this Docker Hub container public.
OK, let's do this:
Looking at the same doc page,
With Docker version 1.6.2 and earlier, the docker login command creates the authentication file in ~/.dockercfg in the following format:
{
    "server": {
        "auth": "auth_token",
        "email": "email"
    }
}
You already got this part correct, I see. Please double check the cases below one by one (CLI commands to verify the first two follow the list):
1) Are you hosting the S3 bucket in the same region?
The Amazon S3 bucket must be hosted in the same region as the
environment that is using it. Elastic Beanstalk cannot download files
from an Amazon S3 bucket hosted in other regions.
2) Have you checked the required permissions?
Grant permissions for the s3:GetObject operation to the IAM role in
the instance profile. For details, see Managing Elastic Beanstalk
Instance Profiles.
3) Have you got your S3 bucket info in your config file? (I think you got this too)
Include the Amazon S3 bucket information in the Authentication (v1) or
authentication (v2) parameter in your Dockerrun.aws.json file.
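For reference, the first two checks can be done from the CLI (bucket name redacted as in the question, and assuming the default instance profile role name):
# 1) the bucket region must match the environment's region
aws s3api get-bucket-location --bucket elasticbeanstalk-us-west-2-9...4
# 2) the instance profile role should have a policy allowing s3:GetObject on the auth file
aws iam list-attached-role-policies --role-name aws-elasticbeanstalk-ec2-role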
Can't see your permissions or your env region, so please double check those.
If that does not work, I'd upgrade to Docker 1.7+ if possible and use the corresponding ~/.docker/config.json style.
Depending on your Docker version, this file is saved as either ~/.dockercfg or ~/.docker/config.json
cat ~/.docker/config.json
Output:
{
    "auths": {
        "https://index.docker.io/v1/": {
            "auth": "zq212MzEXAMPLE7o6T25Dk0i"
        }
    }
}
Important:
Newer versions of Docker create a configuration file as shown above with an outer auths object. The Amazon ECS agent only supports dockercfg authentication data that is in the below format, without the auths object. If you have the jq utility installed, you can extract this data with the following command:
cat ~/.docker/config.json | jq .auths
Output:
{
    "https://index.docker.io/v1/": {
        "auth": "zq212MzEXAMPLE7o6T25Dk0i",
        "email": "email@example.com"
    }
}
Create a file called my-dockercfg using the above content.
Upload the file into the S3 bucket with the key specified (my-dockercfg) in the Dockerrun.aws.json file:
{
    "AWSEBDockerrunVersion": 2,
    "authentication": {
        "bucket": "elasticbeanstalk-us-west-2-618148269374",
        "key": "my-dockercfg"
    }
}
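The upload itself can be done with the CLI, for example (using the bucket and key from the snippet above):
aws s3 cp my-dockercfg s3://elasticbeanstalk-us-west-2-618148269374/my-dockercfg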