http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html#docker-singlecontainer-dockerrun-privaterepo
I'm following the instructions here to connect to a private Docker Hub container from Elastic Beanstalk, but it stubbornly refuses to work. It seems that when calling docker login in Docker 1.12, the resulting file has no email property, but AWS appears to expect one, so I created a file called dockercfg.json that looks like this:
{
  "https://index.docker.io/v1/": {
    "auth": "Y2...Fz",
    "email": "c...n@gmail.com"
  }
}
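For context, a minimal sketch of the login that produces this file on newer Docker versions (the registry URL is Docker Hub's default shown above; the behavior is as described in the question):

docker login
# per the question: Docker 1.12 writes ~/.docker/config.json with an outer
# "auths" object and no "email" property
cat ~/.docker/config.json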
The relevant piece of my Dockerrun.aws.json file looks like this:
"Authentication": {
"Bucket": "elasticbeanstalk-us-west-2-9...4",
"Key": "dockercfg.json"
},
And I have the file uploaded at the root of the S3 bucket. Why do I still get errors saying Error: image c...6/w...t:23 not found. Check snapshot logs for details.? I am sure the names are right and that this would work if it were a public repository. The full error is below. I am deploying from GitHub with CircleCI, if it makes a difference; happy to provide any other information needed.
INFO: Deploying new version to instance(s).
WARN: Failed to pull Docker image c...6/w...t:23, retrying...
ERROR: Failed to pull Docker image c...6/w...t:23: Pulling repository docker.io/c...6/w...t
Error: image c...6/w...t:23 not found. Check snapshot logs for details.
ERROR: [Instance: i-06b66f5121d8d23c3] Command failed on instance. Return code: 1 Output: (TRUNCATED)...b-project
Error: image c...6/w...t:23 not found
Failed to pull Docker image c...6/w...t:23: Pulling repository docker.io/c...6/w...t
Error: image c...6/w...t:23 not found. Check snapshot logs for details.
Hook /opt/elasticbeanstalk/hooks/appdeploy/pre/03build.sh failed. For more detail, check /var/log/eb-activity.log using console or EB CLI.
INFO: Command execution completed on all instances. Summary: [Successful: 0, Failed: 1].
ERROR: Unsuccessful command execution on instance id(s) 'i-06b66f5121d8d23c3'. Aborting the operation.
ERROR: Failed to deploy application.
ERROR: Failed to deploy application.
EDIT: Here's the full Dockerrun file. Note that %BUILD_NUM% is just an int; I can verify that works.
{
  "AWSEBDockerrunVersion": "1",
  "Authentication": {
    "Bucket": "elasticbeanstalk-us-west-2-9...4",
    "Key": "dockercfg.json"
  },
  "Image": {
    "Name": "c...6/w...t:%BUILD_NUM%",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}
EDIT: Also, I have verified that this works if I make this Docker Hub container public.
OK, let's do this.
Looking at the same doc page:
With Docker version 1.6.2 and earlier, the docker login command creates the authentication file in ~/.dockercfg in the following format:
{
  "server" :
  {
    "auth" : "auth_token",
    "email" : "email"
  }
}
You already got this part correct, I see. Please double-check the cases below one by one:
1) Are you hosting the S3 bucket in the same region?
The Amazon S3 bucket must be hosted in the same region as the
environment that is using it. Elastic Beanstalk cannot download files
from an Amazon S3 bucket hosted in other regions.
2) Have you checked the required permissions?
Grant permissions for the s3:GetObject operation to the IAM role in
the instance profile. For details, see Managing Elastic Beanstalk
Instance Profiles.
3) Have you got your S3 bucket info in your config file? (I think you got this too)
Include the Amazon S3 bucket information in the Authentication (v1) or
authentication (v2) parameter in your Dockerrun.aws.json file.
I can't see your permissions or your environment's region, so please double-check those (see the sketch just below).
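A sketch of all three checks with the AWS CLI (assumes configured credentials; substitute your real bucket name for the truncated one in the question):

# 1) Is the bucket in the environment's region?
aws s3api get-bucket-location --bucket <your-eb-bucket>
# 2) Does the instance profile role have a policy granting s3:GetObject?
aws iam list-attached-role-policies --role-name aws-elasticbeanstalk-ec2-role
# 3) Is the object actually there under the key from Dockerrun.aws.json?
aws s3api head-object --bucket <your-eb-bucket> --key dockercfg.json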
If that does not work, I'd upgrade to Docker 1.7+ if possible and use the corresponding ~/.docker/config.json style.
Depending on your Docker version, this file is saved as either ~/.dockercfg or ~/.docker/config.json
cat ~/.docker/config.json
Output:
{
  "auths": {
    "https://index.docker.io/v1/": {
      "auth": "zq212MzEXAMPLE7o6T25Dk0i"
    }
  }
}
Important:
Newer versions of Docker create a configuration file as shown above, with an outer auths object. The Amazon ECS agent only supports dockercfg authentication data in the format below, without the auths object. If you have the jq utility installed, you can extract this data with the following command:
cat ~/.docker/config.json | jq .auths
Output:
{
  "https://index.docker.io/v1/": {
    "auth": "zq212MzEXAMPLE7o6T25Dk0i",
    "email": "email@example.com"
  }
}
Create a file called my-dockercfg using the above content.
Upload the file into the S3 bucket under the key (my-dockercfg) specified in the Dockerrun.aws.json file; a CLI sketch follows the example below.
{
  "AWSEBDockerrunVersion": 2,
  "authentication": {
    "bucket": "elasticbeanstalk-us-west-2-618148269374",
    "key": "my-dockercfg"
  }
}
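The two steps can be sketched with jq and the AWS CLI (bucket and key taken from the example above; note the question's caveat that newer Docker logins omit the email field, so you may need to add it by hand):

jq .auths ~/.docker/config.json > my-dockercfg
aws s3 cp my-dockercfg s3://elasticbeanstalk-us-west-2-618148269374/my-dockercfg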
I am implementing a Blue/Green deployment using the aws-code-deploy orb. My infrastructure is implemented with Terraform and consists of the following resources:
S3 bucket → stores the appspec.yaml that is used to create the deployment.
VPC for networking (it was easier to spin up my own for this demo; too lazy to navigate the Legitscript networking lol).
An Application Load Balancer, 2 listeners and 2 target groups. On the initial deployment of the infrastructure, go to EC2 → Target groups and you will see that TG1 has a healthy target associated with it but TG2 does not. That will change once we implement the Blue/Green deployment.
ECS → a cluster, service and task definition will be available.
CodeDeploy → a CodeDeploy application and deployment group.
This is my Terraform file for the S3 resource:
resource "aws_s3_bucket" "bucket" {
bucket = "blue-green-cd-ls"
}
resource "aws_s3_object" "appspec" {
bucket = aws_s3_bucket.bucket.id
key = "appspec.yaml"
content = templatefile("${path.module}/appspec.yaml.tpl", {
task_definition_arn = var.task_definition_arn
})
}
This successfully creates the S3 bucket with the appspec.yaml file in it. I am trying to create a deployment using CircleCI, and my config.yml looks like this:
version: 2.1
orbs:
  aws-cli: circleci/aws-cli@3.1.3
  aws-code-deploy: circleci/aws-code-deploy@2.0.0
jobs:
  deploy:
    executor: aws-cli/default
    steps:
      - checkout
      - aws-cli/setup
      - aws-code-deploy/deploy-bundle:
          application-name: "blue-green"
          bundle-bucket: "blue-green-cd-ls"
          bundle-key: "appspec.yaml"
          deployment-group: "blue-green-ls"
          bundle-type: "YAML"
          deployment-config: "CodeDeployDefault.ECSAllAtOnce"
workflows:
  build-and-deploy:
    jobs:
      - deploy
But my deployment keeps failing with the following error:
Deployment failed!
{
  "deploymentInfo": {
    "applicationName": "blue-green",
    "deploymentGroupName": "*************",
    "deploymentConfigName": "CodeDeployDefault.ECSAllAtOnce",
    "deploymentId": "d-85LKXCPMJ",
    "revision": {
      "revisionType": "S3",
      "s3Location": {
        "bucket": "blue-green-cd-ls",
        "key": "appspec.yaml.YAML",
        "bundleType": "YAML"
      }
    },
    "status": "Failed",
    "errorInformation": {
      "code": "INVALID_REVISION",
      "message": "The AppSpec file cannot be located in the specified S3 bucket. Verify your AppSpec file is present and that the name and key value pair specified for your S3 bucket are correct. The S3 bucket must be in your current region"
I double-checked and the S3 bucket is definitely in the right region, i.e. us-east-1. Does anyone have any ideas what might be wrong? Thank you.
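One quick check, assuming AWS CLI access: the error above asks S3 for the key appspec.yaml.YAML, while the Terraform object uses the key appspec.yaml, so list what is actually in the bucket and compare:

aws s3 ls s3://blue-green-cd-ls/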
I'm trying to run Backstage on AWS Fargate, but I'm facing a problem fetching the service catalog, which is hosted on a private GitLab.
Both GitLab and Backstage are running in the same VPC and the same private subnets.
When I run Backstage both locally with yarn dev and in Docker, it can fetch the catalog with no problem. But when I run a task in Fargate, it can't.
The error presented in Backstage, as returned when I call the entities API on /api/catalog/entities, is:
"status": {
"items": [
{
"type": "backstage.io/catalog-processing",
"level": "error",
"message": "Error: Unable to read url, Error: Could not get GitLab project ID for: https://gitlab.srv-cld.xxx.com.br/x/x/backstage-architecture/-/blob/master/systems/user/services/document-service/api.yaml, Error: GitLab Error 'undefined', undefined",
"error": {
"name": "Error",
"message": "Unable to read url, Error: Could not get GitLab project ID for: https://gitlab.srv-cld.xxx.com.br/x/x/backstage-architecture/-/blob/master/systems/user/services/document-service/api.yaml, Error: GitLab Error 'undefined', undefined"
}
}
...
]
}
The catalog section in app-config.yml is:
catalog:
  import:
    entityFilename: catalog-info.yaml
    pullRequestBranchName: backstage-integration
  rules:
    - allow: [Component, System, API, Resource, Location]
  locations:
    - type: url
      target: https://gitlab.aaa.bbb.com.br/arquitetura/exemplos/backstage-architecture/-/blob/master/architecture.yaml
And the integrations section is:
integrations:
  gitlab:
    - host: gitlab.aaa.bbb.com.br
      apiBaseUrl: https://gitlab.aaa.bbb.com.br/api/v4
      token: ${GITLAB_TOKEN}
I really can't understand why Backstage can't get the project ID when running on Fargate.
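One way to narrow this kind of failure down is to call the same GitLab API the integration uses, from an environment with the same network access (a sketch; assumes curl is available and GITLAB_TOKEN is set, with the apiBaseUrl from the config above):

curl -H "PRIVATE-TOKEN: $GITLAB_TOKEN" https://gitlab.aaa.bbb.com.br/api/v4/projects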
Just to close the question: it was not a problem with Backstage or the self-hosted GitLab. It was a problem mapping the GITLAB_TOKEN environment variable in the task definition.
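Since the fix was the environment mapping, a quick way to confirm the variable is actually wired into the registered task definition (assumes the AWS CLI and jq; the family name "backstage" here is hypothetical):

aws ecs describe-task-definition --task-definition backstage \
  | jq '.taskDefinition.containerDefinitions[].environment'

If the token is injected as a secret rather than a plain environment variable, it would appear under .secrets instead.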
I'm trying to follow the AWS Tutorial for iOS. However, when adding the GraphQL API I keep getting an error: "An error occurred when pushing the resources to the cloud Missing Region in config." This occurs when running sudo amplify push after adding the API to "update resources in the cloud."
I tried adding a Region key-value pair to my awsconfiguration.json but still got the same behavior:
{
  "UserAgent": "aws-amplify/cli",
  "Version": "0.1.0",
  "Region": "us-east-1",
  "IdentityManager": {
    "Default": {}
  }
}
I wasn't able to find any useful info with a Google search or here on Stack Overflow. Any help would be greatly appreciated.
After installing the Amplify CLI, did you run amplify configure or amplify configure project?
These should prompt you to log in to your AWS console and also ask you to specify a region.
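For reference, the two commands in question (both walk through region selection among other prompts):

amplify configure          # one-time CLI setup: console sign-in, region, IAM user
amplify configure project  # re-runs project-level configuration, including region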
I'm new to AWS and I'm trying to deploy a multi-container Docker application to Elastic Beanstalk.
My Dockerrun.aws.json file is very simple, and it's the only thing uploaded to EB:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "mycontainer",
      "image": "somethingsomething.eu-central-1.amazonaws.com/myimage",
      "essential": true,
      "memory": 128
    }
  ]
}
In http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker.container.console.html it says that when using a Docker image uploaded to Amazon ECR:
You do, however, need to provide your instances with permission to
access the images in your Amazon ECR repository by adding permissions
to your environment's instance profile. You can attach the
AmazonEC2ContainerRegistryReadOnly managed policy to the instance
profile to provide read-only access to all Amazon ECR repositories in
your account
When deploying the application, it raises the following error:
ECS task stopped due to: Essential container in task exited.
(myimage: CannotPullContainerError: AccessDeniedException: User:
arn:aws:sts::xxx:assumed-role/aws-elasticbeanstalk-ec2-role/i-xyz
is not authorized to perform: ecr:GetAuthorizationToken on resource: *
status code: 400, request id: 4143c35d-)
I added the AWSElasticBeanstalkReadOnlyAccess policy to the aws-elasticbeanstalk-ec2-role, but it doesn't change anything...
Help?!
I'm not sure where it's written, but I needed to actually add the AmazonEC2ContainerRegistryReadOnly policy to aws-elasticbeanstalk-ec2-role; AmazonEC2ContainerRegistryReadOnly contains the ecr:GetAuthorizationToken action.
Per https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/iam-instanceprofile.html#iam-instanceprofile-addperms (a CLI equivalent follows these steps):
1) Open https://console.aws.amazon.com/iam/home#roles
2) Choose aws-elasticbeanstalk-ec2-role.
3) On the Permissions tab, choose Attach policies.
4) Select AmazonEC2ContainerRegistryReadOnly.
5) Choose Attach policy.
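The same attachment as a CLI sketch (assumes credentials that can modify IAM; the policy ARN is the standard AWS-managed one):

aws iam attach-role-policy \
  --role-name aws-elasticbeanstalk-ec2-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly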
Previously I used a single-container Docker Elastic Beanstalk environment. It was able to use my login credentials, stored on S3, to download a container from a private Docker Hub repository.
However, I created a new multi-container Docker environment, and since then I always get the error:
change="{TaskArn:arn:aws:ecs:eu-west-1:188125317072:task/dbf02781-8140-422a-9b81-93d83441747d
ContainerName:aws-first-test Status:4
Reason:CannotPullContainerError:
Error: image test/awstest:latest not found ExitCode:<nil> PortBindings:[] SentStatus:NONE}"
(I'm using exactly the same container that worked before.)
The container does exist, and the environment is in the same location as the login credentials (Ireland).
My Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": 2,
  "authentication": {
    "Bucket": "docker-ireland",
    "Key": ".dockercfg"
  },
  "containerDefinitions": [
    {
      "name": "aws-first-test",
      "image": "test/awstest",
      "memory": 250
    },
    {
      "name": "aws-second-test",
      "image": "test/awstest",
      "memory": 250
    }
  ]
}
The Dockerrun.aws.json file is case sensitive, and in version 2 the keys Authentication, Bucket and Key changed to lowercase: authentication, bucket and key.
This answer is from the Amazon AWS forums: https://forums.aws.amazon.com/message.jspa?messageID=667098
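A one-liner sketch of the fix for the file above, assuming jq is installed (it lowercases the two inner keys; write the output back to the file yourself):

jq '.authentication |= with_entries(.key |= ascii_downcase)' Dockerrun.aws.json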
In my case this error was caused because I had something like the following in my S3 config file:
{
  "server" :
  {
    "auth" : "*****",
    "email" : "*****"
  }
}
Not kidding, I had the keyword "server" instead of the registry URL (https://index.docker.io/v1/ for Docker Hub).
I must've copied it from some blog or documentation, I don't know. Feeling dumb already.