I'm having a hard time pulling a Docker image from a private GitLab registry into an AWS Multicontainer Elastic Beanstalk environment.
I have added .dockercfg to S3 in the same region as my cluster and also allowed the aws-elasticbeanstalk-ec2-role IAM role to get data from S3.
Elastic Beanstalk always returns the error CannotPullContainerError: API error (500)
My .dockercfg is in this format:
{
  "https://registry.gitlab.com" : {
    "auth" : "my gitlab deploy token",
    "email" : "my gitlab token name"
  }
}
Inside Dockerrun.aws.json I have added the following:
"authentication": {
  "bucket": "name of my bucket",
  "key": ".dockercfg"
},
When I try to log in via docker login -u gitlabtoken-name -p token it works perfectly.
The GitLab deploy token is not the auth key.
To generate a proper auth key I usually do the following:
docker run -ti docker:dind sh -c "docker login -u name -p deploy-token registry.gitlab.com && cat /root/.docker/config.json"
and it'll print something like:
{
  "auths": {
    "registry.gitlab.com": {
      "auth": "your-auth-key"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.09.0 (linux)"
  }
}
Then, as per the Elastic Beanstalk docs "Using Images From a Private Repository", you should take just what's needed.
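For the multicontainer platform that typically means trimming the output down to just the registry entry, in the older .dockercfg shape, before uploading the file to S3. A minimal sketch, assuming the auth value from the output above (the email value is only a placeholder):

cat > .dockercfg <<'EOF'
{
  "registry.gitlab.com": {
    "auth": "your-auth-key",
    "email": "you@example.com"
  }
}
EOF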
Hope this'll help you!
I get an error when pushing my local Docker image to my private ECR:
My IAM user has AmazonEC2ContainerRegistryFullAccess rights, and so does my EC2 instance.
$ aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin xx.dkr.ecr.eu-central-1.amazonaws.com
...
Login Succeeded
$ aws ecr describe-repositories
{
  "repositories": [
    {
      "repositoryUri": "xx.dkr.ecr.eu-central-1.amazonaws.com/my_repo",
      "imageScanningConfiguration": {
        "scanOnPush": false
      },
      "encryptionConfiguration": {
        "encryptionType": "AES256"
      },
      "registryId": "xx",
      "imageTagMutability": "MUTABLE",
      "repositoryArn": "arn:aws:ecr:eu-central-1:xx:repository/my_repo",
      "repositoryName": "my_repo",
      "createdAt": 1650817284.0
    }
  ]
}
$ docker pull hello-world
$ docker tag hello-world:latest xx.dkr.ecr.eu-central-1.amazonaws.com/hello-world:latest
$ docker images
REPOSITORY                                           TAG       IMAGE ID       CREATED        SIZE
xx.dkr.ecr.eu-central-1.amazonaws.com/hello-world    latest    feb5d9fea6a5   7 months ago   13.3kB
hello-world                                          latest    feb5d9fea6a5   7 months ago   13.3kB
and now I get the error when pushing my image:
$ docker push xx.dkr.ecr.eu-central-1.amazonaws.com/hello-world:latest
The push refers to repository [xx.dkr.ecr.eu-central-1.amazonaws.com/hello-world]
e07ee1baac5f: Retrying in 1 second
EOF
Any suggestions?
The profile trick from https://stackoverflow.com/a/70453287/10243980 does NOT work.
Many thanks
One of my working examples is the following:
aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-central-1.amazonaws.com
docker build -t dolibarr .
docker tag dolibarr:latest 123456789012.dkr.ecr.eu-central-1.amazonaws.com/dolibarr:latest
docker push 123456789012.dkr.ecr.eu-central-1.amazonaws.com/dolibarr:latest
Compared to your commands, it looks very similar. So please check whether your user is able to push to the repository itself (ecr:PutImage). That is probably the main issue.
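If you want to verify the push permissions directly, the IAM policy simulator can check them for you. A sketch, where the user and repository ARNs are placeholders to replace with your own:

aws iam simulate-principal-policy \
    --policy-source-arn arn:aws:iam::123456789012:user/my-user \
    --action-names ecr:PutImage ecr:InitiateLayerUpload ecr:UploadLayerPart ecr:CompleteLayerUpload \
    --resource-arns arn:aws:ecr:eu-central-1:123456789012:repository/hello-world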
A good place to find more help is the following question: Pushing an image to ECR, getting "Retrying in ... seconds"
The policy for the Docker image role I am using is the following (Terraform style):
{
  Action = [
    "ecr:BatchCheckLayerAvailability",
    "ecr:CompleteLayerUpload",
    "ecr:GetAuthorizationToken",
    "ecr:InitiateLayerUpload",
    "ecr:PutImage",
    "ecr:UploadLayerPart",
  ]
  Effect   = "Allow"
  Resource = "*"
}
Try to adjust your policy and remove the "Principal" entry. It is not necessary.
Another possible reason has nothing to do with the policy:
Do you use a local proxy? I experienced issues when proxy servers were used for all public endpoints, like ECR, S3, etc. I disabled the proxy for those domains and it worked (this depends on whether you use a VPN or something similar).
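For example, if the Docker daemon itself goes through the proxy (a typical systemd setup), excluding the AWS endpoints could look roughly like this; the proxy address is a placeholder and the exact exclusion list depends on your environment:

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=.amazonaws.com"

Then reload and restart the daemon:
sudo systemctl daemon-reload && sudo systemctl restart docker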
You need to create a repository with the name hello-world. This is explained at the beginning of the Pushing a Docker image ECR docs.
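For example, something like this creates it in the region used above:

aws ecr create-repository --repository-name hello-world --region eu-central-1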
I've been looking for a way to deploy JHipster microservices to AWS. It seems like JHipster Registry provides an easy way to monitor JHipster microservices, but I have yet to find a way to deploy JHipster Registry to AWS. Cloning the jhipster-registry GitHub repo and running jhipster aws returns "Error: Sorry deployment for this database is not possible".
Alternatively, creating a Docker image with mvn compile jib:buildTar and using the generated target/jib-image.tar as an AWS Beanstalk app version also fails because it's missing a Dockerfile.
What's a good way to deploy JHipster Registry to AWS Beanstalk and subsequently use it for monitoring other JHipster microservices deployed to AWS Beanstalk?
Thanks!
After some trial and error I ended up doing something like this:
1. Clone https://github.com/jhipster/jhipster-registry
2. Build a Docker image locally with ./mvnw package -Pprod verify jib:dockerBuild
3. Create an ECR registry in the AWS console or using the AWS CLI as follows: aws --profile [AWS_PROFILE] ecr create-repository --repository-name [ECR_REGISTRY_NAME]
4. Assuming that v6.3.0 was cloned in step 1, tag the local Docker image as follows: docker tag [IMAGE_ID] [AWS_ACCOUNT].dkr.ecr.[AWS_REGION].amazonaws.com/[ECR_REGISTRY_NAME]:jhipster-registry-6.3.0
5. Authenticate to ECR as follows (see the AWS CLI v2 note after this list): eval $(aws --profile [AWS_PROFILE] ecr get-login --no-include-email --region [AWS_REGION])
6. Push the local Docker image to ECR as follows: docker push [AWS_ACCOUNT].dkr.ecr.[AWS_REGION].amazonaws.com/[ECR_REGISTRY_NAME]:jhipster-registry-6.3.0
7. Set up the Elastic Beanstalk (EB) CLI
8. Initialize the local EB project as follows: eb init --profile [AWS_PROFILE]
9. Create Dockerrun.aws.json with the following content:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "[AWS_ACCOUNT].dkr.ecr.[AWS_REGION].amazonaws.com/[ECR_REGISTRY_NAME]:jhipster-registry-6.3.0",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 8761
    }
  ]
}
10. Run jhipster-registry locally as follows: eb local run --port 8761
11. Verify that you can access jhipster-registry locally as follows: eb local open
12. Create a new EB environment running the Docker image from ECR as follows: eb create [EB_ENV_NAME] --instance-types t2.medium --keyname [EC2_KEY_PAIR_NAME] --vpc.id [VPC_ID] --vpc.ec2subnets [EC2_SUBNETS] --vpc.publicip --vpc.elbpublic --vpc.securitygroups [CUSTOM_ELB_SG]
13. Access the remote jhipster-registry as follows: eb open
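A note on step 5: AWS CLI v2 removed the aws ecr get-login subcommand, so with CLI v2 the equivalent login (same placeholders as above) would be:

aws --profile [AWS_PROFILE] ecr get-login-password --region [AWS_REGION] | docker login --username AWS --password-stdin [AWS_ACCOUNT].dkr.ecr.[AWS_REGION].amazonaws.com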
In the JSON below, received after talking to the AWS ECR endpoint service:
{
  "repository": {
    "repositoryArn": "arn:aws:ecr:us-west-2:11122233334444:repository/some_app_image",
    "registryId": "11122233334444",
    "repositoryName": "some_app_image",
    "repositoryUri": "11122233334444.dkr.ecr.us-west-2.amazonaws.com/some_app_image",
    "createdAt": 11111111554.0,
    "imageTagMutability": "MUTABLE",
    "imageScanningConfiguration": {
      "scanOnPush": false
    }
  }
}
after running the command: aws ecr describe-repositories --repository-names some_app_image
What is the term for 11122233334444.dkr.ecr.us-west-2.amazonaws.com? Is it an ECR endpoint?
You would refer to it as your registry URL. There is more information on terminology in the ECR user docs.
The value in repositoryUri is what you would use in a command like docker pull. So in this example you would run docker pull 11122233334444.dkr.ecr.us-west-2.amazonaws.com/some_app_image to download your image.
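If you want to grab that URI programmatically instead of copying it from the JSON, a query along these lines should work (same repository name as in the question):

aws ecr describe-repositories --repository-names some_app_image --query 'repositories[0].repositoryUri' --output text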
I am trying to pass an SSL certificate to the AWS SSM Parameter Store.
The SSL certificate is password protected as well.
My question is: how do I retrieve this as a certificate file inside the containers in ECS? I know how to use the SSM Parameter Store to store secret environment variables, BUT how do I use it to create a secret file at a location in the containers? We have a string and a file here; how does SSM manage files?
Thanks
I'm not aware of a way to create a file directly from SSM, but I expect the ENTRYPOINT in your Docker container could handle this logic.
Task Definition Snippet
{
  "containerDefinitions": [{
    "secrets": [{
      "name": "MY_SSM_CERT_FILE",
      "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:MY_SSM_CERT_FILE"
    },
    {
      "name": "MY_SSM_CERT_FILE_LOCATION",
      "valueFrom": "arn:aws:secretsmanager:region:aws_account_id:secret:MY_SSM_CERT_FILE_LOCATION"
    }]
  }]
}
entrypoint.sh
echo "$MY_SSM_CERT_FILE" >> $MY_SSM_CERT_FILE_LOCATION
// Run rest of the logic for application
Dockerfile
FROM ubuntu:16.04
COPY ./entrypoint.sh ./entrypoint.sh
RUN chmod +x ./entrypoint.sh
ENTRYPOINT ["./entrypoint.sh"]
Why don't you use AWS Secrets Manager, which can complement AWS SSM? I think Secrets Manager supports secret files:
$ aws secretsmanager create-secret --name TestSecret --secret-string file://secret.txt # The Secrets Manager command takes the --secret-string parameter from the contents of the file
see this link for further information:
https://docs.aws.amazon.com/secretsmanager/latest/userguide/best-practices.html
The link below shows how you can integrate Secrets Manager with SSM:
https://docs.aws.amazon.com/systems-manager/latest/userguide/integration-ps-secretsmanager.html
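If you go the Secrets Manager route, the counterpart inside the container is to fetch the secret and write it to a file at startup, roughly like this (the secret name matches the example above; the target path is just an illustration, and the AWS CLI must be available in the image):

aws secretsmanager get-secret-value --secret-id TestSecret --query SecretString --output text > /etc/ssl/private/my-cert.pem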
Hope this helps
I am using AWS Elastic Beanstalk to run a multi-container docker build, and have run into issues with getting my private docker repository to work.
I have created a "dockercfg.json" file to hold my auth, thus:
{"https://index.docker.io/v1/":{"auth":"59...22","email":"ra...#...com"}}
and uploaded it to an S3 bucket in the same region as my EB instance, and created a Dockerrun.aws.json file thus:
{
  "AWSEBDockerrunVersion": 2,
  "authentication": {
    "bucket": "hayl-docker",
    "key": "dockercfg.json"
  },
  "containerDefinitions": [
    {
      "name": "hayl",
      "image": "raddishiow/hayl-docker:uwsgi",
      "essential": true,
      "memory": 512,
      "portMappings": [
        {
          "hostPort": 443,
          "containerPort": 443
        }
      ]
    }
  ]
}
but I keep getting errors like this:
STOPPED, Reason CannotPullContainerError: Error response from daemon: pull access denied for raddishiow/hayl-docker, repository does not exist or may require 'docker login'
I've verified that AWS is able to access the "dockercfg.json" file. I'm not sure it's using the credentials though...
I have briefly changed the Docker repository to public and it pulls successfully, but that's not really an option, as the image contains sensitive code that I don't want in the public domain.
The auth token I'm using was created using the docker website, as my local docker config file doesn't store my login details...
I've tried manually base64 encoding my password as docker would do to store it in the config file, but this doesn't work either.
Any help would be greatly appreciated, as I've been tearing my hair out for days over this now.
Turns out the "auth" token must be generated from your username and password encoded into base64, in the format "username:password".