Terminology - ECR repo URI vs ECR repo name

In the JSON below, received after calling the AWS ECR API endpoint:
{
  "repository": {
    "repositoryArn": "arn:aws:ecr:us-west-2:11122233334444:repository/some_app_image",
    "registryId": "11122233334444",
    "repositoryName": "some_app_image",
    "repositoryUri": "11122233334444.dkr.ecr.us-west-2.amazonaws.com/some_app_image",
    "createdAt": 11111111554.0,
    "imageTagMutability": "MUTABLE",
    "imageScanningConfiguration": {
      "scanOnPush": false
    }
  }
}
after running the command: aws ecr describe-repositories --repository-names some_app_image
What is the correct term for 11122233334444.dkr.ecr.us-west-2.amazonaws.com? Is it an ECR endpoint?

You would refer to it as your registry URL. There is more information on terminology in the ECR user docs.

The value in repositoryUri is what you would use in a command like docker pull. So in this example you would run docker pull 11122233334444.dkr.ecr.us-west-2.amazonaws.com/some_app_image to download your image.
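If you need the registry URL in a script, a small sketch (using the same repository name as above) is to pull repositoryUri out with a --query expression and strip the repository name:

# Print only the repository URI (assumes the repository from the example above)
aws ecr describe-repositories \
  --repository-names some_app_image \
  --query 'repositories[0].repositoryUri' \
  --output text

# The registry URL is everything before the first "/"
aws ecr describe-repositories \
  --repository-names some_app_image \
  --query 'repositories[0].repositoryUri' \
  --output text | cut -d/ -f1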

Related

Can't push Docker images to ECR

I get an error when pushing my local Docker image to my private ECR:
My IAM user has AmazonEC2ContainerRegistryFullAccess rights, and so does my EC2 instance.
$ aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin xx.dkr.ecr.eu-central-1.amazonaws.com
...
Login Succeeded
$ aws ecr describe-repositories
{
  "repositories": [
    {
      "repositoryUri": "xx.dkr.ecr.eu-central-1.amazonaws.com/my_repo",
      "imageScanningConfiguration": {
        "scanOnPush": false
      },
      "encryptionConfiguration": {
        "encryptionType": "AES256"
      },
      "registryId": "xx",
      "imageTagMutability": "MUTABLE",
      "repositoryArn": "arn:aws:ecr:eu-central-1:xx:repository/my_repo",
      "repositoryName": "my_repo",
      "createdAt": 1650817284.0
    }
  ]
}
$ docker pull hello-world
$ docker tag hello-world:latest xx.dkr.ecr.eu-central-1.amazonaws.com/hello-world:latest
$ docker images
REPOSITORY                                           TAG      IMAGE ID       CREATED        SIZE
xx.dkr.ecr.eu-central-1.amazonaws.com/hello-world    latest   feb5d9fea6a5   7 months ago   13.3kB
hello-world                                          latest   feb5d9fea6a5   7 months ago   13.3kB
and now I get this error when pushing my image:
$ docker push xx.dkr.ecr.eu-central-1.amazonaws.com/hello-world:latest
The push refers to repository [xx.dkr.ecr.eu-central-1.amazonaws.com/hello-world]
e07ee1baac5f: Retrying in 1 second
EOF
Any suggestions?
The profile trick from https://stackoverflow.com/a/70453287/10243980 does NOT work.
Many thanks
One of my working examples is the following:
aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-central-1.amazonaws.com
docker build -t dolibarr .
docker tag dolibarr:latest 123456789012.dkr.ecr.eu-central-1.amazonaws.com/dolibarr:latest
docker push 123456789012.dkr.ecr.eu-central-1.amazonaws.com/dolibarr:latest
Compared to your commands, it looks very similar. So please check whether your user is able to push to the repository itself (ecr:PutImage). That is probably the main issue.
A good place to find more help is this related question: Pushing an image to ECR, getting "Retrying in ... seconds"
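As a quick check, you could simulate the relevant permissions with the IAM policy simulator from the CLI. A sketch follows; the user ARN is a placeholder you would replace with the identity you push with:

# Hypothetical user ARN; replace with your own pushing identity
aws iam simulate-principal-policy \
  --policy-source-arn arn:aws:iam::123456789012:user/my-push-user \
  --action-names ecr:PutImage ecr:InitiateLayerUpload ecr:UploadLayerPart \
  --query 'EvaluationResults[].[EvalActionName,EvalDecision]' \
  --output table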
The policy for the Docker image role I am using is the following (Terraform style):
{
  Action = [
    "ecr:BatchCheckLayerAvailability",
    "ecr:CompleteLayerUpload",
    "ecr:GetAuthorizationToken",
    "ecr:InitiateLayerUpload",
    "ecr:PutImage",
    "ecr:UploadLayerPart",
  ]
  Effect   = "Allow"
  Resource = "*"
}
Try adjusting your policy and removing the "Principal" entry; it is not necessary here.
Another possible reason that has nothing to do with the policy:
Do you use a local proxy? I experienced issues when a proxy server was used for all public endpoints, such as ECR and S3. I disabled the proxy for those domains and it worked (this depends on whether you use a VPN or something similar).
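For completeness, a sketch of how that statement could be attached to a role with Terraform; the resource and role names below are placeholders, not part of the original setup:

resource "aws_iam_role_policy" "ecr_push" {
  # Hypothetical names; adjust to your own role
  name = "ecr-push"
  role = aws_iam_role.docker_image_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect = "Allow"
        Action = [
          "ecr:BatchCheckLayerAvailability",
          "ecr:CompleteLayerUpload",
          "ecr:GetAuthorizationToken",
          "ecr:InitiateLayerUpload",
          "ecr:PutImage",
          "ecr:UploadLayerPart"
        ]
        Resource = "*"
      }
    ]
  })
}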
You need to create a repository with the name hello-world first. This is explained at the beginning of the Pushing a Docker image page of the ECR docs.
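A minimal sketch of the missing step, using the same region and image name as in the question:

# Create the target repository first; the push only works against an existing repository
aws ecr create-repository --repository-name hello-world --region eu-central-1

# Then the original push should go through
docker push xx.dkr.ecr.eu-central-1.amazonaws.com/hello-world:latest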

Docker Hub Login for AWS CodeBuild (Docker Hub Limit)?

This is my current setup:
- Gets the repository from Bitbucket
- Builds the Docker image using the Amazon Linux 2 AWS managed image
- Pushes the image to ECR
I am now sometimes getting the toomanyrequests error during the docker build phase. So now I want to log in to my Docker Hub account to get rid of this issue.
How do I go about logging in to my Docker Hub account only for the build phase?
Should I use the buildspec.yml for logging in? But that would conflict with the AWS ECR login, right?
That article that Hridiago shared is very helpful.
I have also experienced this issue (it occurred after Docker Hub set limits on the number of unauthenticated pulls that can be made per day).
If you have used AWS Secrets Manager to store your Docker Hub username and password (as a key/value pair), your buildspec will look like this (note that my secret is stored as /dockerhub/credentials):
version: 0.2
env:
  secrets-manager:
    DOCKERHUB_PASS: "/dockerhub/credentials:password"
    DOCKERHUB_USERNAME: "/dockerhub/credentials:username"
phases:
  install:
    commands:
      - echo pre_build step...
      - docker login --username $DOCKERHUB_USERNAME --password $DOCKERHUB_PASS
      - $(aws ecr get-login --no-include-email --region us-east-1)
You will need to ensure that your CodeBuild role has the correct permissions to access Secrets Manager, as mentioned in the article.
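If you still need to create that secret, a sketch with the CLI follows; the secret name matches the buildspec above and the values are placeholders:

aws secretsmanager create-secret \
  --name /dockerhub/credentials \
  --secret-string '{"username":"your-dockerhub-username","password":"your-dockerhub-password"}'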
Julia Cowper's solution should be the accepted answer.
Here is the same solution for Terraform with CodeBuild.
resource "aws_codebuild_project" "builder" {
environment = {
environment_variable {
type = "SECRETS_MANAGER"
name = "DOCKERHUB_USER"
value = "[secret-name]:username"
}
environment_variable {
type = "SECRETS_MANAGER"
name = "DOCKERHUB_PASS"
value = "[secret-name]:password"
}
}
}
and your secret needs to look like:
{
  "username": "[username]",
  "password": "[password]"
}
then in the buildspec
pre_build:
  commands:
    - echo Logging in to Docker Hub...
    - echo "$DOCKERHUB_PASS" | docker login --username $DOCKERHUB_USER --password-stdin
Using AWS Secrets Manager for authenticated Docker requests is a good approach; the syntax is as below:
version: 0.2
env:
  shell: bash
  secrets-manager:
    DOCKERHUB_USERNAME: DockerHubSecret:dockerhub_username
    DOCKERHUB_PASSWORD: DockerHubSecret:dockerhub_password
phases:
  pre_build:
    commands:
      - echo logging in docker hub
      - docker login --username $DOCKERHUB_USERNAME --password $DOCKERHUB_PASSWORD
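For all of these variants, the CodeBuild service role also needs permission to read the secret. A minimal policy statement sketch; the secret ARN is a placeholder matching the DockerHubSecret name above:

{
  "Effect": "Allow",
  "Action": ["secretsmanager:GetSecretValue"],
  "Resource": "arn:aws:secretsmanager:<region>:<account-id>:secret:DockerHubSecret-*"
}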

jHipster Registry on AWS Beanstalk

I've been looking for a way to deploy JHipster microservices to AWS. It seems like the JHipster Registry provides an easy way to monitor JHipster microservices, but I have yet to find a way to deploy the JHipster Registry to AWS. Cloning the jhipster-registry GitHub repo and running jhipster aws returns "Error: Sorry deployment for this database is not possible".
Alternatively, creating a Docker image with mvn compile jib:buildTar and using the generated target/jib-image.tar as an AWS Beanstalk app version also fails because it's missing a Dockerfile.
What's a good way to deploy the JHipster Registry to AWS Beanstalk and subsequently use it for monitoring other JHipster microservices deployed to AWS Beanstalk?
Thanks!
After some trial and error I ended up doing something like this:
1. Clone https://github.com/jhipster/jhipster-registry
2. Build a Docker image locally with ./mvnw package -Pprod verify jib:dockerBuild
3. Create an ECR repository in the AWS console or using the AWS CLI as follows: aws --profile [AWS_PROFILE] ecr create-repository --repository-name [ECR_REGISTRY_NAME]
4. Assuming that v6.3.0 was cloned in step 1, tag the local Docker image as follows: docker tag [IMAGE_ID] [AWS_ACCOUNT].dkr.ecr.[AWS_REGION].amazonaws.com/[ECR_REGISTRY_NAME]:jhipster-registry-6.3.0
5. Authenticate to ECR as follows: eval $(aws --profile [AWS_PROFILE] ecr get-login --no-include-email --region [AWS_REGION])
6. Push the local Docker image to ECR as follows: docker push [AWS_ACCOUNT].dkr.ecr.[AWS_REGION].amazonaws.com/[ECR_REGISTRY_NAME]:jhipster-registry-6.3.0
7. Set up the Elastic Beanstalk (EB) CLI
8. Initialize the local EB project as follows: eb init --profile [AWS_PROFILE]
9. Create Dockerrun.aws.json with the following content:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "[AWS_ACCOUNT].dkr.ecr.[AWS_REGION].amazonaws.com/[ECR_REGISTRY_NAME]:jhipster-registry-6.3.0",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": 8761
    }
  ]
}
10. Run jhipster-registry locally as follows: eb local run --port 8761
11. Verify that you can access jhipster-registry locally as follows: eb local open
12. Create a new EB environment running the Docker image from ECR as follows: eb create [EB_ENV_NAME] --instance-types t2.medium --keyname [EC2_KEY_PAIR_NAME] --vpc.id [VPC_ID] --vpc.ec2subnets [EC2_SUBNETS] --vpc.publicip --vpc.elbpublic --vpc.securitygroups [CUSTOM_ELB_SG]
13. Access the remote jhipster-registry as follows: eb open
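One note on step 5: aws ecr get-login was removed in AWS CLI v2, so on a current CLI the authentication would look roughly like this instead (same placeholders as above):

aws --profile [AWS_PROFILE] ecr get-login-password --region [AWS_REGION] \
  | docker login --username AWS --password-stdin [AWS_ACCOUNT].dkr.ecr.[AWS_REGION].amazonaws.com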

Auth into ECR in a Jenkinsfile so I can pull an image to run the build in?

The situation here is that we have an app that's currently being built on a Jenkins slave with a certain version of node installed on it. We want to standardize the build environment, and so to do that want to build inside a docker container.
Through my research it definitely seems possible. However, the challenge for us is we want to use custom images we manage ourselves and store in ECR. We don't want to use the ones on docker hub. With that constraint in mind, I'm struggling to authenticate into our ECR within my Jenkinsfile. Ideally I could do something like this:
pipeline {
    agent {
        docker {
            image 'node:7'
            registryUrl 'ecr_url.amazonaws.com'
            registryCredentialsId 'ecr:us-east-1:iam_role'
        }
    }
    stages {
        stage('Build') {
            steps {
                sh 'command goes here'
            }
        }
    }
}
But the issue here is that our ECR login relies on running a shell command on the Jenkins worker (which has aws cli installed) to log in and access the image. So far I've had no luck authenticating within the Jenkinsfile so I can pull an image to run the build in. Does anyone know if this is possible and if so, how to edit the Jenkinsfile to do it?
You need an authorization token before pulling the image from ECR, which means you also need to install the AWS CLI on the Jenkins server. The best approach is to assign a role and run the command below somewhere in your pipeline to get the authorization token; if that seems too complicated, you can use the ECR plugin mentioned below.
Your Docker client must authenticate to Amazon ECR registries as an AWS user before it can push and pull images. The AWS CLI get-login command provides you with authentication credentials to pass to Docker. For more information, see Registry Authentication.
AmazonECR-registry_auth
So you can use JENKINS/Amazon+ECR
The Amazon ECR plugin implements a Docker Token producer to convert Amazon credentials to the Jenkins API used by (mostly) all Docker-related plugins. Thanks to this producer, you can select your existing registered Amazon credentials for various Docker operations in Jenkins, for example using the CloudBees Docker Build and Publish plugin:
Normally we use this command to obtain a token:
$(aws ecr get-login --no-include-email --region us-west-2)
Within a pipeline you can try:
pipeline
{
    options
    {
        buildDiscarder(logRotator(numToKeepStr: '3'))
    }
    agent any
    environment
    {
        PROJECT = 'tap_sample'
        ECRURL  = 'http://999999999999.dkr.ecr.eu-central-1.amazonaws.com'
        ECRCRED = 'ecr:eu-central-1:tap_ecr'
    }
    stages
    {
        stage('Docker image pull')
        {
            steps
            {
                script
                {
                    sh("eval \$(aws ecr get-login --no-include-email | sed 's|https://||')")
                    docker.withRegistry(ECRURL, ECRCRED)
                    {
                        docker.image(PROJECT).pull()
                    }
                }
            }
        }
    }
}
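If the Jenkins worker runs AWS CLI v2, where get-login no longer exists, the shell step inside the script block would become roughly the following (account ID and region mirror the environment block above and are still placeholders):

sh("aws ecr get-login-password --region eu-central-1 | docker login --username AWS --password-stdin 999999999999.dkr.ecr.eu-central-1.amazonaws.com")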
You almost had it working.
The trick to using it as an agent in a declarative pipeline is to create an AWS credential with an empty access key and secret, but with an IAM role set on it.
pipeline {
    agent {
        docker {
            image '<account-id>.dkr.ecr.eu-west-1.amazonaws.com/image/my-image:v1'
            args '--entrypoint= '
            registryCredentialsId "ecr:eu-west-1:aws-instance-role"
            registryUrl "https://<account-id>.dkr.ecr.eu-west-1.amazonaws.com"
        }
    }
    stages {
        stage('Test') {
            steps {
                sh "echo \"I'm on an ECR agent\""
            }
        }
    }
}
Make sure that you can assume this role; you can use an instance role that allows assuming itself.
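A sketch of what the trust policy on such an instance role might look like so it can assume itself; the account ID and role name mirror the aws-instance-role credential above and are placeholders:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    },
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::<account-id>:role/aws-instance-role" },
      "Action": "sts:AssumeRole"
    }
  ]
}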
I've created a medium post describing this process on a cross-account ECR
How to run Jenkins agents with cross-account ECR images using instance roles on EKS.
Use the AWS pipeline steps plugin. It provides an ecrLogin() step where you can specify registry IDs if needed. https://plugins.jenkins.io/pipeline-aws/#plugin-content-ecrlogin
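A rough usage sketch, assuming the pipeline-aws plugin is installed (its ecrLogin() step returns the docker login command as a string; the registry ID below is a placeholder):

script {
    // ecrLogin() from the pipeline-aws plugin builds the ECR docker login command
    def loginCmd = ecrLogin(registryIds: ['123456789012'])
    sh loginCmd
}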

AWS ElasticBeanstalk pull Docker image from Gitlab registry

I'm having a hard time pulling a Docker image from a private GitLab registry into an AWS multicontainer Elastic Beanstalk environment.
I have added .dockercfg to S3 in the same region as my cluster and also allowed the aws-elasticbeanstalk-ec2-role IAM role to get data from S3.
Elastic Beanstalk always returns the error CannotPullContainerError: API error (500)
My .dockercfg is in this format:
{
  "https://registry.gitlab.com": {
    "auth": "my gitlab deploy token",
    "email": "my gitlab token name"
  }
}
Inside Dockerrun.aws.json I have added the following:
"authentication": {
  "bucket": "name of my bucket",
  "key": ".dockercfg"
},
When I try to log in via docker login -u gitlabtoken-name -p token, it works perfectly.
The gitlab deploy token is not the auth key.
To generate a proper auth key I usually do the following:
docker run -ti docker:dind sh -c "docker login -u name -p deploy-token registry.gitlab.com && cat /root/.docker/config.json"
and it'll print something like:
{
  "auths": {
    "registry.gitlab.com": {
      "auth": "your-auth-key"
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.09.0 (linux)"
  }
}
Then, as per the Elastic Beanstalk docs "Using Images From a Private Repository", you should take just what is needed.
Hope this'll help you!