How to configure a custom domain name for Amazon ECR - amazon-web-services

Amazon Elastic Container Registry (ECR) repositories have quite human-unfriendly URIs, like 99999999999.dkr.ecr.eu-west-1.amazonaws.com. Is it at all possible to configure a custom domain name for an ECR repository?
A simplistic solution would be to create a CNAME record pointing to the ECR URI, but this doesn't really work (the SSL certificate doesn't match the domain name, the passwords generated by aws ecr get-login aren't accepted, you cannot push images tagged with the custom domain name, and so on).
Are there other options?

Unfortunately, AWS does not support custom domain names for ECR. You'll have to make do with the auto-generated ones for now. There are 'hacks' about, which centre around Nginx proxies, but it really isn't worth the effort.

If you are using docker-compose, one way to mitigate the ugly host portion of the URI is to use ARG in your Dockerfile and define the host in .env.
derived/Dockerfile:
ARG REGISTRY_HOST
FROM ${REGISTRY_HOST}/namespace/base:latest
...
.env:
REGISTRY_HOST=99999999999.dkr.ecr.eu-west-1.amazonaws.com
docker-compose.yml:
version: "3.3"
services:
base-service:
image: ${REGISTRY_HOST}/namespace/base:latest
build:
context: base/
...
derived-service:
image: ${REGISTRY_HOST}/namespace/derived:latest
build:
context: derived/
args:
- REGISTRY_HOST=${REGISTRY_HOST}
...
The advantage of putting ${REGISTRY_HOST} in the image declaration is that you can run docker-compose push base-service and it will properly push the image to your ECR repository.
None of this is as clean as it would be if AWS allowed custom hostnames for ECR repositories, but it's better than hardcoding that god-awful string in all your Dockerfiles. You could do this without docker-compose as well, obviously, but you'd have to define the build arg on the command line, something like docker build --build-arg REGISTRY_HOST=99999999999.dkr.ecr.eu-west-1.amazonaws.com ...
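Putting it all together, a minimal sketch of the day-to-day flow with this setup might look like the following (the region and repository names are just the placeholders used above):

# docker-compose reads REGISTRY_HOST from .env; export it for plain docker/aws calls
export REGISTRY_HOST=99999999999.dkr.ecr.eu-west-1.amazonaws.com

# authenticate Docker against ECR
aws ecr get-login-password --region eu-west-1 | \
  docker login --username AWS --password-stdin ${REGISTRY_HOST}

# build and push; the image names resolve to ${REGISTRY_HOST}/namespace/...
docker-compose build
docker-compose push base-service derived-service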

Related

Deploy Applications on Amazon ECS Using docker compose

I'm trying to deploy a Docker container with multiple services to ECS. I've been following this article, which looks great: https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/
I can get my container to run locally, and I can connect to the ECS context using the AWS CLI; however, in the basic example from the article, when I run
docker compose up
in order to deploy the image to ECS, I get the error:
pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Can't seem to make heads or tails of this. My Docker is logged in to ECR using
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
The default IAM user on my AWS CLI has AmazonECS_FullAccess as well as "ecs:ListAccountSettings" and "cloudformation:ListStackResources".
I read here, in mikemaccana's answer to pull access denied repository does not exist or may require docker login, that after Nov 2020 authentication may be required in your YAML file to allow AWS to pull from hub.docker.io (e.g. give AWS your Docker Hub username and password), but I can't get the 'auth' syntax to work in my YAML file. This is my YAML file that runs Tomcat and MariaDB locally:
version: "2"
services:
database:
build:
context: ./tba-database
image: tba-database
# set default mysql root password, change as needed
environment:
MYSQL_ROOT_PASSWORD: password
# Expose port 3306 to host. Not for the application but
# handy to inspect the database from the host machine.
ports:
- "3306:3306"
restart: always
webserver:
build:
context: ./tba-webserver
image: tba-webserver
# mount point for application in tomcat
volumes:
- ./target/testPROJ:/usr/local/tomcat/webapps/ROOT
links:
- database:tba-database
# open ports for tomcat and remote debugging
ports:
- "8080:8080"
- "8000:8000"
restart: always
Author of the blog here (thanks for the kind comment!). I haven't played much with the build side of things, but I suspect what's happening here is that when you run docker compose up we ignore the build phase and only leverage the image field. What happens next is that the containers being deployed on ECS/Fargate try to pull the image tba-database (which is what the deployment seems to be complaining about, because it doesn't exist). You need extra steps to push your image to either GH or ECR before you can bring it to life using docker compose up when in the ecs context.
You also probably need to change the compose version ("2" is very old).
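Roughly, those extra steps look like the sketch below; it reuses the placeholder account/region from the aws ecr get-login-password command above, assumes an ECR repository named tba-database already exists, and the context name myecscontext is just an example:

# authenticate Docker against ECR
aws ecr get-login-password --region region | \
  docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com

# build in the default (local) context, then tag and push the image to ECR
docker context use default
docker compose build
docker tag tba-database:latest aws_account_id.dkr.ecr.region.amazonaws.com/tba-database:latest
docker push aws_account_id.dkr.ecr.region.amazonaws.com/tba-database:latest

# point the image: fields in docker-compose.yml at the pushed URIs,
# then deploy from the ECS context
docker context use myecscontext
docker compose up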

Deploy app created with docker-compose to AWS

Final goal: To deploy a ready-made cryptocurrency exchange on AWS.
I have setup a readymade server by 0xProject by running the following command on my local machine:
npx @0x/launch-kit-wizard && docker-compose up
This command creates a docker-compose.yml file which has multiple container definitions and starts the exchange on http://localhost:3001/
I need to deploy this to AWS, for which I'm following this YouTube tutorial:
I have created a registry user with appropriate permissions
An EC2 instance is created
An ECR repository is created
The AWS CLI is configured
As per the AWS instructions, I'm retrieving an authentication token and authenticating the Docker client to the registry:
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin <docker-id-given-by-AWS>.dkr.ecr.us-east-2.amazonaws.com
I'm trying to build the docker image:
docker build -t testdockerregistry .
Now, since in this case we have a docker-compose.yml instead of a Dockerfile, when I try to build the image it throws the following error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: CreateFile C:\Users\hp\Desktop\xxx\Dockerfile: The system cannot find the file specified.
I tried building the image from docker-compose itself as per this guide, which fails with the following message:
postgres uses an image, skipping
frontend uses an image, skipping
mesh uses an image, skipping
backend uses an image, skipping
nginx uses an image, skipping
Can anyone please help me with this?
You can use the ecs-cli compose command from the ECS CLI.
This command translates the docker-compose file you create into an ECS Task Definition.
If you're interested in finding out more about the CLI, take a read of the AWS documentation here.
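A minimal sketch of that approach, assuming the ECS CLI is installed and a cluster named my-cluster already exists (the project name, cluster name and region are only examples):

# translate docker-compose.yml into a task definition and register it
ecs-cli compose --project-name my-exchange --file docker-compose.yml create \
  --cluster my-cluster --region us-east-2

# or register the task definition and start it in one step
ecs-cli compose --project-name my-exchange --file docker-compose.yml up \
  --cluster my-cluster --region us-east-2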
Another approach, instead of using the AWS ECS CLI directly, is to use the new docker/compose-cli
This CLI tool makes it easy to run Docker containers and Docker Compose applications in the cloud using either Amazon Elastic Container Service (ECS) or Microsoft Azure Container Instances (ACI) using the Docker commands you already know.
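As a rough sketch of how that integration is driven (the context name ecsdemo is just an example; the tool picks up your AWS credentials/profile):

# create a Docker context backed by ECS
docker context create ecs ecsdemo

# deploy the compose application to ECS/Fargate from that context
docker context use ecsdemo
docker compose up

# and tear it down again when finished
docker compose down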
See "Docker Announces Open Source Compose for AWS ECS & Microsoft ACI " from Aditya Kulkarni.
It references "Docker Open Sources Compose for Amazon ECS and Microsoft ACI" from Chris Crone, Engineer #docker:
While implementing these integrations, we wanted to make sure that existing CLI commands were not impacted.
We also wanted an architecture that would make it easy to add new backends and provide SDKs in popular languages. We achieved this with the following architecture:

Unable to update the docker image. Error: repository does not exist or may require 'docker login'

I have deployed Watchtower, which automatically updates running Docker containers inside Docker Swarm.
I run this Docker Swarm on two AWS EC2 servers and use AWS ECR as the Docker registry.
To avoid aws ecr get-login, I have used the Amazon ECR Docker Credential Helper, which automatically gets credentials for Amazon ECR on docker push/docker pull, so there is no need to log in every 12 hours.
The problem is that Watchtower is throwing an error like:
time="2019-03-12T03:41:10Z" level=info msg="Unable to update container /crmproxy.1.wop3c1u2qktbkab8rukrlrgr6, err='Error response from daemon: pull access denied for 00000000000.dkr..amazonaws.com/crm, repository does not exist or may require 'docker login''. Proceeding to next."
I am sure this is not about logging in to ECR. I have correctly linked the credentials into the Watchtower container using the docker-compose.yml file.
Here is the Watchtower configuration in the docker-compose.yml file:
watchtower:
  image: v2tec/watchtower
  volumes:
    - /var/run/docker.sock:/var/run/docker.sock
    - ~/.docker/config.json:/config.json
  command: --interval 30
In my research about this issue, I saw that others have the same problem as me, and one person has fixed it himself, but I don't understand the fix.
This is what I found: solution that is unclear
I don't know whether that answer is correct or not, but he said:
The problem was that I installed docker as root. Now installed with
the ec2-user of the Amazon Linux AMI and working
Please help me to avoid this problem that I'm facing; I have tried so many times.
Any help would be an advantage to me.
There's an additional dot in your image URL (the region segment appears to be missing). Might that be the reason for your issue?
00000000000.dkr..amazonaws.com/crm
^
Also, you may just add the ec2-user to the docker group to let it execute docker commands as well: sudo usermod -aG docker ec2-user. No need to reinstall.
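On the credential-helper side mentioned in the question, a minimal sketch of the setup looks roughly like this; it assumes the docker-credential-ecr-login binary is already installed on the host, and <account>/<region> are placeholders:

# point Docker at the ECR credential helper for all registries;
# ~/.docker/config.json is the same file mounted into Watchtower above
echo '{ "credsStore": "ecr-login" }' > ~/.docker/config.json

# docker pull/push against ECR now fetch short-lived credentials automatically,
# with no docker login step needed
docker pull <account>.dkr.ecr.<region>.amazonaws.com/crm:latest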

Fargate with Docker compose Links

We have an application that uses a docker-compose file that contains links.
I'm trying to deploy this on Amazon Fargate using the ECS CLI with this command:
ecs-cli compose --project-name myApp --file docker-compose-aws.yml --ecs-params fargate-ecs-params.yml --cluster myCluster --region us-east-1 up --launch-type FARGATE
When my fargate-ecs-params.yml has ecs_network_mode: awsvpc I get the error:
Links are not supported when networkMode=awsvpc
So I've tried changing ecs_network_mode to a different mode, however I then get the error:
Fargate only supports network mode ‘awsvpc’
My question is: how do I create a task definition for Fargate with a compose file that contains links? Or is this not possible (and in that case, what are my alternatives)?
You can place both containers in the same task definition; they will automatically be linked with each other.
After reading your final comment on the boot sequence and answering that question instead, I solved this (even outside AWS) using docker-compose's depends_on.
A simple example:
services:
  web:
    depends_on:
      - "web_db"
  web_db:
    image: mongo:3.6
    container_name: my_mongodb
You should be able to remove the deprecated links and just use the hostnames that Docker creates from the service/container names, e.g. above, the web service would connect to the hostname "my_mongodb".
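For instance, a sketch of what that can look like in practice (the MONGO_URL variable name and database name are purely illustrative, not something Docker provides):

services:
  web:
    depends_on:
      - "web_db"
    environment:
      # the container_name resolves as a hostname on the compose network
      MONGO_URL: mongodb://my_mongodb:27017/mydb
  web_db:
    image: mongo:3.6
    container_name: my_mongodb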

Use ECR repository image as build image in CircleCI

I have been using my Docker Hub account in CircleCI till now, and now I'm trying to use my ECR repository image in the same place, as the build image in CircleCI (2.0).
But I see that ECR doesn't support public images, so I can't reference my image as below, the way I did for the Docker Hub image,
version: 2
jobs:
  build:
    working_directory: ~/tmp
    docker:
      - image: <dockerhub-name>/<image>
as,
version: 2
jobs:
  build:
    working_directory: ~/tmp
    docker:
      - image: aws-id.dkr.ecr.eu-central-1.amazonaws.com/image
It will throw the error:
no basic auth credentials
In a straightforward setup, it needs to get authenticated via the command
aws ecr get-login --region <region-name>
and then running,
docker login -u AWS -p <password> -e none https://aws-id.dkr.ecr.eu-central-1.amazonaws.com
I tried putting these commands in the Pre-dependency commands section of the CircleCI plan settings, but that didn't work.
Ideas?
What "Pre-dependency commands"? That sounds like you're referring to configuration structure from CircleCI 1.0, which you don't seem to be using.
Because of the way AWS requires you to authenticate with ECR, I wouldn't use an image from there with the docker executor. Either use some random image, and then use setup_remote_docker or use the machine executor.
This doc shows the former, and this one covers the latter.
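For the machine-executor route, a rough sketch might look like the following; it assumes the AWS CLI is available on the machine image, your AWS credentials are set as project environment variables, and the image URI is the placeholder from the question:

version: 2
jobs:
  build:
    machine: true
    working_directory: ~/tmp
    steps:
      - checkout
      # authenticate the machine's Docker daemon against ECR
      - run: |
          aws ecr get-login-password --region eu-central-1 | \
            docker login --username AWS --password-stdin aws-id.dkr.ecr.eu-central-1.amazonaws.com
      # the private ECR image can now be pulled and used for the build steps
      - run: docker pull aws-id.dkr.ecr.eu-central-1.amazonaws.com/image
      # run the actual build inside that image (the command here is illustrative)
      - run: docker run --rm -v "$PWD:/src" -w /src aws-id.dkr.ecr.eu-central-1.amazonaws.com/image make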