Deploy app created with docker-compose to AWS

Final goal: To deploy a ready-made cryptocurrency exchange on AWS.
I have set up a ready-made server by 0xProject by running the following command on my local machine:
npx @0x/launch-kit-wizard && docker-compose up
This command creates a docker-compose.yml file which has multiple container definitions and starts the exchange on http://localhost:3001/
I need to deploy this to AWS, for which I'm following this YouTube tutorial
I have created a registry user with appropriate permissions
An EC2 instance is created
ECR repository is created
AWS CLI is configured
As per AWS instructions, I'm retrieving an authentication token and authenticating Docker client to registry:
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin <docker-id-given-by-AWS>.dkr.ecr.us-east-2.amazonaws.com
I'm trying to build the docker image:
docker build -t testdockerregistry .
Now, since in this case we have a docker-compose.yml instead of a Dockerfile, when I try to build the image it throws the following error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: CreateFile C:\Users\hp\Desktop\xxx\Dockerfile: The system cannot find the file specified.
I tried building the image from docker-compose itself as per this guide, which fails with the following message:
postgres uses an image, skipping
frontend uses an image, skipping
mesh uses an image, skipping
backend uses an image, skipping
nginx uses an image, skipping
Can anyone please help me with this?

You can use the ecs-cli compose command from the Amazon ECS CLI.
This command translates the docker-compose file you create into an ECS task definition.
If you're interested in finding out more about the CLI, take a read of the AWS documentation here.
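For instance, a rough sketch of the flow with the ECS CLI (the cluster/config/project names below are placeholders, and an existing cluster plus ECS-compatible compose fields are assumed):
# point the ECS CLI at a cluster and region (names are examples)
ecs-cli configure --cluster my-exchange --region us-east-2 --default-launch-type EC2 --config-name my-exchange
# translate docker-compose.yml into a task definition and run it as a service on the cluster
ecs-cli compose --file docker-compose.yml --project-name my-exchange service up --cluster-config my-exchange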

Another approach, instead of using the AWS ECS CLI directly, is to use the new docker/compose-cli.
This CLI tool makes it easy to run Docker containers and Docker Compose applications in the cloud using either Amazon Elastic Container Service (ECS) or Microsoft Azure Container Instances (ACI), using the Docker commands you already know.
See "Docker Announces Open Source Compose for AWS ECS & Microsoft ACI" from Aditya Kulkarni.
It references "Docker Open Sources Compose for Amazon ECS and Microsoft ACI" from Chris Crone, Engineer @docker:
While implementing these integrations, we wanted to make sure that existing CLI commands were not impacted.
We also wanted an architecture that would make it easy to add new backends and provide SDKs in popular languages. We achieved this with the following architecture (diagram in the original post).
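In practice, the compose-cli flow looks roughly like this (the context name is an example; it assumes a recent Docker Desktop/CLI with the ECS integration and AWS credentials already configured):
# create a Docker context backed by ECS and switch to it
docker context create ecs myecscontext
docker context use myecscontext
# deploy the compose application to ECS (provisioned via a CloudFormation stack)
docker compose up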

Related

Unrecognized command for AWS Secret creation with Docker - ECS deploy

I'm planning to deploy a stack to ECS making use of the (new?) "Deploying Docker containers on ECS" feature. However, I use GitLab for code versioning and CI/CD pipelines, so I want to store my Docker images in the GitLab registry (and they should be private).
I understand that ECS can easily support such a configuration through the x-aws-pull_credentials extension. Therefore, following the link above, I use a GitLab access token and try to create a Docker secret as suggested, with the command
docker secret create gitLabAccessToken --username <GITLAB_USER> --password <GITLAB_TOKEN>
However, I get the error:
unknown flag: --username
Why is that? What am I doing wrong?
Thanks in advance.
The problem may be with copy-pasting. Type the command directly.
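For context, the x-aws-pull_credentials extension expects the ARN of a Secrets Manager secret holding the registry username and token; a minimal compose sketch (the ARN, image path and service name are placeholders) looks like:
services:
  web:
    image: registry.gitlab.com/<group>/<project>/web:latest
    x-aws-pull_credentials: "arn:aws:secretsmanager:eu-west-1:123456789012:secret:gitLabAccessToken"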

Error when logging into ECR with Docker login: "Error saving credentials... not implemented"

I'm trying to log in to AWS ECR with the Docker login command. I can get a password with the AWS CLI with the command aws ecr get-login-password but when piping this into the docker login command I get the following error:
Error saving credentials: error storing credentials - err: exit status 1, out: `not implemented`
The command I am running is the one recommended in the AWS ECR documentation:
aws ecr get-login-password --region us-east-1 | docker login --username AWS --password-stdin account_id_redacted.dkr.ecr.us-east-1.amazonaws.com/blog-project
I'm running the latest version of AWS CLI as of this question, 2.0.57.
I'm running Docker version 2.4.0 on macOS 10.14.6
Has anyone else run into this issue, and if so have they found a solution?
I've definitely achieved this in the past, but I wonder if there is an issue between the latest versions of Docker and the AWS CLI...
I'm not 100% sure what the issue was here, but it was something to do with the Docker credential helper.
I installed the Docker credential helper for macOS and changed the credsStore parameter in ~/.docker/config.json to osxkeychain. That fixed the issue.
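For reference, the relevant part of ~/.docker/config.json afterwards is just the following (this assumes the docker-credential-osxkeychain binary is on your PATH; with Homebrew it is typically provided by the docker-credential-helper formula):
{
  "credsStore": "osxkeychain"
}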
I had a similar issue; it seems my ~/.docker/config.json was totally messed up after working with multiple repos/hubs.
So I just wiped out all the content in the file, leaving it empty, and reran aws ecr get-login-password | docker login ..., which automatically populated the config with the appropriate values.
I had this issue on macOS. Removing
"credsStore" : "ecr-login"
from .docker/config.json resolved the issue for me.
If anybody has the same problem on Windows, go to the .docker folder in your user folder under C:\Users and remove the config.json file.
It might fix your problem.
I believe this is the intended result (sorta). The point of using amazon-ecr-credential-helper is to not need to use docker login. You should instead configure the AWS CLI with your profile credentials (mine: myprofile). Then, you would just need to slightly modify your scripts.
For example, in ECR the AWS-given steps to upload a Docker image are:
1. Retrieve an authentication token and authenticate your Docker client to your registry. Use the AWS CLI:
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin XXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com
Note: If you receive an error using the AWS CLI, make sure that you have the latest version of the AWS CLI and Docker installed.
2. Build your Docker image using the following command. For information on building a Docker file from scratch see the instructions here. You can skip this step if your image is already built:
docker build -t toy_project .
3. After the build completes, tag your image so you can push the image to this repository:
docker tag toy_project:latest XXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/toy_project:latest
4. Run the following command to push this image to your newly created AWS repository:
docker push XXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/toy_project:latest
However, you would want to skip step 1. The reason is that if you have configured the AWS CLI (i.e. aws configure --profile myprofile), then your credentials are already stored, so you can go straight to step 2.
On the 4th step, you simply need to add AWS_PROFILE, just like below
AWS_PROFILE=myprofile docker push XXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com/toy_project:latest
With amazon-ecr-credential-helper, you no longer need to use docker login or worry about storing credentials, that is the point of amazon-ecr-credential-helper. However, this may not be the best solution for you if you need to actively use docker login in your scripts.
Note: my ~/.docker/config.json looks like
{
"credsStore": "ecr-login"
}
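If you want to keep docker login usable for other registries and apply the helper only to ECR, a per-registry mapping should also work (the registry host below is the placeholder from the steps above):
{
  "credHelpers": {
    "XXXXXXXXX.dkr.ecr.us-east-2.amazonaws.com": "ecr-login"
  }
}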
I was getting the same error while running this command on macOS.
The error possibly occurred because that location didn't have the appropriate read/write/execute permissions for users.
Also, while running
% docker ps
it was giving the error: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
What I did:
% sudo chmod 777 /var/run/docker.sock
This gave all the required permissions on that socket.
Hope it helps!

Can I use cloud run with self-hosted/private docker image registry?

When I deployed with my self-hosted (private) Docker image registry, I got this error:
This service will require authentication to be invoked.
Deploying container to Cloud Run service [serverless-functions-go] in project [PROJECT_ID] region [us-central1]
X Deploying new service...
. Creating Revision...
. Routing traffic...
Deployment failed
ERROR: (gcloud.beta.run.deploy) Invalid image provided in the revision template. Expected [region.]gcr.io/repo-path[:tag or #digest], obtained dtr.artifacts.xxx.com/xxxxx/xxxx/serverless-functions-go:latest
Before pulling the image from my private Docker image registry, I need to run a command like:
docker login [options]
How can I solve this issue?
Can I use cloud run with private docker container registry?
No, not at this time. See "Images you can deploy" in the Cloud Run documentation.
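A common workaround (not part of the answer above; the project ID is a placeholder) is to re-tag the image and push it to a registry Cloud Run accepts, such as Container Registry, then deploy from there:
# pull from the private registry, re-tag for gcr.io and push
docker pull dtr.artifacts.xxx.com/xxxxx/xxxx/serverless-functions-go:latest
docker tag dtr.artifacts.xxx.com/xxxxx/xxxx/serverless-functions-go:latest gcr.io/PROJECT_ID/serverless-functions-go:latest
docker push gcr.io/PROJECT_ID/serverless-functions-go:latest
# deploy to Cloud Run from the gcr.io copy
gcloud run deploy serverless-functions-go --image gcr.io/PROJECT_ID/serverless-functions-go:latest --region us-central1 --platform managed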

Issue with Docker Login with AWS ECR

I'm following an AWS tutorial to deploy a simple application using containers on AWS. I'm trying to connect to AWS's ECR using Docker and I get a warning message which doesn't allow me to log in.
I'm brand new to the world of Docker, containers and AWS. I was going through AWS tutorials to deploy a simple Node.js application using Docker containers on AWS per the following instructions:
https://aws.amazon.com/getting-started/projects/break-monolith-app-microservices-ecs-docker-ec2/module-one/
Per the instructions, I've installed Docker and the AWS CLI, and created an AWS ECR repository for Docker to access. I've basically got to the following step:
Step 4 Build and Push the docker image - Point 2 - getting login
As per point 2, I copy-pasted the login details (docker login -u AWS -p ) and ran it, and I got the following warning message, which isn't allowing me to log in or push the Docker image to ECR. I researched online a lot on what to change; there are lots of articles mentioning the issue but no clear direction as to what exactly to do. I'm not exactly sure where in the command I should use --password-stdin. I've also tried what was suggested in "Docker: Using --password via the CLI is insecure. Use --password-stdin", but that didn't work either.
Expected result:
Login succeeded
Actual result:
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
The warning is fine. Have you verified whether the docker push/pull is working?
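For reference, with AWS CLI v2 the documented shape of the command pipes the password in via stdin rather than passing -p on the command line (account ID and region are placeholders):
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws_account_id>.dkr.ecr.<region>.amazonaws.com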

How to set up continuous deployment from Docker Hub to AWS ECS?

I am setting up a CI/CD pipeline for my microservices. Currently I use Travis CI to pull the code from GitHub upon check-in, build the Docker image and push it to Docker Hub. I tried using Docker Cloud (previously known as Tutum), which provides an automatic deployment feature to AWS EC2 instances, but the deployment sometimes recreates the container and the service endpoint URL changes, which is not desirable.
I am exploring Amazon's ECS and its tasks, but I cannot find any reference for how to set up continuous deployment to ECS when a new image is pushed to Docker Hub.
Does anybody have any experience doing this setup?
With ECS you would basically have CI detect a change to Docker Hub and update your task definition/service.
For this I use the wonderful ecs-deploy script from here:
https://github.com/silinternational/ecs-deploy
After my container has been built and deployed to dockerhub it's simply a matter of:
ecs-deploy -k $AWS_KEY -s $AWS_SECRET -r $AWS_REGION -c $CLUSTER_NAME -n $SERVICE_NAME -i $DOCKER_IMAGE_NAME
and that does it.
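As a rough sketch of how that can slot into Travis CI (the image name, environment variable names and branch are assumptions, not taken from the answer above):
# .travis.yml deploy stage (sketch): build, push to Docker Hub, then update the ECS service
deploy:
  provider: script
  script: >-
    docker build -t mydockerhubuser/myservice:$TRAVIS_COMMIT . &&
    echo "$DOCKERHUB_PASSWORD" | docker login -u "$DOCKERHUB_USERNAME" --password-stdin &&
    docker push mydockerhubuser/myservice:$TRAVIS_COMMIT &&
    ecs-deploy -k $AWS_KEY -s $AWS_SECRET -r $AWS_REGION -c $CLUSTER_NAME -n $SERVICE_NAME -i mydockerhubuser/myservice:$TRAVIS_COMMIT
  on:
    branch: master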