I am trying to create a cluster in ECS Fargate with a Docker Hub image. To spin up the container I have to upload a config.json file to the host so that its path can be mounted as -v /configfile/path:/etc/selenium
Locally we can specify the path of the JSON file like below:
docker run -d --name selenium -p 4444:4444 -v /E/config.json:/etc/selenium/
However, I am not sure where to upload the config.json file in ECS Fargate and how to use that path as a volume.
It is not possible to do anything related to the host when using Fargate. The whole point of Fargate is that it is a "serverless" container runtime, so you have no access to the underlying server at all.
You need to look into other ways of achieving your requirement in Fargate. For example, you could first copy the file to S3, and then configure the entrypoint script in your Docker container to download the file from S3 before starting Selenium.
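A minimal entrypoint sketch along those lines (the bucket name, the aws CLI being available in the image, and the Selenium start script path are all assumptions for illustration):
#!/bin/sh
# entrypoint.sh - pull the config from S3, then hand off to Selenium
aws s3 cp s3://my-config-bucket/config.json /etc/selenium/config.json  # hypothetical bucket name
exec /opt/bin/entry_point.sh  # assumed path of the image's usual Selenium entrypoint
The ECS task role would also need s3:GetObject permission on that bucket for this to work.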
So I have a few operations I need to perform on a running instance - kind of a one-off thing. That involves uploading fixtures to it. The setup is an AWS EC2 instance on which run a few Docker containers, including a Django container. This is the Docker container into which I need to upload the fixtures. This is all part of a CI/CD pipeline, with a Postgres RDS database, an nginx proxy, and the Django container.
I can scp the fixtures into the AWS EC2 instance. Then I can SSH into the EC2 instance, and from there log into the running container (using docker exec).
That's all fine, but....
How can I access the fixtures that I uploaded to the EC2 instance (the Docker host) from within the Docker container?
Note: I'm aware I could commit the fixtures to git; then they would find their way into the Docker image and be readily available there. If possible I'd rather scp them directly to EC2 and manage them from the command line (because small changes might be needed to the fixtures, which would otherwise mean running the whole pipeline and waiting a few minutes each time...).
You would want to attach a volume to your Docker container. By attaching a volume you are essentially sharing a folder from the host machine with the running Docker container.
You can attach a volume when you start the container:
docker run -d \
--name=my-container-name \
-v /host/path:/container/path \
myimage:latest
In the case of Docker Compose, you can add a volume like this:
version: "3.9"
services:
  myservice:
    image: myimage
    volumes:
      - "/host/path:/container/path"
Final goal: To deploy a ready-made cryptocurrency exchange on AWS.
I have set up a ready-made server by 0xProject by running the following command on my local machine:
npx @0x/launch-kit-wizard && docker-compose up
This command creates a docker-compose.yml file which has multiple container definitions and starts the exchange on http://localhost:3001/
I need to deploy this to AWS, for which I'm following this YouTube tutorial.
I have created a registry user with appropriate permissions
An EC2 instance is created
ECR repository is created
AWS CLI is configured
As per AWS instructions, I'm retrieving an authentication token and authenticating the Docker client to the registry:
aws ecr get-login-password --region us-east-2 | docker login --username AWS --password-stdin <docker-id-given-by-AWS>.dkr.ecr.us-east-2.amazonaws.com
I'm trying to build the docker image:
docker build -t testdockerregistry .
Now, since in this case we have a docker-compose.yml instead of a Dockerfile, when I try to build the image it throws the following error:
unable to prepare context: unable to evaluate symlinks in Dockerfile path: CreateFile C:\Users\hp\Desktop\xxx\Dockerfile: The system cannot find the file specified.
I tried building the image from docker-compose itself as per this guide, which fails with the following message:
postgres uses an image, skipping
frontend uses an image, skipping
mesh uses an image, skipping
backend uses an image, skipping
nginx uses an image, skipping
Can anyone please help me with this?
You can use the ecs-cli compose command from the ECS CLI.
This command translates the docker-compose file you created into an ECS Task Definition.
If you're interested in finding out more about the CLI take a read of the AWS documentation here.
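A rough sketch of what that looks like (the cluster name, region, launch type, and config name are assumptions for illustration):
ecs-cli configure --cluster my-cluster --region us-east-2 --default-launch-type FARGATE --config-name my-config
ecs-cli compose --file docker-compose.yml up
Keep in mind that ecs-cli compose supports only a subset of docker-compose directives, so some services in the file may need adjusting.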
Another approach, instead of using the AWS ECS CLI directly, is to use the new docker/compose-cli
This CLI tool makes it easy to run Docker containers and Docker Compose applications in the cloud using either Amazon Elastic Container Service (ECS) or Microsoft Azure Container Instances (ACI) using the Docker commands you already know.
See "Docker Announces Open Source Compose for AWS ECS & Microsoft ACI " from Aditya Kulkarni.
It references "Docker Open Sources Compose for Amazon ECS and Microsoft ACI" from Chris Crone, Engineer #docker:
While implementing these integrations, we wanted to make sure that existing CLI commands were not impacted.
We also wanted an architecture that would make it easy to add new backends and provide SDKs in popular languages. We achieved this with the following architecture:
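In practice, the workflow with the Compose CLI's ECS integration looks roughly like this (the context name is an arbitrary example):
docker context create ecs myecscontext   # prompts for an AWS profile or credentials
docker context use myecscontext
docker compose up   # deploys the services in docker-compose.yml to ECS/Fargate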
I have created a task definition and mounted EFS, but I'm not sure if it works.
So I wanted to verify whether my EFS file system is mounted to the running container.
How can I verify this?
One way is to add a file to the folder inside your container:
SSH into the underlying ECS EC2 instance with ssh -i "somekey.pem" ec2-user@ec2-xxxxxxxxx.eu-west-1.compute.amazonaws.com
Run docker ps to get the id of your container.
Run docker exec -it CONTAINERID /bin/bash to get inside the container. Inside the container, create or copy a file into the EFS folder.
Now go to the EFS console and verify that the Metered size is greater than 0, meaning your file is in EFS.
Alternatively, you can run df -h, which will list the mounted volumes on Linux machines.
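A quick sketch of that check from inside the container (the mount path below is just an example; use whatever container path you set in the task definition):
df -h /mnt/efs      # the EFS mount shows up as an nfs4 filesystem
mount | grep nfs4   # should list the EFS mount point if it is attached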
I want to mount a file from my host EC2 instance into a container that is running on ECS. Every change that happens to the file on the host should be reflected in the file inside the container.
What I have tried is as follows:
Dockerfile:
FROM nginx
COPY conf/default.conf /etc/nginx/conf.d/default.conf
VOLUME /etc/nginx/conf.d
RUN ln -sf conf/default.conf /etc/nginx/conf.d/
EXPOSE 80
Then I pushed the image to the ECR repo and created a task, added a volume (source path: conf) and a mount point (/etc/nginx/conf.d), and created a service to run the container. However, every change that I make on the host server in /conf/default.conf is not reflected in the container's /etc/nginx/conf.d/default.conf.
I know there is docker run -v, but as I'm using ECR, the container is started by the ECS setup itself.
Any suggestion would be appreciated.
Just a suggestion: in your build system, copy the file to S3, and then in your Docker container run a script on startup that copies the latest file back from S3 into place. Then you won't have to rebuild/redeploy your Docker container constantly.
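A rough sketch of that idea for the nginx case (the bucket name is hypothetical, and the aws CLI is assumed to be available both in the build system and in the image):
# build/CI side: publish the latest config
aws s3 cp conf/default.conf s3://my-nginx-config-bucket/default.conf
# container startup script: fetch the config, then start nginx
aws s3 cp s3://my-nginx-config-bucket/default.conf /etc/nginx/conf.d/default.conf
exec nginx -g 'daemon off;'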
As far as I know, boto3 will try to load credentials from the instance metadata service.
If I am running this code inside an EC2 instance I would expect to have no problem. But when my code is dockerized, how will boto3 find the metadata service?
The Amazon ECS agent populates the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI environment variable, which can be used to get credentials. These special variables are provided only to the process with PID 1. The script specified in the Dockerfile ENTRYPOINT gets PID 1.
There are several networking modes, and details might differ between them. More information can be found in: How can I configure IAM task roles in Amazon ECS to avoid "Access Denied" errors?
For the awsvpc networking mode, if you ran printenv as PID 1 you would see something similar to this:
AWS_EXECUTION_ENV=AWS_ECS_FARGATE
AWS_CONTAINER_CREDENTIALS_RELATIVE_URI=/v2/credentials/0f891318-ab05-46fe-8fac-d5113a1c2ecd
HOSTNAME=ip-172-17-0-123.ap-south-1.compute.internal
AWS_DEFAULT_REGION=ap-south-1
AWS_REGION=ap-south-1
ECS_CONTAINER_METADATA_URI_V4=http://169.254.170.2/v4/2c9107c385e04a70b30d3cc4d4de97e7-527074092
ECS_CONTAINER_METADATA_URI=http://169.254.170.2/v3/2c9107c385e04a70b30d3cc4d4de97e7-527074092
It also gets tricky to debug things, since after SSH'ing into the container you are using a PID other than 1, meaning that services that need these variables to get credentials might fail to do so if you run them manually.
ECS task metadata endpoint documentation
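To check that the credential endpoint is reachable from inside the container, one quick test (assuming curl is available in the image) is:
# 169.254.170.2 is the ECS credential endpoint; the relative URI comes from the environment variable
curl "http://169.254.170.2$AWS_CONTAINER_CREDENTIALS_RELATIVE_URI"
It returns temporary credentials as JSON, which boto3 picks up automatically whenever AWS_CONTAINER_CREDENTIALS_RELATIVE_URI is set.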
Find the .aws folder at ~/.aws on your machine and copy it to the Docker container's /root folder.
The .aws folder contains files that hold your AWS access key and secret key.
You can easily copy it to a currently running container from your local machine with:
docker cp ~/.aws <container_id>:/root