Service discovery within AWS ECS

I currently have a Redis, Postgres and a few Golang containers in a project of mine. I've got it all working locally on my machine using docker-compose.
redis:
  container_name: redis
  build:
    context: .
    dockerfile: redis/Dockerfile
  ports:
    - 6379:6379
  networks:
    - my-network
This lets my Golang microservice connect to the Redis container by its container name:
&redis.Pool{
    Dial: func() (redis.Conn, error) {
        return redis.Dial("tcp", "redis:6379")
    },
This all works perfectly. However, I want to push these images to ECR and run the containers on ECS, and I'm a bit confused as to how to identify my services and communicate with them in AWS. If I set the namespace to, say, example and the service discovery name to redis_service within the ECS service, is it as simple as using:
&redis.Pool{
    Dial: func() (redis.Conn, error) {
        return redis.Dial("tcp", "example.redis_service:6379")
    },
Any help would be appreciated!

After linking your containers, you will be able to connect them in the same way as when setting this up with docker-compose. It is described in the AWS documentation for Dockerrun.aws.json v2:
links: List of containers to link to. Linked containers can discover each other and communicate securely.
That is, if you are using Multicontainer Docker.
For more advanced/manual usage of ECS, you may also be interested in this blog post.
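If you use ECS Service Discovery (backed by AWS Cloud Map) rather than container links, the DNS record it creates is usually <service-discovery-name>.<namespace>, so with namespace example and discovery name redis_service the hostname would be redis_service.example rather than example.redis_service. A minimal sketch, assuming the redigo client the question appears to use and those two names:

package main

import (
    "log"
    "time"

    "github.com/gomodule/redigo/redis"
)

// newPool dials Redis through the ECS Service Discovery DNS name.
// "redis_service.example" assumes the Cloud Map namespace "example" and
// the service discovery name "redis_service" from the question.
func newPool() *redis.Pool {
    return &redis.Pool{
        MaxIdle:     3,
        IdleTimeout: 240 * time.Second,
        Dial: func() (redis.Conn, error) {
            return redis.Dial("tcp", "redis_service.example:6379")
        },
    }
}

func main() {
    conn := newPool().Get()
    defer conn.Close()

    // Quick connectivity check against the discovered endpoint.
    if _, err := conn.Do("PING"); err != nil {
        log.Fatalf("redis ping failed: %v", err)
    }
    log.Println("connected via service discovery")
}

Both tasks need to run in the same VPC, and the Redis task's security group must allow inbound traffic on 6379 from the Go service's security group for the lookup to be of any use.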

Related

How can I mount a configuration file and other files on AWS Fargate

I am trying to run Telegraf as a Docker container on AWS Fargate.
I created the Telegraf image using a Dockerfile, built it, and pushed it to ECR.
Now I am trying to run this image on AWS Fargate.
The main challenge I am facing is how to mount the configuration file (telegraf.conf) that the container requires in order to run.
I tried following this blog, https://kichik.com/2020/09/10/mounting-configuration-files-in-fargate/, by spinning up two containers, but I have more files that I am passing in along with telegraf.conf.
Fargate provides two options to mount files: bind mounts and EFS. I am trying to use a bind mount, but I am not sure how to provide the configuration files or mount them.
Below is how I run the Telegraf container using docker-compose.
telegraf1:
  image: telegraf:1.20.0
  container_name: telegraf
  restart: always
  depends_on:
    - influxdb
  networks:
    - analytics
  volumes:
    - /mnt/telegraf/:/var/lib/telegraf
    - ./etc/telegraf/:/etc/telegraf/
  env_file:
    - secrets.env
  environment:
    INFLUXDB_URL: http://influxdb:8086
  command:
    --config-directory /etc/telegraf/telegraf.d
    --config /etc/telegraf/telegraf.conf
  links:
    - influxdb
Now I want to achieve the same using AWS Fargate, but I am not sure how to provide the volume mounts there.
Bind mount on Fargate is good for sharing a folder between multiple containers in a single task, but I'm not aware of any way to load external configuration files in Fargate bind mounts, other than running a sidecar container to download those from S3 on task startup.
I generally see EFS used for mounting a folder with configuration files in Fargate.
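If you do go the sidecar route mentioned above, the idea is a small helper container that copies telegraf.conf from S3 into the bind-mounted volume before Telegraf starts. A rough sketch in Go using the AWS SDK v2; the bucket name, object key, and target path are placeholders, not anything from the question:

package main

import (
    "context"
    "io"
    "log"
    "os"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func main() {
    ctx := context.Background()

    // On Fargate the task role credentials are picked up automatically.
    cfg, err := config.LoadDefaultConfig(ctx)
    if err != nil {
        log.Fatalf("load AWS config: %v", err)
    }
    client := s3.NewFromConfig(cfg)

    // Placeholder bucket/key; the target path is the bind mount shared
    // with the telegraf container.
    out, err := client.GetObject(ctx, &s3.GetObjectInput{
        Bucket: aws.String("my-config-bucket"),
        Key:    aws.String("telegraf/telegraf.conf"),
    })
    if err != nil {
        log.Fatalf("download telegraf.conf: %v", err)
    }
    defer out.Body.Close()

    f, err := os.Create("/etc/telegraf/telegraf.conf")
    if err != nil {
        log.Fatalf("create file: %v", err)
    }
    defer f.Close()

    if _, err := io.Copy(f, out.Body); err != nil {
        log.Fatalf("write file: %v", err)
    }
}

In the task definition, the telegraf container can then declare a container dependency on this helper completing successfully, so the file is in place before Telegraf reads it.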

Issues with X-Ray as a sidecar container in AWS ECS with docker-compose

I'm trying to deploy X-Ray as a sidecar container alongside my main container in AWS ECS Fargate using docker-compose, but it creates two tasks (service and X-Ray) instead of one task containing both the service and the X-Ray daemon.
I have done this in the past without issues using CloudFormation, but I cannot make it work with docker-compose.
This is my docker-compose file:
version: "3.9"
services:
web:
image: link-to-private-repo/web
ports: ["80:80"]
xray:
image: amazon/aws-xray-daemon
ports:
- 2000:2000/udp
Thanks.
This is not possible today with the current Docker Compose out-of-the-box experience. This need is tracked in this GitHub issue; please weigh in on the issue with your use case.

Deploy Applications on Amazon ECS Using docker compose

I'm trying to deploy a docker container with multiple services to ECS. I've been following this article which looks great: https://aws.amazon.com/blogs/containers/deploy-applications-on-amazon-ecs-using-docker-compose/
I can get my container to run locally, and I can connect to the ECS context using the AWS CLI; however in the basic example from the article when I run
docker compose up
In order to deploy the image to ECS, I get the error:
pull access denied, repository does not exist or may require authorization: server message: insufficient_scope: authorization failed
Can't seem to make heads or tails of this. My Docker CLI is logged in to ECR using
aws ecr get-login-password --region region | docker login --username AWS --password-stdin aws_account_id.dkr.ecr.region.amazonaws.com
The default IAM user on my aws CLI has AmazonECS_FullAccess as well as "ecs:ListAccountSettings" and "cloudformation:ListStackResources"
I read in mikemaccana's answer to "pull access denied repository does not exist or may require docker login" that after Nov 2020 authentication may be required in your YAML file to allow AWS to pull from hub.docker.io (e.g. giving AWS your Docker Hub username and password), but I can't get the 'auth' syntax to work in my YAML file. This is my YAML file that runs Tomcat and MariaDB locally:
version: "2"
services:
database:
build:
context: ./tba-database
image: tba-database
# set default mysql root password, change as needed
environment:
MYSQL_ROOT_PASSWORD: password
# Expose port 3306 to host. Not for the application but
# handy to inspect the database from the host machine.
ports:
- "3306:3306"
restart: always
webserver:
build:
context: ./tba-webserver
image: tba-webserver
# mount point for application in tomcat
volumes:
- ./target/testPROJ:/usr/local/tomcat/webapps/ROOT
links:
- database:tba-database
# open ports for tomcat and remote debugging
ports:
- "8080:8080"
- "8000:8000"
restart: always
Author of the blog here (thanks for the kind comment!). I haven't played much with the build side of things, but I suspect what's happening here is that when you run docker compose up we ignore the build phase and only leverage the image field. What happens next is that the containers being deployed on ECS/Fargate try to pull the image tba-database (which is what the deployment seems to be complaining about, because it doesn't exist in a registry). You need extra steps to push your images to either GH or ECR before you can bring them to life using docker compose up in the ecs context.
You also probably need to change the compose version ("2" is very old).

Fargate with Docker compose Links

We have an application that uses docker-compose, and the compose file contains links.
I'm trying to deploy this to Amazon Fargate using ecs-cli with this command:
ecs-cli compose --project-name myApp --file docker-compose-aws.yml --ecs-params fargate-ecs-params.yml --cluster myCluster --region us-east-1 up --launch-type FARGATE
When my fargate-ecs-params.yml has ecs_network_mode: awsvpc I get the error:
Links are not supported when networkMode=awsvpc
So I've tried changing ecs_network_mode to something other than awsvpc; however, I then get the error:
Fargate only supports network mode 'awsvpc'
My question is: how do I create a task definition for Fargate with a compose file that contains links? Or is this not possible (and if so, what are my alternatives)?
You can place both containers in the same task definition; they will automatically be able to reach each other (with awsvpc networking they share a network namespace, so they can talk over localhost).
After reading your final comment on the boot sequence and answering that question instead, I solved this (even outside AWS) using docker-compose's depends_on.
A simple example:
services:
  web:
    depends_on:
      - "web_db"
  web_db:
    image: mongo:3.6
    container_name: my_mongodb
You should be able to remove the deprecated links and just use the hostnames that Docker creates from the service/container names. E.g. above, the web service would connect to the hostname "my_mongodb".
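As an illustration in Go (the language of the main question), a client in the web service could connect by that hostname. This is a rough sketch using the official MongoDB driver; the hostname comes from the compose file above, everything else is placeholder. On Fargate with awsvpc networking, containers in the same task would use localhost instead:

package main

import (
    "context"
    "log"
    "time"

    "go.mongodb.org/mongo-driver/mongo"
    "go.mongodb.org/mongo-driver/mongo/options"
    "go.mongodb.org/mongo-driver/mongo/readpref"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    // "my_mongodb" is the container name from the compose file above.
    // In a single Fargate task this would become "localhost:27017".
    client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://my_mongodb:27017"))
    if err != nil {
        log.Fatalf("connect: %v", err)
    }
    defer client.Disconnect(ctx)

    // Verify the hostname actually resolves and the server responds.
    if err := client.Ping(ctx, readpref.Primary()); err != nil {
        log.Fatalf("ping: %v", err)
    }
    log.Println("connected to MongoDB")
}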

Is there a way to access Google Cloud SQL via a proxy inside a Docker container

I have multiple Docker machines (dev, staging) running on Google Compute Engine, which host Django servers (these need access to Google Cloud SQL). I have multiple Google Cloud SQL instances running, and each instance is used by the respective Docker machine on my Google Compute Engine instance.
Currently I'm accessing Cloud SQL by whitelisting my Compute Engine IP, but I don't want to use IPs for obvious reasons, i.e., I don't use a static IP for my dev machines.
Now I want to use the Cloud SQL Proxy to gain access, but how do I do that? GCP gives multiple ways to access Cloud SQL instances, but none of them fit my use case:
There is this option, https://cloud.google.com/sql/docs/mysql/connect-compute-engine, but it
only gives my Compute Engine instance access to the SQL instance, which I have to access from within Docker.
It also doesn't let me proxy multiple SQL instances on the same Compute Engine machine; I was hoping to run the proxy inside Docker if possible.
So, how do I gain access to Cloud SQL from inside Docker? If docker-compose is a better way to start, how easy is it to implement for Kubernetes (I use Google Container Engine for production)?
I was able to figure out how to use cloudsql-proxy in my local Docker environment by using docker-compose. You will need to pull down your Cloud SQL instance credentials and have them ready. I keep them in my project root as credentials.json and add it to my .gitignore in the project.
The key part I found was using =tcp:0.0.0.0:5432 after the GCP instance ID so that the port can be forwarded. Then, in your application, use cloudsql-proxy instead of localhost as the hostname. Make sure the rest of your DB creds are valid in your application secrets so that it can connect through the local proxy supplied by the cloudsql-proxy container.
Note: Keep in mind I'm writing a tomcat java application and my docker-compose.yml reflects that.
docker-compose.yml:
version: '3'
services:
  cloudsql-proxy:
    container_name: cloudsql-proxy
    image: gcr.io/cloudsql-docker/gce-proxy:1.11
    command: /cloud_sql_proxy --dir=/cloudsql -instances=<YOUR INSTANCE ID HERE>=tcp:0.0.0.0:5432 -credential_file=/secrets/cloudsql/credentials.json
    ports:
      - 5432:5432
    volumes:
      - ./credentials.json:/secrets/cloudsql/credentials.json
    restart: always

  tomcatapp-api:
    container_name: tomcatapp-api
    build: .
    volumes:
      - ./build/libs:/usr/local/tomcat/webapps
    ports:
      - 8080:8080
      - 8000:8000
    env_file:
      - ./secrets.env
    restart: always
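To show what "use cloudsql-proxy instead of localhost as the hostname" looks like from application code, here is a rough sketch in Go (the app in this answer is Tomcat/Java, so this is only illustrative); the environment variable names are placeholders for whatever your secrets file provides:

package main

import (
    "database/sql"
    "fmt"
    "log"
    "os"

    _ "github.com/lib/pq" // Postgres driver
)

func main() {
    // "cloudsql-proxy" is the compose service name above; the proxy
    // forwards this connection on to the Cloud SQL instance.
    dsn := fmt.Sprintf(
        "host=cloudsql-proxy port=5432 user=%s password=%s dbname=%s sslmode=disable",
        os.Getenv("DB_USER"), os.Getenv("DB_PASSWORD"), os.Getenv("DB_NAME"),
    )

    db, err := sql.Open("postgres", dsn)
    if err != nil {
        log.Fatalf("open: %v", err)
    }
    defer db.Close()

    // Ping forces a real connection through the proxy container.
    if err := db.Ping(); err != nil {
        log.Fatalf("ping: %v", err)
    }
    log.Println("connected through cloudsql-proxy")
}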
You can refer to the Google documentation here:
https://cloud.google.com/sql/docs/postgres/connect-admin-proxy#connecting-docker
That will show you how to run the proxy in a container. Then you can use docker-compose as per the answer @Dan suggested here: https://stackoverflow.com/a/48431559/14305096
docker run -d \
-v PATH_TO_KEY_FILE:/config \
-p 127.0.0.1:5432:5432 \
gcr.io/cloudsql-docker/gce-proxy:1.19.1 /cloud_sql_proxy \
-instances=INSTANCE_CONNECTION_NAME=tcp:0.0.0.0:5432 \
-credential_file=/config
For macOS users, you can use the following as POSTGRES_HOST:
host.docker.internal
For example:
DATABASES = {
    "default": {
        "ENGINE": "django.db.backends.postgresql",
        "NAME": "<DB-NAME>",
        "HOST": "host.docker.internal",
        "PORT": "<YOUR-PORT>",
        "USER": "<DB-USER>",
        "PASSWORD": "<DB-USER-PASSWORD>",
    },
}
Connections to this hostname from inside the container are forwarded to your host machine's localhost.