What is the purpose of this yml file? - amazon-web-services

I saw an example here: https://github.com/aws-samples/amazon-ecs-java-microservices/blob/master/2_ECS_Java_Spring_PetClinic_Microservices/spring-petclinic-rest-system/src/main/docker/spring-petclinic-rest-system.yml
version: '2'
services:
  spring-petclinic-rest-system:
    image: 730329488449.dkr.ecr.us-west-2.amazonaws.com/spring-petclinic-rest-system
    cpu_shares: 100
    mem_limit: 524288000
    ports:
      - "8091:8080"
What is the purpose of this yml file? Is it a docker-compose file?

Yes, this is a Docker Compose file (format version 2). It defines a single service called "spring-petclinic-rest-system" that uses an image stored in Amazon Elastic Container Registry (ECR). The service is allocated 100 CPU shares and a memory limit of 524288000 bytes (about 500 MB), and it maps port 8091 on the host to port 8080 in the container. This file is used to spin up the container with the Spring PetClinic application.
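For reference, roughly the same service expressed in the newer Compose version 3 syntax might look like the sketch below; the deploy.resources block is an approximate translation of cpu_shares/mem_limit and is not part of the original sample:

version: '3.8'
services:
  spring-petclinic-rest-system:
    image: 730329488449.dkr.ecr.us-west-2.amazonaws.com/spring-petclinic-rest-system
    ports:
      - "8091:8080"        # host 8091 -> container 8080, same mapping as the v2 file
    deploy:
      resources:
        limits:
          memory: 500M     # roughly the 524288000 bytes from mem_limit
          cpus: "0.1"      # approximate stand-in for cpu_shares: 100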

Related

Set ports on a Docker compose file for Amazon ECS

I am following this tutorial to deploy my app to AWS using docker compose.
If I use docker compose up I get this error:
published port can't be set to a distinct value than container port: incompatible attribute
This is the docker-compose.yml:
version: "3"
services:
www:
image: my_image_path:latest
ports:
- "8001:80"
volumes:
- ./www:/var/www/html/
links:
- db
networks:
- default
phpmyadmin:
image: phpmyadmin/phpmyadmin:4.8
links:
- db:db
ports:
- 8000:80
environment:
MYSQL_USER: user
MYSQL_PASSWORD: test
MYSQL_ROOT_PASSWORD: test
I have two services listening on port 80 inside their containers, so I cannot just use 80:80 for both of them.
Any ideas?
You need to change one of your Docker images to listen on another port. Docker Compose (via the ECS integration) deploys to AWS Fargate, and there are some restrictions in Fargate that prevent your configuration from working:
Multiple containers in a single Fargate task have to listen on distinct ports.
The published port can't be different from the port the container is listening on. If you need to change the port clients connect to, that can be done in the ALB/Target Group settings instead of the container settings.
Since one of your images is phpmyadmin, I suggest simply adding the environment variable APACHE_PORT: 8000 to that container, which changes the port the Apache web server inside it listens on to 8000. Then you can set the port mapping on that container to 8000:8000 and the port mapping on your www container to 80:80.
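Assuming APACHE_PORT behaves as described above, the adjusted compose file might look roughly like this (a sketch only; the links and networks entries are omitted for brevity):

version: "3"
services:
  www:
    image: my_image_path:latest
    ports:
      - "80:80"            # published port matches the container port
    volumes:
      - ./www:/var/www/html/
  phpmyadmin:
    image: phpmyadmin/phpmyadmin:4.8
    ports:
      - "8000:8000"        # published port matches the container port
    environment:
      APACHE_PORT: 8000    # make Apache inside the container listen on 8000
      MYSQL_USER: user
      MYSQL_PASSWORD: test
      MYSQL_ROOT_PASSWORD: test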

How to address another container in the same task definition in AWS ECS on Fargate?

I have an MQTT application which consists of a broker and multiple clients. The broker and each client run in their own container. Locally I am using Docker compose to set up my application:
services:
  broker:
    image: mqtt-broker:latest
    container_name: broker
    ports:
      - "1883:1883"
    networks:
      - engine-net
  db:
    image: database-client:latest
    container_name: vehicle-engine-db
    networks:
      - engine-net
    restart: on-failure
networks:
  engine-net:
    external: false
    name: engine-net
The application inside my clients is written in C++ and uses the Paho library. I use the async_client to connect to the broker. It takes two arguments, namely:
mqtt::async_client cli(server_address, client_id);
Here, server_address is the broker's IP address plus port, and client_id is the "name" of the connecting client. While using the compose file, I can simply use the service name from the file to address the other containers in the network (here "broker:1883" does the trick). My containers work, and now I want to deploy to AWS Fargate.
In the task definition, I add my containers and give them names (the same names as the services in the Docker Compose file). However, the client does not seem to be able to connect to the broker, as the deployment fails. I am quite sure that it cannot connect because it cannot resolve the broker's IP.
AWS Fargate uses network mode awsvpc which - to my understanding - puts all containers of a task into the same VPC subnet. Therefore, automatic name resolution like in Docker compose would make sense to me.
Has anybody encountered the same problem? How can I resolve it?
Per the documentation, containers in the same Fargate task can address each other on 127.0.0.1 at the container's respective ports.
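For example, if the broker address were supplied to the client through an environment variable (a hypothetical BROKER_ADDRESS, not part of the original setup), the local compose file could use the service name while the Fargate task definition uses localhost; a minimal sketch:

# Local Docker Compose: the service name resolves via Compose's network DNS.
services:
  broker:
    image: mqtt-broker:latest
    ports:
      - "1883:1883"
  db:
    image: database-client:latest
    environment:
      BROKER_ADDRESS: "tcp://broker:1883"   # hypothetical variable read by the Paho client
# In the Fargate task definition (awsvpc mode) the containers share the task's
# network namespace, so the same variable would be set to "tcp://127.0.0.1:1883".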

Run multiple task definitions using docker-compose.yml and ecs-params.yml files in AWS ECS with different launch types and volume mounting

I have 4 Docker images which I want to run on ECS. For my local system I use a docker-compose file with multiple services.
I want to do something similar with Docker Compose on ECS.
I want my database image to run on EC2 and the rest on Fargate, host the database volume on EC2, and make sure each container can communicate with the others using their names.
How do I configure my docker-compose.yml and ecs-params.yml files?
My sample docker-compose.yml file
version: '2.1'
services:
  first:
    image: first:latest
    ports:
      - "2222:2222"
    depends_on:
      database:
        condition: service_healthy
  second:
    image: second:latest
    ports:
      - "8090:8090"
    depends_on:
      database:
        condition: service_healthy
  third:
    image: third:latest
    ports:
      - "3333:3333"
  database:
    image: database
    environment:
      MYSQL_ROOT_PASSWORD: abcde
      MYSQL_PASSWORD: abcde
      MYSQL_USER: user
    ports:
      - "3306:3306"
    volumes:
      - ./datadir/mysql:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 5s
      retries: 5
I don't see how you connect the containers to each other. depends_on just tells Docker Compose the order to use when starting containers. You may have the actual connections hardcoded inside the containers, and that's not good.
With the Docker Compose file you shared, containers can reach each other using their service names from the Compose file. For example, the third container can use the database domain name to reach the database container. So if you have such names hardcoded in your containers, it will work. However, people usually configure connection points (URLs) as environment variables in the Docker Compose file; in that case nothing is hardcoded, as sketched below.
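A minimal sketch of that environment-variable approach, assuming the containers read a DATABASE_HOST / DATABASE_PORT pair (variable names invented here for illustration):

services:
  third:
    image: third:latest
    environment:
      DATABASE_HOST: database   # the Compose service name doubles as the hostname
      DATABASE_PORT: "3306"
  database:
    image: database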
Hosting the DB volume on EC2 can be a bad idea.
EC2 has two kinds of storage: EBS volumes and instance store. Instance store is destroyed when the EC2 instance is terminated, while data on EBS volumes is always preserved.
So you would either use EBS storage (and not instance storage) or S3, which is not suitable for your need here.
Hosting a DB in a container is a very bad idea.
You can find the same advice in the descriptions of many DB images on Docker Hub.
Instead you can use MySQL as a managed service via AWS RDS.
The problem you have right now has nothing to do with AWS or ECS.
Once you have Docker Compose running fine locally, you will get the same behavior on the ECS side.
You can see an example of configuration via a Compose file here.
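For the ecs-params.yml part of the question, the ECS CLI reads that file alongside the Compose file; a rough sketch of the kind of settings it carries is below (the subnet and security-group IDs are placeholders, and the exact keys should be checked against the ECS CLI documentation):

version: 1
task_definition:
  ecs_network_mode: awsvpc
  task_size:
    cpu_limit: 512
    mem_limit: 1GB
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - "subnet-xxxxxxxx"      # placeholder subnet ID
      security_groups:
        - "sg-xxxxxxxx"          # placeholder security group ID
      assign_public_ip: ENABLED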

a service in ecs fails to find another service within the same network

I have a very simple docker-compose for locust (python package for load testing). It starts a 'master' service and a 'slave' service. Everything works perfectly locally but when I deploy it to AWS ECS a 'slave' can't find a master.
services:
  my-master:
    image: chapkovski/locust
    ports:
      - "80:80"
    env_file:
      - .env
    environment:
      LOCUST_MODE: master
  my-slave:
    image: chapkovski/locust
    env_file:
      - .env
    environment:
      LOCUST_MODE: slave
      LOCUST_MASTER_HOST: my-master
So apparently, on ECS, the my-slave service cannot reach the master by referring to my-master. What's wrong here?
Everything works perfectly locally but when I deploy it to AWS ECS
a 'slave' can't find a master.
I assume the slave needs to access the master; for it to be addressed by name like this, both must be in the same task definition, or you can explore service discovery. With both containers in one task definition you can use links:
"links": [
"master"
]
links
Type: string array
Required: no
The link parameter allows containers to communicate with each other
without the need for port mappings. Only supported if the network mode
of a task definition is set to bridge. The name:internalName construct
is analogous to name:alias in Docker links.
Note
This parameter is not supported for Windows containers or tasks using the awsvpc network mode.
Important
Containers that are collocated on a single container instance may be
able to communicate with each other without requiring links or host
port mappings. Network isolation is achieved on the container instance
using security groups and VPC settings.
"links": ["name:internalName", ...]
container_definition_network
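Alternatively, if both Locust containers end up in the same Fargate task (awsvpc mode, where links are not supported), the localhost approach from the earlier answer could apply, with the slave pointing at 127.0.0.1 instead of the service name; a sketch:

services:
  my-master:
    image: chapkovski/locust
    ports:
      - "80:80"
    environment:
      LOCUST_MODE: master
  my-slave:
    image: chapkovski/locust
    environment:
      LOCUST_MODE: slave
      LOCUST_MASTER_HOST: 127.0.0.1   # containers in one awsvpc task share localhost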

Docker cloudstor:aws volume not migrating its contents between instances

We have an AWS EC2 cluster running Docker in Swarm mode.
We are deploying our services using a compose file, and we have data-related services bound to volumes with persistent content, such as a database.
Since Docker Swarm does service discovery and load balancing automatically, and we tend to avoid instance affinity as a good practice, we need our volumes to migrate automatically along with our data-related services.
We tried to use cloudstor:aws plugin to achieve that.
Our actual problem is that our volumes don't move along with our services when they get reallocated.
This is our configuration on our compose file:
version: '3.3'
volumes:
  db_data:
    driver: "cloudstor:aws"
    driver_opts:
      size: "20"
      ebstype: "gp2"
      backing: "relocatable"
services:
  db:
    image: postgres:9.3-alpine
    volumes:
      - db_data:/var/lib/postgresql/data