Docker cloudstor:aws volume not migrating its contents between instances - amazon-web-services

We have an AWS EC2 cluster running Docker in Swarm mode.
We deploy our services with a compose file, and we have data-related services bound to volumes with persistent content, such as a database.
Since Docker Swarm handles service discovery and load balancing automatically, and we avoid instance affinity as a good practice, we need our volumes to migrate automatically along with our data-related services.
We tried to use the cloudstor:aws plugin to achieve that.
Our actual problem is that our volumes don't move along with our services when they get rescheduled.
This is the relevant configuration in our compose file:
version: '3.3'
volumes:
  db_data:
    driver: "cloudstor:aws"
    driver_opts:
      size: "20"
      ebstype: "gp2"
      backing: "relocatable"
services:
  db:
    image: postgres:9.3-alpine
    volumes:
      - db_data:/var/lib/postgresql/data
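For reference, the cloudstor documentation also describes an EFS-backed "shared" mode whose volumes are meant to be mountable from any node in the swarm. Below is a minimal sketch of that declaration; it assumes the swarm was created by Docker for AWS with EFS support enabled, and the option name should be verified against your plugin version:

volumes:
  db_data:
    driver: "cloudstor:aws"
    driver_opts:
      backing: "shared"   # EFS-backed, intended to be mountable from any swarm node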

Related

What is the purpose of this yml file?

I saw an example here: https://github.com/aws-samples/amazon-ecs-java-microservices/blob/master/2_ECS_Java_Spring_PetClinic_Microservices/spring-petclinic-rest-system/src/main/docker/spring-petclinic-rest-system.yml
version: '2'
services:
  spring-petclinic-rest-system:
    image: 730329488449.dkr.ecr.us-west-2.amazonaws.com/spring-petclinic-rest-system
    cpu_shares: 100
    mem_limit: 524288000
    ports:
      - "8091:8080"
What is the purpose of this yml file? Is it a docker-compose file?
This is a YAML file used to configure a Docker Compose application. It defines a single service called "spring-petclinic-rest-system" that uses an image stored in AWS Elastic Container Registry. The service is allocated 100 CPU shares and a memory limit of 524288000 bytes (500 MiB), and it maps port 8091 on the host to port 8080 in the container. The file is used to spin up the container for the Spring Petclinic application.

How to address another container in the same task definition in AWS ECS on Fargate?

I have an MQTT application which consists of a broker and multiple clients. The broker and each client run in their own container. Locally I am using Docker compose to set up my application:
services:
  broker:
    image: mqtt-broker:latest
    container_name: broker
    ports:
      - "1883:1883"
    networks:
      - engine-net
  db:
    image: database-client:latest
    container_name: vehicle-engine-db
    networks:
      - engine-net
    restart: on-failure
networks:
  engine-net:
    external: false
    name: engine-net
The application inside my clients is written in C++ and uses the Paho library. I use the async_client to connect to the broker. It takes two arguments, namely:
mqtt::async_client cli(server_address, client_id);
Here, server_address is the broker's IP plus port, and client_id is the "name" of the connecting client. With the compose file, I can simply use the service name given in the file to address the other containers on the network (here "broker:1883" does the trick). My containers work, and now I want to deploy to AWS Fargate.
In the task definition, I add my containers and give them names (the same names as the services in the Docker Compose file). However, the client does not seem to be able to connect to the broker, and the deployment fails. I am fairly sure it cannot connect because it cannot resolve the broker's IP.
AWS Fargate uses network mode awsvpc which - to my understanding - puts all containers of a task into the same VPC subnet. Therefore, automatic name resolution like in Docker compose would make sense to me.
Has anybody encountered the same problem? How can I resolve it?
Per the documentation, containers in the same Fargate task can address each other on 127.0.0.1 at the container's respective ports.
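In practice this means the address handed to the Paho client should point at localhost rather than the broker's service name when both containers run in the same Fargate task. A minimal sketch, assuming the client reads the broker address from a hypothetical BROKER_ADDRESS environment variable instead of hardcoding it:

services:
  broker:
    image: mqtt-broker:latest
    ports:
      - "1883:1883"
  db:
    image: database-client:latest
    environment:
      # Hypothetical variable; in the same Fargate task (awsvpc) sibling containers
      # are reachable on the loopback address.
      BROKER_ADDRESS: "tcp://127.0.0.1:1883"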

Docker Compose ECS Service fails when using a provided LoadBalancer

I am deploying a compose file to an AWS ECS context with the following docker-compose.yml:
x-aws-loadbalancer: "${LOADBALANCER_ARN}"
services:
  webapi:
    image: ${DOCKER_REGISTRY-}webapi
    build:
      context: .
      dockerfile: webapi/Dockerfile
    environment:
      ASPNETCORE_URLS: http://+:80
      ASPNETCORE_ENVIRONMENT: Development
    ports:
      - target: 80
        x-aws-protocol: http
When I create a load balancer using these instructions, it is assigned the default security group of the default VPC. That apparently doesn't match the ingress rules for the Docker services, because when I look at the task in ECS I see it being killed over and over for failing an ELB health check.
The only way to fix it is to go into the AWS Console and assign the security group that Docker Compose created to represent the default network to the load balancer. But that's insane.
How do I create a load balancer with the correct minimum-access security group so it will be able to talk to the Compose-generated services created later?
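One direction worth checking: the Compose ECS integration documents mapping a compose network onto an existing security group, so you could create a security group that allows the health-check traffic, attach it to the load balancer, and reference it from the compose file. A rough sketch, assuming that external-network mapping behaves as documented (the security group ID is a placeholder):

x-aws-loadbalancer: "${LOADBALANCER_ARN}"
networks:
  default:
    external: true
    name: sg-0123456789abcdef0   # existing security group shared by the load balancer and the service
services:
  webapi:
    image: ${DOCKER_REGISTRY-}webapi
    ports:
      - target: 80
        x-aws-protocol: http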

Run multiple task definitions using docker-compose.yml and ecs-params.yml files in AWS ECS with different launch types and volume mounting

I have 4 Docker images that I want to run on ECS. On my local system I use a docker-compose file with multiple services.
I want to do something similar with Docker Compose on ECS.
I want my database image to run on EC2 and the rest on Fargate, host the database volume on EC2, and make sure each container can communicate with the others by name.
How do I configure my docker-compose.yml and ecs-params.yml files?
My sample docker-compose.yml file:
version: '2.1'
services:
  first:
    image: first:latest
    ports:
      - "2222:2222"
    depends_on:
      database:
        condition: service_healthy
  second:
    image: second:latest
    ports:
      - "8090:8090"
    depends_on:
      database:
        condition: service_healthy
  third:
    image: third:latest
    ports:
      - "3333:3333"
  database:
    image: database
    environment:
      MYSQL_ROOT_PASSWORD: abcde
      MYSQL_PASSWORD: abcde
      MYSQL_USER: user
    ports:
      - "3306:3306"
    volumes:
      - ./datadir/mysql:/var/lib/mysql
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      timeout: 5s
      retries: 5
I don't see how you connect the containers to each other. depends_on only tells Docker Compose the order in which to start them. You may have the actual connection details hardcoded inside the containers; that's not good.
With the Compose file you shared, containers can reach each other using their service names as aliases. For example, the third container can use the database hostname to reach the database container. So if such names are hardcoded in your containers, it will work. Usually, however, people configure connection points (URLs) as environment variables in the Compose file, so that nothing is hardcoded.
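For example, the connection details could be injected through the Compose file instead of being baked into the images. The variable names below are illustrative only; the hostname database works because Compose resolves service names on the shared network:

services:
  third:
    image: third:latest
    environment:
      # Illustrative variable names; "database" resolves to the database service.
      DB_HOST: database
      DB_PORT: "3306"
      DB_USER: user
      DB_PASSWORD: abcde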
Hosting the database volume on EC2 can be a bad idea.
EC2 instances have two kinds of storage: EBS volumes and instance store. Instance store is ephemeral and is destroyed when the instance is terminated, whereas data on EBS volumes is preserved. So you would either use EBS storage (rather than the instance's local storage) or S3, which is not suitable for your need here.
Hosting a database in a container is a very bad idea; you will find the same advice in the descriptions of many database images on Docker Hub. Instead, you can use MySQL as a managed service via AWS RDS.
The problem you have right now has nothing to do with AWS or ECS. Once you have Docker Compose running fine locally, you will get the same behavior on the ECS side. You can see an example of configuration via a Compose file here.
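Since the question also asks about ecs-params.yml, here is a minimal sketch of that file with key names taken from the ecs-cli documentation as best I recall them; the subnet and security-group IDs are placeholders, so double-check the structure against the current docs:

version: 1
task_definition:
  task_execution_role: ecsTaskExecutionRole
  ecs_network_mode: awsvpc
  task_size:
    cpu_limit: 512
    mem_limit: 1024
run_params:
  network_configuration:
    awsvpc_configuration:
      subnets:
        - subnet-0123456789abcdef0   # placeholder
      security_groups:
        - sg-0123456789abcdef0       # placeholder
      assign_public_ip: ENABLED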

A service in ECS fails to find another service within the same network

I have a very simple docker-compose setup for Locust (a Python package for load testing). It starts a 'master' service and a 'slave' service. Everything works perfectly locally, but when I deploy it to AWS ECS the 'slave' can't find the master.
services:
  my-master:
    image: chapkovski/locust
    ports:
      - "80:80"
    env_file:
      - .env
    environment:
      LOCUST_MODE: master
  my-slave:
    image: chapkovski/locust
    env_file:
      - .env
    environment:
      LOCUST_MODE: slave
      LOCUST_MASTER_HOST: my-master
So apparently, when running on ECS, the my-slave service cannot reach the master by referring to my-master. What's wrong here?
Everything works perfectly locally but when I deploy it to AWS ECS a 'slave' can't find a master.
I assume the slave needs to access the master; both must be in the same task definition to address each other like this, or you can explore service discovery.
"links": [
"master"
]
links
Type: string array
Required: no
The links parameter allows containers to communicate with each other without the need for port mappings. It is only supported if the network mode of a task definition is set to bridge. The name:internalName construct is analogous to name:alias in Docker links.
Note
This parameter is not supported for Windows containers or tasks using the awsvpc network mode.
Important
Containers that are collocated on a single container instance may be able to communicate with each other without requiring links or host port mappings. Network isolation is achieved on the container instance using security groups and VPC settings.
"links": ["name:internalName", ...]
(Reference: ECS Task Definition Parameters, container definition network settings.)