Cross-communication between Docker containers in AWS Elastic Beanstalk

Is there any way to have bidirectional communication between Docker containers on AWS Elastic Beanstalk?
The stack I'm trying to get working is pretty standard: Varnish -> Nginx -> PHP-FPM.
I am using the links setting so that nginx can resolve the hostname "php-app", and that part works. However, the "php-app" container also needs to resolve the hostname "varnish" so that it can send PURGE requests for cache invalidation.
Right now, only this communication path works:
[varnish:80] -> [nginx:8080] -> [php-app]
However, this is what should work:
[varnish:80] -> [nginx:8080] -> [php-app] ---PURGE---> [varnish:80]
The php-app container really only needs to know the IP of the varnish container, but that seems to be impossible.
I know I can get the varnish container's IP from the host, but I want to do the same from inside the php-app container:
# On the Docker host: grab the varnish container's ID, then ask Docker for its IP
VARNISH_HASH=$(docker ps | grep varnish | awk '{print $1}')
VARNISH_IP=$(docker inspect --format '{{ .NetworkSettings.IPAddress }}' "$VARNISH_HASH")
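A possible workaround (an untested sketch, assuming the default docker0 bridge and that curl and iproute2 are available in the php-app image): since varnish publishes hostPort 80, PURGE requests could be sent to the Docker host's bridge address instead of a linked hostname. The container's default gateway is the host:

# Inside the php-app container: the default route's gateway is the Docker
# host, and varnish is reachable there on its published hostPort 80.
VARNISH_HOST=$(ip route | awk '/^default/ {print $3}')
# The Host header and path are placeholders for whatever your VCL expects.
curl -X PURGE -H 'Host: www.example.com' "http://${VARNISH_HOST}/path/to/invalidate"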
I also tried adding links to the php-app container, but that resulted in errors when deploying; I guess it's because that creates a circular dependency:
"links": [
"varnish"
]
My relevant Dockerrun.aws.json (container definition file) looks like this:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    .....
  ],
  "containerDefinitions": [
    {
      "name": "nginx-proxy",
      "image": "nginx",
      "essential": true,
      "memory": 128,
      "links": [
        "php-app"
      ],
      "portMappings": [
        {
          "hostPort": 8080,
          "containerPort": 8080
        }
      ],
      "environment": [
        {
          "name": "NGINX_PORT",
          "value": "8080"
        }
      ],
      "mountPoints": [ .... ]
    },
    {
      "name": "varnish",
      "hostname": "varnish",
      "image": "newsdev/varnish:4.1.0",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "links": [
        "nginx-proxy",
        "php-app"
      ],
      "mountPoints": [ .... ]
    },
    {
      "name": "php-app",
      "image": "peec/magento2-php-fpm-aws",
      "essential": true,
      "memory": 1024,
      "environment": [
      ],
      "mountPoints": [ .... ]
    }
  ]
}

Related

Communicate between docker containers in a docker network in AWS Beanstalk

I'm using AWS Elastic Beanstalk to deploy my project in 'Multi-container Docker running on 64bit Amazon Linux'.
Here's my Dockerrun.aws.json, as per the documentation:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "child",
      "image": "nithinsgowda/child",
      "essential": true,
      "memory": 256,
      "portMappings": [
        {
          "hostPort": 9000,
          "containerPort": 9000
        }
      ],
      "links": [
        "master"
      ]
    },
    {
      "name": "master",
      "image": "nithinsgowda/master",
      "essential": true,
      "memory": 512,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ],
      "links": [
        "child"
      ]
    }
  ]
}
I can access my master container at port 80 from the public internet.
Inside my master container, I need to make an API call to the child container.
I have tried the options below; none of them worked:
fetch('http://child/api')
fetch('http://child:9000/api')
fetch('http://15.14.13.12:9000/api') //My public DNS for the beanstalk application (Example)
In a local docker-compose environment, 'http://child/api' works perfectly fine, but it doesn't work on Beanstalk.
How do I communicate with the child container from my master container?
I have even tried the bindIP attribute, assigned a local IP, and tried accessing it via that local IP; it still doesn't work.
When I looked into the server logs, docker ps had been executed by the environment: both containers were up and running, and the port mappings were displayed correctly.
Here's what you need to specify in Dockerrun.aws.json:
"containerDefinitions": [
{
"name": "child",
"image": "nithinsgowda/child",
"essential": true,
"memory": 256,
"portMappings": [
{
"hostPort": 9000,
"containerPort": 9000
}
],
"links": [
"master"
],
"environment": [
{
"name": "Container",
"value": "child"
}
]
},
The environment variable named Container will be the name given to your container inside the network.
"environment": [
{
"name": "Container",
"value": "child" //Any custom name accepted
}
]
Hence, after specifying the environment variable, I can now access the child container with fetch('http://child:9000/api').
Here's the official AWS documentation page covering the above: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html#create_deploy_docker_v2config_dockerrun
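As a quick sanity check (a sketch: fill in the container ID from docker ps, and getent assumes a glibc-based image), you can verify the name resolves from inside the master container before touching application code:

# Run on the EC2 host; replace <master-container-id> with the ID from `docker ps`.
# getent confirms "child" resolves; curl confirms the API actually answers.
docker exec -it <master-container-id> sh -c 'getent hosts child && curl -s http://child:9000/api'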

How to deploy multi-container on Elastic Beanstalk (AWS)?

I tried to deploy this app, which consists of a Flask API and a MongoDB database whose data directory is mounted to a volume.
What am I doing wrong? I tried to upload the Dockerrun.aws.json file to Beanstalk, but I keep getting this error:
[Instance: i-0f9dd8d8d30059929] Command failed on instance. An unexpected error has occurred [ErrorCode: 0000000001].
This is my Dockerrun.aws.json file:
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"essential": true,
"image": "nielshoogeveen1990/image-classifier:latest",
"links": [
"db"
],
"name": "api",
"memory": 128,
"portMappings": [
{
"containerPort": 5000,
"hostPort": 5000
}
]
},
{
"essential": true,
"image": "mongo:3.6.4",
"mountPoints": [
{
"containerPath": "/var/lib/mysql/data",
"sourceVolume": "Db-Data"
}
],
"name": "db",
"memory": 128
}
],
"family": "",
"volumes": [
{
"host": {
"sourcePath": "db-data"
},
"name": "Db-Data"
}
]
}

Prisma error when trying to run with Elastic Beanstalk

I have a Prisma project that works fine locally when I run $ docker-compose up. I converted the docker-compose.yml file to Dockerrun.aws.json, but now when I try to run the project locally via $ eb local run I get an error:
mysql_1 | Version: '5.7.24' socket: '/var/run/mysqld/mysqld.sock' port: 3306 MySQL Community Server (GPL)
prisma_1 | Exception in thread "main" java.sql.SQLTransientConnectionException: database - Connection is not available, request timed out after 5001ms.
Below is my Dockerrun.aws.json file:
{
  "AWSEBDockerrunVersion": "2",
  "containerDefinitions": [
    {
      "environment": [
        {
          "name": "MYSQL_ROOT_PASSWORD",
          "value": "prisma"
        }
      ],
      "essential": true,
      "memory": 128,
      "image": "mysql:5.7",
      "mountPoints": [
        {
          "containerPath": "/var/lib/mysql",
          "sourceVolume": "Mysql"
        }
      ],
      "name": "mysql",
      "portMappings": [
        {
          "containerPort": 3306,
          "hostPort": 3306
        }
      ]
    },
    {
      "environment": [
        {
          "name": "PRISMA_CONFIG",
          "value": "port: 4466\ndatabases:\n default:\n connector: mysql\n host: mysql\n port: 3306\n user: root\n password: prisma\n migrations: true\n"
        }
      ],
      "essential": true,
      "memory": 128,
      "image": "prismagraphql/prisma:1.21",
      "name": "prisma",
      "portMappings": [
        {
          "containerPort": 4466,
          "hostPort": 4466
        }
      ]
    }
  ],
  "family": "",
  "volumes": [
    {
      "host": {
        "sourcePath": "mysql"
      },
      "name": "Mysql"
    }
  ]
}
The error message leads me to believe there's an issue connecting the prisma container to the mysql instance. If I had to guess, it's the PRISMA_CONFIG value, but I'm not 100% sure. Can someone tell me what I'm doing wrong here?
You cannot have those \n in there like that. YAML cares about real line breaks and spaces.
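One quick way to check: print the escaped string with the escapes expanded and read the result as YAML. Each nested key needs deeper indentation than its parent after every \n, which the single spaces in the question's value don't provide. With proper nesting the value would expand to something like this (the exact indentation widths here are a guess):

# printf '%b' expands \n into real newlines, showing the YAML Prisma will parse
printf '%b' 'port: 4466\ndatabases:\n  default:\n    connector: mysql\n    host: mysql\n    port: 3306\n    user: root\n    password: prisma\n    migrations: true\n'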

HTTPS on Elastic Beanstalk (Docker Multi-container)

I've been looking around and haven't found much content on best practices for setting up HTTPS/SSL on Amazon Elastic Beanstalk with a multi-container Docker environment.
There is plenty of material on single-container configurations, but nothing for multi-container.
My Dockerrun.aws.json looks like this:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "app-frontend",
      "host": {
        "sourcePath": "/var/app/current/app-frontend"
      }
    },
    {
      "name": "app-backend",
      "host": {
        "sourcePath": "/var/app/current/app-backend"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "app-backend",
      "image": "xxxxx/app-backend",
      "memory": 512,
      "mountPoints": [
        {
          "containerPath": "/app/app-backend",
          "sourceVolume": "app-backend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 4000,
          "hostPort": 4000
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "4000"
        },
        {
          "name": "MIX_ENV",
          "value": "dev"
        },
        {
          "name": "PG_PASSWORD",
          "value": "xxxx"
        },
        {
          "name": "PG_USERNAME",
          "value": "xx"
        },
        {
          "name": "PG_HOST",
          "value": "xxxxx"
        }
      ]
    },
    {
      "name": "app-frontend",
      "image": "xxxxxxx/app-frontend",
      "memory": 512,
      "links": [
        "app-backend"
      ],
      "command": [
        "npm",
        "run",
        "production"
      ],
      "mountPoints": [
        {
          "containerPath": "/app/app-frontend",
          "sourceVolume": "app-frontend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 80
        }
      ],
      "environment": [
        {
          "name": "REDIS_HOST",
          "value": "xxxxxx"
        }
      ]
    }
  ],
  "family": ""
}
My thinking so far is that I would need to bring an nginx container into the mix to proxy the two services and handle things like mapping different domain names to different services.
Would I go the usual route of setting up nginx and configuring SSL as normal, or is there a better way, like the .ebextensions method I've seen for single containers (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-docker.html)?
This is more of an idea (I haven't actually done this and am not sure it would work), but the components all appear to be available to create an ALB that directs traffic to one process or another based on path rules.
Here is what I am thinking could be done via .ebextensions config files, based on the options available at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html:
Use aws:elasticbeanstalk:environment:process:default to make sure the default application port and health check are set the way you intend (let's say port 80 is your default in this case).
Use aws:elasticbeanstalk:environment:process:process_name to create a backend process that goes to your second service (port 4000 in this case).
Create a rule for your backend with aws:elbv2:listenerrule:backend, using something like /backend/* as the path.
Create the SSL listener with aws:elbv2:listener:443 (example at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-applicationloadbalancer.html) that uses this new backend rule.
I am not sure whether additional rules need to be created for the default listener of aws:elbv2:listener:default. The default seems to just match /*, so anything sent to /backend/* would go to the port 4000 container and everything else to the port 3000 container. A rough sketch of these steps follows.
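Put together, the .ebextensions file might look roughly like this (an untested sketch: the option names follow the linked general-options page, and the certificate ARN is a placeholder you would swap for your own):

# Create the config file in the application source bundle
mkdir -p .ebextensions
cat > .ebextensions/alb-https.config <<'EOF'
option_settings:
  # step 1: default process answers on port 80
  aws:elasticbeanstalk:environment:process:default:
    Port: '80'
    HealthCheckPath: /
  # step 2: a second "backend" process for the port 4000 container
  aws:elasticbeanstalk:environment:process:backend:
    Port: '4000'
  # step 3: route /backend/* to the backend process
  aws:elbv2:listenerrule:backend:
    PathPatterns: /backend/*
    Process: backend
  # step 4: HTTPS listener that also applies the backend rule
  aws:elbv2:listener:443:
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:us-east-1:123456789012:certificate/example
    Rules: backend
EOF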
You will definitely need an nginx container, for the simple fact that a multi-container Elastic Beanstalk setup does not provide one by default. The reason you see single-container setups with those .ebextensions configs is that for that type of setup Elastic Beanstalk does provide nginx.
The benefit of having your own nginx container is that you won't need a separate frontend container (assuming you are serving static files): you can write your nginx config so that it serves the static files directly.
Here is my Dockerrun file:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "dist",
      "host": {
        "sourcePath": "/var/app/current/frontend/dist"
      }
    },
    {
      "name": "nginx-proxy-conf",
      "host": {
        "sourcePath": "/var/app/current/compose/production/nginx/nginx.conf"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "abc/xyz",
      "essential": true,
      "memory": 256
    },
    {
      "name": "nginx-proxy",
      "image": "nginx:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "depends_on": ["backend"],
      "links": [
        "backend"
      ],
      "mountPoints": [
        {
          "sourceVolume": "dist",
          "containerPath": "/var/www/app/frontend/dist",
          "readOnly": true
        },
        {
          "sourceVolume": "awseb-logs-nginx-proxy",
          "containerPath": "/var/log/nginx"
        },
        {
          "sourceVolume": "nginx-proxy-conf",
          "containerPath": "/etc/nginx/nginx.conf",
          "readOnly": true
        }
      ]
    }
  ]
}
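For reference, the mounted nginx.conf could look roughly like this (a sketch only: the backend's internal port 4000 and the /api/ prefix are assumptions, since the backend definition above publishes no ports):

# Create the config at the path the "nginx-proxy-conf" volume mounts from
mkdir -p compose/production/nginx
cat > compose/production/nginx/nginx.conf <<'EOF'
events {}
http {
  server {
    listen 80;
    # static frontend files served straight from the mounted "dist" volume
    root /var/www/app/frontend/dist;
    location / {
      try_files $uri $uri/ /index.html;
    }
    # API traffic is proxied to the backend container by its link name
    location /api/ {
      proxy_pass http://backend:4000;
      proxy_set_header Host $host;
    }
  }
}
EOF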
I also highly recommend using AWS services for setting up your SSL: Route 53 and Certificate Manager. They play nicely together and, if I understand correctly, let you terminate SSL at the load-balancer level.

How to connect multiple Docker containers in AWS ElasticBeanstalk?

I have a Docker multi-container configuration meant to run in an Elastic Beanstalk environment.
The EB environment runs in a VPC, in a public subnet, and has a single load balancer and a single instance bound.
All of the containers appear to be running fine, but they cannot communicate with each other, even though I defined them as linked containers.
What do I need to do to get these containers talking to each other?
My Dockerrun.aws.json looks like this:
"containerDefinitions":
[
{
"name": "proxy",
"image": "nginx",
"essential": true,
"memory": 128,
"portMappings":
[
{
"hostPort": 80,
"containerPort": 80
}
],
"links":
[
"webapp"
],
"mountPoints":
[
{
"sourceVolume": "nginx-conf",
"containerPath": "/etc/nginx/conf.d",
"readOnly": true
},
{
"sourceVolume": "awseb-logs-proxy",
"containerPath": "/var/log/nginx"
}
]
},
{
"name": "webapp",
"image": "jetty",
"memory": 2048,
"essential": true,
"portMappings":
[
{
"hostPort": 8080,
"containerPort": 8080
}
],
"links":
[
"mongodb"
],
"mountPoints":
[
{
"sourceVolume": "jetty-webapp",
"containerPath": "/var/lib/jetty/webapps",
"readOnly": false
},
{
"sourceVolume": "awseb-logs-webapp",
"containerPath": "/var/log/jetty"
}
]
},
{
"name": "mongodb",
"image": "mongo",
"memory": 1024,
"essential": true,
"portMappings":
[
{
"hostPort": 27017,
"containerPort": 27017
}
],
"mountPoints":
[
{
"sourceVolume": "mongodb-data",
"containerPath": "/data/db",
"readOnly": false
}
]
}
]
In 2017: use the container definition links with the name of the Docker container you want to connect to. Docker's built-in network bridge makes the connections from there.
In my case, it had nothing to do with the security groups, since all I expose publicly is port 80 for the Nginx proxy.
It came down to using the names created for the containers in /etc/hosts (webapp, mongodb) instead of their IPs.
This fixed the connections from Nginx to Jetty and from Jetty to MongoDB.
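You can see this directly (a sketch; run inside the webapp container, e.g. via docker exec with the ID from docker ps):

cat /etc/hosts        # the "mongodb" link shows up as an entry here
getent hosts mongodb  # resolves to the linked container's current IP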