AWS Elastic Beanstalk Docker environment variables

How do I pass environment variables to Docker containers running in an AWS Elastic Beanstalk multi-container Docker configuration (different variables for different containers)?

Use the environment key in your container definitions:
{
  "containerDefinitions": [
    {
      "name": "myContainer",
      "image": "something",
      "environment": [
        {
          "name": "MY_DB_PASSWORD",
          "value": "password"
        }
      ],
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 80
        }
      ],
      "memory": 500,
      "cpu": 10
    }
  ]
}
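In an Elastic Beanstalk multi-container environment, the same environment key goes into Dockerrun.aws.json (version 2), and each container definition carries its own environment array, so different containers can receive different variables. A minimal sketch with placeholder names, images, and values:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "web",
      "image": "example/web",
      "essential": true,
      "memory": 256,
      "environment": [
        {
          "name": "API_URL",
          "value": "http://api:5000"
        }
      ]
    },
    {
      "name": "api",
      "image": "example/api",
      "essential": true,
      "memory": 256,
      "environment": [
        {
          "name": "DB_PASSWORD",
          "value": "password"
        }
      ]
    }
  ]
}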
For more information, see: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html

Related

Communicate between docker containers in a docker network in AWS Beanstalk

I'm using AWS Beanstalk to deploy my project in 'Multi-container Docker running on 64bit Amazon Linux'.
Here's my Dockerrun.aws.json, as per the documentation:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "child",
      "image": "nithinsgowda/child",
      "essential": true,
      "memory": 256,
      "portMappings": [
        {
          "hostPort": 9000,
          "containerPort": 9000
        }
      ],
      "links": [
        "master"
      ]
    },
    {
      "name": "master",
      "image": "nithinsgowda/master",
      "essential": true,
      "memory": 512,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ],
      "links": [
        "child"
      ]
    }
  ]
}
I can access my master container at port 80 from the public internet.
Inside my master container I need to make an API call to the child container.
I have tried the options below; none of them worked:
fetch('http://child/api')
fetch('http://child:9000/api')
fetch('http://15.14.13.12:9000/api') // My public DNS for the Beanstalk application (example)
In a local docker-compose environment 'http://child/api' works perfectly fine, but it doesn't work on Beanstalk.
How do I communicate with the child container from my master container?
I have even tried the bindIP attribute, assigned a local IP, and tried accessing the child with that local IP; it still doesn't work.
When I looked into the server logs, docker ps had been run by the environment, both containers were up and running, and the port mappings were displayed correctly.
Here's what you need to specify in Dockerrun.aws.json:
"containerDefinitions": [
  {
    "name": "child",
    "image": "nithinsgowda/child",
    "essential": true,
    "memory": 256,
    "portMappings": [
      {
        "hostPort": 9000,
        "containerPort": 9000
      }
    ],
    "links": [
      "master"
    ],
    "environment": [
      {
        "name": "Container",
        "value": "child"
      }
    ]
  },
The environment variable named Container will be the name given to your container inside the network.
"environment": [
{
"name": "Container",
"value": "child" //Any custom name accepted
}
]
Hence, after specifying the environment variable, I can now access the child container with fetch('http://child:9000/api').
Here's the official AWS documentation link covering the above: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html#create_deploy_docker_v2config_dockerrun

Port Mappings from environment variables in AWS ECS Task Definition

Is there a way to specify container port from environment variable in AWS ECS Task Definition?
This is in my task-definition.json, which is used by GitHub Actions:
"containerDefinitions": [
{
"portMappings": [
{
"containerPort": 3037 <=== Can this come from environment variable defined below?
}
],
"essential": true,
"environment": [
{
"name": "PORT",
"value": "3037"
}
]
}
],
"requiresCompatibilities": ["EC2"]

Jenkins on AWS elastic beanstalk: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied

I am trying to run a dockerised Jenkins and a Postgres database on AWS Elastic Beanstalk in a multi-container t2.micro environment:
Dockerrun.aws.json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "postgres-db",
      "image": "postgres:9.5-alpine",
      "essential": true,
      "memory": 256,
      "portMappings": [
        {
          "hostPort": 5432,
          "containerPort": 5432
        }
      ]
    },
    {
      "name": "jenkins-blueocean",
      "image": "<account_id>.dkr.ecr.ap-southeast-2.amazonaws.com/<image>:latest",
      "essential": true,
      "memory": 256,
      "mountPoints": [
        {
          "sourceVolume": "jenkins-data",
          "containerPath": "/var/jenkins_home"
        }
      ],
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8080
        }
      ],
      "links": [
        "postgres-db"
      ]
    }
  ],
  "volumes": [
    {
      "name": "jenkins-data",
      "host": {
        "sourcePath": "/var/jenkins-data"
      }
    }
  ]
}
AWS shows it deploys fine, but the log for the jenkins-blueocean container shows this error:
/var/log/containers/jenkins-blueocean-7ce78063214b-stdouterr.log
touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
Am I missing something to allow jenkins access to the volume?
Thanks in advance!
Not 100% sure if this is the right path, but we ended up following the .ebextensions method of running commands to set up the volume path so that the jenkins user from the jenkins-blueocean container has full access to do its thing:
mkdir -p /var/jenkins-data
chmod 777 /var/jenkins-data
This was needed because the permissions on that location on the Docker host gave other users only r-x rights, with the root user having rwx.
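For reference, a rough sketch of how those commands could be wired up as an .ebextensions config file (the file name, e.g. .ebextensions/jenkins-volume.config, and the command labels are arbitrary):
commands:
  # Create the host directory that backs the jenkins-data volume
  01_create_jenkins_data_dir:
    command: mkdir -p /var/jenkins-data
  # Open it up so the jenkins user inside the container can write to it
  02_relax_permissions:
    command: chmod 777 /var/jenkins-data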

ASP.NET Core for AWS ECS requires VIRTUAL_HOST

I'm deploying an ASP.NET Core Web API app as a Docker image to AWS ECS, so I use a task definition file for that.
It turns out the app only works if I specify the environment variable VIRTUAL_HOST with the public DNS of my EC2 instance (as highlighted here: http://docs.servicestack.net/deploy-netcore-docker-aws-ecs); see taskdef.json below:
{
  "family": "...",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "image": "...",
      "name": "...",
      "cpu": 128,
      "memory": 256,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 0,
          "protocol": "http"
        }
      ],
      "environment": [
        {
          "name": "VIRTUAL_HOST",
          "value": "ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com"
        }
      ]
    }
  ]
}
Once the app is deployed to AWS ECS, I hit the endpoints, e.g. http://ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com/v1/ping:
with the actual public DNS of my EC2 instance in VIRTUAL_HOST, all works fine;
without the env variable, I get a "503 Service Temporarily Unavailable" from nginx/1.13.0;
and if I set VIRTUAL_HOST to an empty string, I get a "502 Bad Gateway" from nginx/1.13.0.
Now, I'd like to avoid specifying the virtual host in the taskdef file - is that possible? And is my problem ASP.NET Core related or nginx related?
Amazon ECS has a way to manage secrets using Amazon S3: you create a secret, store it in an S3 bucket, and then reference it in your configuration as an environment variable.
{
  "family": "...",
  "networkMode": "bridge",
  "containerDefinitions": [
    {
      "image": "...",
      "name": "...",
      "cpu": 128,
      "memory": 256,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "hostPort": 0,
          "protocol": "http"
        }
      ],
      "environment": [
        {
          "name": "VIRTUAL_HOST",
          "value": "SECRET_S3_VIRTUAL_HOST"
        }
      ]
    }
  ]
}
Store secrets on Amazon S3, and use AWS Identity and Access Management (IAM) roles to grant access to those stored secrets in ECS.
Full blog post
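As a rough sketch of the IAM side of that approach, the task (or instance) role needs read access to the S3 object holding the secret; the bucket and key names below are hypothetical:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-app-secrets/virtual-host.env"
    }
  ]
}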
You could also build your own nginx Docker image, which would already contain the environment variable:
FROM nginx
LABEL maintainer="YOUR_EMAIL"
ENV VIRTUAL_HOST="ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com"
You would then just have to build it, ship it to a private registry, and use it in your configuration.
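For example, assuming AWS CLI v2 and Amazon ECR as the private registry (account ID, region, and repository name are placeholders), the build-and-ship flow looks roughly like this:
# Authenticate Docker to the ECR registry
aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <account_id>.dkr.ecr.<region>.amazonaws.com
# Build, tag, and push the custom nginx image
docker build -t nginx-virtual-host .
docker tag nginx-virtual-host:latest <account_id>.dkr.ecr.<region>.amazonaws.com/nginx-virtual-host:latest
docker push <account_id>.dkr.ecr.<region>.amazonaws.com/nginx-virtual-host:latest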

HTTPS on Elastic Beanstalk (Docker Multi-container)

I've been looking around and haven't found much content about best practice for setting up HTTPS/SSL on Amazon Elastic Beanstalk with a multi-container Docker environment.
There is plenty of material for single-container configurations, but nothing for multi-container.
My Dockerrun.aws.json looks like this:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "app-frontend",
      "host": {
        "sourcePath": "/var/app/current/app-frontend"
      }
    },
    {
      "name": "app-backend",
      "host": {
        "sourcePath": "/var/app/current/app-backend"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "app-backend",
      "image": "xxxxx/app-backend",
      "memory": 512,
      "mountPoints": [
        {
          "containerPath": "/app/app-backend",
          "sourceVolume": "app-backend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 4000,
          "hostPort": 4000
        }
      ],
      "environment": [
        {
          "name": "PORT",
          "value": "4000"
        },
        {
          "name": "MIX_ENV",
          "value": "dev"
        },
        {
          "name": "PG_PASSWORD",
          "value": "xxxx"
        },
        {
          "name": "PG_USERNAME",
          "value": "xx"
        },
        {
          "name": "PG_HOST",
          "value": "xxxxx"
        }
      ]
    },
    {
      "name": "app-frontend",
      "image": "xxxxxxx/app-frontend",
      "memory": 512,
      "links": [
        "app-backend"
      ],
      "command": [
        "npm",
        "run",
        "production"
      ],
      "mountPoints": [
        {
          "containerPath": "/app/app-frontend",
          "sourceVolume": "app-frontend"
        }
      ],
      "portMappings": [
        {
          "containerPort": 3000,
          "hostPort": 80
        }
      ],
      "environment": [
        {
          "name": "REDIS_HOST",
          "value": "xxxxxx"
        }
      ]
    }
  ],
  "family": ""
}
My thinking thus far is that I would need to bring an nginx container into the mix in order to proxy the two services and handle things like mapping different domain names to different services.
Would I go the usual route of just setting up nginx and configuring the SSL as normal, or is there a better way, like I've seen for the single containers using the .ebextensions method (http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/https-singleinstance-docker.html) ?
This is more of an idea (I haven't actually done this and I'm not sure it would work), but the components all appear to be available to create an ALB that could direct traffic to one process or another based on path rules.
Here is what I am thinking could be done via .ebextensions config files, based on the options available from http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/command-options-general.html (a sketch follows this list):
Use aws:elasticbeanstalk:environment:process:default to make sure the default application port and health check are set the way you intend (let's say port 80 is your default in this case).
Use aws:elasticbeanstalk:environment:process:process_name to create a backend process that goes to your second service (port 4000 in this case).
Create a rule for your backend with aws:elbv2:listenerrule:backend which would use something like /backend/* as the path.
Create the SSL listener with aws:elbv2:listener:443 (example at http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/environments-cfg-applicationloadbalancer.html) that uses this new backend rule.
I am not sure if additional rules need to be created for the default listener of aws:elbv2:listener:default. It seems like the default might just match /* so in this case anything sent to /backend/* would go to port 4000 container and anything else goes to the port 3000 container.
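A rough sketch of what such an .ebextensions config could look like, assuming an Application Load Balancer, a backend process on port 4000, and a certificate already issued in ACM (the certificate ARN is a placeholder):
option_settings:
  # Default process: the frontend container mapped to host port 80
  aws:elasticbeanstalk:environment:process:default:
    Port: '80'
    Protocol: HTTP
  # Second process for the backend container on host port 4000
  aws:elasticbeanstalk:environment:process:backend:
    Port: '4000'
    Protocol: HTTP
  # Route /backend/* to the backend process
  aws:elbv2:listenerrule:backend:
    PathPatterns: /backend/*
    Process: backend
    Priority: 1
  # HTTPS listener using an ACM certificate
  aws:elbv2:listener:443:
    ListenerEnabled: 'true'
    Protocol: HTTPS
    SSLCertificateArns: arn:aws:acm:us-east-1:123456789012:certificate/example-cert-id
    Rules: backend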
You will definitely need an nginx container, for the simple fact that a multi-container Elastic Beanstalk setup does not provide one by default. The reason you see single-container setups with those .ebextensions configs is that for that type of setup Elastic Beanstalk does provide nginx.
The benefit of having your own nginx container is that you won't need a frontend container (assuming you are serving static files): you can write your nginx config so that it serves the static files directly.
Here is my Dockerrun file:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "dist",
      "host": {
        "sourcePath": "/var/app/current/frontend/dist"
      }
    },
    {
      "name": "nginx-proxy-conf",
      "host": {
        "sourcePath": "/var/app/current/compose/production/nginx/nginx.conf"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "backend",
      "image": "abc/xyz",
      "essential": true,
      "memory": 256
    },
    {
      "name": "nginx-proxy",
      "image": "nginx:latest",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80
        }
      ],
      "depends_on": ["backend"],
      "links": [
        "backend"
      ],
      "mountPoints": [
        {
          "sourceVolume": "dist",
          "containerPath": "/var/www/app/frontend/dist",
          "readOnly": true
        },
        {
          "sourceVolume": "awseb-logs-nginx-proxy",
          "containerPath": "/var/log/nginx"
        },
        {
          "sourceVolume": "nginx-proxy-conf",
          "containerPath": "/etc/nginx/nginx.conf",
          "readOnly": true
        }
      ]
    }
  ]
}
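The mounted nginx.conf itself isn't shown above, but a minimal sketch of what it might contain, assuming the backend listens on port 4000 (as in the question's setup) and the built frontend lives in the mounted dist volume, would be something like:
events {}

http {
  include /etc/nginx/mime.types;

  server {
    listen 80;

    # Serve the built frontend straight from the mounted dist volume
    root /var/www/app/frontend/dist;
    index index.html;

    location / {
      try_files $uri $uri/ /index.html;
    }

    # Proxy API traffic to the linked backend container
    location /api/ {
      proxy_pass http://backend:4000;
      proxy_set_header Host $host;
      proxy_set_header X-Real-IP $remote_addr;
    }
  }
}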
I also highly recommend using AWS services for setting up your SSL: Route 53 and Certificate Manager. They play nicely together and, if I understand correctly, they let you handle SSL at the load-balancer level.