According to Amazon's Elastic Beanstalk Multicontainer Docker Configuration docs, I should be able to use mount points to get NGINX logs in the location where Elastic Beanstalk expects them.
Volumes from the container instance to mount and the location on the container file system at which to mount them. Mount volumes containing application content so your container can read the data you upload in your source bundle, as well as log volumes for writing log data to a location where Elastic Beanstalk can gather it.
Elastic Beanstalk creates log volumes on the container instance, one for each container, at /var/log/containers/containername. These volumes are named awseb-logs-containername and should be mounted to the location within the container file structure where logs are written.
For example, the following mount point maps the nginx log location in the container to the Elastic Beanstalk–generated volume for the nginx-proxy container.
{
"sourceVolume": "awseb-logs-nginx-proxy",
"containerPath": "/var/log/nginx"
}
I have done this for two NGINX containers in my app, but EB doesn't deliver the NGINX logs when I request logs.
{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"name": "gs-api-nginx-versioning",
"image": "829481521991.dkr.ecr.us-east-1.amazonaws.com/gs-api-nginx-versioning:latest",
"memory": 128,
"essential": true,
"mountPoints": [
{
"sourceVolume": "awseb-logs-gs-api-nginx-versioning",
"containerPath": "/var/log/nginx"
}
],
"portMappings": [
{
"hostPort": 80,
"containerPort": 80
},
{
"hostPort": 443,
"containerPort": 443
}
],
"links": [
"gs-api-v1-nginx:v1"
]
},
{
"name": "gs-api-v1-nginx",
"image": "829481521991.dkr.ecr.us-east-1.amazonaws.com/gs-api-v1-nginx:latest",
"memory": 128,
"essential": true,
"mountPoints": [
{
"sourceVolume": "awseb-logs-gs-api-v1-nginx",
"containerPath": "/var/log/nginx"
}
],
"links": [
"gs-api-v1-atlas-vms:atlas-vms",
"gs-api-v1-auth:auth",
"gs-api-v1-clean-zip:clean-zip",
"gs-api-v1-data-alerts:data-alerts",
"gs-api-v1-proc-events:proc-events",
"gs-api-v1-sites:sites"
]
},
<additional container configs>
]
}
Can anyone see what I'm missing here that's causing the logs not to be gathered?
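One way to narrow this down (a diagnostic sketch, not part of the original question; the paths follow the docs quoted above and the container name comes from the Dockerrun) is to check on the instance whether the log volumes exist and are actually being written to:

```shell
# Run on the Elastic Beanstalk EC2 instance (e.g. via `eb ssh`).
# EB creates one log directory per container under /var/log/containers;
# if the nginx ones are missing or empty, the mount point never took effect.
ls -l /var/log/containers/ 2>/dev/null || echo "no /var/log/containers directory"
ls -l /var/log/containers/gs-api-nginx-versioning/ 2>/dev/null || echo "no gs-api-nginx-versioning log volume"
```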
I'm using AWS Elastic Beanstalk to deploy my project on the 'Multi-container Docker running on 64bit Amazon Linux' platform.
Here's my Dockerrun.aws.json, written as per the documentation:
{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"name": "child",
"image": "nithinsgowda/child",
"essential": true,
"memory": 256,
"portMappings": [
{
"hostPort": 9000,
"containerPort": 9000
}
],
"links": [
"master"
]
},
{
"name": "master",
"image": "nithinsgowda/master",
"essential": true,
"memory": 512,
"portMappings": [
{
"hostPort": 80,
"containerPort": 8080
}
],
"links": [
"child"
]
}
]
}
I can access my master container on port 80 from the public internet.
Inside my master container I need to make an API call to the child container.
I have tried the options below; none of them worked:
fetch('http://child/api')
fetch('http://child:9000/api')
fetch('http://15.14.13.12:9000/api') //My public DNS for the beanstalk application (Example)
In a local docker-compose environment, 'http://child/api' works perfectly fine, but it doesn't work on Beanstalk.
How do I communicate with the child container from my master container?
I have even tried the bindIP attribute, assigned a local IP, and tried accessing it with that local IP; it still doesn't work.
When I looked into the server logs, I saw that docker ps had been executed by the environment, both containers were up and running, and the port mappings were also displayed correctly.
Here's what you need to specify in Dockerrun.aws.json:
"containerDefinitions": [
{
"name": "child",
"image": "nithinsgowda/child",
"essential": true,
"memory": 256,
"portMappings": [
{
"hostPort": 9000,
"containerPort": 9000
}
],
"links": [
"master"
],
"environment": [
{
"name": "Container",
"value": "child"
}
]
},
The environment variable named Container will be the name given to your container inside the network.
"environment": [
{
"name": "Container",
"value": "child" //Any custom name accepted
}
]
And hence after specifying the environment variable, I can now access the child container as fetch('http://child:9000/api')
Here's the official AWS documentation link covering the above content: https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html#create_deploy_docker_v2config_dockerrun
Below is my AWS task definition for ECS.
I need every EC2 instance running this task to have port 3026 publicly accessible to the world. How can I modify this JSON to do that?
Currently, after the service is running this task, I manually find the EC2 instance(s) and then manually attach a security group that allows ingress from 0.0.0.0/0 on that port.
But I really want to know how to make this JSON do it so I no longer have to do it manually.
{
"family": "myproj",
"requiresCompatibilities": [
"EC2"
],
"containerDefinitions": [
{
"memory": 500,
"memoryReservation": 350,
"name": "myproj",
"image": "blah.dkr.ecr.us-east-1.amazonaws.com/myproj:latest",
"essential": true,
"portMappings": [
{
"hostPort": 3026,
"containerPort": 8000,
"protocol": "tcp"
}
],
"entryPoint": [
"./entrypoint_deployment.sh"
],
"environment" : [
{ "name" : "DB_HOST", "value" : "blah.blah.us-east-1.rds.amazonaws.com" }
],
"logConfiguration": {
"logDriver": "awslogs",
"options": {
"awslogs-group": "/ecs/myproj",
"awslogs-region": "us-east-1",
"awslogs-stream-prefix": "ecs"
}
}
}
]
}
My suggested approach is to configure an ECS service associated with your task, and then use an Application Load Balancer (ALB) to route public traffic to that service.
This guide should help you: https://aws.amazon.com/blogs/compute/microservice-delivery-with-amazon-ecs-and-application-load-balancers/
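Alternatively, the ingress rule itself is a one-line AWS CLI call. The sketch below only builds and prints the command (the group ID is a placeholder) so it can be reviewed before being run against a real security group:

```shell
# Open TCP 3026 to the world on the task instances' security group.
# sg-12345678 is a placeholder; substitute the group attached to your instances.
GROUP_ID="sg-12345678"
PORT=3026
CMD="aws ec2 authorize-security-group-ingress --group-id $GROUP_ID --protocol tcp --port $PORT --cidr 0.0.0.0/0"
echo "$CMD"   # review, then run the printed command with your real group ID
```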
Another (cheaper) option is to use the EC2 instance metadata API provided by Amazon: read the instance_id value from that API and use the AWS CLI to update the security group when your container starts. A script like this should work (run inside the container):
export SECURITY_GROUP=sg-12345678
export INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 modify-instance-attribute --instance-id $INSTANCE_ID --groups $SECURITY_GROUP
You need to set SECURITY_GROUP accordingly and have the AWS CLI installed in the Docker image of the task you are running. Note that modify-instance-attribute --groups replaces the instance's entire set of security groups, so include any existing groups in the list as well.
Furthermore, you need to change the ENTRYPOINT of your task's Docker image to run the script. Since the exec-form entryPoint does not invoke a shell, the '&&' chain must go through sh -c, for example:
"entryPoint": [
"sh",
"-c",
"./script_to_setup_SG.sh && ./entrypoint_deployment.sh"
],
I am trying to run a dockerised Jenkins and a Postgres database on AWS Elastic Beanstalk in a multi-container t2.micro environment:
Dockerrun.aws.json
{
"AWSEBDockerrunVersion": 2,
"containerDefinitions": [
{
"name": "postgres-db",
"image": "postgres:9.5-alpine",
"essential": true,
"memory": 256,
"portMappings": [
{
"hostPort": 5432,
"containerPort": 5432
}
]
},
{
"name": "jenkins-blueocean",
"image": "<account_id>.dkr.ecr.ap-southeast-2.amazonaws.com/<image>:latest",
"essential": true,
"memory": 256,
"mountPoints": [
{
"sourceVolume": "jenkins-data",
"containerPath": "/var/jenkins_home"
}
],
"portMappings": [
{
"hostPort": 80,
"containerPort": 8080
}
],
"links": [
"postgres-db"
]
}
],
"volumes": [
{
"name": "jenkins-data",
"host": {
"sourcePath": "/var/jenkins-data"
}
}
]
}
AWS shows it deploys fine, but the log for the jenkins-blueocean container contains this error:
/var/log/containers/jenkins-blueocean-7ce78063214b-stdouterr.log
touch: cannot touch '/var/jenkins_home/copy_reference_file.log': Permission denied
Can not write to /var/jenkins_home/copy_reference_file.log. Wrong volume permissions?
Am I missing something to allow jenkins access to the volume?
Thanks in advance!
Not 100% sure if this is the right path, but we ended up using the .ebextensions method of running commands to set up the volume path, giving the jenkins user from the jenkins-blueocean container full access to do its thing:
mkdir -p /var/jenkins-data
chmod 777 /var/jenkins-data
This was needed because the permissions on that location on the Docker host give other users only r-x rights, with the root user having rwx.
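As an .ebextensions sketch (the file name and command keys are illustrative), the commands look like this:

```yaml
# .ebextensions/01-jenkins-volume.config (illustrative name)
commands:
  01_create_jenkins_data_dir:
    command: mkdir -p /var/jenkins-data
  02_open_jenkins_data_permissions:
    command: chmod 777 /var/jenkins-data
```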
I'm deploying an ASP.NET Core Web API app as a Docker image to AWS ECS, using a task definition file for that.
It turns out the app only works if I specify the environment variable VIRTUAL_HOST with the public DNS of my EC2 instance (as highlighted here: http://docs.servicestack.net/deploy-netcore-docker-aws-ecs); see taskdef.json below:
{
"family": "...",
"networkMode": "bridge",
"containerDefinitions": [
{
"image": "...",
"name": "...",
"cpu": 128,
"memory": 256,
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 0,
"protocol": "http"
}
],
"environment": [
{
"name": "VIRTUAL_HOST",
"value": "ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com"
}
]
}
]
}
Once the app is deployed to AWS ECS, I hit the endpoints, e.g. http://ec2-xx-xxx-xxx-xxx.compute-1.amazonaws.com/v1/ping:
with the actual public DNS of my EC2 instance in VIRTUAL_HOST, everything works fine;
without the env variable, I get a "503 Service Temporarily Unavailable" from nginx/1.13.0;
and if I put an empty string in VIRTUAL_HOST, I get a "502 Bad Gateway" from nginx/1.13.0.
Now, I'd like to avoid specifying the virtual host in the taskdef file - is that possible? Is my problem ASP.NET Core related or nginx related?
Amazon ECS has a secret management approach using Amazon S3. You create the secret, store it on S3, and then reference it in your configuration as an environment variable.
{
"family": "...",
"networkMode": "bridge",
"containerDefinitions": [
{
"image": "...",
"name": "...",
"cpu": 128,
"memory": 256,
"essential": true,
"portMappings": [
{
"containerPort": 80,
"hostPort": 0,
"protocol": "http"
}
],
"environment": [
{
"name": "VIRTUAL_HOST",
"value": "SECRET_S3_VIRTUAL_HOST"
}
]
}
]
}
Store secrets on Amazon S3, and use AWS Identity and Access Management (IAM) roles to grant access to those stored secrets in ECS.
Full blog post
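The IAM piece of that setup can be sketched as a task-role policy granting read access to just the secrets object (the bucket and key names below are placeholders):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": ["arn:aws:s3:::my-secrets-bucket/virtual-host.txt"]
    }
  ]
}
```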
You could also build your own NGINX Docker image that already contains the environment variable.
FROM nginx
LABEL maintainer="YOUR_EMAIL"
ENV VIRTUAL_HOST="ec2-xx-xxx-xxxxxx.compute1.amazonaws.com"
And you would just have to build it, push it to a private registry, and then use it in your configuration.
How do you pass environment variables to Docker containers running in an AWS Elastic Beanstalk multi-container configuration (with different values for different containers)?
Use the environment key in your container definitions.
{
"containerDefinitions": [
{
"name": "myContainer",
"image": "something",
"environment": [
{
"name": "MY_DB_PASSWORD",
"value": "password"
}
],
"portMappings": [
{
"containerPort": 80,
"hostPort": 80
}
],
"memory": 500,
"cpu": 10
}
]
}
For more information, see: http://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html