I have some Docker images stored in ECR, and I'm trying to deploy them to Elastic Beanstalk. They deploy fine, but they're not picking up any of the environment variables from the host. If I deploy just the default multi-container Docker setup, the containers do pick up the environment variables (set with eb setenv). Even when I run them locally they do not pick up the environment variables. Has anyone else experienced this and found a solution?
You can define the container environment variables in your Dockerrun.aws.json file.
For example, the following entry defines an environment variable with the name APP and the value PYTHON:
"environment": [
  {
    "name": "APP",
    "value": "PYTHON"
  }
],
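In a multi-container (v2) Dockerrun.aws.json — the format the question is using — the environment array goes inside each container definition. A minimal sketch, where the container name, memory value, and ECR image URL are placeholders:

```json
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:latest",
      "essential": true,
      "memory": 128,
      "environment": [
        {
          "name": "APP",
          "value": "PYTHON"
        }
      ]
    }
  ]
}
```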
Ref: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
Related
I'm struggling with the best way to pass secret config data to my Node.js app.
The problem is that it needs to run on my local machine, in the CI environment for testing and staging, and in production (AWS).
I was thinking to use docker secrets as it's described here:
https://medium.com/better-programming/how-to-handle-docker-secrets-in-node-js-3aa04d5bf46e
The problem is that it only works if you run Docker as a service (via Swarm), which I could do locally, but not on AWS ECS and not in CI. (Or am I missing something there?)
Then I could also use AWS Secrets Manager, but how would I get to the secrets in my CI environment and in my local environment? Or what if I don't have internet access?
Isn't there a way to use something like a separate file for every environment, whether it's my local one running via docker run, the CI one, or the AWS ECS one?
Isn't there a way to use something like a separate file for every environment, whether it's my local one running via docker run, the CI one, or the AWS ECS one? Or what if I don't have internet access?
Targeting multiple environments with a constraint like "no internet access" means you can hardly rely on an AWS service, such as SSM Parameter Store, for keeping secrets.
What I can suggest is to use a dotenv (.env) file, which is environment-independent; all you need to do is populate it from a different source in each environment. For example:
Pull it from S3 when running on staging and production on AWS
Bind-mount a local .env file when working on a dev machine, to handle the no-internet condition
Pull it from S3 or generate a dynamic .env for CI
To make sure each environment consumes the proper .env file, you can add logic to the Docker entrypoint:
#!/bin/sh
if [ "${NODE_ENV}" = "production" ]; then
  # in production we are in AWS and can pull the .env file from S3
  aws s3 cp s3://mybucket/production/.env /app/.env
elif [ "${NODE_ENV}" = "staging" ]; then
  # in staging we also assume we are in AWS and can pull the .env file from S3
  aws s3 cp s3://mybucket/staging/.env /app/.env
elif [ "${NODE_ENV}" = "ci" ]; then
  # generate a dynamic .env or pull one from S3
  aws s3 cp s3://mybucket/ci/.env /app/.env
else
  echo "running against local env; please bind-mount a .env file, e.g. docker run -it -v $PWD/.env:/app/.env ..."
fi
echo "Starting node application"
exec node "$@"
Enable encryption on S3, and make sure only the production environment is able to pull the production .env file; a stricter policy leads to a more secure mechanism.
For a local setup you can try:
docker run -it -e NODE_ENV="local" --rm -v $PWD/.env:/app/.env myapp
The most common way of passing environment-specific configuration to an application running in a Docker container is to use environment variables, as proposed by the third factor of the Twelve-Factor App methodology.
With this your application should read all configuration, including secrets, from environment variables.
If you are running locally and outside of a Docker container, you can manually set these environment variables, run a script that exports them to your shell, or use a dotenv-style helper for your language that automatically loads environment variables from an environment file and exposes them to your application. You can then fetch them with process.env.FOO, os.environ["FOO"], ENV['HOSTNAME'], or however your application's language accesses environment variables.
When running in a Docker container locally, you can avoid packaging your .env file into the image by using the --env-file argument to docker run to inject the environment variables from the environment file, or by injecting the variables individually by hand with --env.
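For illustration, here is a minimal sketch of the env-file format and the two ways of consuming it described above (the file name, variable names, and values are placeholders):

```shell
# .env files use one KEY=value pair per line; this is the format
# docker run --env-file expects:
cat > .env <<'EOF'
NODE_ENV=development
DATABASE_URL=postgres://localhost:5432/app
EOF

# In a container, inject it without baking it into the image:
#   docker run --env-file .env myapp
# Outside Docker, the same file can be exported into the current shell:
set -a      # auto-export every variable defined while this is on
. ./.env
set +a
echo "$NODE_ENV"   # prints: development
```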
When these are just being accessed locally then you just need to make sure you don't store any secrets in source control so would add your .env file or equivalent to your .gitignore.
When it comes to running in CI, you will need to have your CI system store these secret variables securely and inject them at runtime. In GitLab CI, for example, you would create the variables in the project's CI/CD settings; they are stored encrypted in the database and are injected transparently, in plain text, into the job's environment at runtime.
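As a sketch (the job name, image, and variable name are hypothetical), a GitLab CI job can simply pass such a settings-defined variable through to the container:

```yaml
# .gitlab-ci.yml -- DATABASE_PASSWORD is defined under Settings > CI/CD > Variables
test:
  image: docker:latest
  script:
    # --env VAR with no value copies the variable from the job's environment
    - docker run --env DATABASE_PASSWORD myapp npm test
```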
For deployment to ECS you can store non secret configuration directly as environment variables in the task definition. This leaves the environment variables readable by anyone with read only access to your AWS account which is probably not what you want for secrets. Instead you can create these in SSM Parameter Store or Secrets Manager and then refer to these in the secrets parameter of your task definition:
AWS documentation includes this smallish example of a task definition that gets secrets from Secrets Manager:
{
  "requiresCompatibilities": [
    "EC2"
  ],
  "networkMode": "awsvpc",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "httpd",
      "memory": 128,
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "logConfiguration": {
        "logDriver": "splunk",
        "options": {
          "splunk-url": "https://sample.splunk.com:8080"
        },
        "secretOptions": [
          {
            "name": "splunk-token",
            "valueFrom": "arn:aws:secretsmanager:us-east-1:awsExampleAccountID:secret:awsExampleParameter"
          }
        ]
      },
      "secrets": [
        {
          "name": "DATABASE_PASSWORD",
          "valueFrom": "arn:aws:ssm:us-east-1:awsExampleAccountID:parameter/awsExampleParameter"
        }
      ]
    }
  ],
  "executionRoleArn": "arn:aws:iam::awsExampleAccountID:role/awsExampleRoleName"
}
I've deployed an Aurelia application to AWS Elastic Beanstalk via AWS ECR and have run into some difficulty. The docker container, when run locally, works perfectly (see below for Dockerfile).
FROM nginx:1.15.8-alpine
COPY dist /usr/share/nginx/html
The deployment works quite well; however, when I navigate to the AWS-provided endpoint http://docker-tester.***.elasticbeanstalk.com/, I get 502 Bad Gateway (nginx/1.12.1).
I can't figure out what might be the issue. The docker container in question is a simple Hello World example created via the au new command; it's nothing fancy at all.
Below is my Dockerrun.aws.json file
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "***.dkr.ecr.eu-central-1.amazonaws.com/tester:latest",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ],
  "Logging": "/var/log/nginx"
}
My Elastic Beanstalk configuration is rather small with an EC2 instance type of t2.micro. I'm using the free tier as an opportunity to learn.
I greatly appreciate any help, or links to some reading that may point in the right direction.
It has nothing to do with your Aurelia application. You are missing an EXPOSE instruction (which is mandatory) in your Dockerfile. You can change it like this:
FROM nginx:1.15.8-alpine
EXPOSE 80
COPY dist /usr/share/nginx/html
If you try to run it without EXPOSE, you will get an error
ERROR: ValidationError - The Dockerfile must list ports to expose on the Docker container. Specify at least one port, and then try again.
You should test your application before pushing it to Elastic Beanstalk.
Install the EB CLI (assuming you have pip; if not, you need to install that first):
pip install awsebcli --upgrade --user
Then initialize a local repository for deployment:
eb init -p docker <application-name>
And you can test it:
eb local run --port <port-number>
So I am using Travis CI to automatically deploy my application to an AWS Elastic Beanstalk environment. My issue is that I need to update the nginx.conf file that is located in the host machine's file system.
I'm running a single-container Docker image inside that host machine.
How can I copy or link the nginx.conf file from the Docker container to the host machine's nginx.conf file?
Currently my Dockerrun.aws.json looks like this:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "some:image:url:here",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8001"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/etc/nginx/nginx.conf",
      "ContainerDirectory": "/home/node/app/nginx.conf"
    }
  ]
}
When I tried to use AWSEBDockerrunVersion 2, the build failed with an error saying the version is wrong.
How can I link those two files in a single-container Docker application?
The "Volumes" key is used to map full volumes, not individual files. See Dockerrun.aws.json file specifications for an explanation.
I know of two ways you can solve this problem: 1) build a custom AMI, or 2) use a Dockerfile with your Dockerrun.aws.json.
1. Build a Custom AMI
The idea behind building a custom AMI is to launch an instance from one of Amazon's existing AMIs, make the changes you need (in your case, change nginx.conf), and finally create a new AMI from that instance; it will then be available when you create your environment in Elastic Beanstalk. Here are the detailed steps to create your own AMI and how to use it with Elastic Beanstalk.
2. Use a Dockerfile with your Dockerrun.aws.json
If you don't build your own AMI, you can copy your conf file with the help of a Dockerfile. A Dockerfile is a text file that provides commands for Elastic Beanstalk to run to build your custom image. The Dockerfile reference details the instructions that can be added to a Dockerfile to build your image. You are going to need to use the COPY instruction, or, if the file is simple, you can use RUN and echo to build it like in the example here.
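As an illustration only (the base image and paths are assumptions, since the question's image details aren't shown), a Dockerfile that bakes a custom nginx.conf into an nginx-based image might look like:

```dockerfile
FROM nginx:1.15.8-alpine
EXPOSE 80
# Replace the default nginx configuration with the custom one;
# nginx.conf must sit next to this Dockerfile in the source bundle
COPY nginx.conf /etc/nginx/nginx.conf
```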
Once you create your Dockerfile, you will need to put the Dockerfile and your Dockerrun.aws.json into a directory and create a zip file with both. Provide this to Elastic Beanstalk as your source bundle. Follow this guide to build the source bundle correctly.
I have a node.js application packaged in a docker image hosted in a public repository.
I have deployed that image in an AWS Beanstalk docker application successfully.
The problem is that I was expecting the Beanstalk application to be automatically updated when I update the image in the public repository, as the following configuration suggests.
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "peveuve/dynamio-payment-service",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8000"
    }
  ],
  "Logging": "/var/log/dynamio"
}
The Dockerfile is very simple:
FROM node:4.2.1-onbuild
# Environment variables
ENV NODE_ENV test
ENV PORT 8000
# expose application port outside
EXPOSE $PORT
The Amazon documentation is pretty clear on that:
Optionally include the Update key. The default value is "true" and
instructs Elastic Beanstalk to check the repository, pull any updates
to the image, and overwrite any cached images.
But I have to update the Beanstalk application manually by uploading a new version of the Dockerrun.aws.json descriptor. Did I miss something? Is it supposed to work like that?
You can use the aws command-line tool to trigger the update:
aws elasticbeanstalk update-environment --application-name [your_app_name] --environment-name [your_environment_name] --version-label [your_version_label]
You specify the version that contains the Dockerrun.aws.json file; that way a new version won't be added to the application. In this case the Dockerrun file works as the "source" for the application, but it only tells AWS to pull the Docker image, so it would be redundant to create new versions of the application in Elastic Beanstalk (unless you use specifically tagged Docker images in the Dockerrun file).
Links:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_image.html
http://docs.aws.amazon.com/elasticbeanstalk/latest/api/API_UpdateEnvironment.html
The documentation could be clearer. What it is saying is that with Update set to "true":
Elastic Beanstalk will do a docker pull before it does a docker run when the application is first started. It will not continually poll Docker Hub.
In contrast, issuing a docker run without first doing a docker pull will always use the locally cached version of the image on that machine, which may not be the latest.
To achieve what you want, you'll need to set up a webhook on Docker Hub that calls an application you control, which in turn redeploys your Elastic Beanstalk app.
My restrictions are:
Django is to be deployed using uWSGI with nginx
The Django app is to use PostgreSQL hosted on RDS
The Dockerfile will use ubuntu:14.04 as the container OS
This is what I have for docker setup:
https://github.com/simkimsia/aws-docker-django
It contains a Dockerfile and other configuration files. I have tested it on a Linux box. It works.
This is what I have tried: I logged into the AWS console, selected Elastic Beanstalk, and then chose to create a new application using Docker as the environment.
A new environment is created and it prompts me to upload and deploy.
I zipped up all the files you see in https://github.com/simkimsia/aws-docker-django and uploaded the zip file.
I got an error when deploying.
I have also subsequently tried using the following JSON file:
{
  "AWSEBDockerrunVersion": "1",
  "Volumes": [
    {
      "ContainerDirectory": "/var/app",
      "HostDirectory": "/var/app"
    }
  ],
  "Logging": "/var/eb_log"
}
I have found answers such as this one, but they go against at least one of my three restrictions.
How do I go about deploying on AWS Elastic Beanstalk using Docker?
Have you been able to run any docker images on Elastic Beanstalk? I was having several issues, but eventually documented my solution here: https://github.com/dkarchmer/aws-eb-docker-django
It does not use nginx but that should all be on your Dockerfile, so you should be able to reverse engineer my example and ideally just use your own Dockerfile instead of mine.