Deploy Docker on AWS Elastic Beanstalk with Docker Compose - amazon-web-services

I'm trying to deploy multiple Node.js microservices on AWS Elastic Beanstalk, and I want them deployed on the same instance. It's my first time deploying multiple services, so I've run into some failures and need someone to help me out. I packaged each service into a Docker container first, and I'm using Docker Compose to manage the structure. It's up and running locally in my virtual machine, but when I deployed it to Beanstalk I hit a few problems.
What I know:
- I have to choose to deploy as a multi-container Docker environment.
- The best practice for managing multiple Node.js services is Docker Compose.
- I need a Dockerrun.aws.json for the Node.js app.
- I need to create a task definition for that ECS instance.
Where I have problems:
- I can only find Dockerrun.aws.json and task_definition.json templates for PHP, so I can't verify whether my Node.js configuration in those two JSON files is in the correct shape.
- It seems like docker-compose.yml, Dockerrun.aws.json and task_definition.json do similar jobs. I must keep the task definition, but do I still need Dockerrun.aws.json?
- I tried to run the task in ECS, but it stopped right away. How can I check the log for the task? I got:
No ecs task definition (or empty definition file) found in environment
because my task always stops immediately. If I could check the log, troubleshooting would be much easier.
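A minimal sketch of how a stopped task can be inspected, assuming the AWS CLI and the EB CLI are configured (the cluster name, environment name, and task ARN below are placeholders):

# Find recently stopped tasks in the environment's ECS cluster
aws ecs list-tasks --cluster my-eb-cluster --desired-status STOPPED
# The stoppedReason and container exit code usually explain why the task died
aws ecs describe-tasks --cluster my-eb-cluster --tasks <task-arn>
# Pull the Elastic Beanstalk / ecs-agent / container logs for the environment
eb logs my-environment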
Here is my task_definition.json:
{
    "requiresAttributes": [],
    "taskDefinitionArn": "arn:aws:ecs:us-east-1:231440562752:task-definition/ComposerExample:1",
    "status": "ACTIVE",
    "revision": 1,
    "containerDefinitions": [
        {
            "volumesFrom": [],
            "memory": 100,
            "extraHosts": null,
            "dnsServers": null,
            "disableNetworking": null,
            "dnsSearchDomains": null,
            "portMappings": [
                {
                    "hostPort": 80,
                    "containerPort": 80,
                    "protocol": "tcp"
                }
            ],
            "hostname": null,
            "essential": true,
            "entryPoint": null,
            "mountPoints": [
                {
                    "containerPath": "/usr/share/nginx/html",
                    "sourceVolume": "webdata",
                    "readOnly": true
                }
            ],
            "name": "nginxexpressredisnodemon_nginx_1",
            "ulimits": null,
            "dockerSecurityOptions": null,
            "environment": [],
            "links": null,
            "workingDirectory": null,
            "readonlyRootFilesystem": null,
            "image": "nginxexpressredisnodemon_nginx",
            "command": null,
            "user": null,
            "dockerLabels": null,
            "logConfiguration": null,
            "cpu": 99,
            "privileged": null
        }
    ],
    "volumes": [
        {
            "host": {
                "sourcePath": "/ecs/webdata"
            },
            "name": "webdata"
        }
    ],
    "family": "ComposerExample"
}

I had a similar problem, and it turned out that I had archived the containing folder itself into my Archive.zip, giving me this structure inside the archive:
RootFolder
- Dockerrun.aws.json
- Other files...
It turned out that by archiving only the RootFolder's content (and not the folder itself), Amazon Beanstalk recognized the ECS Task Definition file.
Hope this helps.
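In other words, a sketch of the fix, assuming the project lives in RootFolder: zip the folder's contents so Dockerrun.aws.json sits at the root of the archive.

cd RootFolder
# Archive the contents, not the folder itself
zip -r ../Archive.zip .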

For me, it was simply a case of ensuring the name of the file matched the exact casing described in the AWS documentation:
dockerrun.aws.json had to be exactly Dockerrun.aws.json

Similar problem. What fixed it for me was using the CLI tools instead of zipping the files myself; just running eb deploy worked.

For me, the Dockerrun.aws.json was not in CodeCommit. After adding Dockerrun.aws.json to git, it worked.

I got here due to the same error. My issue was that I was deploying with an unquoted label:
eb deploy --label MY_LABEL
What you need to do is quote the label:
eb deploy --label 'MY_LABEL'

I've had this issue as well. For me the problem was that Dockerrun.aws.json wasn't added in git. eb deploy detects the presence of git.
I ran eb deploy --verbose to figure this out:
INFO: Getting version label from git with git-describe
INFO: creating zip using git archive HEAD
It further lists all the files that will go into the zip; Dockerrun.aws.json isn't there.
git status reports this:
On branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
Dockerrun.aws.json
nothing added to commit but untracked files present (use "git add" to track)
Adding the file to git and committing helped.
In my specific case I could just remove the .git directory in a scripted deploy.
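A minimal sketch of that fix (the commit message is illustrative):

git add Dockerrun.aws.json
git commit -m "Add Dockerrun.aws.json"
eb deploy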

In my case, I had not committed the Dockerrun.aws.json file after creating it, so using eb deploy failed with the same error.

Related

error launching simple web app in docker container on AWS Elastic Beanstalk

That is what I get when I follow the instructions at the official Docker tutorial here: tutorial link
I uploaded my Dockerrun.aws.json file and followed all other instructions.
The logs show nothing, even when I request them.
Does anyone have a clue as to what I need to do? For example, why would not having a default VPC even matter here? I have only used my AWS account to set up Linux EC2 instances for a Deep Learning nanodegree at Udacity in the past (I briefly tried to set up a VPC just for practice, but I'm sure I deleted/terminated everything when I found out it is not included in the free tier).
The author of the official tutorial forgot to mention that you have to add the tag to the image name in the Dockerrun.aws.json file (edit it in gedit or another editor), as below, where :firsttry is the tag:
{
    "AWSEBDockerrunVersion": "1",
    "Image": {
        "Name": "hockeymonkey96/catnip:firsttry",
        "Update": "true"
    },
    "Ports": [
        {
            "ContainerPort": "5000"
        }
    ],
    "Logging": "/var/log/nginx"
}
It works.
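For reference, a sketch of building and pushing the image with that tag (the repository name is taken from the snippet above; assumes you are logged in to Docker Hub):

docker build -t hockeymonkey96/catnip:firsttry .
docker push hockeymonkey96/catnip:firsttry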

AWS ElasticBeanstalk Multidocker is not creating an ECS task with a correct Cmd

I am deploying my Node app on AWS Elastic Beanstalk using the multidocker option. The app appears to deploy successfully; however, I notice that the app (inside my Docker container) is not actually running.
When I docker inspect <container_id> the running container, I see that "Cmd": null. If I inspect the ECS task definition created by Beanstalk, I also see "command": null.
However, if I run the container manually (via docker run -d myimage:latest), I see "Cmd": ["node", "server.js"] and the application serves correctly. This is the correct CMD included in my Dockerfile.
How come my ECS task definition does not read the CMD from my Docker image correctly? Am I supposed to add a command to my Dockerrun.aws.json? I couldn't find any documentation for this.
Dockerrun.aws.json:
{
    "AWSEBDockerrunVersion": 2,
    "volumes": [
        {
            "name": "node-app",
            "host": {
                "sourcePath": "/var/app/current/node-app"
            }
        }
    ],
    "containerDefinitions": [
        {
            "name": "node-app",
            "image": "1234565.dkr.ecr.us-east-1.amazonaws.com/my-node-app:testing",
            "essential": true,
            "memory": 128,
            "portMappings": [
                {
                    "hostPort": 3000,
                    "containerPort": 3000
                }
            ]
        }
    ]
}
I have the same issue. For me, it turned out that the entrypoint took care of running the command. The issue remains, but it might be interesting to see what your entrypoint looks like when you inspect the image and when you inspect the container.
See also:
What is the difference between CMD and ENTRYPOINT in a Dockerfile?
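One workaround, offered only as a sketch and not taken from the answers above, is to set the command explicitly in the Dockerrun.aws.json container definition so the ECS task does not rely on the image's CMD (the image and command values are copied from the question):

"containerDefinitions": [
    {
        "name": "node-app",
        "image": "1234565.dkr.ecr.us-east-1.amazonaws.com/my-node-app:testing",
        "essential": true,
        "memory": 128,
        "command": ["node", "server.js"],
        "portMappings": [
            {
                "hostPort": 3000,
                "containerPort": 3000
            }
        ]
    }
]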

Could not launch environment: Application version is unusable and cannot be used with an environment

I am following this tutorial https://prakhar.me/docker-curriculum/ and I am trying to create an Elastic Beanstalk environment.
For Application version I am uploading a file called Dockerrun.aws.json with the following content:
{
    "AWSEBDockerrunVersion": "1",
    "Image": {
        "Name": "myDockerHubId/catnip",
        "Update": "true"
    },
    "Ports": [
        {
            "ContainerPort": "5000"
        }
    ],
    "Logging": "/var/log/nginx"
}
However, I am getting this problem:
Error
Could not launch environment: Application version is unusable and cannot be used with an environment
Any idea why the configuration file is not good?
I'm pretty sure this image
myDockerHubId/catnip
does not exist on Docker Hub.
Make sure you use an existing Docker image from Docker Hub.
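A quick way to check, assuming Docker is installed locally:

docker pull myDockerHubId/catnip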

AWS ECS leader commands (django migrate)

We are currently deploying our Django app on AWS Elastic Beanstalk. There we execute the Django DB migrations using container commands, where we ensure migrations only run on one instance by using the "leader_only" restriction.
We are considering moving our deployment to AWS EC2 Container Service. However, we cannot figure out a way to enforce that the migrate runs on only one container when a new image is deployed.
Is it possible to configure leader_only commands in AWS EC2 Container Service?
There is a possibility to use ECS built-in functionality to handle deployments that involve migrations. Basically, the idea is the following:
Make containers fail their health checks if they are running against an unmigrated database, e.g. by adding a custom view that checks whether a migration plan is still pending:
plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
status = 503 if plan else 200
Make a task definition that does nothing more than migrate the database, and make sure it is scheduled for execution with the rest of the deployment process.
The result is that the deployment process will try to bring up one new container. This new container will fail its health checks as long as the database is not migrated, and thus will block the rest of the deployment (so you will still have old instances running to serve requests). Once the migration is done, the health check succeeds, so the deployment unblocks and proceeds.
This is by far the most elegant solution I was able to find for running Django migrations on Amazon ECS.
Source: https://engineering.instawork.com/elegant-database-migrations-on-ecs-74f3487da99f
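A minimal sketch of such a health-check view, built around the two lines quoted above (the view name and URL wiring are illustrative):

# views.py -- health check that fails while migrations are pending
from django.db import connection
from django.db.migrations.executor import MigrationExecutor
from django.http import HttpResponse

def health(request):
    # Build the plan of migrations that still need to be applied
    executor = MigrationExecutor(connection)
    plan = executor.migration_plan(executor.loader.graph.leaf_nodes())
    # 503 while unmigrated so the ECS health check fails; 200 once migrated
    return HttpResponse(status=503 if plan else 200)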
Look at using Container Dependency in your task definition to make your application container wait for a migration container to successfully complete. Here's a brief example of the container_definitions component of a task definition:
{
    "name": "migration",
    "image": "my-django-image",
    "essential": false,
    "command": ["python3", "manage.py", "migrate"]
},
{
    "name": "django",
    "image": "my-django-image",
    "essential": true,
    "dependsOn": [
        {
            "containerName": "migration",
            "condition": "SUCCESS"
        }
    ]
}
The migration container starts, runs the migrate command, and exits. If successful, then the django container is launched. Of course, if your service is running multiple tasks, each task will run in this fashion, but once migrations have been run once, additional migrate commands will be a no-op, so there's no harm.
For those using task definition JSON, all we need to do is flag the migration container as not essential in containerDefinitions:
{
    "name": "migrations",
    "image": "your-image-name",
    "essential": false,
    "cpu": 24,
    "memory": 200,
    "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
            "awslogs-group": "your-logs-group",
            "awslogs-region": "your-region",
            "awslogs-stream-prefix": "your-log-prefix"
        }
    },
    "command": [
        "python3", "manage.py", "migrate"
    ],
    "environment": [
        {
            "name": "ENVIRON_NAME",
            "value": "${ENVIRON_NAME}"
        }
    ]
}
I flagged this container as "essential": false.
"If the essential parameter of a container is marked as true, and that container fails or stops for any reason, all other containers that are part of the task are stopped. If the essential parameter of a container is marked as false, then its failure does not affect the rest of the containers in a task. If this parameter is omitted, a container is assumed to be essential."
source: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
Honestly, I have not figured this out. I encountered exactly the same limitation on ECS (as well as others that made me abandon it, but that is off topic).
Potential workarounds:
1) Run migrations inside your init script. This has the flaw that it runs on every node at deployment time (I assume you have multiple replicas).
2) Add the migration as a step of your CI flow (a sketch follows below).
Hope I helped a bit, in case I come up with another idea, I'll revert back here.
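For option 2, a sketch of what such a CI step could look like, assuming the AWS CLI is configured and a task definition already exists (the cluster, task definition, and container names are placeholders):

# Run a one-off migration task against the existing task definition
aws ecs run-task \
    --cluster my-cluster \
    --task-definition my-django-task \
    --overrides '{"containerOverrides":[{"name":"django","command":["python3","manage.py","migrate"]}]}'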
It's not optimal, but you can simply run it as a command in the task definition:
"command": ["/bin/sh", "-c", "python manage.py migrate && gunicorn -w 3 -b :80 app.wsgi:application"],

Multidocker environment, amazon beanstalk, mounting volumes

What I'm trying to achieve: I have a docker container which contains a CMS, that CMS has a folder named 'assets'. I need the asset folder to be available to other containers, and also for the data to be safe from deletion when containers/images are removed.
How I've attempted to solve it: I have read all about mounting volumes in multi-container environments, looked at a bunch of examples, and came up with the following Dockerrun.aws.json file:
{
    "AWSEBDockerrunVersion": 2,
    "volumes": [
        {
            "name": "assets",
            "host": {
                "sourcePath": "/var/app/current/cms"
            }
        }
    ],
    "containerDefinitions": [
        {
            //...
            "mountPoints": [
                {
                    "sourceVolume": "assets",
                    "containerPath": "/var/www/assets",
                    "readOnly": false
                }
            ]
        }
    ]
}
I can upload this via Beanstalk and everything builds and all boxes are green; however, if I log in to the EC2 instance and run ls /var/app/current, the directory is empty. I was expecting to see /var/app/current/cms/assets sitting there...
I think I'm missing a core concept or flag in my build file, any direction or better way of achieving what I'm trying to do would be appreciated.
Take a look at this link and try it out:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/create_deploy_docker_v2config.html
I believe this is similar to what you are asking for.
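To see what actually got mounted, a quick sketch of commands to run on the EC2 instance (the container ID is a placeholder):

# List the containers started by the ECS agent
sudo docker ps
# Show the mounts for the CMS container
sudo docker inspect -f '{{ json .Mounts }}' <container_id>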