That is what I get when I follow the instructions at the official Docker tutorial here: tutorial link
I uploaded my Dockerrun.aws.json file and followed all other instructions.
The logs show nothing even when I click Request:
If anyone has a clue as to what I need to do, please share. For instance, why would not having a default VPC even matter here? In the past I have only used my AWS account to set up Linux EC2 instances for a Deep Learning nanodegree at Udacity (I briefly tried to set up a VPC just for practice, but I am sure I deleted/terminated everything once I found out that is not included in the free tier).
The author of the official tutorial forgot to mention that you have to append the tag to the image name in the Dockerrun.aws.json file, as shown below (edit it in gedit or any other editor), where :firsttry is the tag:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "hockeymonkey96/catnip:firsttry",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
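To catch this before uploading, a small sketch (Python, with the JSON inlined for illustration; normally you would read the file) that checks whether the image name carries an explicit tag:

```python
import json

# Sanity-check that the image name in Dockerrun.aws.json carries an
# explicit tag. The JSON is inlined here for illustration only.
dockerrun = json.loads("""
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "hockeymonkey96/catnip:firsttry",
    "Update": "true"
  },
  "Ports": [{"ContainerPort": "5000"}],
  "Logging": "/var/log/nginx"
}
""")

name = dockerrun["Image"]["Name"]
# A tag is a ":" in the last path segment (so registry ports like
# myregistry:5000/repo are not mistaken for tags).
has_tag = ":" in name.rsplit("/", 1)[-1]
print(name, "tagged:", has_tag)
```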
It works:
I am deploying my Node app on AWS Elastic Beanstalk using the multi-container Docker option. The app appears to deploy successfully; however, I notice that the app (inside my container) is not actually running.
When I docker inspect <container_id> on the running container, I see "Cmd": null. If I inspect the ECS task definition created by Beanstalk, I also see "command": null.
However, if I run the container manually (via docker run -d myimage:latest), I see "Cmd": ["node", "server.js"] and the application serves correctly. This is the correct CMD that is included inside my Dockerfile.
How come my ECS task definition does not read the CMD from my Docker image correctly? Am I supposed to add a command to my Dockerrun.aws.json? I couldn't find any documentation for this.
Dockerrun.aws.json:
{
  "AWSEBDockerrunVersion": 2,
  "volumes": [
    {
      "name": "node-app",
      "host": {
        "sourcePath": "/var/app/current/node-app"
      }
    }
  ],
  "containerDefinitions": [
    {
      "name": "node-app",
      "image": "1234565.dkr.ecr.us-east-1.amazonaws.com/my-node-app:testing",
      "essential": true,
      "memory": 128,
      "portMappings": [
        {
          "hostPort": 3000,
          "containerPort": 3000
        }
      ]
    }
  ]
}
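If you do want the command to be explicit rather than inherited from the image, v2 container definitions accept a "command" array, mirroring the ECS container-definition field. A sketch of one container entry based on the question's file (the "node server.js" value is an assumption taken from the Dockerfile's CMD):

```json
{
  "name": "node-app",
  "image": "1234565.dkr.ecr.us-east-1.amazonaws.com/my-node-app:testing",
  "essential": true,
  "memory": 128,
  "command": ["node", "server.js"]
}
```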
I have the same issue. For me, it turned out that the entrypoint took care of running the command, so the app still started. The underlying issue remains, but it might be interesting to compare what your entrypoint looks like when you inspect the image versus when you inspect the container.
See also:
What is the difference between CMD and ENTRYPOINT in a Dockerfile?
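To make the linked distinction concrete, a hypothetical Dockerfile sketch (not the asker's actual file): an exec-form ENTRYPOINT still runs when the orchestrator supplies no command, while a bare CMD is only a default that a task definition can override or null out.

```dockerfile
# Hypothetical example, not the asker's actual Dockerfile.
FROM node:18-alpine
WORKDIR /app
COPY . .

# With only CMD, a task definition's "command" can override or null this:
# CMD ["node", "server.js"]

# With ENTRYPOINT, the process starts even if "command" is null; any
# "command" from the task definition is appended as arguments instead.
ENTRYPOINT ["node", "server.js"]
```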
I am following this tutorial https://prakhar.me/docker-curriculum/ and I am trying to create an EBS component.
For Application version I am uploading a file called Dockerrun.aws.json with the following content:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "myDockerHubId/catnip",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "5000"
    }
  ],
  "Logging": "/var/log/nginx"
}
However, I am getting this problem:
Error
Could not launch environment: Application version is unusable and cannot be used with an environment
Any idea why the configuration file is not good?
I'm pretty sure this image
myDockerHubId/catnip
does not exist on Docker Hub. Make sure you use an existing Docker image from Docker Hub.
I'm trying to deploy multiple Node.js microservices on AWS Elastic Beanstalk, and I want them deployed on the same instance. It's my first time deploying multiple services, so there are some failures I need help with. I tried to package them in Docker containers first, using Docker Compose to manage the structure. It's up and running locally in my virtual machine, but when I deployed it to Beanstalk, I ran into a few problems.
What I know:
I know I have to choose to deploy as a multi-container Docker environment.
The best practice for managing multiple Node.js services is using Docker Compose.
I need a Dockerrun.aws.json for the Node.js app.
I need to create a task definition for that ECS instance.
Where I have problems:
I can only find Dockerrun.aws.json and task_definition.json templates for PHP, so I can't verify whether my Node.js configuration in those two JSON files is in the correct shape.
It seems like docker-compose.yml, Dockerrun.aws.json, and task_definition.json are doing similar jobs. I must keep the task definition, but do I still need Dockerrun.aws.json?
I tried to run the task in ECS, but it stopped right away. How can I check the logs for the task?
I got:
No ecs task definition (or empty definition file) found in environment
because my task always stops immediately. If I could check the logs, troubleshooting would be much easier.
Here is my task_definition.json:
{
  "requiresAttributes": [],
  "taskDefinitionArn": "arn:aws:ecs:us-east-1:231440562752:task-definition/ComposerExample:1",
  "status": "ACTIVE",
  "revision": 1,
  "containerDefinitions": [
    {
      "volumesFrom": [],
      "memory": 100,
      "extraHosts": null,
      "dnsServers": null,
      "disableNetworking": null,
      "dnsSearchDomains": null,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 80,
          "protocol": "tcp"
        }
      ],
      "hostname": null,
      "essential": true,
      "entryPoint": null,
      "mountPoints": [
        {
          "containerPath": "/usr/share/nginx/html",
          "sourceVolume": "webdata",
          "readOnly": true
        }
      ],
      "name": "nginxexpressredisnodemon_nginx_1",
      "ulimits": null,
      "dockerSecurityOptions": null,
      "environment": [],
      "links": null,
      "workingDirectory": null,
      "readonlyRootFilesystem": null,
      "image": "nginxexpressredisnodemon_nginx",
      "command": null,
      "user": null,
      "dockerLabels": null,
      "logConfiguration": null,
      "cpu": 99,
      "privileged": null
    }
  ],
  "volumes": [
    {
      "host": {
        "sourcePath": "/ecs/webdata"
      },
      "name": "webdata"
    }
  ],
  "family": "ComposerExample"
}
I had a similar problem, and it turned out that I had archived the containing folder itself into my Archive.zip file, giving me this structure inside Archive.zip:
RootFolder
- Dockerrun.aws.json
- Other files...
It turned out that once I archived only the RootFolder's contents (and not the folder itself), Amazon Elastic Beanstalk recognized the ECS task definition file.
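The fix above can be sketched programmatically: build the archive so that Dockerrun.aws.json is a top-level entry rather than nested under a folder (file contents here are placeholders):

```python
import io
import zipfile

# Build the deployment bundle in memory; in practice you would write
# Archive.zip to disk and upload it.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    # Correct: the entry name has no leading folder component, so
    # Elastic Beanstalk finds Dockerrun.aws.json at the archive root.
    # The broken layout would be "RootFolder/Dockerrun.aws.json".
    zf.writestr("Dockerrun.aws.json", '{"AWSEBDockerrunVersion": 2}')

with zipfile.ZipFile(buf) as zf:
    print(zf.namelist())  # ['Dockerrun.aws.json']
```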
Hope this helps.
For me, it was simply a case of ensuring the name of the file matched the exact casing as described in the AWS documentation.
dockerrun.aws.json had to be exactly Dockerrun.aws.json
Similar problem. What fixed it for me was using the CLI tools instead of zipping the bundle myself: just running eb deploy worked.
For me it was CodeCommit: after adding the Dockerrun.aws.json to git, it worked.
I got here due to the same error. My issue was that I was deploying with a label using:
eb deploy --label MY_LABEL
What you need to do is quote the label:
eb deploy --label 'MY_LABEL'
I've had this issue as well. For me the problem was that Dockerrun.aws.json wasn't added in git. eb deploy detects the presence of git.
I ran eb deploy --verbose to figure this out:
INFO: Getting version label from git with git-describe
INFO: creating zip using git archive HEAD
It further lists all the files that will go into the zip; Dockerrun.aws.json isn't among them.
git status reports this:
On branch master
Your branch is up to date with 'origin/master'.
Untracked files:
(use "git add <file>..." to include in what will be committed)
Dockerrun.aws.json
nothing added to commit but untracked files present (use "git add" to track)
Adding the file to git and committing helped.
In my specific case I could just remove the .git directory in a scripted deploy.
In my case, I had not committed the Dockerrun.aws.json file after creating it, so using eb deploy failed with the same error.
I created the following simple Dockerrun file per the instructions here using a public container and it's successfully running a single instance.
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "sbeam/influxdb",
    "Update": "true"
  },
  "Ports": [
    {
      "ContainerPort": "8086"
    }
  ],
  "Volumes": [
    {
      "HostDirectory": "/data",
      "ContainerDirectory": "/data"
    }
  ]
}
However, I want the /data directory to be mounted within the EC2 instance as a certain EBS volume. I've found answers (here and here) that indicate a .ebextensions directory is needed, but since I am not uploading a .zip archive for the container, how is this possible? Is it necessary to download the Docker container, add the .ebextensions directory, zip, and re-upload?
You will have to follow option 3: "Create a .zip file containing your application files, any application file dependencies, the Dockerfile, and the Dockerrun.aws.json file." as explained in your first link.
In your case the .zip file may only contain the Dockerrun.aws.json and the .ebextensions folder.
In order to attach the EBS volume to your instance, you can check this article: http://blogs.aws.amazon.com/application-management/post/Tx224DU59IG3OR9/Customize-Ephemeral-and-EBS-Volumes-in-Elastic-Beanstalk-Environments. It gives guidance on the content of the .config file that goes inside the .ebextensions folder.
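As a sketch of what that .config file might contain, assuming the BlockDeviceMappings approach from the linked article (the device name and 100 GB size here are placeholders to adapt):

```yaml
# Hypothetical .ebextensions/ebs.config; adjust device name and size.
option_settings:
  - namespace: aws:autoscaling:launchconfiguration
    option_name: BlockDeviceMappings
    # Attach a 100 GB EBS volume as /dev/sdh on each instance.
    value: /dev/sdh=:100
```

You would still need to format and mount the attached device on the instance so that /data lands on it; the linked article covers that part.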