Deployment on AWS Elastic Beanstalk with Docker fails

I'm developing a web application with the Play framework and running it on AWS Elastic Beanstalk using a single Docker container and a load balancer. Normally everything runs fine, but when I rebuild the whole environment I get the following error:
Command failed on instance. Return code: 6 Output: (TRUNCATED)... in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:11 nginx: [emerg] host not found in upstream "docker" in /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf:24 nginx: configuration file /etc/nginx/nginx.conf test failed.
When I log into the EC2 instance I can see that no Docker container is running, and therefore the Nginx server cannot start. I cannot see any other error in the logs (or maybe I don't know where to look). The strange thing is that the same version worked fine before I rebuilt the environment.
I'm using the following Dockerfile for the deployment:
FROM java
COPY <app_folder> /opt/<app_name>
WORKDIR /opt/<app_name>
CMD [ "/opt/<app_name>/bin/<app_name>", "-mem", "512", "-J-server" ]
EXPOSE 9000
Any ideas what the problem could be or where to check for more details?

I had this same problem. elasticbeanstalk-nginx-docker-proxy.conf refers to proxy_pass http://docker, but the upstream that defines it is missing. You need to add something like
# List of application servers
upstream docker {
server 127.0.0.1:8080; # your app
}
(Make sure it's outside of the server directive.)
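For reference, once the upstream is defined, the relevant part of /etc/nginx/sites-enabled/elasticbeanstalk-nginx-docker-proxy.conf looks roughly like the sketch below; the port is an assumption and should be whatever your container actually listens on (9000 for the Play app in the question, 8080 in my example above).
# upstream lives at http level, outside any server block
upstream docker {
    server 127.0.0.1:8080;    # your app's port on the instance
}

server {
    listen 80;

    location / {
        proxy_pass http://docker;    # the reference the error message complains about
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}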

I have just been working through the same challenge (deploying an updated Docker image to Elastic Beanstalk). It depends on exactly what you want to do, but what I found is that (once you have the EB CLI set up) you can just use the eb deploy command to push out your code changes without worrying about the image at all.
Granted, you'd still want to push your image up to your repo for sharing purposes (with other developers), or if you actually need to change the environment configuration for some reason... but if you're just looking to push code, look into eb deploy; a minimal flow is sketched below.
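For example, assuming the EB CLI is installed and the project has already been linked to an EB application (the environment name here is just a placeholder):
eb init                 # one-time: associate the project with an EB application and region
eb deploy my-env        # zip the current source and deploy it to the named environment
eb status               # check environment health after the deploy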
As for the specifics of your error, unfortunately I can't be of much help there. Good luck!

Related

Deploying simple docker app with docker-compose on Elastic Beanstalk

I have a simple docker app that is able to run for me locally via docker-compose up, and when I send the .yml file to my friend, they are also able to get it up and running on their local machine. However, when I try to deploy it on Elastic Beanstalk, I get errors (specifically, something related to error:open /var/pids/eb-docker-compose-log.pid: no such file or directory, as I'll show below). I've tried to upload multiple times to Elastic Beanstalk, with the same errors. This is a custom app, but they are the same errors I got when I was trying to follow the instructions on https://docker-curriculum.com/#docker-on-aws. Here is the docker-compose.yml for my current app:
version: "3"
services:
server:
image: mfatigati/shop-server
container_name: shop-server
ports:
- "4000:4000"
client:
image: mfatigati/shop-client
depends_on:
- server
ports:
- "3000:3000"
mfatigati/shop-server and mfatigati/shop-client are both Node.js apps, i.e., FROM node:16 in their Dockerfiles.
To deploy this on AWS, I go to my EB console, and then:
Click "Create Application" to take me to the create app screen
Choose "Docker" as the platform
Choose "Upload local code", and upload the above-mentioned .yml file.
Click "Create Application"
Based on the notes here, I think this should be all I need to do (maybe I'm wrong about that?), but I get errors every time that point me to the eb-engine.log file. I've pasted what seems to be the relevant section below, as it is the only section that mentions errors, and it also reflects what appears in the AWS GUI console. The main problem seems to be the bit about error:open /var/pids/eb-docker-compose-log.pid: no such file or directory:
2022/02/14 14:17:23.619888 [ERROR] update processes [cfn-hup eb-docker-events healthd eb-docker-compose-events eb-docker-compose-log docker] pid symlinks failed with error Read pid source file /var/pids/eb-docker-compose-log.pid failed with error:open /var/pids/eb-docker-compose-log.pid: no such file or directory
2022/02/14 14:17:23.619901 [ERROR] An error occurred during execution of command [app-deploy] - [Track pids in healthd]. Stop running the command. Error: update processes [cfn-hup eb-docker-events healthd eb-docker-compose-events eb-docker-compose-log docker] pid symlinks failed with error Read pid source file /var/pids/eb-docker-compose-log.pid failed with error:open /var/pids/eb-docker-compose-log.pid: no such file or directory
2022/02/14 14:17:23.619905 [INFO] Executing cleanup logic
2022/02/14 14:17:23.620005 [INFO] CommandService Response: {"status":"FAILURE","api_version":"1.0","results":[{"status":"FAILURE","msg":"Engine execution has encountered an error.","returncode":1,"events":[{"msg":"Instance deployment failed. For details, see 'eb-engine.log'.","timestamp":1644848243,"severity":"ERROR"}]}]}
Any insight would be greatly appreciated! I'm pasting some screenshots below, in case that helps.
(Screenshots: GUI corresponding to step 2; GUI corresponding to step 3; GUI errors.)
I had the same error:open /var/pids/eb-docker-compose-log.pid: no such file or directory error for my Docker Compose app when trying to deploy it to my Elastic Beanstalk environment. I'm not sure my solution is the one you need, but I hope it points you in the right direction (and helps future devs facing a similar problem).
What caused the error for me:
This ...eb-docker-compose-log.pid: no such file... error was a misleading error triggered by a separate issue: my real problem was that my application code couldn't find the environment variables set in my Elastic Beanstalk environment. See below for how I found the problem and what I did to fix it.
How I found my real problem:
I downloaded the Full Logs:
go to your EB environment
click Logs on the left nav
click the Request Logs dropdown button (at the top right)
click Full Logs
click the Download link once the full logs are ready
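If you prefer the EB CLI, something along these lines should pull down the same full log bundle (I used the console myself, so treat this as a pointer rather than something I verified here):
eb logs --all        # saves the complete log bundle locally under .elasticbeanstalk/logs/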
Inside the logs, I found the real problem in the eb-docker/containers/eb-current-app/eb-stdouterr.log file: my application code wasn't able to find the environment variables that were set up in my Elastic Beanstalk Software configuration.
In case you're curious what my error said:
panic: required key ONE_OF_MY_ENV_KEYS missing value
(I also had a couple other errors in this log that I fixed, but fixing the error shown above is what ended up solving the ...eb-docker-compose-log.pid: no such file... error).
How I fixed this error:
It turns out that if you use docker-compose.yml, then while setting up your environment variables in your Elastic Beanstalk Software configuration you have to make sure you use the .env file that Elastic Beanstalk creates for you; otherwise (from my own testing), EB only sees/uses the environment variable keys/values you specify in your own .env file or in the environment: list in docker-compose.yml.
NOTE: see the Elastic Beanstalk "Environment properties and Environment Variables" and "Referencing environment variables in containers" sections in the docs here, in particular this bit:
"Elastic Beanstalk generates a Docker Compose environment file called .env in the root directory of your application project. This file stores the environment variables you configured for Elastic Beanstalk.
Note
If you include a .env file in your application bundle, Elastic Beanstalk will not generate an .env file."
I solved my problem by updating my docker-compose.yml file to point to the supposed .env file that EB would create for me (by adding env_file: .env to my services that needed it), i.e.:
version: "3"
services:
my_service1:
# ...
env_file: .env
my_service2:
# ...
env_file: .env

Starting NginX with my modified nginx.conf on ECS

I have an environment in AWS with an ECS cluster, an EFS file system, and some services running on this cluster.
One of my services is the Nginx web server, which I use to serve our site and our services. To keep some sensitive and static configuration files we have chosen EFS, so each service creates a volume from this EFS and mounts it every time a container starts.
The problem is with Nginx. I want to store my nginx.conf file in an EFS folder, and after the Nginx service starts, I want the container to copy this file into the /etc/nginx/ folder so that Nginx starts with my configuration.
I've tried building my own image that includes my configuration, with success, but this is not what we want: it means we would have to build a new image every time we want to change a line in nginx.conf.
I've tried creating a script that runs every time the container starts and copies my configuration, but I didn't manage to make it work on ECS. Either Nginx failed to reload, or the syntax was wrong, or the file was not available.
#!/bin/bash
cp /efs/nginx.conf /etc/nginx/
nginx -s reload
I considered creating a cron job to run every X minutes and copy my nginx.conf to /etc/nginx, but this seems to be a stupid approach.
I made about 60 different task definition revisions trying to figure out how the CMD/Entry point environment options work on ECS. Most of them had to do with the syntax, and I got back errors like "invalid option: bash" or "invalid option: /tmp/1.sh", etc.
Samples:
1. Command: ["cp","/efs/nginx.conf /etc/nginx/"]
2. Entry point: ["nginx","-g","daemon off;"]
   Command: ["cp /efs/nginx.conf /etc/nginx/"]
Entry point: ["nginx","-g","daemon off"]
Command: ["/bin/sh","cp","/efs/nginx.conf/","/etc/nginx/"]
Command: ["[\"cp\"","\"/efs/nginx.conf\"","\"/etc/nginx/\"]","[\"nginx\"","\"-g\"","\"daemon off;\"]"]
Command: ["cp /efs/nginx.conf /etc/nginx/","nginx -g daemon off;"]
Command: ["cp","/efs/nginx.conf /etc/nginx/","nginx -g daemon off;"]
Does anyone know how to do this, or has anyone already implemented this solution on ECS: replacing /etc/nginx/nginx.conf with a modified one from a bind-mounted volume?
Thanks in advance
SOLUTION:
As I mentioned in my question above, I'd like to use a static nginx.conf file, stored in an EFS folder, inside my Nginx service container.
The image itself is built from a Dockerfile as simple as this
FROM nginx
EXPOSE 80
RUN mkdir /etc/nginx/html
Through the ECS task definition I create a volume and then a mount point, which is an easy process and works fine. The problem was in the entrypoint field, which is supposed to contain the path to my script.
In the ECS task definition's Environment entrypoint field I put
sh,-c,/efs/docker-cmd-nginx.sh
and my script is just the following
#!/bin/dash
cp /efs/nginx.conf /etc/nginx/ &&
nginx -g "daemon off;"
PS: The problem was probably one (or more) of the following:
in my script I originally didn't put double quotes around just the daemon off; part, but instead quoted the whole line nginx -g daemon off;
my script was trying to reload Nginx, which was not even running yet;
my attempt to put the commands separately in my task's entrypoint was wrong, syntax-wise for sure and maybe strategy-wise as well.
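For reference, in JSON task-definition form the working entrypoint above ends up looking roughly like the sketch below; the container name, image, and volume name are placeholders, not my actual values:
"containerDefinitions": [
  {
    "name": "nginx",
    "image": "nginx:latest",
    "entryPoint": ["sh", "-c", "/efs/docker-cmd-nginx.sh"],
    "mountPoints": [
      { "sourceVolume": "efs-config", "containerPath": "/efs" }
    ]
  }
]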

Elastic Beanstalk Procfile for go

I'm trying to deploy my Go RESTful server program to EC2 Linux using Elastic Beanstalk. The documentation says that I need to create a Procfile at the root. So I did. Here are the steps:
Build my Go program myapp.go using
$ go build -o myapp -i myapp.go
Create a Procfile with that exact name at the root, containing
web: myapp
Zip up the Procfile and the myapp binary into a myapp.zip file.
Upload it via the Elastic Beanstalk console. But I keep getting Degraded health and a warning with
WARN Process termination taking longer than 10 seconds.
Any suggestions? By the way, I tried the same Procfile procedure on the simple application.go zip file that came from the Elastic Beanstalk example library. It didn't work either.
I was finally able to get a Go application to deploy with Elastic Beanstalk using the eb client. There are a few things that EB requires:
The name of your main file should be application.go.
Make sure your app is listening on port 5000.
You'll need a Procfile in the main root with
web: bin/application
You'll need a Buildfile with
make: ./build.sh
And finally you'll need a build.sh file with
#!/usr/bin/env bash
# -e: stop if any command fails; -x: print each command as it runs
set -xe
# All of the dependencies needed/fetched for your project.
# FOR EXAMPLE:
go get "github.com/gin-gonic/gin"
# create the application binary that eb uses
GOOS=linux GOARCH=amd64 go build -o bin/application -ldflags="-s -w"
Then if you run eb deploy (after initializing your EB application), it should work. I wrote a whole tutorial for deploying a Gin application on EB here. The section specifically on deploying with Elastic Beanstalk is here.
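For reference, the basic EB CLI flow looks roughly like this; the application and environment names are placeholders, and the exact Go platform string can vary by CLI version:
eb init -p go my-go-app     # choose the Go platform; the platform string may differ
eb create my-go-env         # create the environment the first time
eb deploy                   # redeploy after each code change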

How to deploy Play on Amazon Beanstalk keeping /public editable for a single page application?

I am looking for alternative methods of deploying a Play application to Elastic Beanstalk. It is a single-page app that relies on Ember.js. It would be nice to be able to edit the contents of the /public folder so I don't need to rebuild the Docker image every time something is fixed on the Ember side that doesn't affect the Play app itself.
I am currently using sbt's docker:stage command and zipping the generated docker folder along with this Dockerfile and Dockerrun.
Dockerfile
FROM java:latest
WORKDIR /opt/docker
ADD stage /
RUN ["chown", "-R", "daemon:daemon", "."]
EXPOSE 9000
USER daemon
ENTRYPOINT ["bin/myapp", "-Dconfig.resource=application-prod.conf"]
CMD []
Dockerrun
{
"AWSEBDockerrunVersion": "1",
"Ports": [{ "ContainerPort": "9000" }],
"Volumes": []
}
Once I zip the files I upload them using the Beanstalk console. But this involves rebuilding the app every time a typo is fixed on the front end. It is annoying because it means all the updated front-end code has to wait until I get a chance to push it up so the boss can view it and give feedback. It would be nice if there was a way to have the /public folder (Play just serves /public/index.html) accessible so the front-end dev could access it directly for his edits.
Ideally I would like a method that can be used for both development and production. I don't know what requirements Beanstalk imposes so that it can properly spin up extra instances of the app when needed. Maybe something where, when the instance starts, it does a git pull on the back-end repo and a git pull on the front-end repo, then runs my custom Ember build script to generate the /dist folder, moves it into Play's /public folder, and creates gzips of each file, then starts the Play app. The front-end dev could then ssh into the development instance and do git pull and ember build as needed for his edits.
It would also be nice for the development instance to run the Play server using run or ~run so I can just do a git pull and have it rebuild the back end.
Or maybe I am approaching this in the completely wrong way. I have never done any of this before so I am sort of guessing my way through all of it.
Thanks for any suggestions and pointers in the correct direction.
Adam
Edit
Since we are really only using Play as a RESTful API, would it be better to just run an Nginx/Apache server on something like EC2 and use Beanstalk to handle the Play app, without Play serving any content besides API calls? I would assume the EC2 Nginx instance could be pretty tiny, since only the first access pulls files from the HTTP server; after that it is all API calls. Then we run the Play app from Beanstalk so it can handle load balancing for the API. This at least saves me from rebuilding the image for front-end edits. Would this be a more correct setup?
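Something along these lines is what I have in mind for the Nginx side; the paths, upstream address and the /api prefix are placeholders, not anything from my current setup:
server {
    listen 80;
    root /var/www/ember/dist;           # built Ember app lives here
    index index.html;

    location / {
        try_files $uri /index.html;     # single-page-app fallback
    }

    location /api/ {
        proxy_pass http://play-backend:9000;    # Play app behind Beanstalk/ELB
        proxy_set_header Host $host;
    }
}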

Best way to deploy play2 app using Amazon Beanstalk

I found fragmented instructions here and in some other places about deploying a Play 2 app on Amazon EC2, but did not find any neat way to deploy it using Beanstalk.
Play is a nice framework and AWS Beanstalk is one of the most popular services, so why is there no official instruction for doing this?
Has anyone found a better solution?
Deploying a Play 2 app on Elastic Beanstalk is now easy with Docker containers in combination with sbt's experimental Docker feature.
In build.sbt specify the exposed docker ports:
dockerExposedPorts in Docker := Seq(9000)
You should automate the following steps, but you can try this out manually to test that it works:
Generate a Dockerfile for the project by running the command: sbt docker:stage.
Go to the ./target/docker/ directory.
Create an elastic beanstalk Dockerrun.aws.json file with the following contents:
{
"AWSEBDockerrunVersion": "1",
"Ports": [
{
"ContainerPort": "9000"
}
]
}
Zip up everything in that directory, let's say into a file called play2-test-docker.zip. The zip file should contain the files: Dockerfile, Dockerrun.aws.json, and files/* directory.
Go to the AWS Beanstalk console and create a new application using m3.medium or any instance type with enough memory for the JVM to run. Any instance with too little memory will result in a JVM error.
Select "Docker Container" in the Predefined Configuration dropdown.
In the application selection screen, select "Upload" and select the zip file you created earlier. Launch the app and then go brew some tea. This can take a very long time. Minutes. Subsequent deployments of the same app version should be slightly quicker.
Once the app is running and green in the aws console, click on the app's url and you should see the welcome screen of the application (or whatever your index file is).
Here's my solution that doesn't require any additional services/containers like Docker or Jenkins.
Create a dist folder in the root of your Play application's directory. Create a Procfile file containing the following contents and put it in the dist folder (EB requires port 5000):
web: ./bin/YOUR_APP_FILE_NAME -Dhttp.port=5000 -Dconfig.file=conf/application.conf
The YOUR_APP_FILE_NAME is the name of the executable in the bin directory, which is inside the .zip created by activator dist.
After running activator dist, you can just upload the created zip file into Elastic Beanstalk and it will automatically deploy the app. You can also put whatever .ebextensions folders and configuration files you need for Elastic Beanstalk configuration into the dist folder. E.g. I have dist/.ebextensions/nginx/conf.d/proxy.conf for Nginx reverse proxy settings and dist/.ebextensions/env.config for environment variables.
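As an illustration, an env.config for environment variables can be as small as this (the keys and values are placeholders):
option_settings:
  aws:elasticbeanstalk:application:environment:
    APPLICATION_SECRET: changeme
    SOME_API_KEY: some-value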
Edit 2016: There's now a much better way to deploy your Playframework apps onto ElasticBeanstalk using the new Java SE containers.
Here's an article that walks you through deploying step by step using Jenkins to build and deploy your project:
https://www.davemaple.com/articles/deploy-playframework-elastic-beanstalk-jenkins/
You can use custom AMIs that I keep updated here:
https://github.com/davemaple/playframework-nginx-elastic-beanstalk
These run Nginx + Playframework and support standard zip files created using "activator dist".
We also saw this as being too much of a pain and have added native Play 2 support to Boxfuse to address this.
You can now simply do boxfuse run my-play-app-1.0.zip -env=prod and this will automatically:
create a minimal AMI tailor-made for your Play 2 app
create an elastic IP
create a security group with the correct permissions
launch an instance of your app
All future updates are performed as blue/green deployments with zero downtime.
This also works with Elastic Load Balancers and Auto-Scaling Groups and the Boxfuse free tier is designed to fit the AWS free tier.
You can read more about it here: https://boxfuse.com/blog/playframework-aws
Disclaimer: I'm the founder and CEO of Boxfuse
I had some problems with other solutions found here and there. I guess that the problem is that I'm developing on Play 2.4.
Anyway, I could deploy the app to Beanstalk using Typesafe Activator and Docker:
In build.sbt I added these lines:
import com.typesafe.sbt.packager.docker.{ExecCmd, Cmd}
// [...]
dockerCommands := Seq(
Cmd("FROM","java:openjdk-8-jre"),
Cmd("MAINTAINER","myname"),
Cmd("EXPOSE","9000"),
Cmd("ADD","stage /"),
Cmd("WORKDIR","/opt/docker"),
Cmd("RUN","[\"chown\", \"-R\", \"daemon\", \".\"]"),
Cmd("RUN","[\"chmod\", \"+x\", \"bin/myapp\"]"),
Cmd("USER","daemon"),
Cmd("ENTRYPOINT","[\"bin/myapp\", \"-J-Xms128m\", \"-J-Xmx512m\", \"-J-server\"]"),
ExecCmd("CMD")
)
I went to the project's directory and ran this command in the terminal
$ ./activator clean docker:stage
I opened the [project]/target/docker directory and created the file Dockerrun.aws.json. This was its content:
{
"AWSEBDockerrunVersion": "1",
"Ports": [
{
"ContainerPort": "9000"
}
]
}
In the same target/docker directory, I tested the result: I built, checked, and ran the image:
$ docker build -t myapp .
$ docker images
$ docker run -p 9000:9000 myapp
As everything was ok, I zipped the content:
$ zip -r myapp.zip *
My zip file had Dockerfile, Dockerrun.aws.json and stage/* files
Finally, I created a new Beanstalk app and uploaded the zip created in the last step. I took care to select "Generic Docker" under "Predefined configuration" when creating the app.
Beanstalk only supports WAR deployment and Play doesn't officially support WAR deployment. If you want to use EC2 then you should instead just create an EC2 instance and follow the deployment instructions: http://www.playframework.com/documentation/2.2.x/ProductionDist
Deploying Play 2.* apps on AWS EC2 is quite different until you find this much better way to do it: Ansible promises a great solution for this, though it still requires setting up Ansible and its playbook, which should be worth the effort.
I found these reads very recently and have yet to apply them in my project. I hope the following will help you learn more:
Ansible + Play + AWS EC2
Read it to learn more about using Ansible to deploy Play on AWS.
Thanks!
I hope this helps kick-start your deployment. Please do share any knowledge you gain during the process, or any simpler way to solve this complicated deployment problem.