How to run a Procfile on an EC2 instance for a Rails application? - ruby-on-rails-4

I want to run a Procfile on an EC2 instance.
Procfile content:
worker: bundle exec sidekiq -e production
It works locally; I used Foreman with the commands below on my local system:
foreman start -f Procfile
foreman start
But on EC2 I have no idea; I tried Foreman there, but after running the start command it just gets stuck.
The main goal is to auto-start Sidekiq on an EC2 instance behind a load balancer; any other suggestions are also welcome.
Currently I have to start Sidekiq manually after auto-scaling in AWS.
The sidekiq start command is in the launch-configuration script (user data). All the other script commands work except the one that starts Sidekiq.
Any help would be appreciated.
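One hedged approach (not from the question itself): let Foreman export the Procfile to the instance's init system from the user-data script, so the worker starts on boot and is respawned if it dies. The app path, app name, and user below are assumptions; adjust them to your deployment.
# Hedged sketch for the launch-configuration (user data) script, which runs as root.
# Assumes the app lives in /var/www/myapp and should run as the "deploy" user.
cd /var/www/myapp
gem install foreman                                                # or keep foreman in the Gemfile
foreman export upstart /etc/init -f Procfile -a myapp -u deploy    # use "systemd" instead on Amazon Linux 2
start myapp                                                        # the init system now owns and respawns the sidekiq worker
Once the jobs are exported, no separate sidekiq start line is needed in user data; the init system restarts the worker after boot or a crash.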

Related

ElastiCache Redis, Django celery, how to on EC2 instance AWS?

So I have set up Redis with Django Celery on AWS; when I run a task manually it works fine, e.g.:
$ python manage.py shell
>>> from myapp.tasks import my_task
Now of course I want these to run on deploy, and to make sure they keep running, especially after a deploy.
But when I start the beat from my EC2 instance the same way I do locally, it starts, but the scheduled triggers never fire.
Why exactly?
I would like to run these at deploy time, for example:
command : /var/app/current/celery -A project worker -l info
command : /var/app/current/celery -A project beat -l info
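Not an answer from the thread, just a hedged sketch of one way to keep those two commands running on the instance: write a systemd unit for each from your deploy hook. The unit name and the assumption that systemd is available are mine; the path and command come from the question.
# Hedged sketch: create a unit for the worker; repeat with "beat" in place of "worker".
sudo tee /etc/systemd/system/celery-worker.service > /dev/null <<'EOF'
[Unit]
Description=Celery worker
After=network.target

[Service]
WorkingDirectory=/var/app/current
ExecStart=/var/app/current/celery -A project worker -l info
Restart=always

[Install]
WantedBy=multi-user.target
EOF
sudo systemctl daemon-reload
sudo systemctl enable --now celery-worker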

Unable to run selenium chrome as ECS Jenkins slave : exec: "-url": executable file not found in $PATH

I am using the ECS plugin and the EC2 plugin for Jenkins.
I have set up a task definition which is mapped to use the latest selenium chrome standalone image.
Jenkins tries to spin up the slave, but the ECS slave task never reaches the running state. It goes to the stopped state with the error below:
container_linux.go:380: starting container process caused: exec: "-url": executable file not found in $PATH
If anyone knows why this is happening, please help.

Sidekiq Upstart on AmazonLinux 2018.03

My goal is to add the sidekiq service to upstart on AmazonLinux 2018.03.
Since I want to upgrade Sidekiq to version 6, the process needs to be managed by the OS, for example via Upstart.
I put a file at /etc/init/sidekiq.conf from here.
After that, the initctl list | grep sidekiq command shows nothing, so I tried sudo initctl reload-configuration, but nothing changed.
The status sidekiq command shows status: Unknown job: sidekiq.
What else do I need to do to add the sidekiq service to upstart?
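A hedged checklist (not from the question) of things to verify when Upstart refuses to see the job:
ls -l /etc/init/sidekiq.conf              # the job name comes from the file name, which must end in .conf
init-checkconf /etc/init/sidekiq.conf     # validate the syntax if the helper is installed (run it as a non-root user)
sudo initctl reload-configuration
sudo initctl list | grep sidekiq          # the job should now appear as stop/waiting
sudo initctl start sidekiq
Also note that Sidekiq 6 removed daemonization, so the conf's exec line has to run sidekiq in the foreground (no -d flag).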

Elastic Beanstalk Procfile for go

I'm trying to deploy my go restful server program to EC2 Linux using Elastic Beanstalk. The document says that I need to create a Procfile at the root. So I did. Here are the steps:
Build my Go program myapp.go using:
$ go build -o myapp -i myapp.go
Create a Procfile with that exact name at the root containing:
web: myapp
Zip up the Procfile and the myapp binary into a myapp.zip file.
Upload it via the Elastic Beanstalk console. But I keep getting Degraded health and a warning:
WARN Process termination taking longer than 10 seconds.
Any suggestions? By the way, I tried the same Procfile procedure on the simple application.go zip file that came from the Elastic Beanstalk example library. It didn't work either.
I was finally able to get a Go application to deploy with Elastic Beanstalk using the eb client. There are a few things that EB requires:
The name of your main file should be application.go.
Make sure your app is listening on port 5000.
You'll need a Procfile in the main root with
web: bin/application
You'll need a Buildfile with
make: ./build.sh
And finally you'll need a build.sh file with
#!/usr/bin/env bash
# -e stops the script if any command fails; -x echoes each command as it runs
set -xe
# All of the dependencies needed/fetched for your project.
# FOR EXAMPLE:
go get "github.com/gin-gonic/gin"
# create the application binary that eb uses
GOOS=linux GOARCH=amd64 go build -o bin/application -ldflags="-s -w"
Then if you run eb deploy (after creating your initial eb repository), it should work. I wrote a whole tutorial for deploying a Gin application on EB here. The section specifically on deploying with Elastic Beanstalk is here.
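As a hedged recap of the layout this answer describes (the application and environment names in the eb commands are placeholders):
# Expected project layout:
#   application.go
#   Procfile     -> web: bin/application
#   Buildfile    -> make: ./build.sh
#   build.sh
eb init my-go-app -p go      # creates the initial eb repository/config
eb create my-go-app-env      # provisions the environment
eb deploy                    # runs build.sh, then starts bin/application via the Procfile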

Is supervisord needed for docker+gunicorn+nginx?

I'm running Django with Gunicorn inside Docker; my Docker entry point is:
CMD ["gunicorn", "myapp.wsgi"]
Assuming there is already a process that starts the container when the system boots and restarts it when it stops, do I even need supervisord? If gunicorn crashes, won't that stop the container, which will then be restarted anyway?
The only time you need something like supervisord (or another process supervisor) in a Docker container is if you need to start multiple independent processes inside the container when it starts.
For example, if you needed to start both nginx and gunicorn in the same container, you would need to investigate some sort of process supervisor. However, a much more common solution would be to place these two services in two separate containers. A tool like docker-compose helps manage multi-container applications.
If a container exits because the main process exits, Docker will restart that container if you configured a restart policy when you first started it (e.g., via docker run --restart=always ...).
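For example, a hedged sketch of leaning on the restart policy instead of a supervisor (the image and container names are placeholders):
docker run -d --name myapp --restart=always myapp-image
# If gunicorn (the container's main process) crashes, the container exits and Docker starts it again.
docker inspect -f '{{ .RestartCount }}' myapp    # check how many times it has been restarted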
The simple answer is no. And yes, you can start both nginx and gunicorn in the same container. You can either create a script that runs everything your container needs and start it with CMD at the end of your Dockerfile (sketched after the example below), or you can combine everything like so:
CMD (cd /usr/src/app && \
nginx && \
gunicorn wsgi:application --config ../configs/gunicorn.conf)
Hope that helps!
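A hedged sketch of the wrapper-script variant mentioned above (the paths mirror the CMD example; adjust them to your image):
#!/usr/bin/env bash
# start.sh -- started from the Dockerfile, e.g. COPY start.sh /start.sh and CMD ["/start.sh"]
set -e
cd /usr/src/app
nginx                                                              # nginx forks into the background by default
exec gunicorn wsgi:application --config ../configs/gunicorn.conf   # gunicorn stays in the foreground as PID 1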