Using puma on a Rails app; it sometimes dies without any particular reason, and it also often dies (does not restart after being stopped) when deployed.
What would be a good way to monitor whether the process died, and restart it right away?
Since this is needed for a Rails app, it'd be useful to have a way to define it for any app.
I did not find any usable way to do it (I looked into systemd and other Linux daemons… no success).
Thanks for any feedback.
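For the monitoring/auto-restart part, a minimal systemd unit along these lines is one option; the service name, user, and paths here are assumptions, not from the original post:
# /etc/systemd/system/puma.service -- illustrative sketch
[Unit]
Description=Puma application server
After=network.target

[Service]
Type=simple
User=deploy
WorkingDirectory=/var/www/myapp/current
ExecStart=/usr/local/bin/bundle exec puma -C config/puma.rb
# Restart the process whenever it dies, for any reason
Restart=always

[Install]
WantedBy=multi-user.target
After sudo systemctl daemon-reload, sudo systemctl enable --now puma starts it and keeps restarting it on crashes.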
You can use pumactl to start/stop the puma server. If you know where the puma.pid file is placed (on a Mac it's usually "#{Dir.pwd}/tmp/pids/puma.pid"), you can do:
bundle exec pumactl -P path/puma.pid stop
To set the pid file path or other options (like daemonizing), you can create a puma config. You can find an example here, and a minimal sketch follows the commands below. Then you can start and stop the server with just the config file:
bundle exec pumactl -F config/puma.rb start
You can also restart and check status in this way:
bundle exec pumactl -F config/puma.rb restart
bundle exec pumactl -F config/puma.rb status
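For reference, a minimal config/puma.rb along these lines covers the options mentioned above; the values are illustrative, not from the original answer:
# config/puma.rb -- minimal sketch
pidfile "tmp/pids/puma.pid"       # where pumactl -P looks for the pid
state_path "tmp/pids/puma.state"  # state file used by pumactl -F
bind "tcp://0.0.0.0:3000"
workers 2                         # cluster mode with 2 forked workers
threads 1, 5                      # min/max threads per worker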
My goal is to add the sidekiq service to upstart on Amazon Linux 2018.03.
Since I want to upgrade sidekiq to version 6, the process needs to be managed by the OS, e.g. with upstart.
I put a file at /etc/init/sidekiq.conf from here.
After that, the initctl list | grep sidekiq command shows nothing, so I tried sudo initctl reload-configuration, but nothing changed.
The status sidekiq command shows status: Unknown job: sidekiq.
What else do I need to do to add the sidekiq service to upstart?
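In case it helps to compare, a minimal /etc/init/sidekiq.conf along these lines is typical; the user and paths are assumptions, and the su wrapper is for older upstart versions that lack the setuid stanza:
# /etc/init/sidekiq.conf -- illustrative sketch
description "sidekiq worker"

start on runlevel [2345]
stop on runlevel [06]

respawn

# sidekiq 6 no longer daemonizes itself, so it must run in the foreground
exec /bin/su - deploy -c 'cd /var/www/myapp/current && exec bundle exec sidekiq -e production'
A syntax error in the .conf file is one common reason initctl reload-configuration silently ignores it, so it's worth re-checking the file if the job still doesn't appear in initctl list.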
I am running supervisor/celery on an amazon aws server. Attempting to deploy a new application version eventually fails because the celery processes are not started. I have taken a look at the supervisord.conf file to ensure that the programs are included, which they are. At the end of the supervisord.conf file I have the following include:
[include]
files=celeryd.conf
files=flower.conf
I try to restart celery with
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart celeryd-default celeryd-slowtasks
celeryd-default and celeryd-slowtasks being the names of the programs listed in celeryd.conf. I get the following error:
celeryd-default: ERROR (no such process)
celeryd-slowtasks: ERROR (no such process)
celeryd-default: ERROR (no such process)
celeryd-slowtasks: ERROR (no such process)
If I run
sudo /usr/local/bin/supervisorctl -c /opt/python/etc/supervisord.conf restart all
I get
flower: stopped
httpd: stopped
httpd: started
flower: started
without any mention of celery. Any idea how to start figuring this issue out?
Check /opt/python/etc/supervisord.conf; you are probably including a folder that you don't expect to be included.
Also ensure that the instance of supervisor that is running is actually using the config file you expect.
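Judging from the excerpt above, the [include] section itself is also suspect: supervisord expects a single files key whose value is a space-separated list of patterns, so two files= lines won't both take effect (typically the later one wins, which would explain why flower is managed but the celeryd programs are unknown). Something like:
[include]
files = celeryd.conf flower.conf
After fixing it, supervisorctl reread followed by supervisorctl update makes the new programs available.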
How do I run StrongLoop's Loopback with Forever so that the app is automatically restarted after every change?
So far just running forever server/server.js doesn't seem to work...
Maybe you should run it with the watch flag, like:
forever -w entrypoint.js
Thanks. I have found that the best script is this:
"scripts": {
"start": "forever --verbose --uid \"myapp\" --watch --watchDirectory ./server server/server.js"
},
Each part means:
--verbose: Log all details (useful when developing new routes)
--uid \"myapp\": So that "myapp" will appear when you do a forever list
--watch: Watch for file changes
--watchDirectory ./server: The folder to watch for changes
server/server.js: The app entry point
Additionally I launch it with nohup npm start & so that the process keeps running in the background and the output is appended to a nohup.out file.
I have never run into this before because I could always just run the dev server, open a new tab in the terminal, and curl from there. I can't do that now because I am running the Django development server from a Docker container, so if I open a new tab I will be in the local shell and not the Docker container.
How can I leave the development server running and still be able to curl or run other commands?
When I run the development server I'm left with this message:
Django version 1.10.3, using settings 'test.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
and so I'm unable to type any commands.
You can use & to run the server as a background job in the current shell:
$ python manage.py runserver &
[1] <pid>
$
You can use the fg command to get back direct control over the runserver process, then you can stop it as usual using Ctrl+C.
To set a foreground process as a background job, you can pause it using Ctrl+Z and then run the bg command. You can see the list of running background jobs in the current shell using the jobs command.
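Put together, the whole flow looks roughly like this (job numbers and output are illustrative):
$ python manage.py runserver
^Z
[1]+  Stopped    python manage.py runserver
$ bg
[1]+ python manage.py runserver &
$ jobs
[1]+  Running    python manage.py runserver &
$ curl http://127.0.0.1:8000/
$ fg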
The difference with screen is that this will run the server in the current shell. If you exit the shell, the server will stop as well, while screen uses a separate process that will continue after you exit the current shell.
In a development environment you can also do the following.
Let the server run in one terminal window.
Open a new terminal window/tab and run
docker exec -it <Container ID/Name> /bin/bash
It will give you interactive access to your container, i.e. you can execute any command in your container rather than in your local shell.
Type exit to drop out of the container shell back to your local shell.
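You can also run a one-off command without an interactive shell, assuming curl is installed in the image:
docker exec <Container ID/Name> curl http://127.0.0.1:8000/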
I'm trying to deploy my Rails 4 app using Capistrano 3. I'm getting error messages when running the db migrations (I've been sloppy, sorry). Is there a way to have Capistrano deploy the app (at least the first time) using db:schema:load instead?
An excerpt of my deploy.rb:
namespace :deploy do
  %w[start stop restart].each do |command|
    desc 'Manage Unicorn'
    task command do
      on roles(:app), in: :sequence, wait: 1 do
        execute "/etc/init.d/unicorn_#{fetch(:application)} #{command}"
      end
    end
  end
end
I'm not sure how to override Capistrano 3's default behaviour. Can someone tell me how to add this to my script?
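One way to sketch it (this is not built-in Capistrano behaviour; the task name and the rails_env lookup are assumptions) is a custom task that you invoke instead of deploy:migrate on the first deploy:
# deploy.rb -- illustrative sketch
namespace :deploy do
  desc 'Load the schema instead of running migrations (first deploy only)'
  task :load_schema do
    on roles(:db) do
      within release_path do
        with rails_env: fetch(:rails_env) do
          # db:schema:load is destructive -- only run it against a fresh database
          execute :rake, 'db:schema:load'
        end
      end
    end
  end
end
Then run it once with bundle exec cap production deploy:load_schema.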
For first-time deploys, I generally hack around it by logging into the server, cd'ing into the release directory (which will have the deployed code at this point), and then manually running RAILS_ENV=yourenv bundle exec rake db:setup.
In Capistrano 3.10.1 with a Rails 5.1.6 application,
~/Documents/p.rails/perla-uy[staging]$ bundle exec cap staging deploy:updating
gives me enough to shell in and run the db:structure:load or db:schema:load task manually. In the secure shell session on the host, switch to the newly created release directory and:
dclo#localhost:~/perla-uy/releases/20180412133715$ bundle install --without development test --deployment
dclo#localhost:~/perla-uy/releases/20180412133715$ bundle exec rails db:schema:load
Shelling into a (successful or failed) deploy that has tried deploy:migrate isn't quite the same.
Note: I have RAILS_ENV=production and RAILS_MASTER_KEY=... set up by the shell login.