How to check if a Django background process is running on the server?

The command to start the background process is:
nohup python manage.py process_tasks &
Similarly, which Linux command can I use to check whether it is still running?

Are you using Celery? And what version of Django are you using?
If so, try this:
Docs: http://docs.celeryproject.org/en/latest/userguide/workers.html?highlight=revoke#inspecting-workers
$ celery inspect reserved
$ celery inspect active
$ celery inspect registered
$ celery inspect scheduled
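If you are not using Celery and only started the process_tasks worker with nohup as above, a plain Linux check is to look for the process itself (standard ps/pgrep, nothing project-specific assumed):
$ pgrep -af "manage.py process_tasks"
$ ps aux | grep "[p]rocess_tasks"
If pgrep prints a PID, the worker is still running; the [p] trick keeps grep from matching its own command line.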

Related

ElastiCache Redis and Django Celery: how to set this up on an AWS EC2 instance?

I have set up Redis with Django Celery on AWS, and when I run the task manually, e.g.:
$ python manage.py shell
>>> from myapp.tasks import my_task
it works fine.
Now, of course, I want the worker and beat to run on deploy, and ideally to make sure they are always running.
But when I start beat from my EC2 instance the same way I do locally, it starts, yet the scheduled triggers never fire. Why exactly?
I would like to run these at deploy time, for example:
command : /var/app/current/celery -A project worker -l info
command : /var/app/current/celery -A project beat -l info
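Absent a proper process manager, one quick way to keep these alive after the deploy shell exits is the same nohup pattern used in the main question above (the log paths here are just placeholders):
$ nohup /var/app/current/celery -A project worker -l info > /tmp/celery-worker.log 2>&1 &
$ nohup /var/app/current/celery -A project beat -l info > /tmp/celery-beat.log 2>&1 &
A supervisor such as supervisord or a systemd unit is the more robust option for "always running".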

How can I curl 127.0.0.1:8000 while the Django development server is running?

I have never run into this before because I can always just run the dev server, open a new terminal tab, and curl from there. I can't do this now because I am running the Django development server from a Docker container, so if I open a new tab I will be in the local shell and not in the container.
How can I leave the development server running and still be able to curl or run other commands?
When I run the development server I'm left with this message:
Django version 1.10.3, using settings 'test.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
and so I am unable to type any further commands.
You can use & to run the server as a background job in the current shell:
$ python manage.py runserver &
[1] <pid>
$
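With the server running in the background, the same shell is free for other commands, for example:
$ curl http://127.0.0.1:8000/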
You can use the fg command to get back direct control over the runserver process, then you can stop it as usual using Ctrl+C.
To move a foreground process to a background job, pause it with Ctrl+Z and then run the bg command. You can list the running background jobs in the current shell with the jobs command.
The difference with screen is that this will run the server in the current shell. If you exit the shell, the server will stop as well, while screen uses a separate process that will continue after you exit the current shell.
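Put together, the suspend-and-resume workflow looks roughly like this (exact job-control output varies slightly between shells):
$ python manage.py runserver
^Z
[1]+  Stopped    python manage.py runserver
$ bg
$ jobs
[1]+  Running    python manage.py runserver &
$ fg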
In a development environment you can also do the following.
Let the server run in one terminal window.
Open a new terminal window/tab and run
docker exec -it <Container ID/Name> /bin/bash
It will give you interactive access to your container, i.e. you can execute any command in your container rather than in your local shell.
Type exit to leave the container shell and return to your local shell.
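For example, assuming the container is named django_dev (a hypothetical name; use the ID or name shown by docker ps):
$ docker exec -it django_dev /bin/bash
# now inside the container:
curl http://127.0.0.1:8000/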

AWS EB, Play Framework and Docker: Application Already running

I am running a Play 2.2.3 web application on AWS Elastic Beanstalk, using SBT's ability to generate Docker images. Uploading the image from the EB administration interface usually works, but sometimes it gets into a state where I consistently get the following error:
Docker container quit unexpectedly on Thu Nov 27 10:05:37 UTC 2014:
Play server process ID is 1 This application is already running (Or
delete /opt/docker/RUNNING_PID file).
And deployment fails. I cannot get out of this by doing anything other than terminating the environment and setting it up again. How can I prevent the environment from getting into this state?
Sounds like you may be running into the infamous PID 1 issue. Docker uses a new PID namespace for each container, which means the first process gets PID 1. PID 1 is a special ID which should be used only by processes designed for it. Could you try using Supervisord instead of having the Play application run as the primary process and see if that resolves your issue? Hopefully Supervisord handles Amazon's termination commands better than the Play framework does.
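Separate from the Supervisord suggestion, one workaround for the "already running" message itself is an entrypoint script that clears the stale RUNNING_PID before starting the app. This is only a sketch; the binary path assumes the default sbt-native-packager layout and your app's name:
#!/bin/sh
# Hypothetical entrypoint: remove a PID file left behind by a previous
# container run so Play does not refuse to start.
rm -f /opt/docker/RUNNING_PID
/opt/docker/bin/<your-app-name>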
@dkm I was having the same issue with my dockerized Play app. I package my apps as a standalone distribution for production using the `sbt clean dist` command. This produces a .zip file that you can deploy to some folder in your Docker container, like /var/www/xxxx.
Get a bash shell into your container: $ docker run -it <your image name> /bin/bash
Example: docker run -it centos/myapp /bin/bash
Once the app is there, you'll have to create an executable bash script. I called mine startapp, and its contents should be something like this:
Create the script file in the docker container:
$ touch startapp && chmod +x startapp
$ vi startapp
Add the execute command & any required configurations:
#!/bin/bash
/var/www/<your app name>/bin/<your app name> -Dhttp.port=80 -Dconfig.file=/var/www/pointflow/conf/<your app conf. file>
Save the startapp script; then, from a new terminal, commit your changes to your container's image so they will be available from here on out:
Get the running container's current ID:
$ docker ps
Commit/Save the changes
$ docker commit <your running containerID> <your image's name>
Example: docker commit 1bce234 centos/myappsname
Now for the grand finale you can docker stop or exit out of the running container's bash. Next start the play app using the following docker command:
$ docker run -d -p 80:80 <your image's name> /bin/sh startapp
Example: docker run -d -p 80:80 centos/myapp /bin/sh startapp
Run docker ps to see if your app is running. You should see something similar to this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
19eae9bc8371 centos/myapp:latest "/bin/sh startapp" 13 seconds ago Up 11 seconds 0.0.0.0:80->80/tcp suspicious_heisenberg
Open a browser and visit your new dockerized app
Hope this helps...

sudo /etc/init.d/celeryd start generates a "Unknown command: 'celeryd_multi'"

I'm setting up celery to run daemonized, using the variables from my virtual environment. But when I run $ sudo /etc/init.d/celeryd start, I get Unknown command: 'celeryd_multi' Type 'manage.py help' for usage.
I have set the following:
CELERYD_CHDIR="/home/myuser/projects/myproject"
ENV_PYTHON="/home/myuser/.virtualenvs/myproject/bin/python"
CELERYD_MULTI="$ENV_PYTHON $CELERYD_CHDIR/manage.py celeryd_multi"
When I run $ /home/myuser/.virtualenvs/myproject/bin/python /home/myuser/projects/myproject/manage.py celeryd_multi from the command line, it works fine.
Any ideas? I will gladly post any other code you need :)
Thank you!
Maybe you just set the wrong DJANGO_SETTINGS_MODULE:
Try DJANGO_SETTINGS_MODULE="settings" vs. DJANGO_SETTINGS_MODULE="project.settings", depending on your project layout.
The problem here is that when you run it as your user, the virtualenv already has the proper environment activated for "myuser", and it pulls packages from /home/myuser/.virtualenvs/myproject/...
When you run sudo /etc/init.d/celeryd start, you are starting Celery as root, which probably does not have a virtualenv activated in /root/.virtualenvs/ (if such a thing even exists). It therefore looks for Python packages in /usr/lib/..., where your default Python lives and, consequently, where your Celery is not installed.
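A quick diagnostic sketch (nothing project-specific assumed) is to compare what your user and root actually import:
$ python -c "import celery; print(celery.__file__)"
$ sudo python -c "import celery; print(celery.__file__)"
If the second command fails to import celery, or points into the system site-packages rather than your virtualenv, the init script is not using your virtualenv's Python.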
Your options are to either:
1. Replicate the same virtualenv under the root user and start it with sudo like you tried.
2. Keep the virtualenv where it is and start Celery as your user "myuser" (no sudo), without using init scripts.
3. Write a script that runs su - myuser -c '/bin/sh /home/myuser/.virtualenvs/myproject/bin/celeryd' so that init.d invokes it as myuser (see the sketch after this list).
4. Install supervisor outside of the virtualenv and let it do the dirty work for you.
Thoughts:
1. Avoid using root for anything you don't have to.
2. If you don't need Celery to start on boot, this is fine, possibly wrapped in a script.
3. Plain hackish to me, but it works if you don't want to invest an additional 30 minutes in something else.
4. Probably the best way to handle ALL of your Python startup needs; highly recommended.
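A minimal sketch of the wrapper from option 3, reusing the paths from the question (the celeryd_multi arguments are assumptions; adapt them to your setup):
#!/bin/sh
# Hypothetical /etc/init.d-style wrapper: run celeryd_multi as "myuser"
# with the virtualenv's Python instead of root's system Python.
su - myuser -c "/home/myuser/.virtualenvs/myproject/bin/python /home/myuser/projects/myproject/manage.py celeryd_multi start worker1"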

Why does the Django application exit when the Jenkins job ends?

I run the following command in bash to start a Django application, and it keeps running without any problems even if I exit that shell.
python manage.py runfcgi daemonize=true ...
When Jenkins runs the same command, the Django application starts just as it does from bash. But why is the application killed when the job ends?
I would guess that Jenkins starts a new shell session for each job, and then closes it when the job is complete. This will terminate any processes started in that session.
If you want a process to persist after closing the session, you can start it with nohup:
nohup python manage.py runfcgi daemonize=true ...
I had a similar problem in the past using fabric - the service would terminate even if I set the daemonize flag to true. I used nohup to work around it.
I found a solution here and it works for me:
https://wiki.jenkins-ci.org/display/JENKINS/ProcessTreeKiller
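For reference, the workaround that page describes is to override the BUILD_ID environment variable so Jenkins' process tree killer leaves the spawned process alone, roughly:
BUILD_ID=dontKillMe nohup python manage.py runfcgi daemonize=true ...
(The exact value only needs to differ from the job's real build ID.)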