Running Django-Celery in Production

I've built a Django web application and some Django-Piston services. Through a web interface a user submits data that is POSTed to a web service, and that web service in turn uses django-celery to start a background task.
Everything works fine in the development environment using manage.py. Now I'm trying to move this to production on a proper Apache server. The web application and web services work fine in production, but I'm having serious issues starting celeryd as a daemon. Based on these instructions: http://docs.celeryproject.org/en/latest/tutorials/daemonizing.html#running-the-worker-as-a-daemon I've created a celeryconfig.py file and put it in the /usr/bin directory (this is where celeryd is located on my Arch Linux server).
CELERYD_CHDIR="/srv/http/ControllerFramework/"
DJANGO_SETTINGS_MODULE="settings"
CELERYD="/srv/http/ControllerFramework/manage.py celeryd"
However, when I try to start celeryd from the command line I get the following error:
celery.exceptions.ImproperlyConfigured: Missing connection string! Do you have CELERY_RESULT_DBURI set to a real value?
Not sure where to go from here. Below is my settings.py section as it pertains to this problem:
BROKER_HOST = "localhost"
BROKER_PORT = 5672
BROKER_USER = "controllerFramework"
BROKER_PASSWORD = "******"
BROKER_VHOST = "localhost"
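For reference, these individual BROKER_* settings correspond to a single broker URL of the form amqp://user:password@host:port/vhost, which newer Celery versions accept as BROKER_URL. A sketch using the values above (the password placeholder is not the real value):

```python
# settings.py -- equivalent single-URL form of the BROKER_* settings above
# (<password> is a placeholder; substitute the real broker password)
BROKER_URL = "amqp://controllerFramework:<password>@localhost:5672/localhost"
```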

So I ended up having a chat with the project lead on django-celery. A couple of things: first, celery must be run via 'manage.py celeryd'; second, in the settings.py file you have to 'import djcelery'.
This import issue may be fixed in the next version, but for now you have to do this.
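For what it's worth, a minimal settings.py sketch of that fix from the django-celery era (the setup_loader() call is the standard djcelery idiom; adjust to your project layout):

```python
# settings.py (django-celery era) -- sketch
import djcelery
djcelery.setup_loader()  # lets the celeryd worker locate Django settings and tasks

INSTALLED_APPS = (
    # ... your apps ...
    'djcelery',
)
```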

Related

Python Flask Web application not able to deploy on Heroku using Waitress server

I am trying to deploy my Flask application on the Heroku PaaS cloud service. It works on localhost with no errors. I am using git for version control and pushing to Heroku. Initially I used 'gunicorn' as the web server, but later learned it is meant for UNIX-like systems, so I switched to 'waitress'. I read some posts and tried everything in the Procfile. My Procfile reads: web: waitress-serve --port=$PORT app:app. To the best of my knowledge, the first app in this line is the module and the second app is the name of the instance. I even changed app to myapp and website, and I have all the mentioned packages installed. But my application is not getting deployed on Heroku; it gives Application Error. When I check the logs using heroku logs --tail, I get the error shown in the screenshot. Any help would be highly appreciated. I have been trying this for 23 hours. Heroku Logs
The Procfile looks correct, but the problem could be that the file (app.py) and the variable (app) have the same name.
I suggest the following approach in app.py:
import logging
import os

from flask import Flask

# define Flask app
def create_app():
    try:
        web_app = Flask(__name__)
        logging.info('Starting up..')
        return web_app
    except Exception as e:
        logging.exception(e)

# retrieve port
def get_port():
    return int(os.environ.get("PORT", 5000))

# start Flask app
if __name__ == '__main__':
    web_app = create_app()
    web_app.run(debug=False, port=get_port(), host='0.0.0.0')
The application can be launched from the Procfile with
web: waitress-serve --port=$PORT --call 'app:create_app'

What is an efficient way to develop Airflow plugins? (without restarting the webserver for each change)

I am looking for an efficient way to develop plugins within Airflow.
Current behavior: I change something in Python files e.g. test_plugin.py, reload the page in browser and nothing happens until I restart the webserver. This is most annoying and time consuming.
Desired behavior: I change something in Python files and the change is reflected after reloading the app in the browser.
As Airflow is based on Flask, and in Flask the desired behavior is achievable by running Flask in debug mode (export FLASK_DEBUG=1, then start the Flask app): can I achieve the Flask behavior somehow in Airflow?
It turns out that this was indeed a bug with the Airflow CLI's webserver --debug mode; future versions will have the expected behavior.
Issue: https://issues.apache.org/jira/browse/AIRFLOW-5867
PR: https://github.com/apache/airflow/pull/6517
In order to run Airflow with live reloading, run the following command (1.10.7+):
$ airflow webserver --debug
In contrast to the code modification suggested by @herrjeh42, make sure that your configuration does not include unit_test_mode = True in order to enable reloading.
Cheers!
You can force reloading of the Python code by starting the airflow webserver in debug & reload mode. As of Airflow 1.10.5 I had to modify airflow/bin/cli.py (in my opinion the line is buggy).
old:
app.run(debug=True, use_reloader=False if app.config['TESTING'] else True,
new:
app.run(debug=True, use_reloader=True if json.loads(app.config['TESTING'].lower()) else False,
Change in airflow.cfg
unit_test_mode = True
Start the webserver with
airflow webserver -d
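The json.loads trick in the modified line works because a value read from an .ini-style config file arrives as a string, and any non-empty string (including "False") is truthy in Python. A minimal sketch:

```python
import json

raw = "False"  # a boolean as it may arrive from a config file
print(bool(raw))                # True: non-empty strings are always truthy
print(json.loads(raw.lower()))  # False: parsed as an actual boolean
```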

Completing uWSGI/Nginx setup for Django site

So a few months ago I set up a django blog on an Ubuntu server with DigitalOcean using this tutorial:
https://www.digitalocean.com/community/tutorials/how-to-serve-django-applications-with-uwsgi-and-nginx-on-ubuntu-16-04
The only problem was that this was done as a brand-new blog, and I wanted to replace it with my own blog from my local computer. I tried to do this by uploading files via ssh to copy over the old ones, ended up making a mess, and had to scrap it.
I just started again on a new server and have done the basic setup, along with cloning my own django blog onto the server from GitHub and installing postgresql. Now I'm following this:
https://uwsgi-docs.readthedocs.io/en/latest/tutorials/Django_and_nginx.html
So far I have completed the following successfully:
- installed uwsgi
- ran the test.py 'hello world' file successfully with:
uwsgi --http :8000 --wsgi-file test.py
- test-ran the django site on the server with:
python manage.py runserver my_ip_here:8000
(the above appears to be working, as I can see the bare basics of my site, but not CSS etc.)
- did a test run of the site with:
uwsgi --http :8000 --module mysite.wsgi
- ran collectstatic, which seems to have been successful
- installed nginx; I can see the 'Welcome to nginx!' page when visiting the IP
- created the mysite_nginx.conf file in my main directory
What I'm having problems with:
- this part from the readthedocs tutorial doesn't work for me when I visit the path of any of my images:
"To check that media files are being served correctly, add an image called media.png to the /path/to/your/project/project/media directory, then visit example.com:8000/media/media.png - if this works, you'll know at least that nginx is serving files correctly."
- running this does NOT return the 'hello world' test that I should see:
uwsgi --socket :8001 --wsgi-file test.py
- after making the appropriate changes in mysite_nginx.conf, this doesn't return anything at :8000 either (my .sock file is present, though):
uwsgi --socket mysite.sock --wsgi-file test.py
Some other things to add:
my error.log at /var/log/nginx/error.log is empty, with no messages in it; I'm not sure if this is normal
this is my mysite_nginx.conf file - http://pastebin.com/CGcc8unv
when I run this command as specified by the readthedocs tutorial
uwsgi --socket :8001 --wsgi-file test.py
and then go to mysite:8001 I get these errors in the shell
invalid request block size: 21573 (max 4096)...skip
invalid request block size: 21573 (max 4096)...skip
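As an aside, that specific number is consistent with sending plain HTTP to a socket that speaks the binary uwsgi protocol: uwsgi reads a 4-byte packet header (modifier1, a little-endian 2-byte datasize, modifier2), so the first bytes of an HTTP request line get misread as a header. A quick sketch:

```python
import struct

# First four bytes of an HTTP request line, misread as a uwsgi packet header:
# 1-byte modifier1, 2-byte little-endian datasize, 1-byte modifier2.
modifier1, datasize, modifier2 = struct.unpack('<BHB', b'GET ')
print(datasize)  # 21573 -- the "invalid request block size" from the error
```

This is why --socket endpoints must sit behind nginx (or be tested with uwsgi_curl), while --http endpoints can be hit from a browser directly.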
I set up the symlink as the readthedocs tutorial specified and have double-checked it.
I do not have an nginx .ini file yet, as the readthedocs tutorial hasn't instructed me to create one at the point I've reached.
As I said earlier, I can still see my site by running some uwsgi commands, and I can see the 'Welcome to nginx' message at my site/IP.

How to test a python eval statement in UWSGI application's ini config?

As far as I can tell, my eval statement within a uWSGI app config isn't working/executing, but I cannot figure out how to test this.
OS: Debian GNU/Linux 7.1 (wheezy)
UWSGI: 1.2.3-debian
Python: 2.7
I'm actually trying to set up New Relic's application monitoring with the following in my app.ini file (using the application-mounting method for a Django app):
[uwsgi]
chdir = /home/app-user/myapp/bin
wsgi-file = django.wsgi
socket = 127.0.0.1:3031
pythonpath = /home/app-user/myapp/src
logto = /var/log/uwsgi/app/myapp.log
enable-threads = true
single-interpreter = true
eval = import newrelic.agent, django.wsgi; newrelic.agent.initialize('/path/to/newrelic.ini'); application = newrelic.agent.wsgi_application()(django.wsgi.application)
My newrelic.ini conf:
log_file = /tmp/newrelic-python-agent.log
After restarting and making some requests to the app (which is up and running as usual), the newrelic log_file is not even created, and there is nothing in the uwsgi app log or the django log, so I can't tell what is happening in the eval.
I've tried putting outright syntactically incorrect stuff in the eval, but uwsgi still restarts successfully.
Is there a way to validate what's in the eval statement as executed by the uwsgi process?
I'm late to the party, but your problem was that the wsgi-file option made eval useless. (The same goes for the module option - this is the case I had.)
So, to make uWSGI wrap any WSGI application with a middleware, you just have to remove the offending options. I.e.:
; DON'T USE THIS: wsgi-file=myproject/wsgi.py
; NEITHER THIS: module=myproject.wsgi
eval=import myproject.wsgi, myfancymw; application = myfancymw.wrap(myproject.wsgi.application)
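The pattern in that eval line is ordinary WSGI middleware. Here is a self-contained sketch of what a wrap-style factory does (all names invented for illustration):

```python
# A minimal WSGI app plus a middleware factory mirroring the
# application = myfancymw.wrap(...) pattern above. All names are hypothetical.
def application(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

def wrap(app):
    def middleware(environ, start_response):
        environ['myfancymw.seen'] = True  # e.g. where instrumentation hooks in
        return app(environ, start_response)
    return middleware

application = wrap(application)  # what the eval line rebinds
```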
It sounds like there is a lot going on here. You might want to open a ticket with newrelic at
https://support.newrelic.com so they can investigate what is happening in your setup.

Getting broker started with django-celery

This is my first time using Celery, so this might be a really easy question. I'm following the tutorial. I added BROKER_URL = "amqp://guest:guest@localhost:5672/" to my settings file. I added the simple task to my app. Now I'm "starting the worker process" with
manage.py celeryd --loglevel=info --settings=settings
(The --settings=settings part is needed on Windows machines, where django-celery can't find the settings otherwise.)
I get
[Errno 10061] No connection could be made because the target machine actively refused it. Trying again in 2 seconds...
So it seems like the worker is not able to connect to the broker. Do I have to start the broker? Is it automatically started with manage.py runserver? Do I have to install something besides django-celery? Do I have to do something like manage.py runserver BROKER_URL?
Any pointers would be much appreciated.
You need to install a broker first, or try to use the Django DB as the broker.
But I do not recommend using the Django DB in production. Redis is OK, but it may be a problem to run it on Windows.
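If you want to try the Django DB transport mentioned above, a hedged settings.py sketch from the django-celery/kombu era (suitable for development only):

```python
# settings.py -- use the Django database as the broker (development only)
BROKER_URL = "django://"

INSTALLED_APPS = (
    # ... your apps ...
    'djcelery',
    'kombu.transport.django',  # provides the database-backed transport
)
```

This avoids running RabbitMQ or Redis locally, at the cost of polling the database for messages.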