I noticed that after I corrected my code I would still get the wrong (old) output at first, but not for long; after a little while it would return the right result of my program. Is that normal?
It sounds like you are not restarting the built-in Django development server manually after making the changes:
The development server automatically reloads Python code for each request, as needed. You don't need to restart the server for code changes to take effect. However, some actions like adding files don't trigger a restart, so you'll have to restart the server in these cases.
As the documentation says, sometimes in order to see the changes you need to restart the server manually.
Also, the Django dev server's reloader sometimes doesn't see the changes right away, and it takes some time for the server to notice them and trigger a restart. If you see this often, restart the server manually.
Also note that in Django 1.7, kernel signals are used to autoreload the server on Linux; this should make it pick up changes and restart faster:
Changed in Django 1.7:
If you are using Linux and install pyinotify, kernel signals will be used to autoreload the server (rather than polling file modification timestamps each second). This offers better scaling to large projects, reduction in response time to code modification, more robust change detection, and battery usage reduction.
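If you are on Linux with Django 1.7 or later, enabling this is simply a matter of installing the package into your environment:

pip install pyinotify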
Related
I'm running Apache with Django and mod_wsgi enabled, in 2 different processes.
I read that the second process is an on-change listener for reloading code on change, but for some reason the ready() function of my AppConfig class is being executed twice. This function should only run once.
I understand that running django runserver with the --noreload flag resolves the problem in development mode, but I cannot find a solution for this in production mode on my Apache web server.
I have two questions:
How can I run with only one process in production, or at least make only one process run the ready() function?
Is there a way to make the ready() function run not in a lazy mode? By this, I mean execute only on server startup, not on the first request.
For further explanation, I am experiencing a scenario as follows:
The ready() function creates a folder listener (using pyinotify, for example). That listener watches a folder on my server and enqueues a task on any change.
I am seeing this listener executed twice on any changes to a single file in the monitored directory. This leads me to believe that both processes are running my listener.
No, the second process is not an onchange listener - I don't know where you read that. That happens with the dev server, not with mod_wsgi.
You should not try to prevent Apache from serving multiple processes. If you do, the speed of your site will be massively reduced: it will only be able to serve a single request at a time, with others queued until the first finishes. That's no good for anything other than a toy site.
Instead, you should fix your AppConfig. Rather than blindly spawning a listener, you should check to see if it has already been created before starting a new one.
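As a rough sketch of that idea (the app name is a placeholder, and this only guards against ready() being called twice within the same process):

# myapp/apps.py -- illustrative only
from django.apps import AppConfig

class MyAppConfig(AppConfig):
    name = 'myapp'  # assumed app name
    _listener_started = False  # class-level guard, one per process

    def ready(self):
        # Skip if this process has already started the listener.
        if MyAppConfig._listener_started:
            return
        MyAppConfig._listener_started = True
        # ...start your pyinotify folder listener here...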
You shouldn't prevent spawning multiple processes, because it's a good thing, especially in a production environment. You should consider using some external tool, separate from Django, or add a check for whether the folder listener is already running (for example, by monitoring the existence of a PID file and its contents).
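A sketch of the PID-file approach (the file path and the way the listener is started are assumptions, and this simple version is not race-free):

# pid_guard.py -- hypothetical helper for starting the listener only once
import os

PID_FILE = '/tmp/folder_listener.pid'  # assumed location

def listener_already_running():
    try:
        with open(PID_FILE) as f:
            pid = int(f.read().strip())
        os.kill(pid, 0)  # signal 0 only checks that the process exists
        return True
    except (IOError, ValueError, OSError):
        return False

def start_listener_once():
    if listener_already_running():
        return
    with open(PID_FILE, 'w') as f:
        f.write(str(os.getpid()))
    # ...start the pyinotify watcher here...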
Are there good alternatives to the Django development server (runserver) that are more performant, especially regarding concurrency and static file serving, and that still have the auto-reload function, without having to set up a full-blown production environment?
I'm working on Windows, so gunicorn cannot be used.
You can install and use the rungevent command. It has an auto-reload function and is more performant than thread-based servers (it is greenlet-oriented). The only caveat is static file serving: you must install a web server or proxy like nginx for that.
Are you running such high-volume tests against your dev server that you actually suffer from this, especially regarding static files? If so, then you must emulate, as said, a production environment (just have nginx correctly configured to point to the address:port you use for your rungevent command).
If static files are not your problem, install the rungevent command and try how it works.
No. Since dev sites are made to handle a limited number of requests, runserver runs fine on a machine that can match the requirements of your app.
If you are dealing with a large-scale dev project which your system cannot tolerate, then it's either time to reproduce a production environment or to upgrade.
I find it difficult to believe that your application is that bad in terms of performance; again, if you are trying to test the behavior of a full production site (in terms of DB entries etc.) then it's time to emulate the production environment.
If that is not the case, then I would start checking the underlying models / code of the project.
Well, if you don't want to use the Django dev server, you will have to spend some time on setup anyway. But the good part is that you only have to do it once. Subsequent deployments will take very little time.
Not long ago I switched from FastCGI to uWSGI and it made my life much easier.
uWSGI is awesome! It has autoreload (which works both in daemon mode and when launched directly in a terminal). When launched in a terminal you can use a debugger (e.g. pdb) during a request, just like you do in the Django dev server. And of course you can debug with print in simple cases.
I'm using it with nginx, which serves the static files and proxies everything else to uWSGI, but of course it can be any server.
The most useful feature for me in this configuration is that you use the same thing for both dev and production. For simple projects, after development you just turn off autoreload and a few other options and it's ready.
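For example, a minimal development invocation might look like this (the module name and port are placeholders; --py-autoreload is the option that polls your Python files for changes, and you would simply drop it in production):

uwsgi --http :8000 --module mysite.wsgi --py-autoreload 2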
I wanted to load some data and keep it in memory, as in application scope. Based on this and other posts on Stack Overflow, I've put the required code snippet in settings.py, urls.py and models.py. I also added print statements to see when it gets executed. I see all the print statements in the server log with every request.
The following are the version details:
Linux 2.6.32-358.el6.x86_64
Apache/2.2.15 (Unix)
Django 1.4
Python 2.7.4
Looks like Django is reloading for every request. I also looked into this and confirmed with the admin that MaxRequestsPerChild is NOT 1.
If you are running in mod_wsgi embedded mode, you will have a multi-process configuration, so it can take a while to warm up all the processes with your code. Also, Apache will kill off idle processes, so you will see process churn. What you are seeing may be the result of that.
Add the process ID to your debug output to confirm this.
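For example, something like this at the top of one of the modules you instrumented would show which process handles each request (Python 2.7 syntax, as in the question):

import os
print 'module loaded in process %d' % os.getpid()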
The easiest thing to do is use mod_wsgi daemon mode and restrict yourself to a small fixed number of persistent processes.
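For illustration, a minimal daemon mode setup along those lines might look like this in the Apache configuration (the group name, process/thread counts and path are placeholders):

WSGIDaemonProcess mysite processes=2 threads=15
WSGIProcessGroup mysite
WSGIScriptAlias / /path/to/mysite/wsgi.py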
Also go watch my PyCon talk about this sort of stuff at:
http://lanyrd.com/2013/pycon/scdyzk/
I'm reading http://code.google.com/p/modwsgi/wiki/ReloadingSourceCode, but it seems like way too much work. I've been restarting my apache2 server gracefully whenever I make tweaks to Django code, because it inconsistently picks up the right files and probably relies on cached .pyc files.
I set up Django with mod_wsgi using the steps outlined in this blog post.
It automatically reflects updates (although every now and then there will be a delay of a few minutes; I never figured out why, nor is it much of an inconvenience).
If you are having to restart your Apache server then you can't be using mod_wsgi daemon mode. Use daemon mode, and then simply touching the WSGI script file when an atomic set of changes has been completed isn't that hard, and it is certainly safer than a system which restarts arbitrarily when it detects any single change. If you do want automatic restart based on code changes, that is described in that document as well. For a Django slant on it, read:
http://blog.dscpl.com.au/2008/12/using-modwsgi-when-developing-django.html
http://blog.dscpl.com.au/2009/02/source-code-reloading-with-modwsgi-on.html
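For reference, the daemon-mode reload mentioned above boils down to a single command once an atomic set of changes is in place (the path is a placeholder for your actual WSGI script file):

touch /path/to/mysite/wsgi.py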
What is it about what is documented there which is 'way too much work'?
I run all my Django sites as SCGI daemons. I won't get into the fundamentals of why I do this, but it means that when a site is running, there is a set of processes running from the following command:
/websites/website-name/manage.py runfcgi method=threaded host=127.0.0.1 port=3036 protocol=scgi
All is fine until I want to roll out a new release from the VCS (Bazaar in my case). I made an alias script called up that does the following:
alias up='bzr up; killall manage.py'
It is this generic for one simple reason: I'm lazy. I want one command that I can use for any site to update it. I'm logged into the server most of the time anyway, so I just hop into the root of the right site and call up. The site updates from Bazaar and restarts.
The first downside of this is it kills all the manage.py processes on the machine. Currently 6 sites and growing rapidly. The second (and potentially worse -- at least for end-users) is it's a severely non-graceful restart. If somebody was uploading an image or doing something else with a long connection time, their request would just die on the vine.
So what I'm looking for is suggestions for a single method that:
Is generic for lazy people like me (e.g. I can run it from any site root without having to remember which command I need to call); 'up' is the perfect name.
Only kills the current site. I'm only updating the current site, so only this one should die.
Does the restart in a graceful manner. If possible, it should wait until there are no more active connections. I've no idea how feasible this is.
Instead of killing everything with manage.py in the name, could you write a script for each site that only kills manage.py processes from that site? (Edit: just write the scripts and put them in the root of each site (which you cd to anyway) and run those – still only one command to remember)
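A rough sketch of such a per-site script (untested; it assumes the processes were started with the site's full path on their command line, and it does not address the graceful-restart part):

#!/bin/sh
# up.sh -- hypothetical script, one copy in each site root
SITE_ROOT=$(pwd)
bzr up
# kill only the runfcgi processes belonging to this site
pkill -f "$SITE_ROOT/manage.py runfcgi"
$SITE_ROOT/manage.py runfcgi method=threaded host=127.0.0.1 port=3036 protocol=scgi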
I don't know enough about SCGI or Bazaar to suggest much more than that... My method (I'm lazy too) uses Mercurial and Fabric for deployment: http://stevelosh.com/blog/entry/2009/1/15/deploying-site-fabric-and-mercurial/ – maybe it will give you an idea you can use?