I have a system where I need full dynamic control of URLs, both before and after the request.
I am using signals for this. For the pre-request signal (the one I'm having trouble with), I have a piece of middleware like the following: it sends the signal, lets each receiver check whether the current request.path applies to it, and then goes with the first response it gets. This normally works fine and is fairly elegant:
class PreRouteMiddleWare(object):
    def process_request(self, request):
        url = request.path.strip('/')
        if url == '':
            url = '/'
        pre_routes = pre_route.send(sender=request, url=url)
        for receiver, response in pre_routes:
            if response:
                return response
        return None
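For context, pre_route here is a custom signal. The "first truthy response wins" dispatch that the middleware relies on can be sketched in plain Python (this is a simplified stand-in for illustration, not Django's actual Signal class):

```python
# Minimal sketch of the dispatch semantics the middleware relies on:
# send() calls every connected receiver and returns (receiver, response)
# pairs; the middleware then uses the first truthy response.
class Signal:
    def __init__(self):
        self.receivers = []

    def connect(self, receiver):
        self.receivers.append(receiver)

    def send(self, sender, **kwargs):
        return [(r, r(sender, **kwargs)) for r in self.receivers]

pre_route = Signal()
pre_route.connect(lambda sender, url: None)  # receiver that declines
pre_route.connect(lambda sender, url: "response for %s" % url)

responses = pre_route.send(sender=None, url="blog")
first = next(resp for _, resp in responses if resp)
```

If no receiver returns a truthy value, the middleware above falls through and returns None, letting normal URL routing proceed.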
Now, to register something that happens "pre" the Django routing stack, I do something like this in the app's models.py:
@receiver(pre_route)
def try_things(sender, url, **kwargs):
    try:
        thing = Thing.objects.get(url=url)
        from myapp.views import myview
        return myview(sender, some_args)
    except Thing.DoesNotExist:
        return False
Which also works great on my dev server.
However, the problem arises in production, where I use uWSGI. I start uWSGI (from upstart) like this:
sudo /usr/local/bin/uwsgi --emperor '/srv/*/uwsgi.ini' --enable-threads --single-interpreter
And my uwsgi.ini looks like this:
[uwsgi]
socket = /srv/new/uwsgi.sock
module = wsgi:app
chdir = /srv/new/myapp
virtualenv = /srv/new
env = DJANGO_SETTINGS_MODULE=myapp.settings
uid = wsgi_new
gid = www-data
chmod = 770
processes = 2
What seems to be happening is that each uWSGI process only loads models.py on its first request, which means the first request handled by each process fails to connect the signals. So I get n completely failed requests (where n is the number of processes), because models.py is not loaded at startup the way it is in development.
Am I configuring uWSGI wrong? Is there a better way to force signals to be connected at startup?
Django actually lazily loads stuff. Using the development server gives a false sense of security about how things will work in a real WSGI server because the loading of the management commands by the development server forces a lot of early initialisation that doesn't occur with a production server.
You might read:
http://blog.dscpl.com.au/2010/03/improved-wsgi-script-for-use-with.html
which explains the issue as it occurs in mod_wsgi. Same thing will happen for uWSGI.
Ok, turns out that I needed to make my middleware hook process_view as opposed to process_request:
class PreRouteMiddleWare(object):
    def process_view(self, request, *args, **kwargs):
        url = request.path.strip('/')
        if url == '':
            url = '/'
        pre_routes = pre_route.send(sender=request, url=url)
        for receiver, response in pre_routes:
            if response:
                return response
        return None
And now it works great!
I have two requests, both called from a React front end. One request runs in a loop, returning one image per request; the other registers a user. Both work perfectly on their own. But while the image request is running in its loop, if I register a user from another tab, that request's status stays pending; only when I stop the image requests does the user get registered. How can I run them in parallel?
urls.py
url(r'^create_user/$', views.CreateUser.as_view(), name='CreateAbout'),
url(r'^process_image/$', views.AcceptImages.as_view(), name='AcceptImage'),
views.py
class CreateUser(APIView):
    def get(self, request):
        return Response([UserSerializer(dat).data for dat in User.objects.all()])

    def post(self, request):
        payload = request.data
        serializer = UserSerializer(data=payload)
        if serializer.is_valid():
            instance = serializer.save()
            instance.set_password(instance.password)
            instance.save()
            return Response(serializer.data, status=status.HTTP_201_CREATED)
        return Response(serializer.errors, status=status.HTTP_400_BAD_REQUEST)

class AcceptImages(APIView):
    def post(self, request):
        global video
        stat, img = video.read()
        frame = img
        retval, buffer_img = cv2.imencode('.jpg', frame)
        resdata = base64.b64encode(buffer_img)
        return Response(resdata)
I call these endpoints from React. The second endpoint is called in a loop, and when I register a user from another tab at the same time, that request stays pending and only completes once I stop the image requests. How can I make these two requests run in parallel? I have researched a lot but can't find an appropriate solution. One suggestion is to use Celery, but I don't know whether it solves my problem, and if it does, how I would implement it for the scenario above.
You should first determine whether the bottleneck is the frontend or the backend.
frontend: Chrome can make at most 6 concurrent requests to the same domain (up to HTTP/1.1).
backend: If you use python manage.py runserver, switch to gunicorn or uWSGI. As the Django documentation says, runserver should not be used in production. Set the process and thread counts in gunicorn or uWSGI to 2 or higher and try again.
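For example, concurrency can be raised in a uwsgi.ini like the one earlier in this page (the values here are illustrative, not a recommendation):

```ini
[uwsgi]
; two worker processes, each running two threads, so one
; long-running request no longer blocks all the others
processes = 2
threads = 2
enable-threads = true
```

With gunicorn the equivalent knobs are the --workers and --threads command-line options.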
I have several APIs as sources of data, for example blog posts. What I'm trying to achieve is to send requests to these APIs in parallel from a Django view and gather the results. There is no need to store results in the db; I just need to pass them to my view response. My project is written in Python 2.7, so I can't use asyncio. I'm looking for advice on the best practice to solve this (celery, tornado, something else?), with examples of how to achieve it, because I'm only starting my way in async. Thanks.
A solution is to use Celery: pass your request args to a Celery task, and use AJAX on the front end.
Example:
def my_def(request):
    do_something_in_celery.delay()
    return Response(something)
To check whether a task has finished in Celery, keep the return value of .delay() in a variable:

task_run = do_something_in_celery.delay()

task_run has an .id property; return this .id to your front end and use it to poll the status of the task.
And the function executed by Celery must have the @task decorator:

@task
def do_something_in_celery(*args, **kwargs):
    ...
You will also need a message broker to manage the tasks, such as Redis or RabbitMQ.
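Pointing Celery at the broker is a settings change; a minimal sketch, assuming a local Redis instance (the URL is illustrative, and the exact setting name varies between Celery versions, e.g. BROKER_URL in older releases):

```python
# settings.py (illustrative): point Celery at a Redis broker
CELERY_BROKER_URL = 'redis://localhost:6379/0'
# optional: store task results so the task .id can be polled for status
CELERY_RESULT_BACKEND = 'redis://localhost:6379/0'
```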
Look this URLs:
http://masnun.com/2014/08/02/django-celery-easy-async-task-processing.html
https://buildwithdjango.com/blog/post/celery-progress-bars/
http://docs.celeryproject.org/en/latest/index.html
I found a solution using ThreadPoolExecutor from the concurrent.futures library (backported to Python 2.7 as the futures package; note that the example below is written for Python 3, so on 2.7 you would use urllib2 instead of urllib.request).
import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the URL and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
You can also check out the rest of the concurrent.futures doc.
Important!
The ProcessPoolExecutor class has known (unfixable) problems on Python 2 and should not be relied on for mission critical work.
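The submit/as_completed pattern above yields results as they finish, in completion order. When input order matters, executor.map is a simpler variant; here is a network-free sketch using a toy function:

```python
import concurrent.futures

def square(n):
    return n * n

# executor.map preserves input order, unlike as_completed()
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as executor:
    results = list(executor.map(square, [1, 2, 3, 4]))

print(results)  # [1, 4, 9, 16]
```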
Django==1.11.7
django-tenant-schemas==1.8.0
django-allauth==0.34.0
Multi tenant site using django-tenant-schemas (postgres).
On different tenants, different settings are required.
More specifically, different setting is required for ACCOUNT_EMAIL_VERIFICATION
1 tenant needs ACCOUNT_EMAIL_VERIFICATION = "optional" while another one needs ACCOUNT_EMAIL_VERIFICATION ="mandatory"
Looking at the source code, the setting does not appear to be customisable per tenant; it is fixed for the whole Django site.
-> How can this be done?
You can compute the settings at runtime, since settings.py is simply Python code.
Set that specific code programmatically, using your preferred way. One example:
# predefine the settings per tenant
ACCOUNT_EMAIL_VERIFICATION_PER_TENANT = {
    "tenant_x": "mandatory",
    "tenant_y": "optional",
}

# implement get_tenant
def get_tenant():
    # here be tenant logic
    pass

this_tenant = get_tenant()
ACCOUNT_EMAIL_VERIFICATION = ACCOUNT_EMAIL_VERIFICATION_PER_TENANT[this_tenant]
Or you can have multiple settings files and combine them as you wish. Here's how Django does it.
And if you want to separate the logic from the settings file and have it run before the settings are evaluated, you can inspect the trail of execution when you launch your server (e.g. starting from manage.py) and insert your get_tenant logic somewhere in between. Most probably it will be somewhere around the wsgi.py file, where the application instance gets created and all the Django fun begins.
When it comes to programming, you are always in control.
Solved in following way:
In settings.py:
try:
    ACCOUNT_EMAIL_VERIFICATION = os.environ['ACCOUNT_EMAIL_VERIFICATION_OVERRIDE']
except KeyError:
    ACCOUNT_EMAIL_VERIFICATION = 'mandatory'
In wsgi.py file of the tenant where e-mail verification is optional:
os.environ['ACCOUNT_EMAIL_VERIFICATION_OVERRIDE'] = 'optional'
wsgi files for the other tenants remain unchanged.
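The key detail is ordering: the override must be in the environment before the settings module is imported. A sketch of such a tenant's wsgi.py (module path is illustrative):

```python
import os

# set the override before any Django settings are imported
os.environ['ACCOUNT_EMAIL_VERIFICATION_OVERRIDE'] = 'optional'
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'myapp.settings')

# ...then create the WSGI application as usual, e.g. with
# django.core.wsgi.get_wsgi_application()
```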
Gave the bounty to Adelin as he suggested to look into the wsgi file.
I stumbled upon this situation, and my dynamic solution is a middleware as follows, without hardcoding any tenant names:
from django.conf import settings
from django.db import connection
from django_tenants.utils import get_public_schema_name, get_tenant_model

class TenantSettingsMiddleWare:
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        self.request = request
        self.overload_settings()
        response = self.get_response(request)
        return response

    def overload_settings(self):
        current_schema_obj = get_tenant_model().objects.get(schema_name=connection.schema_name)
        settings.DEFAULT_FROM_EMAIL = 'admin@{}'.format(current_schema_obj.domains.last())
Cheers 🍻🍻🍻
The following code works locally when I use Django's development server, but I am running into intermittent bugs in production with Nginx and Gunicorn.
views.py
def first_view(request):
    if request.method == "POST":
        # not using a django form in the template, so need to parse the request POST
        # create a dictionary with only strings as values
        new_post = {key: val for key, val in request.POST.items() if key != 'csrfmiddlewaretoken'}
        request.session['new_post'] = new_post  # save for use within next view
        # more logic here (nothing involving views)
        return redirect('second_view')

def second_view(request):
    if request.method == 'POST':
        new_post = request.session['new_post']
        # ... more code below
    # render template with form that will eventually post to this view
I will sometimes receive a KeyError after posting to the second view. Based on the documentation on when sessions are saved, the session variable should be saved, since the code modifies the session directly. Also, if I take the sessionid from the error page's debug panel and access the session via Django's API, I can see the 'new_post' session variable:
python manage.py shell
>>> from django.contrib.sessions.backends.db import SessionStore
>>> s = SessionStore(session_key='sessionid_from_debug_panel')
>>> s['new_post']
# dictionary with expected post items
Is there something I'm missing? Thanks in advance for your help!
Ok, I finally figured out the issue.
By default Django uses cached sessions when you create a new project using django-admin startproject project_name_here
In the documentation it warns that caching should only be used in production if using the Memcached cache backend since the local-memory cache backend is NOT multi-process safe. https://docs.djangoproject.com/en/1.11/topics/http/sessions/#using-cached-sessions
The documentation also cautions against local memory caching in the deployment checklist: https://docs.djangoproject.com/en/1.11/howto/deployment/checklist/#caches
I changed the SESSION_ENGINE in settings.py to 'django.contrib.sessions.backends.db' and the error went away. https://docs.djangoproject.com/en/1.11/ref/settings/#session-engine
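For reference, the fix amounts to one line in settings.py:

```python
# settings.py: database-backed sessions are multi-process safe,
# unlike the local-memory cache backend
SESSION_ENGINE = 'django.contrib.sessions.backends.db'
```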
Hope this is helpful to someone else!
I have been assigned to work on a Django project, and I need to know the complete URL of the current request, say http://....
Since urls.py only gives us the 'raw' pattern information, how do I find out the complete URL, i.e.
protocol + domain + parameters
Amit.
Look at this snippet :
http://djangosnippets.org/snippets/1197/
I modified it like this :
from django.contrib.sites.models import RequestSite
from django.contrib.sites.models import Site

def site_info(request):
    site_info = {'protocol': request.is_secure() and 'https' or 'http'}
    if Site._meta.installed:
        site_info['domain'] = Site.objects.get_current().domain
        site_info['name'] = Site.objects.get_current().name
    else:
        site_info['domain'] = RequestSite(request).domain
        site_info['name'] = RequestSite(request).name
    site_info['root'] = site_info['protocol'] + '://' + site_info['domain']
    return {'site_info': site_info}
The if/else is there because of differences between versions of the Django Site API.
This snippet is actually a context processor, so you have to paste it in a file called context_processors.py in your application, then add to your settings :
TEMPLATE_CONTEXT_PROCESSORS = DEFAULT_SETTINGS.TEMPLATE_CONTEXT_PROCESSORS + (
    'name-of-your-app.context_processors.site_info',
)
The + is there to make sure we don't override the default context processors set up by Django, now or in the future; we just add this one to the tuple.
Finally, make sure that you use RequestContext in your views when returning the response, and not just Context. This is explained in the docs.
It's just a matter of using :
def some_view(request):
    # ...
    return render_to_response('my_template.html',
                              my_data_dictionary,
                              context_instance=RequestContext(request))
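In a template rendered with RequestContext, the values from the context processor are then available directly; an illustrative usage (the path is a placeholder):

```html
<!-- illustrative template usage of the site_info context variable -->
<link rel="canonical" href="{{ site_info.root }}/some/path/">
<title>{{ site_info.name }}</title>
```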
HTTPS status would be handled differently by different web servers.
For my Nginx reverse proxy to Apache+WSGI setup, I explicitly set a header that apache (django) can check to see if the connection is secure.
This info would not be available in the URL but in your view request object.
Django uses request.is_secure() to determine whether the connection is secure. How it does so depends on the backend.
http://docs.djangoproject.com/en/dev/ref/request-response/#django.http.HttpRequest.is_secure
For example, for mod_python, it's the following code:
def is_secure(self):
    try:
        return self._req.is_https()
    except AttributeError:
        # mod_python < 3.2.10 doesn't have req.is_https().
        return self._req.subprocess_env.get('HTTPS', '').lower() in ('on', '1')
If you are using a proxy, you will probably find it useful that HTTP Headers are available in HttpRequest.META
http://docs.djangoproject.com/en/dev/ref/request-response/#django.http.HttpRequest.META
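For the reverse-proxy case described above, Django (1.4+) has a built-in setting that tells request.is_secure() to trust a forwarded header. A sketch, assuming the proxy always sets X-Forwarded-Proto and strips any client-supplied copy:

```python
# settings.py: trust the proxy's X-Forwarded-Proto header so
# request.is_secure() returns True for TLS-terminated requests.
# Only safe when the proxy unconditionally sets/strips this header.
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
```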
Update: if you want to log every request along with its protocol, use is_secure() from a middleware:
class LogHttpsMiddleware(object):
    def process_request(self, request):
        if request.is_secure():
            protocol = 'https'
        else:
            protocol = 'http'
        print "%s://www.mydomain.com%s" % (protocol, request.path)
Add LogHttpsMiddleware to your settings.py MIDDLEWARE_CLASSES