For caching reasons I store data in memcache, and I collect the data with Django management commands. When I collect data that way, all the URLs come out relative; for absolute paths I need to pass a request in the serializer context, but I don't have one. How can I solve this problem and get the full path?
For the public view it works well, but when I develop on localhost I have trouble with the URLs.
I tried faking a request object with RequestFactory and the test client, but it didn't work.
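Roughly, the RequestFactory attempt looked like the sketch below (Item and ItemSerializer are made-up names, not my real code):
from django.test import RequestFactory
from myapp.models import Item                 # hypothetical model
from myapp.serializers import ItemSerializer  # hypothetical serializer

def collect_items():
    factory = RequestFactory()
    fake_request = factory.get('/')  # the host of this fake request is just "testserver"
    # Passing the fake request lets hyperlinked fields build URLs, but they end up
    # pointing at http://testserver/... instead of the real domain.
    return ItemSerializer(Item.objects.all(), many=True,
                          context={'request': fake_request}).data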
I am writing tests for my Django application's views and I am a beginner at this. I know that before the tests run, a new database is generated that only contains the data created during the tests, but in my view tests I am making API calls by URL against my own server, which uses my default database rather than the test database, in the following way:
def test_decline_activity_valid_permission(self):
    url = 'http://myapp:8002/api/v1/profile/' + self.profileUUID + '/document/' + \
        self.docUUID + '/decline/'
    response = requests.post(
        url,
        data=json.dumps(self.payload_valid_permission),
        headers=self.headers,
    )
    self.assertEquals(response.status_code, status.HTTP_201_CREATED)
I want to know whether we can use the test database for testing our views or not. And what is the difference between using requests and using the Client?
You could try using Django's LiveServerTestCase. That works like TransactionTestCase but starts up a server on localhost pointing at the test database; the server is started and stopped automatically as part of the test setup and teardown.
You can then configure the URL in your test to point at that local server. Django provides self.live_server_url for accessing the URL of the server.
As mentioned in the comments, Django's test client allows you to test views without making real HTTP requests, whereas the requests library that you're using in your test sends and receives real HTTP requests and responses.
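A sketch of how a test like the one in the question could look on top of LiveServerTestCase (the URL path and payload are illustrative, not taken from the original test):
import requests
from django.test import LiveServerTestCase

class DeclineActivityTests(LiveServerTestCase):
    def test_decline_activity_valid_permission(self):
        # self.live_server_url points at a server backed by the *test* database.
        url = self.live_server_url + '/api/v1/profile/some-uuid/document/some-uuid/decline/'
        response = requests.post(url, json={'reason': 'example'})
        self.assertEqual(response.status_code, 201)
If you don't need real HTTP at all, the same check can be written with self.client.post(...) on a plain TestCase, which also runs against the test database.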
Is it possible to get the HTTP request as a bytestring, as it was transferred over the wire, if you have a Django request object?
The plain text, of course (not the encrypted form if HTTPS is used).
I would like to store the bytestring to analyze it later.
Ideally I would like to access the real bytestring. Creating a bytestring from request.META, request.GET and friends will likely not be the same as the original.
Update: it seems that it is impossible to get at the original bytes. Then the question is: how can I construct a bytestring that roughly looks like the original?
As others have pointed out, it is not possible because Django doesn't interact with the raw request.
You could try reconstructing the request like this:
def reconstruct_request(request):
    # Rebuild the header block from the HTTP_* keys in request.META.
    headers = ''
    for header, value in request.META.items():
        if not header.startswith('HTTP_'):
            continue
        header = '-'.join(h.capitalize() for h in header[5:].lower().split('_'))
        headers += '{}: {}\n'.format(header, value)

    return (
        '{method} {path} HTTP/1.1\n'
        'Content-Length: {content_length}\n'
        'Content-Type: {content_type}\n'
        '{headers}\n'
        '{body}'
    ).format(
        method=request.method,
        path=request.get_full_path(),
        content_length=request.META.get('CONTENT_LENGTH', ''),
        content_type=request.META.get('CONTENT_TYPE', ''),
        headers=headers,
        body=request.body.decode('utf-8', errors='replace'),
    )
Note that this is not a complete example, only a proof of concept.
The basic answer is no. Django doesn't have access to the raw request; in fact, it doesn't even have code to parse a raw HTTP request.
This is because Django's HTTP request/response handling (like that of many other Python web frameworks) is, at its core, a WSGI application (WSGI specification).
It's the job of the frontend/proxy server (like Apache or nginx) and the application server (like uWSGI or gunicorn) to "massage" the request (for example transforming and stripping headers) and convert it into an object that Django can handle.
As an experiment you can actually wrap Django's WSGI application yourself and see what Django gets to work with when a request comes in.
Edit your project's wsgi.py and add some extremely basic WSGI "middleware":
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')


class MyMiddleware:
    def __init__(self, app):
        self._app = app

    def __call__(self, environ, start_response):
        # Break into the debugger so the raw WSGI environ can be inspected.
        import pdb; pdb.set_trace()
        return self._app(environ, start_response)


# Wrap Django's WSGI application
application = MyMiddleware(get_wsgi_application())
Now if you start your devserver (./manage.py runserver) and send a request to your Django application, you'll drop into the debugger.
The only thing of interest here is the environ dict. Poke around in it and you'll see that it's pretty much the same as what you'll find in Django's request.META. (The contents of the environ dict are detailed in this section of the WSGI spec.)
Knowing this, the best you can do is piece together items from the environ dict into something that remotely resembles an HTTP request.
But why? If you have an environ dict, you have all the information you need to replicate a Django request. There's no actual need to translate it back into an HTTP request.
In fact, as you now know, you don't need an HTTP request at all to call Django's WSGI application. All you need is an environ dict with the required keys and a callable through which Django can relay the response.
So, to analyze requests (and even be able to replay them) you only need to be able to recreate a valid environ dict.
To do so in Django the easiest option would be to serialize request.META and request.body to a JSON dict.
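A rough sketch of what capturing and replaying could look like (the capture/replay helpers are my own names; it assumes a UTF-8 body and drops the non-string WSGI keys from request.META):
import io
import json

from django.core.wsgi import get_wsgi_application

def capture(request):
    """Serialize just enough of the request to rebuild an environ later."""
    meta = {k: v for k, v in request.META.items() if isinstance(v, str)}
    return json.dumps({'meta': meta, 'body': request.body.decode('utf-8')})

def replay(captured):
    """Feed a captured request back through Django's WSGI application."""
    data = json.loads(captured)
    body = data['body'].encode('utf-8')

    environ = dict(data['meta'])
    environ['wsgi.input'] = io.BytesIO(body)
    environ['wsgi.errors'] = io.StringIO()
    environ.setdefault('wsgi.url_scheme', 'http')
    environ['CONTENT_LENGTH'] = str(len(body))

    captured_status = []
    def start_response(status, headers):
        captured_status.append((status, headers))

    response_body = b''.join(get_wsgi_application()(environ, start_response))
    return captured_status[0], response_body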
If you really need something that resembles an HTTP request (and you are unable to go a level up, e.g. to the web server, to log this information), you'll just have to piece it together from the information available in request.META and request.body, with the caveat that this will not be a faithful representation of the original HTTP request.
I am using HTSQL with Django. I use the HTSQL shell to check/generate my queries and then use them for rendering data in JSON and raw formats.
My HTSQL shell URL is:
http://127.0.0.1:8000/htsql
so when I want to access data from a table in the HTSQL shell environment, I do,
http://127.0.0.1:8000/htsql/table_name
and to get JSON data,
http://127.0.0.1:8000/htsql/table_name/:json
In the background, the HTSQL shell fetches this data using a GET request. So from my client-side JavaScript/jQuery, I initiate a GET request with its URL in the above format and get my desired JSON data directly.
Everything was fine when I was using the local Django server, but when I deployed my project behind Gunicorn and Nginx, it naturally started to block some of my long (actually, pretty long) queries in the GET requests. I looked into this and found that Gunicorn's request-line limit can be set to at most 8190 characters. I tweaked my Gunicorn settings up to that maximum, but I still had the same problem, because my queries, when combined with several filter values, exceed the 8190-character limit.
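For reference, the Gunicorn settings I was adjusting look roughly like this (an illustrative config file, not my exact one):
# gunicorn.conf.py
limit_request_line = 8190        # maximum size of the HTTP request line, in bytes (8190 is the documented maximum)
limit_request_field_size = 8190  # maximum size of a single request header field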
So I thought I would use a POST request, as that is normally preferred for secure and long requests. I changed my GET request to a POST request pointed at the same URL as above and tried it on my local Django server (i.e. without Gunicorn and Nginx). But now I get "400 BAD REQUEST"; with Firebug, I checked that the response was "POST requests are not permitted."
I also noticed that the htsql_django module routes all requests to htsql_django.views.gateway. I had a look at this gateway function in the views.py of the htsql_django module but couldn't find any clue.
Is it the case that HTSQL doesn't accept POST requests? How can I fetch/access JSON data from HTSQL using a POST request?
I have written a Django view that receives data in JSON format via the POST method, and I return a JSON response to it.
Although my view works perfectly on the local development server, it spits out a 500 Internal Server Error when I POST the data to the nginx/Django setup on my production server.
Thanks in advance for any help!
My client-server application is mainly based on a special-purpose HTTP server that communicates with the client in an Ajax-like fashion, i.e. the client GUI is refreshed through asynchronous HTTP request/response cycles.
The evolvability of the special-purpose HTTP server is limited, and as the application grows, more and more standard features are needed that are provided by Django, for instance.
Hence, I would like to add a Django application as a facade/reverse proxy in order to hide the non-standard special-purpose server and be able to benefit from Django. I would like the Django app to act as a gateway rather than using HTTP redirects, for security reasons and to hide complexity.
However, my concern is that tunneling the traffic through Django on the server might hurt performance. Is this a valid concern?
Would there be an alternative solution to the problem?
Usually in production, you are hosting Django behind a web container like Apache httpd or nginx. These have modules designed for proxying requests (e.g. proxy_pass for a location in nginx). They give you some extras out of the box like caching if you need it. Compared with proxying through a Django application's request pipeline this may save you development time while delivering better performance. However, you sacrifice the power to completely manipulate the request or proxied response when you use a solution like this.
For local testing with ./manage.py runserver, I add a url pattern via urls.py in an if settings.DEBUG: ... section. Here's the view function code I use, which supports GET, PUT, and POST using the requests Python library: https://gist.github.com/JustinTArthur/5710254
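The urls.py side of that looks roughly like this (proxy_view is a placeholder for the view from the gist; its actual name and signature may differ):
from django.conf import settings
from django.conf.urls import url

from myproject.proxy import proxy_view  # hypothetical module holding the gist's view

urlpatterns = [
    # ... regular application patterns ...
]

if settings.DEBUG:
    urlpatterns += [
        url(r'^backend/', proxy_view),  # the view can inspect request.path itself
    ]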
I went ahead and built a simple prototype. It was relatively simple: I just had to set up a view that is mapped to all the URLs I want to redirect. The view function looks something like this:
import urllib2
from urllib import urlencode

import django.http


def redirect(request):
    # `server` holds the host (and optional port) of the proxied backend.
    url = "http://%s%s" % (server, request.path)
    # add GET parameters
    if request.GET:
        url += '?' + urlencode(request.GET)

    # copy the headers of the incoming request
    # see https://docs.djangoproject.com/en/1.7/ref/request-response/#django.http.HttpRequest.META
    # for details about the request.META dict
    def convert(s):
        s = s.replace('HTTP_', '', 1)
        s = s.replace('_', '-')
        return s

    request_headers = dict((convert(k), v) for k, v in request.META.iteritems()
                           if k.startswith('HTTP_'))
    # add content-type and content-length
    request_headers['CONTENT-TYPE'] = request.META.get('CONTENT_TYPE', '')
    request_headers['CONTENT-LENGTH'] = request.META.get('CONTENT_LENGTH', '')

    # get the original request payload (raw_post_data was removed in Django 1.6)
    if request.method == "GET":
        data = None
    else:
        data = request.body

    downstream_request = urllib2.Request(url, data, headers=request_headers)
    page = urllib2.urlopen(downstream_request)
    response = django.http.HttpResponse(page)
    return response
So it is actually quite simple and the performance is good enough, in particular if the redirect goes to the loopback interface on the same host.