I'm working through this book about TDD with Django.
I get different behaviour from self.client.get('/') than from self.browser.get('http://localhost:8000'), even though the two calls look like they should do the same thing.
class FirstTest(unittest.TestCase):

    def setUp(self):
        self.browser = webdriver.Chrome(os.path.join(os.getcwd(), 'chromedriver'))

    def test_home_page_returns_correct_html(self):
        response = self.client.get('/')
        self.assertTemplateUsed(response, 'home.html')
Can anybody explain what's happening here?
These are two different things.
self.client is Django's built-in test client. It isn't a real browser and doesn't even make real HTTP requests; it just constructs a Django HttpRequest object, passes it through the request/response machinery - middleware, URL resolver, view, template - and returns whatever Django produces. It won't parse or render that response, and it won't make any follow-up requests for assets referenced in the HTML.
webdriver.Chrome, on the other hand, is an actual browser, i.e. Chrome. Selenium starts a Chrome instance (headless if you configure it that way) and drives it to request your web pages. Those go over real HTTP, and the browser renders the responses just like it would for any site: if the HTML links to JS or CSS, those get requested and rendered too.
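For comparison, here is a minimal sketch (not taken from the book) of how the two styles of test are usually written: the unit test uses Django's test client inside a django.test.TestCase, while the functional test drives Chrome against a live test server. The class names and the title check are illustrative.

from django.test import TestCase, LiveServerTestCase
from selenium import webdriver


class HomePageUnitTest(TestCase):
    """Django test client: no browser, no real HTTP."""

    def test_home_page_returns_correct_html(self):
        response = self.client.get('/')          # in-process request
        self.assertTemplateUsed(response, 'home.html')


class HomePageFunctionalTest(LiveServerTestCase):
    """Selenium: a real browser talking to a real (test) server."""

    def setUp(self):
        self.browser = webdriver.Chrome()

    def tearDown(self):
        self.browser.quit()

    def test_home_page_title(self):
        self.browser.get(self.live_server_url)   # real HTTP request
        self.assertIn('Home', self.browser.title)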
I am writing tests for my Django application's views and I am a beginner at this. I know that before the tests run a new test database is created, containing only data that is created during the tests. But in my view tests I am making API calls by URL against my running server, which uses my default database rather than the test database, in the following way:
def test_decline_activity_valid_permission(self):
    url = 'http://myapp:8002/api/v1/profile/' + self.profileUUID + '/document/' + \
        self.docUUID + '/decline/'
    response = requests.post(
        url,
        data=json.dumps(self.payload_valid_permission),
        headers=self.headers,
    )
    self.assertEquals(response.status_code, status.HTTP_201_CREATED)
I want to know whether we can use the test database for testing our views or not. And what is the difference between using requests and using the test Client?
You could try using Django's LiveServerTestCase. That works like TransactionTestCase but also starts a live server on localhost pointing at the test database; the server is started and stopped automatically around the tests.
You could then configure the URL in your test to point at that local server. Django provides self.live_server_url for accessing the URL of the server.
As mentioned in the comments, Django's test client lets you test views without making real HTTP requests, whereas the requests library you're using in your test sends and receives real HTTP requests and responses.
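As a rough sketch (the URL path, payload and expected status code are placeholders for whatever your API actually exposes), the test above could be rewritten against the live test server like this:

import json

import requests
from django.test import LiveServerTestCase


class DeclineActivityTest(LiveServerTestCase):

    def test_decline_activity_valid_permission(self):
        # live_server_url points at a server backed by the *test* database
        url = self.live_server_url + '/api/v1/profile/some-uuid/document/some-uuid/decline/'
        response = requests.post(
            url,
            data=json.dumps({'some': 'payload'}),
            headers={'Content-Type': 'application/json'},
        )
        self.assertEqual(response.status_code, 201)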
Is it possible to get the HTTP request as a bytestring, exactly as it was transferred over the wire, if you have a Django request object?
Of course I mean the plain text (not the encrypted bytes if HTTPS is used).
I would like to store the bytestring to analyze it later.
Ideally I would like to access the real bytestring. A bytestring pieced together from request.META, request.GET and friends will likely not be the same as the original.
Update: it seems it is impossible to get at the original bytes. So the question becomes: how do I construct a bytestring which roughly looks like the original?
As others pointed out it is not possible because Django doesn't interact with raw requests.
You could just try reconstructing the request like this.
def reconstruct_request(request):
    # rebuild the header block from the HTTP_* keys in request.META
    headers = ''
    for header, value in request.META.items():
        if not header.startswith('HTTP_'):
            continue
        header = '-'.join([h.capitalize() for h in header[5:].lower().split('_')])
        headers += '{}: {}\n'.format(header, value)

    return (
        '{method} {path} HTTP/1.1\n'
        'Content-Length: {content_length}\n'
        'Content-Type: {content_type}\n'
        '{headers}\n'
        '{body}'
    ).format(
        method=request.method,
        path=request.get_full_path(),
        content_length=request.META.get('CONTENT_LENGTH', ''),
        content_type=request.META.get('CONTENT_TYPE', ''),
        headers=headers,
        body=request.body.decode('latin-1'),
    )
Note: this is not a complete example, only a proof of concept.
The basic answer is no: Django doesn't have access to the raw request; in fact it doesn't even have code to parse a raw HTTP request.
This is because Django's HTTP request/response handling (like that of many other Python web frameworks) is, at its core, a WSGI application (WSGI specification).
It's the job of the frontend/proxy server (like Apache or nginx) and the application server (like uWSGI or gunicorn) to "massage" the request (e.g. transforming and stripping headers) and convert it into an object that Django can handle.
As an experiment you can actually wrap Django's WSGI application yourself and see what Django gets to work with when a request comes in.
Edit your project's wsgi.py and add some extremely basic WSGI "middleware":
import os

from django.core.wsgi import get_wsgi_application

os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'project.settings')


class MyMiddleware:
    def __init__(self, app):
        self._app = app

    def __call__(self, environ, start_response):
        import pdb; pdb.set_trace()
        return self._app(environ, start_response)


# Wrap Django's WSGI application
application = MyMiddleware(get_wsgi_application())
Now start your devserver (./manage.py runserver) and send a request to your Django application; you'll drop into the debugger.
The only thing of interest here is the environ dict. Poke around in it and you'll see that it's pretty much the same as what you'll find in Django's request.META. (The contents of the environ dict are detailed in this section of the WSGI spec.)
Knowing this, the best you can do is piece together items from the environ dict into something that remotely resembles an HTTP request.
But why? If you have an environ dict, you have all the information you need to replicate a Django request. There's no actual need to translate it back into an HTTP request.
In fact, as you now know, you don't need an HTTP request at all to call Django's WSGI application. All you need is an environ dict with the required keys and a callable so that Django can relay the response.
So, to analyze requests (and even be able to replay them) you only need to be able to recreate a valid environ dict.
To do so in Django the easiest option would be to serialize request.META and request.body to a JSON dict.
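A minimal sketch of that idea (the function name is mine, and the latin-1 round-trip is just one way to make the raw body JSON-safe):

import json


def serialize_request(request):
    # keep only the JSON-serializable parts of META (drops wsgi.* objects etc.)
    meta = {
        key: value
        for key, value in request.META.items()
        if isinstance(value, (str, int, float, bool))
    }
    return json.dumps({
        'meta': meta,
        'body': request.body.decode('latin-1'),  # bytes -> text, reversible
    })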
If you really need something that resembles an HTTP request (and you are unable to go a level up, e.g. to the web server, to log this information) you'll just have to piece it together from the information available in request.META and request.body, with the caveat that this will not be a faithful representation of the original HTTP request.
I've been trying to log in to this web page but I fail every time. This is the code I used:
import requests

headers = {'User-Agent': 'Chrome'}
payload = {'_GlobalLoginControl$UserLogin': 'myUser',
           '_GlobalLoginControl$Password': 'myPass'}

s = requests.Session()
r = s.post('https://www.scadalynx.com/GlobalLogin.aspx', headers=headers, data=payload)
r = s.get('https://www.scadalynx.com/Default.aspx')
print(r.url)
The result I get from print(r.url) is this:
https://www.scadalynx.com/GlobalLogin.aspx?Timeout=Y
You can't.
The main problem is that your payload isn't complete. Check Chrome's network tab; there are many more required fields:
ScriptMgr:_GlobalLoginControl$UpdatePanel1|_GlobalLoginControl$LoginBtn
ScriptMgr_HiddenField:;;AjaxControlToolkit, Version=4.1.40412.0, Culture=neutral, PublicKeyToken=28f01b0e84b6d53e:en-US:2d0688b9-5fe7-418f-aeb1-6ecaa4dca45f:475a4ef5:effe2a26:751cdd15:5546a2b:dfad98a5:1d3ed089:497ef277:a43b07eb:3cf12cf1
__EVENTTARGET:
__EVENTARGUMENT:
__VIEWSTATE:/wEPDwUKMTQxMjQ3NTE5MA9kFgICAQ8WAh4Ib25zdWJtaXQFkgFpZiAoJGdldCgnX0dsb2JhbExvZ2luQ29udHJvbF9QYXNzd29yZCcpICE9IG51bGwpICRnZXQoJ19HbG9iYWxMb2dpbkNvbnRyb2xfUGFzc3dvcmQnKS52YWx1ZSA9IGVzY2FwZSgkZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX1Bhc3N3b3JkJykudmFsdWUpOxYEAgEPZBYCZg9kFgICBQ9kFgICAg9kFgICAQ9kFgJmD2QWCgIND2QWAgIJDw9kFgIeB29uY2xpY2sFugFqYXZhc2NyaXB0OmlmKCRnZXQoJ19HbG9iYWxMb2dpbkNvbnRyb2xfX0ZvcmdvdFBhc3N3b3JkRU1haWxUZXh0Qm94JykgIT0gbnVsbCkkZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX19Gb3Jnb3RQYXNzd29yZEVNYWlsVGV4dEJveCcpLnZhbHVlID0gJGdldCgnX0dsb2JhbExvZ2luQ29udHJvbF9Vc2VyTG9naW4nKS52YWx1ZTtkAg8PZBYCAgUPEGRkFgBkAhEPD2QWAh8BBZUDamF2YXNjcmlwdDppZigkZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX1VzZXJMb2dpblZhbGlkYXRvcicpICE9IG51bGwpJGdldCgnX0dsb2JhbExvZ2luQ29udHJvbF9Vc2VyTG9naW5WYWxpZGF0b3InKS5lbmFibGVkID0gdHJ1ZTtpZigkZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX19Vc2VyTG9naW5SZWd1bGFyRXhwcmVzc2lvblZhbGlkYXRvcicpICE9IG51bGwpJGdldCgnX0dsb2JhbExvZ2luQ29udHJvbF9fVXNlckxvZ2luUmVndWxhckV4cHJlc3Npb25WYWxpZGF0b3InKS5lbmFibGVkID0gdHJ1ZTtpZigkZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX1Bhc3N3b3JkVmFsaWRhdG9yJykgIT0gbnVsbCkkZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX1Bhc3N3b3JkVmFsaWRhdG9yJykuZW5hYmxlZCA9IHRydWU7ZAITD2QWAgIBDw8WAh4HVmlzaWJsZWhkZAIVD2QWBAIBDw9kFgIfAQUtJGdldCgnX0dsb2JhbExvZ2luQ29udHJvbF9QYXNzd29yZCcpLmZvY3VzKCk7ZAILDw9kFgIfAQVhamF2YXNjcmlwdDokZ2V0KCdfR2xvYmFsTG9naW5Db250cm9sX19Gb3Jnb3RQYXNzd29yZEVNYWlsUmVxdWlyZWRGaWVsZFZhbGlkYXRvcicpLmVuYWJsZWQgPSB0cnVlO2QCAg8PFgIfAmhkFgICAw8WAh4LXyFJdGVtQ291bnRmZBgBBR5fX0NvbnRyb2xzUmVxdWlyZVBvc3RCYWNrS2V5X18WAwUpX0dsb2JhbExvZ2luQ29udHJvbCRSZW1lbWJlckxvZ2luQ2hlY2tCb3gFK19HbG9iYWxMb2dpbkNvbnRyb2wkX0ZvcmdvdFBhc3N3b3JkQ2xvc2VCdG4FEV9FcnJvckltYWdlQnV0dG9uIXu7XOl6z8WoghCWdElD7kNBanI=
__VIEWSTATEGENERATOR:ABDC7715
__SCROLLPOSITIONX:0
__SCROLLPOSITIONY:0
__EVENTVALIDATION:/wEdAA8j+x15hTpBOEjDv1LxVan3AUijrFjxy9PpisoGxfMqnNduSMVw1RChh3aZsdCK82jXRUWkWThaqEhU3Gr5iw98GHoUhEtg6gp73QcFIR1tGEGQHmQGQos+5LR8l78kIyNCGm6wvkKBlG3Z3EngFWzmX3gMRUNTCvY9T8lfFGMsRkvp3s0LtAU9sya5EgaP5MNrqxxx0HTfWwHJy49saUYlPDg6OL5q3VoZ6biOkvIG8l/ujxMESq+8VmX4sGwXcQBJxOm7RbAd1IEojVITrtk4hx8VhfPuqTNrqWHRrUAMgBj1ffXkwiR7kcJxJ3ixy43iLukJszI09WI7xsAFyAKxG82PcA==
_GlobalLoginControl$ScrWidth:1536
_GlobalLoginControl$ScrHeight:864
_GlobalLoginControl$UserLogin:asdsad#asdas.com
_GlobalLoginControl$Password:asdasd
_GlobalLoginControl$PasswordStore:
_GlobalLoginControl$HiddenField1:
_GlobalLoginControl$_HiddenSessionContentID:
_ErrorHiddenField:
__ASYNCPOST:true
_GlobalLoginControl$LoginBtn:Login
You probably can't shortcut this (I don't think it's possible); you either have to use Selenium, or fetch the page first and scrape the required values from it.
But check this topic: How to make HTTP POST on website that uses asp.net?
The idea is to perform the login with PhantomJS/Chrome driven by Selenium, then pass the resulting cookies and headers over to requests. Once requests has that session information, you can use it for the further steps.
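For illustration, a rough sketch of that handoff; the element IDs are guesses derived from the field names above and would need to be verified against the real page:

import requests
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get('https://www.scadalynx.com/GlobalLogin.aspx')
# element IDs below are assumptions based on the ASP.NET field names
driver.find_element(By.ID, '_GlobalLoginControl_UserLogin').send_keys('myUser')
driver.find_element(By.ID, '_GlobalLoginControl_Password').send_keys('myPass')
driver.find_element(By.ID, '_GlobalLoginControl_LoginBtn').click()

# copy the authenticated session cookies into a requests.Session
session = requests.Session()
for cookie in driver.get_cookies():
    session.cookies.set(cookie['name'], cookie['value'])
driver.quit()

r = session.get('https://www.scadalynx.com/Default.aspx')
print(r.url)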
I am attempting to build a MongoDB-backed Flask application which serves from the same endpoints:
An HTML web interface by default
A JSON response if Content-Type == application/json
The idea is that a user consuming my application with a browser and a service consuming my API programmatically can both hit http://myapp.com/users/12345: the former is served an HTML response and the latter a JSON response.
As I understand it, this is in keeping with 'pure' REST, in contrast to the tradition of serving the API from a separate path such as http://myapp.com/api/users/12345.
There is no discussion of views in the Eve docs, other than to say that results are served as JSON by default and XML if requested.
Is there any clean way to override this behaviour such that:
The standard Eve JSON response is served if Content-Type == application/json
Otherwise, a view applies a template to the data returned by Eve to generate an HTML response?
This seems like it would be an elegant means of creating an application which is both RESTful and DRY.
You could look at the Eve-Docs extension, which implements an HTML /docs endpoint on top of an existing, Eve-powered MongoDB REST service.
Remember Eve is a Flask application (a subclass, actually), so everything you can do with Flask you can do with Eve too (like decorating rendering functions, etc.).
UPDATED: Here's a little example snippet which adds a custom /hello endpoint to an Eve-powered API (source). As you can see, it is pretty much identical to a standard Flask endpoint:
from eve import Eve

app = Eve()


@app.route('/hello')
def hello_world():
    return 'Hello World!'


if __name__ == '__main__':
    app.run()
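Along the same lines, here is one possible (untested) sketch of the HTML-vs-JSON split the question asks about: a plain Flask route on the Eve app that does the content negotiation itself. The /users/<user_id>/view route, the template name and the string _id lookup are assumptions about your schema, not part of Eve's API:

from eve import Eve
from flask import redirect, render_template, request

app = Eve()


@app.route('/users/<user_id>/view')
def user_html(user_id):
    if not request.accept_mimetypes.accept_html:
        # API clients keep using the standard Eve JSON endpoint
        return redirect('/users/%s' % user_id)
    # app.data.driver is Eve's PyMongo handle; collection name, template and
    # string _id lookup are assumptions about your data
    user = app.data.driver.db['users'].find_one({'_id': user_id})
    return render_template('user.html', user=user)


if __name__ == '__main__':
    app.run()

Serving both representations from exactly the same URL would mean intercepting Eve's own endpoint (e.g. via a Flask hook), which is more involved; the separate route keeps the sketch simple.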
My client-server application is mainly based on a special-purpose HTTP server that communicates with the client in an Ajax-like fashion, i.e. the client GUI is refreshed through asynchronous HTTP request/response cycles.
Evolvability of the special-purpose HTTP server is limited, and as the application grows, more and more standard features are needed that are provided by Django, for instance.
Hence, I would like to add a Django application as a facade/reverse proxy in order to hide the non-standard special-purpose server and be able to benefit from Django. I would like to have the Django app as a gateway and not use HTTP redirects, both for security reasons and to hide complexity.
However, my concern is that tunneling the traffic through Django on the server might hurt performance. Is this a valid concern?
Would there be an alternative solution to the problem?
Usually in production, you are hosting Django behind a web container like Apache httpd or nginx. These have modules designed for proxying requests (e.g. proxy_pass for a location in nginx). They give you some extras out of the box like caching if you need it. Compared with proxying through a Django application's request pipeline this may save you development time while delivering better performance. However, you sacrifice the power to completely manipulate the request or proxied response when you use a solution like this.
For local testing with ./manage.py runserver, I add a url pattern via urls.py in an if settings.DEBUG: ... section. Here's the view function code I use, which supports GET, PUT, and POST using the requests Python library: https://gist.github.com/JustinTArthur/5710254
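The urls.py wiring for that looks roughly like this; proxy_view stands in for whatever proxy view you use (e.g. the one from the linked gist), so the import path is illustrative:

from django.conf import settings
from django.conf.urls import url

from myapp.views import proxy_view  # illustrative import; point at your own proxy view

urlpatterns = [
    # ... your normal URL patterns ...
]

if settings.DEBUG:
    # catch-all: anything not matched above is handed to the proxy view
    urlpatterns += [
        url(r'^', proxy_view),
    ]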
I went ahead and built a simple prototype. It was relatively simple: I just had to set up a view that handles all the URLs I want to proxy. The view function looks something like this:
import urllib2
from urllib import urlencode

import django.http


def redirect(request):
    # 'server' is the host of the special-purpose backend (e.g. taken from settings)
    url = "http://%s%s" % (server, request.path)

    # add GET parameters
    if request.GET:
        url += '?' + urlencode(request.GET)

    # add headers of the incoming request; see
    # https://docs.djangoproject.com/en/1.7/ref/request-response/#django.http.HttpRequest.META
    # for details about the request.META dict
    def convert(s):
        s = s.replace('HTTP_', '', 1)
        s = s.replace('_', '-')
        return s

    request_headers = dict((convert(k), v) for k, v in request.META.iteritems()
                           if k.startswith('HTTP_'))

    # add content-type and content-length
    request_headers['CONTENT-TYPE'] = request.META.get('CONTENT_TYPE', '')
    request_headers['CONTENT-LENGTH'] = request.META.get('CONTENT_LENGTH', '')

    # get the original request payload (raw_post_data in very old Django versions)
    if request.method == "GET":
        data = None
    else:
        data = request.body

    downstream_request = urllib2.Request(url, data, headers=request_headers)
    page = urllib2.urlopen(downstream_request)
    response = django.http.HttpResponse(page)
    return response
So it is actually quite simple and the performance is good enough, in particular if the redirect goes to the loopback interface on the same host.