Background
I have a service A whose APIs are accessible via HTTP requests, and other services that need to invoke those APIs.
Problem
When I test service A's APIs with POSTMAN, every request works fine. But when I use Python's requests library to make the same requests, there is one PUT method that just won't work: for some reason, the handler being called cannot receive the data (HTTP body) at all, though it does receive the headers. Meanwhile, a POST method called in exactly the same manner receives the data perfectly.
I managed to achieve my goal simply by switching to the httplib library instead, but I am still quite baffled by what exactly happened here.
The Crime Scene
Route 1:
@app.route("/private/serviceA", methods=['POST'])
@app.route("/private/serviceA/", methods=['POST'])
def A_create():
    # request.data contains the correct data, which can be read with request.get_json()
Route 2:
@app.route("/private/serviceA/<id>", methods=['PUT'])
@app.route("/private/serviceA/<id>/", methods=['PUT'])
def A_update(id):
    # request.data is empty, though request.headers contains the headers I passed in.
    # This happens when sending the request with the Python requests library, but not
    # when sending it with the httplib library or with POSTMAN.
    # Also, the data comes in fine when all other routes are commented out.
    # Unless all other routes are commented out, this happens even when the function
    # body is a single line that prints request.data.
Route 3:
@app.route("/private/serviceA/schema", methods=['PUT'])
def schema_update_column():
    # This one again works perfectly fine
Using POSTMAN (the request succeeds; screenshot omitted):
Using the requests library from another service:
#app.route("/public/serviceA/<id>", methods = ['PUT'])
def A_update(id):
content = request.get_json()
headers = {'content-type': 'application/json'}
response = requests.put('%s:%s' % (router_config.HOST, serviceA_instance_id) + '/private/serviceA/' + str(id), data=json.dumps(content), headers = headers)
return Response(response.content, mimetype='application/json', status=response.status_code)
Using the httplib library from another service:
@app.route('/public/serviceA/<id>', methods=['PUT'])
def update_course(id):
    content = request.get_json()
    headers = {'content-type': 'application/json'}
    conn = httplib.HTTPConnection('%s:%s' % (router_config.HOST, serviceA_instance_id))
    conn.request("PUT", "/private/serviceA/%s/" % id, json.dumps(content), headers)
    return str(conn.getresponse().read())
Questions
1. What am I doing wrong with route 2?
2. For route 2, the handler doesn't seem to receive the data unless every other handler is commented out, which also confuses me. Is there something important about Flask that I'm not aware of?
Code Repo
Just in case some nice people are interested enough to look at the messy, undocumented code:
https://github.com/fantastic4ever/project1
The serviceA corresponds to the course service (course_flask.py), and the service calling it corresponds to the router service (router.py).
The version that was still using the requests library is commit 747e69a11ed746c9e8400a8c1e86048322f4ec39.
In your use of the requests library you are calling requests.post, which sends a POST request. If you call requests.put instead, you will send a PUT request. That could be the issue.
Requests documentation
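For reference, the two calls have the same shape and differ only in the HTTP method; a minimal sketch (the URL and payload here are placeholders):

import json
import requests

headers = {'content-type': 'application/json'}
payload = json.dumps({'some': 'data'})  # placeholder payload
url = 'http://host:port/private/serviceA/1/'  # placeholder URL

response = requests.post(url, data=payload, headers=headers)  # sends POST
response = requests.put(url, data=payload, headers=headers)   # sends PUT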
Related
I am having trouble retrieving the full JSON response when executing a GET request against an API I am building with DRF. If I include pagination and retrieve, say, 100 results, then I receive the full JSON response. If I do not use pagination and try to retrieve a few thousand results, the server simply cuts off at a seemingly random spot and does not return the full JSON. For instance, it may return {"hi": "hel instead of {"hi": "hello"}. DRF reports this as a 200 response code, so the request seems to execute properly.
The code for my view looks like:
class RepresentativeListView(generics.ListAPIView):
    queryset = models.Representative.objects.all()
    serializer_class = serializers.RepresentativeSerializer
The code for my serializer looks like:
class RepresentativeSerializer(serializers.ModelSerializer):
    class Meta:
        model = models.Representative
        fields = (
            'bioguide_id',
            'stats',
            'leadership_score',
            'ideology_score',
        )
Could anyone understand why a full JSON response would not be coming through?
This might be an issue with your web server (Apache, Nginx, or whatever you use): it may close the connection if the application doesn't respond in time. Check your web server logs, and compare the time the application takes to render the response against the server's timeout.
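One way to measure the application side is a timing middleware; a minimal sketch, assuming the old-style Django middleware API (added to MIDDLEWARE_CLASSES), that logs how long each response takes so you can compare it against the server's timeout:

import logging
import time

logger = logging.getLogger(__name__)

class ResponseTimingMiddleware(object):
    """Logs how long each request takes to render."""

    def process_request(self, request):
        request._timing_start = time.time()

    def process_response(self, request, response):
        start = getattr(request, '_timing_start', None)
        if start is not None:
            logger.info('%s took %.2fs', request.path, time.time() - start)
        return response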
This is the code I'm using; is there any way to make it run faster?
src_uri = boto.storage_uri(bucket, google_storage)
for obj in src_uri.get_bucket():
    f.write('%s\n' % (obj.name))
This is an example where it pays to use the underlying Google Cloud Storage API more directly, via the Google API Client Library for Python, which consumes the RESTful HTTP API. With this approach it is possible to use request batching to retrieve the names of all objects in a single HTTP request (reducing per-request overhead), and to use field projection with the objects.get operation (by setting &fields=name) to obtain a partial response, so that you aren't sending all the other fields and data over the network or waiting for the backend to retrieve unnecessary data.
Code for this would look like:
# Requires the Google API Client Library for Python:
#   pip install google-api-python-client
from googleapiclient import discovery

def get_credentials():
    # Your code goes here... check out the oauth2client documentation:
    # http://google-api-python-client.googlecode.com/hg/docs/epy/oauth2client-module.html
    # Or look at some of the existing samples for how to do this
    raise NotImplementedError

def get_cloud_storage_service(credentials):
    return discovery.build('storage', 'v1', credentials=credentials)

def get_objects(cloud_storage, bucket_name, autopaginate=False):
    result = []
    # It turns out that request batching isn't needed in this example,
    # because the objects.list() operation returns not just the URL for
    # the object, but also its name. If it returned just the URL, that
    # would be a case where we'd need such batching.
    projection = 'nextPageToken,items(name,selfLink)'
    request = cloud_storage.objects().list(bucket=bucket_name, fields=projection)
    while request is not None:
        response = request.execute()
        result.extend(response.get('items', []))
        if autopaginate:
            request = cloud_storage.objects().list_next(request, response)
        else:
            request = None
    return result

def main():
    credentials = get_credentials()
    cloud_storage = get_cloud_storage_service(credentials)
    bucket = 'your-bucket-name'  # placeholder: substitute your bucket name
    for obj in get_objects(cloud_storage, bucket, autopaginate=True):
        print 'name=%s, selfLink=%s' % (obj['name'], obj['selfLink'])
You may find the Google Cloud Storage Python Example and other API Client Library Examples helpful in figuring out how to do this. There are also a number of YouTube videos on the Google Developers channel such as Accessing Google APIs: Common code walkthrough that provide walkthroughs.
So I have a Django app which, as part of its functionality, makes a request (using the requests module) to another server. What I want is a server available for unit testing that gives me canned responses, to test how the Django app handles the different potential responses.
An example of the code would be:
payload = {'access_key': key,
           'username': name}
response = requests.get(downstream_url, params=payload)
# Handle response here ...
I've read that you can use SimpleHTTPServer to accomplish this, but I'm not sure how to use it to that end; any thoughts would be much appreciated!
Use the mock module.
from mock import patch, MagicMock

@patch('your.module.requests')
def test_something(self, requests_mock):
    response = MagicMock()
    response.json.return_value = {'key': 'value'}
    requests_mock.get.return_value = response
    …
    requests_mock.get.assert_called_once_with(…)
    response.json.assert_called_once()
There are many more examples in the docs.
You don't need to (and should not) test the code that makes the request; mock out that part and focus on testing the logic that handles the response.
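That said, if you do want a real HTTP endpoint serving canned responses rather than a mock, the standard library is enough; a minimal sketch, with the handler class, canned body, and helper names all made up for illustration:

import threading
import BaseHTTPServer

class CannedHandler(BaseHTTPServer.BaseHTTPRequestHandler):
    canned_body = '{"status": "ok"}'  # hypothetical canned response

    def do_GET(self):
        self.send_response(200)
        self.send_header('Content-Type', 'application/json')
        self.end_headers()
        self.wfile.write(self.canned_body)

    def log_message(self, format, *args):
        pass  # keep test output quiet

def start_canned_server():
    # Port 0 lets the OS pick a free port; read it back from server_address.
    server = BaseHTTPServer.HTTPServer(('127.0.0.1', 0), CannedHandler)
    thread = threading.Thread(target=server.serve_forever)
    thread.daemon = True
    thread.start()
    return server

# In a test: point downstream_url at the canned server, then shut it down.
#   server = start_canned_server()
#   downstream_url = 'http://127.0.0.1:%d' % server.server_address[1]
#   ...
#   server.shutdown()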
I'm working on a Django web application which (amongst other things) needs to handle transaction status info sent using a POST request.
In addition to the HTTP security supported by the payment gateway, my view checks request.META['HTTP_REFERER'] against an entry in settings.py to try to prevent funny business:
if request.META.get('HTTP_REFERER', '') != settings.PAYMENT_URL and not settings.DEBUG:
    return HttpResponseForbidden('Incorrect source URL for updating payment status')
Now I'd like to work out how to test this behaviour.
I can generate a failure easily enough; HTTP_REFERER is (predictably) None on a normal page load:

def test_transaction_status_fails(self):
    response = self.client.post(reverse('transaction_status'), { ... })
    self.assertEqual(response.status_code, 403)
How, though, can I fake a successful submission? I've tried setting HTTP_REFERER in extra, e.g. self.client.post(..., extra={'HTTP_REFERER': 'http://foo/bar'}), but this isn't working; the view is apparently still seeing a blank header.
Does the test client even support custom headers? Is there a work-around if not? I'm using Django 1.1, and would prefer not to upgrade just yet if at all possible.
Almost right. It's actually:

def test_transaction_status_succeeds(self):
    response = self.client.post(reverse('transaction_status'), {}, HTTP_REFERER='http://foo/bar')
I'd missed a ** (scatter operator / keyword argument unpacking operator / whatever) when reading the source of test/client.py; extra ends up being a dictionary of extra keyword arguments to the function itself.
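To illustrate the mechanism with a toy function (not the actual test client signature):

def fake_post(path, data, **extra):
    # Any additional keyword arguments are collected into the `extra` dict,
    # which the real test client merges into the WSGI environ as headers.
    return extra

print fake_post('/pay/', {}, HTTP_REFERER='http://foo/bar')
# {'HTTP_REFERER': 'http://foo/bar'}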
You can pass HTTP headers to the constructor of Client:
from django.test import Client
from django.urls import reverse

client = Client(
    HTTP_USER_AGENT='Mozilla/5.0',
    HTTP_REFERER='http://www.google.com',
)

response1 = client.get(reverse('foo'))
response2 = client.get(reverse('bar'))
This way you don't need to pass headers every time you make a request.
During the processing of a request in Django, I need to perform a nested request to the same application. Consider this example: while processing the sendmail request, I attempt to make another request to the same server to obtain the content of an attachment. (The body of the mail and a list of URLs whose content to attach are provided to the sendmail view function through POST parameters.)
def sendmail(request):
    mail = None  # ... create a mail object
    for url in urls:  # iterate over the desired attachment urls
        data = urllib.urlopen('http://127.0.0.1:8000' + url).read()
        mail.attach(data)
There are several issues with this approach. First, it doesn't work with the development server, which can only process one request at a time: since the server is already busy processing the sendmail request, attempting to read from the given url will block forever.
Second, I have to specify the server's IP and port, which is not very nice.
I would like to do something like this instead:
data = django_get(url).read()
where the hypothetical django_get method would not really make an HTTP request, but would instead directly call the Django component that takes a URL and returns an HttpResponse. That would solve both problems: there would not be any actual socket connection, and it would not be necessary to include the server/port in the URL. How could that be achieved?
The opposite of reverse() is resolve().
This is Ignacio Vazquez-Abrams' answer for the lazy:

from django.core.urlresolvers import resolve

def sendmail(request):
    mail = None  # ... create a mail object
    for url in urls:  # iterate over the desired attachment urls
        resolved = resolve(url)
        request.path = url
        mail.attach(resolved.func(request, *resolved.args, **resolved.kwargs))
Put the desired functionality in a separate function that both your sendmail function and the original page's function can call.
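A minimal sketch of that refactoring (the function and view names are made up for illustration):

from django.http import HttpResponse

def build_attachment_data(url):
    # Shared logic: produce the attachment content for `url`
    # (placeholder implementation for illustration).
    return 'attachment data for %s' % url

def attachment_view(request, url):
    # The original page still works as an HTTP endpoint...
    return HttpResponse(build_attachment_data(url))

def sendmail(request):
    mail = None  # ... create a mail object
    for url in urls:
        # ...while sendmail calls the shared function directly, with no
        # HTTP round-trip and no hard-coded host/port.
        mail.attach(build_attachment_data(url))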