My web app is deployed using nginx. I have a view like the one below for the URL /incoming/.
def incoming_view(request):
    incoming = request.GET["incoming"]
    user = request.GET["user"]
    ...
When I hit the URL /incoming/?incoming=hello&user=nkishore I get the response I need, but when I call this URL from the requests module with the code below, I get an error.
r = requests.get('http://localhost/incoming/?incoming=%s&user=%s'%("hello", "nkishore"))
print r.json()
I have checked the nginx logs, and the request I got was /incoming/?incoming=hi\u0026user=nkishore, so in my view request.GET["user"] fails to find user.
I can't see what I'm missing here. Is this a problem with nginx, or is there another way to make the call with requests?
See the Requests docs for how to pass parameters, e.g.
>>> payload = {'key1': 'value1', 'key2': 'value2'}
>>> r = requests.get('https://httpbin.org/get', params=payload)
>>> print(r.url)
https://httpbin.org/get?key2=value2&key1=value1
Internally, Requests will URL-encode the parameter values for you. If you really want to build the URL manually, use this as your URL string:
'http://localhost/incoming/?incoming=%s&user=%s'
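As a quick sanity check, you can reproduce what Requests builds from a params dict using the standard library's urlencode, which it uses equivalently under the hood (localhost here stands in for the deployment from the question):

```python
from urllib.parse import urlencode

# urlencode builds the same query string requests produces from params=
query = urlencode({"incoming": "hello", "user": "nkishore"})
url = "http://localhost/incoming/?" + query
print(url)
```

If the URL printed here matches what you expect, the `u0026` in the nginx log is just the log's escaped representation of `&`, not a malformed request.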
In my tests I am making an API call which returns 400. That is expected, but I can't find a way to debug it. Does Django keep logs in a file somewhere, or can I enable log output?
res = self.client.post(self.url, data=payload, format='json')
print(res)
# <Response status_code=400, "application/json">
I know something went wrong, but how do I debug the server?
Thanks
You can use response.content to view the final content/error messages that would be rendered on the page, as a bytestring. docs
>>> response = c.get('/foo/bar/')
>>> response.content
b'<!DOCTYPE html...
If you are returning a JSON response (which you probably are if you are using rest framework), you can use response.json() to parse the JSON. docs
>>> response = client.get('/foo/')
>>> response.json()['name']
'Arthur'
Note: If the Content-Type header is not "application/json", then a ValueError will be raised when trying to parse the response. Be sure to handle it properly.
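A minimal sketch of that Content-Type caveat, with a stand-in for the test client's response (the function name and the sample validation message are made up for illustration):

```python
import json

def debug_json_body(content, content_type):
    """Print a response body and parse it as JSON only when the
    Content-Type says it is JSON, mirroring response.json()'s behaviour.

    `content` is a bytestring like response.content."""
    print(content)
    if content_type == "application/json":
        return json.loads(content)
    return None  # not JSON; response.json() would raise ValueError here

# A DRF-style 400 body typically maps field names to lists of errors.
errors = debug_json_body(b'{"user": ["This field is required."]}',
                         "application/json")
```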
When I use urllib, urllib2, or requests on Python 2.7, none of them ends up at the same URL I reach when I copy and paste the starting URL into Chrome or Firefox on a Mac.
EDIT: I suspect this is because one has to be signed in to vk.com to be redirected. If this is the case, how do I add the sign-in to my script? Thanks!
Starting URL: https://oauth.vk.com/authorize?client_id=PRIVATE&redirect_uri=https://oauth.vk.com/blank.html&scope=friends&response_type=token&v=5.68
Actual final (redirected) URL: https://oauth.vk.com/blank.html#access_token=PRIVATE_TOKEN&expires_in=86400&user_id=PRIVATE
PRIVATE, PRIVATE_TOKEN = censored information
The following is one of several attempts at this:
import requests
APPID = 'PRIVATE'
DISPLAY_OPTION = 'popup' # or 'window' or 'mobile'
REDIRECT_URL = 'https://oauth.vk.com/blank.html'
SCOPE = 'friends' # https://vk.com/dev/permissions
RESPONSE_TYPE = 'token' # Documentation is vague on this. I don't know what
# other options there are, but given the context, i.e. that we want an
# "access token", I suppose this is the correct input
URL = 'https://oauth.vk.com/authorize?client_id=' + APPID + \
'&display='+ DISPLAY_OPTION + \
'&redirect_uri=' + REDIRECT_URL + \
'&scope=' + SCOPE + \
'&response_type=' + RESPONSE_TYPE + \
'&v=5.68'
# with requests
REQUEST = requests.get(URL)
RESPONSE_URL = REQUEST.url
I hope you notice whatever it is that's wrong with my code.
Extra info: I need the redirect because the PRIVATE_TOKEN value is necessary for further programming.
I tried some debugging but neither the interpreter nor IPython print out the debugging info.
Thanks!
The problem is that you are not signed in within the Python environment.
Solution:
Use twill to create browser in Python and sign in.
Code:
from twill.commands import *
BROWSER = get_browser()
BROWSER.go(URL) # URL is the URL concatenated in the question
RESPONSE_URL = BROWSER.get_url()
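As an aside, the long string concatenation in the question can be replaced with the standard library's urlencode, which also takes care of any characters that need escaping (the values below are the censored placeholders from the question):

```python
from urllib.parse import urlencode  # urllib.urlencode on Python 2

# Placeholder values copied from the question; PRIVATE is censored there too.
params = {
    "client_id": "PRIVATE",
    "display": "popup",
    "redirect_uri": "https://oauth.vk.com/blank.html",
    "scope": "friends",
    "response_type": "token",
    "v": "5.68",
}
URL = "https://oauth.vk.com/authorize?" + urlencode(params)
print(URL)
```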
I am trying to create an admin command that will simulate some API calls associated with a view, but I don't want to hard-code the URL (for example url='http://127.0.0.1:8000/api/viewname') in order to send the request.
If I use reverse I can obtain half the URL, /api/viewname.
If I try to post the request that way
url = reverse('name-of-view')
requests.post(url, data=some_data)
I get
requests.exceptions.MissingSchema: Invalid URL '/api/viewname/': No schema supplied. Perhaps you meant http:///api/viewname/?
Do I have to check whether the server is running on localhost, or is there a more generic way?
The requests module needs an absolute URL to post to. You need:
url = 'http://%s%s' % (request.META['HTTP_HOST'], reverse('name-of-view'))
requests.post(url, data=some_data)
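Note that request.META['HTTP_HOST'] is only available while a request object is in scope; inside a management command there is none, so the host has to come from configuration or a command option. A minimal sketch, with example host and path values:

```python
def absolute_url(host, path, scheme="http"):
    """Turn a path from reverse() into an absolute URL requests can use."""
    return "%s://%s%s" % (scheme, host, path)

# e.g. host from a setting or command option, path from reverse('name-of-view')
url = absolute_url("127.0.0.1:8000", "/api/viewname/")
print(url)
```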
Can I force Scrapy to request a URL including commas without encoding them into %2C? The site (a Phorum board) I want to crawl does not accept encoded URLs and redirects me to the root.
So, for example, I have site to parse: example.phorum.com/read.php?12,8
The url is being encoded into: example.phorum.com/read.php?12%2C8=
But every time I try to request this URL, I'm redirected to the page with the list of topics:
example.phorum.com/list.php?12
In those example URLs 12 is category number, 8 is topic number.
I tried to disable redirecting by disabling RedirectMiddleware:
DOWNLOADER_MIDDLEWARES = {
'scrapy.contrib.downloadermiddleware.redirect.RedirectMiddleware': None,
}
and in spider:
handle_httpstatus_list = [302, 403]
I also tried to rewrite this URL and request it from a sub-parser:
rules = [Rule(RegexLinkExtractor(allow=[r'(.*%2C.*)']), follow=True, callback='prepare_url')]

def prepare_url(self, response):
    url = response.url
    url = re.sub(r'%2C', ',', url)
    if "=" in url[-1]:
        url = url[:-1]
    yield Request(urllib.unquote(url), callback=self.parse_site)
where parse_site is the target parser, which is still called with the encoded URL.
Thanks in advance for any feedback
You can try canonicalize=False. Example IPython session:
In [1]: import scrapy
In [2]: from scrapy.contrib.linkextractors.regex import RegexLinkExtractor
In [3]: hr = scrapy.http.HtmlResponse(url="http://example.phorum.com", body="""<a href="list.php?1,2">link</a>""")
In [4]: lx = RegexLinkExtractor(canonicalize=False)
In [5]: lx.extract_links(hr)
Out[5]: [Link(url='http://example.phorum.com/list.php?1,2', text=u'link', fragment='', nofollow=False)]
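For completeness, the percent-encoding at the heart of the problem is easy to reproduce with the standard library; unquoting %2C restores the comma the site expects:

```python
from urllib.parse import unquote  # urllib.unquote on Python 2

# The URL Scrapy produced vs. the one the Phorum board accepts.
encoded = "http://example.phorum.com/read.php?12%2C8"
decoded = unquote(encoded)
print(decoded)
```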
During the processing of a request in Django, I need to perform a nested request to the same application. Consider this example: while processing the sendmail request, I attempt to make another request to the same server to obtain the content of an attachment (the body of the mail and a list of URLs whose content to attach are provided to the sendmail view function through POST parameters):
def sendmail(request):
    mail = ...  # create a mail object
    for url in urls:  # iterate over the desired attachment urls
        data = urllib.urlopen('http://127.0.0.1:8000' + url).read()
        mail.attach(data)
There are several issues with this approach. First, it doesn't work with the development server because it can only process one request at a time: as it is already processing the sendmail request, attempting to read from the given url will block forever.
Second, I have to specify the server's ip and port, which is not very nice.
I would like to do something like that instead:
data = django_get(url).read()
where the hypothetical django_get method would not really make an http request, but instead directly call the django component that takes an url and returns an HttpResponse. That would solve both problems, as there would not be any actual socket connection, and it would not be necessary to include the server/port in the url. How could that be achieved?
The opposite of reverse() is resolve().
This is Ignacio Vazquez-Abrams' answer for the lazy:
from django.core.urlresolvers import resolve

def sendmail(request):
    mail = ...  # create a mail object
    for url in urls:  # iterate over the desired attachment urls
        resolved = resolve(url)
        request.path = url
        mail.attach(resolved.func(request, *resolved.args, **resolved.kwargs))
Put the desired functionality in a separate function that both your sendmail function and the original page's function can call.
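A minimal sketch of that last suggestion. The helper name make_attachment and its body are hypothetical; the point is that the view and sendmail share one plain function instead of one calling the other over HTTP:

```python
def make_attachment(url):
    """Shared helper: build the attachment bytes for a given url.
    (Hypothetical stand-in for whatever the original view computed.)"""
    return ("attachment for %s" % url).encode()

# The view wraps make_attachment in an HttpResponse; sendmail calls it
# directly, so no nested HTTP request (and no blocked dev server) is needed.
attachments = [make_attachment(u) for u in ["/doc/1/", "/doc/2/"]]
print(attachments)
```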