Django no text in response - django

Somewhere in my views.py, I have
def loadFcs(request):
    r = requests.get('a url')
    res = json.loads(r.text)
    #Do other stuff
    return HttpResponse('some response')
Now when I call this from my JavaScript, loadFcs gets called, and probably requests.get gets called asynchronously. So I end up seeing 'TypeError at /loadFcs: expected string or buffer', and the trace points to the line with
res = json.loads(r.text)
I also modified my code to check what the problem was:
def loadFcs(request):
    r = requests.get('a url')
    if r == None:
        print 'r is none'
    if r.text == None:
        print 'text is None'
    res = json.loads(r.text)
    #Do other stuff
    return HttpResponse('some response')
and noticed that 'text is None'. So I think I need to adjust the code so that requests.get is synchronous. I think the method execution continues and the return statement is hit even before r.text has a value.
Suggestions?

Okay, so I tried the same thing from the Python command line and it worked, BUT the same code did not work on my server.
So what was the problem?
Apparently, response.text comes back in some encoding (UTF-8) which my server was not set up to receive, so it was just throwing the text away, hence None.
Solution: use response.content (which is the raw bytes).
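For anyone hitting the same thing, here is a minimal sketch of the fix (hypothetical URL, and assuming the endpoint actually returns UTF-8 JSON): decode the raw bytes explicitly instead of relying on the encoding guessed for response.text.

import json
import requests
from django.http import HttpResponse

def loadFcs(request):
    r = requests.get('a url')
    # r.content is the raw bytes of the body; decode it explicitly
    # instead of trusting the encoding guessed for r.text
    res = json.loads(r.content.decode('utf-8'))
    #Do other stuff
    return HttpResponse('some response')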

Related

Why does Python-Requests return the same data when issuing POST in a loop even after closing sessions?

I'm using Python Requests and need to issue POST requests to a SOAP API endpoint. However, I keep receiving the same response for multiple requests, even though I rebuild a new request in each loop iteration. I've tried closing sessions/responses to no avail.
Manually issuing a request returns different data. What may I be missing?
Here's a sample of the code:
quoteVariables = """
<QuoteVariables>
{0}
</QuoteVariables>"""

for state, zipcode in states.iteritems():
    for key, value in buckets.iteritems():
        quotesXMLWithStateZip = buildQuote() #returns long xml string
        quotes = quoteVariables.format(quotesXMLWithStateZip)
        soapRequest = soapXML.format(state, quotes, value)
        headers = {'Content-Type': 'text/xml;charset=UTF-8',
                   'SOAPAction': 'http://my.url.com',
                   'Connection': 'close'}
        with requests.Session() as s:
            response = s.post('https://my.url.com/endpoint', headers=headers,
                              data=soapRequest, stream=False)
            if response.status_code == 200:
                xml = xmltodict.parse(response.text)
                #fetch relevant part of xml response, ignore soap headers etc.
            else:
                print "Failure! Status: " + str(response.status_code) + " Reason: " + response.reason
            response.close()
        s.close()
        #reset xml strings to prevent stale data from lying around owing to string sharing/copying
        quotesXMLWithStateZip = ""
        quotes = ""
        soapRequest = ""
Surprisingly, moving this code into a separate class seems to solve the problem - my guess is that something with globals/state throws off the python-requests module (I could be wrong).
Also, after moving to a class, the with block becomes optional, i.e., whether it exists or not doesn't seem to matter - the code works as expected with or without it.
For example:
responses = {}
for state, zipcode in states.iteritems():
    for key, value in buckets.iteritems():
        fetcher = MyFetcher()
        responses[state] = fetcher.getQuotes(state, zipcode, key, value)
print responses
The code of MyFetcher looks something like:
class MyFetcher:
    def getQuotes(self, state, zipcode, key, value):
        quotesXMLWithStateZip = self.buildQuote() #returns long xml string
        quotes = self.quoteVariables.format(quotesXMLWithStateZip)
        soapRequest = self.soapXML.format(state, quotes, value)
        headers = {'Content-Type': 'text/xml;charset=UTF-8',
                   'SOAPAction': 'http://my.url.com',
                   'Connection': 'close'}
        with requests.Session() as s: # <- ENTIRELY OPTIONAL
            response = s.post('https://my.url.com/endpoint', headers=headers,
                              data=soapRequest, stream=False)
            if response.status_code == 200:
                xml = xmltodict.parse(response.text)
                #fetch relevant part of xml response, ignore soap headers etc.
                return parsedResponse
            else:
                print "Failure! Status: " + str(response.status_code) + " Reason: " + response.reason
                return None
        response.close() # <- NOT NEEDED
        s.close() # <- NOT NEEDED
I can only guess that globals somehow interfere with the act of building/sending a request and/or receiving a response.
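For what it's worth, here is a minimal self-contained variant of the same idea (hypothetical URL and template names, as above): build the payload locally on every call and use a plain requests.post, so there is no module-level state left to leak between requests.

import requests
import xmltodict

def fetch_quote(state, quotes, value, soap_xml_template):
    # Everything is built from scratch on each call; nothing is shared.
    soap_request = soap_xml_template.format(state, quotes, value)
    headers = {'Content-Type': 'text/xml;charset=UTF-8',
               'SOAPAction': 'http://my.url.com',
               'Connection': 'close'}
    # requests.post opens and closes its own connection, so no session
    # state survives between calls.
    response = requests.post('https://my.url.com/endpoint',
                             headers=headers, data=soap_request)
    if response.status_code == 200:
        return xmltodict.parse(response.text)
    return None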

Scrapy webcrawler gets caught in infinite loop, despite initially working.

Alright, so I'm working on a Scrapy-based webcrawler with some simple functionality. The bot is supposed to go from page to page, parsing and then downloading. I've gotten the parser to work, and I've gotten the downloading to work; I can't get the crawling to work. I've read the documentation on the Spider class and the documentation on how parse is supposed to work. I've tried returning vs. yielding, and I'm still nowhere.
What seems to happen, from a debug script I wrote, is the following: the code will run, it will grab page 1 just fine, it'll get the link to page two, it'll go to page two, and then it will happily stay on page two, not grabbing page three at all. I don't know where the mistake in my code is, or how to alter it to fix it, so any help would be appreciated. I'm sure the mistake is basic, but I can't figure out what's going on.
import scrapy

class ParadiseSpider(scrapy.Spider):
    name = "testcrawl2"
    start_urls = [
        "http://forums.somethingawful.com/showthread.php?threadid=3755369&pagenumber=1",
    ]

    def __init__(self):
        self.found = 0
        self.goto = "no"

    def parse(self, response):
        urlthing = response.xpath("//a[@title='Next page']").extract()
        urlthing = urlthing.pop()
        newurl = urlthing.split()
        print newurl
        url = newurl[1]
        url = url.replace("href=", "")
        url = url.replace('"', "")
        url = "http://forums.somethingawful.com/" + url
        print url
        self.goto = url
        return scrapy.Request(self.goto, callback=self.parse_save, dont_filter=True)

    def parse_save(self, response):
        nfound = str(self.found)
        print "Testing" + nfound
        self.found = self.found + 1
        return scrapy.Request(self.goto, callback=self.parse, dont_filter=True)
Use Scrapy's rule engine so that you don't need to write the next-page crawling code in the parse function: just pass the XPath of the next-page link in restrict_xpaths, and the callback will get the response of each crawled page. Note that rules only work on a CrawlSpider subclass, and the callback must not be named parse, because CrawlSpider uses parse internally:
rules = (Rule(LinkExtractor(restrict_xpaths=['//a[contains(text(), "Next")]']),
              callback='parse_page', follow=True),)

def parse_page(self, response):
    print response.url
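Putting it together, a minimal sketch of the whole spider as a CrawlSpider (import paths assume Scrapy 1.x; parse_page is an illustrative name):

from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class ParadiseSpider(CrawlSpider):
    name = "testcrawl2"
    start_urls = [
        "http://forums.somethingawful.com/showthread.php?threadid=3755369&pagenumber=1",
    ]
    # Follow every "Next page" link; each fetched page is passed to parse_page.
    rules = (
        Rule(LinkExtractor(restrict_xpaths=["//a[@title='Next page']"]),
             callback="parse_page", follow=True),
    )

    def parse_page(self, response):
        # Do the per-page parsing/downloading here.
        print response.url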

Word Crawler script not fetching the target words - Python 2.7

I am a newbie to programming, learning from Udacity. In Unit 2, I studied the following code to fetch links from a particular URL:
import urllib2

def get_page(url):
    return urllib2.urlopen(url).read()

def get_next_target(page):
    start_link = page.find('<a href=')
    if start_link == -1:
        return None, 0
    start_quote = page.find('"', start_link)
    end_quote = page.find('"', start_quote + 1)
    url = page[start_quote + 1:end_quote]
    return url, end_quote

def print_all_links(page):
    while True:
        url, endpos = get_next_target(page)
        if url:
            print url
            page = page[endpos:]
        else:
            break

print_all_links(get_page('http://en.wikipedia.org'))
It worked perfectly. Today I wanted to modify this code so the script could crawl for a particular word in a webpage rather than URLs. Here is what I came up with:
import urllib2

def get_web(url):
    return urllib2.urlopen(url).read()

def get_links_from(page):
    start_at = page.find('america')
    if start_at == -1:
        return None, 0
    start_word = page.find('a', start_at)
    end_word = page.find('a', start_word + 1)
    word = page[start_word + 1:end_word]
    return word, end_word

def print_words_from(page):
    while True:
        word, endlet = get_links_from(page)
        if word:
            print word
            page = page[endlet:]
        else:
            break

print_words_from(get_web('http://en.wikipedia.org/wiki/America'))
When I run the above, I get no errors, but nothing prints out either. So I added the print keyword:
print print_words_from(get_web('http://en.wikipedia.org/wiki/America'))
When I run it, I get None as the result. I am unable to understand where I am going wrong. My code probably is messed up, but because no error comes up, I cannot figure out where.
Seeking help.
I understand this as: you are trying to get it to print the word 'America' for every instance of that word on the Wikipedia page.
You are searching for 'america', but the word is written 'America'; 'a' is not equal to 'A', which is why you find no results.
Also, start_word was searching for 'a', so I adjusted that to search for 'A' instead.
At this point, it was printing 'meric' over and over. I edited your word to begin at start_word rather than start_word + 1, and I adjusted the slice end to end_word + 1 so that it prints the last letter.
It is now working on my machine. Let me know if you need any clarification.
import urllib2

def get_web(url):
    return urllib2.urlopen(url).read()

def get_links_from(page):
    start_at = page.find('America')
    if start_at == -1:
        return None, 0
    start_word = page.find('A', start_at)
    end_word = page.find('a', start_word + 1)
    word = page[start_word:end_word + 1]
    return word, end_word

def print_words_from(page):
    while True:
        word, endlet = get_links_from(page)
        if word:
            print word
            page = page[endlet:]
        else:
            break

print_words_from(get_web('http://en.wikipedia.org/wiki/America'))
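As an aside, if the goal is only to print every occurrence of a fixed word, a plain find loop is simpler than slicing between single letters (a sketch, not part of the course code):

import urllib2

def print_occurrences(page, word):
    # Scan left to right, printing the word at each match position.
    pos = page.find(word)
    while pos != -1:
        print page[pos:pos + len(word)]
        pos = page.find(word, pos + 1)

print_occurrences(urllib2.urlopen('http://en.wikipedia.org/wiki/America').read(), 'America')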

(Cocoa error 3840.)" (JSON text did not start with array or object and option to allow fragments not set.)

I see this error happening to many other users, and although I tried many of the suggested solutions, nothing seems to work, so I'm posting my specific case.
I'm trying to save an image from an iPhone application to my PostgreSQL database using Django.
My view looks like this:
def objectimages(request, obj_id):
    object = imagesTable.objects.filter(uniqueid=obj_id)
    if request.method == 'POST':
        value = request.POST.get('image')
        f = open('image.txt', 'w+')
        f.write(value)
        f.close()
        object.update(image=value)
        return HttpResponse("Post received")
    elif request.method == 'GET':
        output = serializers.serialize('json', object, fields=('image',),
                                       indent=5, use_natural_keys=True)
        return HttpResponse(output, content_type="application/json")
The file is just for debugging purposes, and it seems to contain the correct data.
I also tried return HttpResponse({}, content_type="application/json").
In my application the POST request is done using AFNetworking, like this:
- (void)saveImageToDB
{
    NSString *BaseURLString = POST_IMAGE;
    NSData *data = UIImagePNGRepresentation(self.image);
    AFHTTPRequestOperationManager *manager = [AFHTTPRequestOperationManager manager];
    [manager POST:BaseURLString
       parameters:@{@"image": [data base64EncodedString]}
          success:^(AFHTTPRequestOperation *operation, id responseObject) {
              NSLog(@"image saved successfully");
          }
          failure:^(AFHTTPRequestOperation *operation, NSError *error) {
              NSLog(@"Error saving image: %@", error);
          }];
}
So I added this line of code:
manager.responseSerializer = [AFJSONResponseSerializer serializerWithReadingOptions:NSJSONReadingAllowFragments];
and got this error: Invalid value around character 1
I also tried:
AFJSONRequestSerializer *requestSerializer = [AFJSONRequestSerializer serializer];
[requestSerializer setValue:@"application/json" forHTTPHeaderField:@"Content-Type"];
[requestSerializer setValue:@"application/json" forHTTPHeaderField:@"Accept"];
manager.requestSerializer = requestSerializer;
which always ended with "The request timed out".
Please advise what is wrong with my request.
You're returning the string Post received as a response to POST requests. This string is not valid JSON (if you wanted to return just a string in your JSON response, the correct JSON representation would be "Post received", with quotes), and your JSON deserializer seems to complain about exactly that. Try serializing your response in both branches of your logic.
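A minimal sketch of that fix on the Django side (the key name status is just an example):

import json
from django.http import HttpResponse

def post_reply():
    # Return a valid JSON document rather than a bare string, so the
    # client-side JSON deserializer (e.g. AFJSONResponseSerializer) can parse it.
    body = json.dumps({'status': 'Post received'})
    return HttpResponse(body, content_type="application/json")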

AT commands with pyserial not working for receiving SMS

This is a code snippet written in Python to receive SMS via a USB modem. When I run the program, all I get is the status message "OK", but nothing else. How do I fix this so that the messages I am receiving get printed?
import serial

class HuaweiModem(object):
    def __init__(self):
        self.open()

    def open(self):
        self.ser = serial.Serial('/dev/ttyUSB_utps_modem', 115200, timeout=1)
        self.SendCommand('ATZ\r')
        self.SendCommand('AT+CMGF=1\r')

    def SendCommand(self, command, getline=True):
        self.ser.write(command)
        data = ''
        if getline:
            data = self.ReadLine()
        return data

    def ReadLine(self):
        data = self.ser.readline()
        print data
        return data

    def GetAllSMS(self):
        self.ser.flushInput()
        self.ser.flushOutput()
        command = 'AT+CMGL="all"\r'
        print self.SendCommand(command, getline=False)
        self.ser.timeout = 2
        data = self.ser.readline()
        print data
        while data != '':
            data = self.ser.readline()
            if data.find('+cmgl') > 0:
                print data

h = HuaweiModem()
h.GetAllSMS()
In GetAllSMS there are two things I notice:
1) You are using self.ser.readline and not self.ReadLine, so GetAllSMS will not try to print anything (except the first response line) before the OK final response is received, and at that point data.find('+cmgl')>0 will never match.
Is that perhaps the problem?
2) Will print self.SendCommand(command, getline=False) call the function just as if it were written as a bare self.SendCommand(command, getline=False)? (Just checking, since I do not write Python myself.)
In any case, you should rework your AT parsing a bit.
def SendCommand(self,command, getline=True):
The getline parameter here is not a very good abstraction. Leave reading responses out of the SendCommand function; instead, implement proper parsing of the responses given back by the modem and handle that outside. In the general case, something like:
self.SendCommand('AT+CSOMECMD\r')
data = self.ser.readline()
while not IsFinalResult(data):
    data = self.ser.readline()
    print data # or do whatever you want with each line
For commands without any explicit processing of the responses, you can implement a SendCommandAndWaitForFinalResponse function that does the above.
See this answer for more information about an IsFinalResult function.
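For reference, a rough sketch of such an IsFinalResult function (this list of final result codes is an illustrative subset; see the linked answer for the full treatment):

def IsFinalResult(line):
    # Final result codes terminate the response to an AT command.
    final = ('OK', 'ERROR', 'NO CARRIER', 'NO DIALTONE',
             'BUSY', 'NO ANSWER')
    line = line.strip()
    return line in final or line.startswith('+CME ERROR') or line.startswith('+CMS ERROR')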
Where you are having problems is in your GetAllSMS function. Replace your GetAllSMS function with mine and see what happens:
def GetAllSMS(self):
    self.ser.flushInput()
    self.ser.flushOutput()
    command = 'AT+CMGL="all"\r' #to get all messages, both read and unread
    print self.SendCommand(command, getline=False)
    while 1:
        self.ser.timeout = 2
        data = self.ser.readline()
        print data
Or this:
def GetAllSMS(self):
    self.ser.flushInput()
    self.ser.flushOutput()
    command = 'AT+CMGL="all"\r' #to get all messages, both read and unread
    print self.SendCommand(command, getline=False)
    self.ser.timeout = 2
    data = self.ser.readall() #you can also use read(10000000)
    print data
That's all.