My gateway says I need to URL-encode all the data I send. How can I do this in Django?
requests.post(default_gateway.keyword_forwarding_url, data=raw_data,
stream=True, verify=True)
I have tried
import urllib
requests.post(default_gateway.keyword_forwarding_url, data=urllib.urlencode(raw_data),
stream=True, verify=True)
Your data needs to be passed to urlencode() as a dict or as a sequence of two-element tuples:
import urllib
formatted_raw_data = {raw_data[0]: raw_data[1], raw_data[2]: raw_data[3]}  # or however it needs to be structured
requests.post(default_gateway.keyword_forwarding_url, data=urllib.urlencode(formatted_raw_data),
stream=True, verify=True)
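Two things worth noting: urllib.urlencode only exists on Python 2 (on Python 3 it is urllib.parse.urlencode), and requests will form-encode a plain dict passed to data= by itself, so the explicit call is often unnecessary. A minimal sketch, assuming the payload can be expressed as a dict (the field names and values here are made up):
import requests

# requests form-encodes a dict passed to data= automatically,
# producing an application/x-www-form-urlencoded body.
payload = {'keyword': 'JOIN', 'msisdn': '15551234567'}  # hypothetical fields
requests.post(default_gateway.keyword_forwarding_url, data=payload,
              stream=True, verify=True)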
I am testing an endpoint in the following manner:
from rest_framework.test import APIClient
from django.urls import reverse
import json
client = APIClient()
response = client.post(list_url, {'name': 'Zagyg Co'})
I find that the model object is being created with a name of ['Zagyg Co'] instead of Zagyg Co.
Inspecting the request object reveals the following:
self._request.META['CONTENT_TYPE']
#=> 'multipart/form-data; boundary=BoUnDaRyStRiNg; charset=utf-8'
self._request.body
#=> b'--BoUnDaRyStRiNg\r\nContent-Disposition: form-data; name="name"\r\n\r\nZagyg Co\r\n--BoUnDaRyStRiNg--\r\n'
self._request.POST
#=> <QueryDict: {'name': ['Zagyg Co']}>
Using JSON like so:
response = client.post(
    list_url,
    json.dumps({'name': 'Zagyg Co'}),
    content_type='application/json',
)
sets the name correctly. Why is this so?
request.data is a Django QueryDict. When the data is sent as a multipart form, it handles potential multiple values of the same field by storing them in a list.
Using its dict() method, or plain dictionary access syntax, returns the last value stored under the relevant key(s):
request.data['name']
#=> 'Zagyg Co'
request.data.dict()
#=> {'name': 'Zagyg Co'}
Which is great if it's guaranteed that each key has a single value. For list values there's getlist:
request.data.getlist('name')
#=> ['Zagyg Co']
For mixes of keys with single and multiple values, some manual parsing seems to be required; one possible approach is sketched below.
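One possible shape for that manual parsing, assuming request.data is a QueryDict (as it is for form/multipart requests) and using its lists() method, which yields each key with its full list of values. The helper name and the example fields are made up:
def flatten_querydict(qd):
    # Keep the list only where a field genuinely has multiple values,
    # otherwise unwrap it to the single value.
    return {key: values if len(values) > 1 else values[0]
            for key, values in qd.lists()}

# flatten_querydict(request.data)
# => {'name': 'Zagyg Co', 'tags': ['red', 'blue']}
In tests, the multipart encoding can also be sidestepped entirely by passing format='json' to APIClient.post, which is what the json.dumps variant above achieves by hand.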
I'm new to Python, but I need to write a script that reads a response from an API and writes it to a file. The version is Python 2.7.
Here is the code:
import requests
import json
#URL = "someurl"
# sending get request and saving the response as response object
#response = requests.get(url = URL)
#print(response.status_code)
#print(response.content)
items = json.loads('{"batch_id":"5d83a2d317cb4","names":{"19202":"text1","19203":"text2"}}')
print(items['names'])
for item in items['names']:
    print(item)
Current output is
19202
19203
But I would like to pick out text1 and text2 and write them to a file. Can anyone help me get those values?
items is a dictionary, and items['names'] is also a dictionary. for item in items['names']: iterates over its keys, not its values, so item holds each key in turn.
To access the value in that key-value pair, use print(items['names'][item]) instead of print(item). Your code should look something like this:
import requests
import json
#URL = "someurl"
# sending get request and saving the response as response object
#response = requests.get(url = URL)
#print(response.status_code)
#print(response.content)
items = json.loads('{"batch_id":"5d83a2d317cb4","names":{"19202":"text1","19203":"text2"}}')
print(items['names'])
for item in items['names']:
    print(items['names'][item])
>>> print(list(items['names'].values()))
['text1', 'text2']
So you can do it like this:
for item in items['names'].values():
    print(item)
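Since the original goal was to write those values to a file, a minimal sketch (the filename names.txt is just an example; open/write work the same on Python 2.7 and 3):
# Write one value per line to a file.
with open('names.txt', 'w') as f:
    for value in items['names'].values():
        f.write(value + '\n')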
I'm trying to import data from the link below using a SODA API and load it into a dataframe. I've never worked with a SODA API before. Can anyone suggest a good module, or explain how to do this?
https://health.data.ny.gov/Health/Medicaid-Potentially-Preventable-Emergency-Visit-P/cr7a-34ka
The code below did the trick:
Code:
import pandas as pd
from sodapy import Socrata
# Unauthenticated client only works with public data sets. Note 'None'
# in place of application token, and no username or password:
client = Socrata("health.data.ny.gov", None)
# First 2000 results, returned as JSON from API / converted to Python list of
# dictionaries by sodapy.
results = client.get("cr7a-34ka", limit=2000)
# Convert to a pandas DataFrame
df = pd.DataFrame.from_records(results)
For Python, the unofficial sodapy library is a great place to start!
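If the dataset has more rows than you fetch in one call, the results can be paged; a sketch under the assumption that sodapy's get() forwards limit and offset as standard SODA query parameters:
import pandas as pd
from sodapy import Socrata

client = Socrata("health.data.ny.gov", None)

# Page through the dataset 2000 rows at a time until an empty batch comes back.
all_rows, offset, page_size = [], 0, 2000
while True:
    batch = client.get("cr7a-34ka", limit=page_size, offset=offset)
    if not batch:
        break
    all_rows.extend(batch)
    offset += page_size

df = pd.DataFrame.from_records(all_rows)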
On my server the array appears as follows:
data = [u'Data1', u'Data2', u'Data3']
In Django, I send the data through to the client using:
render(..., {'data': data})
On the client side I try to render in JavaScript using:
{{data}}
and get:
[u'Data1B', u'Data2', u'Data3']
How can I fix this encoding issue?
You need to mark the string as safe and escape it in order for this to work:
{{data|safe|escape}}
You can also pass your data as a JSON string. In your views.py:
import json
...
render(...{'data': json.dumps(data)})
and then in your JavaScript:
var data = JSON.parse("{{ data|escapejs }}");
But as @karthikr already said, |safe is absolutely sufficient in your case.
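On Django 2.1 and later, the json_script filter is another escaping-safe way to hand data to JavaScript; a sketch assuming the view passes the plain Python list (the element id is arbitrary). In the template:
{{ data|json_script:"data-json" }}
and in JavaScript:
var data = JSON.parse(document.getElementById('data-json').textContent);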
When I use the urllib module, I can fetch, print, and search a website's HTML the first time, but when I try again it is gone. How can I keep the HTML available throughout the program?
For example, when I try:
html = urllib.request.urlopen('http://www.bing.com/search?q=Mike&go=&qs=n&form=QBLH&filt=all&pq=mike&sc=8-2&sp=-1&sk=')
search = re.findall(r'Mike',str(html.read()))
search
I get:
['Mike','Mike','Mike','Mike']
But then when I try to do this a second time like so:
results = re.findall(r'Mike',str(html.read()))
I get:
[]
when calling results.
Why is this and how can I stop it from happening/fix it?
Without being very well versed in Python, I'm guessing html.read() reads the HTTP stream, so when you call it the second time there is nothing left to read.
Try:
html = urllib.request.urlopen('http://www.bing.com/search?q=Mike&go=&qs=n&form=QBLH&filt=all&pq=mike&sc=8-2&sp=-1&sk=')
data = str(html.read())
search = re.findall(r'Mike',data)
search
And then use
results = re.findall(r'Mike',data)
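A small sketch illustrating that the response object behaves like a stream and can only be consumed once (example.com is just a placeholder URL):
from urllib.request import urlopen

with urlopen('http://example.com') as html:
    first = html.read()   # full response body
    second = html.read()  # b'' - the stream is already exhausted
print(len(first), len(second))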
In addition to @rvalik's correct guess that you can only read a stream once, data = str(html.read()) is incorrect. urlopen returns a bytes object, and str returns the display representation of that object. An example:
>>> data = b'Mike'
>>> str(data)
"b'Mike'"
What you should do is either decode the bytes object using the encoding of the HTML page (UTF-8 in this case):
from urllib.request import urlopen
import re
with urlopen('http://www.bing.com/search?q=Mike&go=&qs=n&form=QBLH&filt=all&pq=mike&sc=8-2&sp=-1&sk=') as html:
    data = html.read().decode('utf8')

print(re.findall(r'Mike', data))
or search with a bytes object:
from urllib.request import urlopen
import re
with urlopen('http://www.bing.com/search?q=Mike&go=&qs=n&form=QBLH&filt=all&pq=mike&sc=8-2&sp=-1&sk=') as html:
    data = html.read()

print(re.findall(rb'Mike', data))
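Whichever you choose, be consistent: in Python 3, mixing the two (for example re.findall(r'Mike', data) where data is bytes) raises TypeError: cannot use a string pattern on a bytes-like object.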