Python requests.request ValueError: too many values to unpack - python-2.7

I'm working on entity extraction using an API call to https://dandelion.eu/. I send text files and automatically get back a JSON file as the response. It's not the first time I've used this service, and it has worked really well. Now I've started to send a new set of text files with the same parameters I always used, but I get this: ValueError: too many values to unpack.
Here is my code:
values = {"text": " ",
          "min_confidence": "0.6",
          # duplicate "include" keys would silently overwrite each other,
          # so the options are combined into one comma-separated value:
          "include": "types,abstract,categories"
          }
headers = {'X-Target-URI': 'https://api.dandelion.eu',
           'Host': 'api.dandelion.eu',
           'Connection': 'keep-alive',
           'Server': 'Apache-Coyote/1.1',
           'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8',
           }
for roots, dirs, files in os.walk(spath):  # spath is specified
    for file in files:
        if file.startswith("A0"):
            with open(file, "r") as f:
                text = f.read()
            values["text"] = " ".join(text.split())
            # api call
            url = "https://api.dandelion.eu/datatxt/nex/v1/"
            data = urllib.urlencode(values, "utf-8")
            response = requests.request("POST", url, data=data, headers=headers, params=token_api)
            content = response.json()
            print content
ValueError: too many values to unpack
Can somebody help me with this? I have always used the same code for other API calls and it worked well. I don't know what is wrong now.

The API returns more than one value.
Please refer to the API documentation and see what the return values are.
(You did not mention which API raised the error in your question.)
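As a side note on the question's code: a dict cannot hold three separate "include" keys (later duplicates silently overwrite earlier ones), and requests will form-encode a plain dict by itself, so the urllib.urlencode() step is unnecessary. A minimal sketch of the call, assuming token_api is a dict of auth parameters:

```python
import requests

# Duplicate dict keys overwrite each other, so the three "include"
# options are combined into a single comma-separated value.
values = {"text": "",
          "min_confidence": "0.6",
          "include": "types,abstract,categories"}

def annotate(text, token_api):
    values["text"] = " ".join(text.split())
    # Passing the dict directly lets requests form-encode it and set
    # the Content-Type header itself; no manual urlencode() needed.
    return requests.post("https://api.dandelion.eu/datatxt/nex/v1/",
                         data=values, params=token_api)
```

This is a sketch under those assumptions, not a confirmed fix for the ValueError itself.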

Related

copyleaks sent pdfReport to endpoint as binary on request.body, not as file

I have a Django view that gets the request when Copyleaks sends the PDF result of a scan.
I get the file as request.body, and request.FILES is empty.
I checked the Copyleaks docs to see if I could pass an extra argument (the way we pass
enctype="multipart/form-data" in a Django form to get files into request.FILES), but I did not see anything related.
I can read the request body and write it to a file, no problem there, but it would be great if I could get the PDF file directly in request.FILES.
myobj = json.dumps(
    {
        "pdfReport": {
            "verb": "POST",
            "endpoint": "https://aa67-212-47-137-71.in.ngrok.io/en/tool/copyleaks/download/",
        },
        "completionWebhook": "https://aa67-212-47-137-71.in.ngrok.io/en/tool/copyleaks/complete/",
        "maxRetries": 3,
    }
)
response = requests.post(
    "https://api.copyleaks.com/v3/downloads/file4/export/export16",
    headers=headers,
    data=myobj,
)
I tried to change the Content-Type manually and got this error:
django.http.multipartparser.MultiPartParserError: Invalid boundary in multipart: None
Bad request (Unable to parse request body): /en/tool/copyleaks/download/
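Since Copyleaks delivers the PDF as the raw POST body rather than as a multipart upload, request.FILES will stay empty by design, and persisting request.body directly is the straightforward route. A framework-agnostic sketch (the function name and default path are made up for illustration):

```python
def save_report(body, path="copyleaks_report.pdf"):
    # The PDF arrives as raw bytes in the request body, not as a
    # multipart part, so just write those bytes to disk.
    with open(path, "wb") as f:
        f.write(body)
    return path
```

In a Django view this would be called as save_report(request.body).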

What is the difference in sending data using json.dumps and by normal python dictionary?

I am using an API to do a POST, and I used the Requests library to perform it.
data_post = requests.post(url='https://proxy.vox-cpaas.in/api/user',
                          data={'authtoken': '945e5f0f_ssss_408e_pppp_ellll234122',
                                'projectid': "pid_a44444fae2_454542_41d4_8630_6454545cdafff12",
                                'username': "username",
                                'password': "user.username"})
The POST above worked successfully, but this variant:
data_post = requests.post(url='https://proxy.vox-cpaas.in/api/user',
                          data=json.dumps({'authtoken': '945e5f0f_ssss_408e_pppp_ellll234122',
                                           'projectid': "pid_a44444fae2_454542_41d4_8630_6454545cdafff12",
                                           'username': "username",
                                           'password': "user.username"}))
The second version did not work for me.
Can someone please explain the difference?
The recommended way of posting data as JSON with requests is the json parameter:
r = requests.post(url, json=my_dictionary)
This way requests will encode the dictionary for you (no need for json.dumps()) and will set the correct Content-Type header.
In the first example, when you pass a python dictionary directly to requests' data, the result is form-encoded data with "Content-Type": "application/x-www-form-urlencoded".
In the second one, you pass a string which contains serialized JSON to requests' data. However, since data is just a string, requests may not identify that it is JSON, and may leave Content-Type unset or set it to "text/plain", and the receiving side may not recognize it.
For a correct request with a JSON body you need "Content-Type": "application/json".
You may want to explicitly set content-type in headers:
r = requests.post(
    url,
    data=json.dumps(my_dictionary),
    headers={'content-type': 'application/json'}
)
But requests provides the json parameter to simplify all of this.
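To make the difference concrete, here is a small stdlib-only sketch of what requests produces in each case (the payload values are made up):

```python
import json
try:
    from urllib.parse import urlencode  # Python 3
except ImportError:
    from urllib import urlencode        # Python 2

payload = {'username': 'demo', 'projectid': 'pid_123'}

# data=payload -> requests form-encodes the dict, roughly:
form_body = urlencode(payload)   # e.g. 'username=demo&projectid=pid_123'

# json=payload (or data=json.dumps(payload)) -> a JSON string body:
json_body = json.dumps(payload)
```

The two bodies are different byte strings, which is why a server expecting one encoding rejects the other.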

python-requests: POST works, PUT fails - "Authentication credentials were not provided."

Editing again with more updates:
Trying to troubleshoot python-requests to see if something is wrong with a PUT request, but not sure how to proceed.
Below is a snippet of my code:
def API_request(url=None, headers=None, payload=None, update=False):
    r = None
    if update and headers and payload:
        print "put request to %s with %s of %s" % (url, headers, payload)
        r = requests.put(url, headers=headers, data=payload)
    if headers and payload and not update:
        print "post request to %s with %s of %s" % (url, headers, payload)
        r = requests.post(url, headers=headers, data=payload)
    print r.status_code
    print r.text
When the above sends a POST request to create a record, it works. However, whenever it sends a PUT request, I get a 401 error: "Authentication credentials were not provided." This happens across multiple endpoints.
401
{"detail":"Authentication credentials were not provided."}
If I copy/paste the relevant printed output from the above PUT print function into a direct HTTPie request, it works. The below request results in a successful 200 response and updated record on the server:
http --debug PUT [url] < [file containing payload] Authorization:'Token [token]'
If I hard-code a simple script that does nothing more than import requests and json and PUT the exact same data to the same url using the same headers (printed from the original statement), it works. The below script results in a successful 200 response and an updated record on the server:
import requests, json

url = "[my url]"
headers = {'Content-Type': 'application/json', 'Authorization': 'Token [my token]'}
data = {[my data]}
payload = json.dumps(data)
r = requests.put(url, headers=headers, data=payload)
print r.status_code
print r.text
I've sent the information from both scripts to https://requestbin.fullcontact.com/ and they look to be identical.
BIG ISSUE:
After an entire day of debugging I figured out that even the requests that were generating a 401 error were successfully hitting the server and updating the appropriate records. Put simply, this would not be possible without a correct and functional authentication. Given that, why would I be getting a 401 error from the PUT request?
Happy to answer any questions in comments.
The above was sending a follow-up GET request (without the header, so it was failing) after successfully sending the PUT request (which was succeeding). I caught this by logging all requests hitting the server.
I figured out why the follow-up GET was being sent and corrected that. I still don't understand why the r.status_code and r.text responses from the successful PUT never printed to the console. Had I seen those, it would have been much easier to find. However, since the main problem is resolved, I can't spend time troubleshooting that.
I should have been seeing both responses, the success and the failure, in the console; a problem for another time.
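One way to avoid this class of bug, where a follow-up request silently goes out without the auth header, is to attach the credentials to a requests.Session so that every request made through it carries them. A sketch, assuming the same Token scheme as in the question:

```python
import requests

def make_session(token):
    # Headers set on the Session are sent with every request made
    # through it, including any follow-up GETs, so no single call
    # can accidentally omit the Authorization header.
    session = requests.Session()
    session.headers.update({
        "Authorization": "Token %s" % token,
        "Content-Type": "application/json",
    })
    return session
```

With this, session.put(url, data=payload) and session.get(url) both authenticate the same way.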

How to successfully load bulk contacts into constant contact API via python?

I need to bulk-load contacts to a particular list via the Constant Contact API (http://developer.constantcontact.com/docs/contacts-api/contacts-collection.html?method=POST).
I can successfully add contacts using the same JSON string as below in the API GUI website (https://constantcontact.mashery.com/io-docs — find the POST 'add contact' to collection tab):
update_contact = {"lists": [{"id": "1"}],"email_addresses": [{"email_address": "yasmin1.abob19955#gmail.com"}],"first_name": "Ronald","last_name": "Martone"}
However, when I run the same JSON string in my python code, I get error 400 with the following error message from my response object:
[{"error_key":"query.param.invalid","error_message":"The query parameter first_name is not supported."},
{"error_key":"query.param.invalid","error_message":"The query parameter last_name is not supported."},
{"error_key":"query.param.invalid","error_message":"The query parameter lists is not supported."},
{"error_key":"query.param.invalid","error_message":"The query parameter email_addresses is not supported."}]
How can two of the same API calls produce different results? And how do I get my python code to work?
code:
import requests
headers = {
    'Authorization': 'Bearer X',
    'X-Originating-Ip': '1',
    'Content-Type': 'application/json'
}
update_contact = {"lists": [{"id": "1"}], "email_addresses": [{"email_address": "yasmin1.abob19955#gmail.com"}], "first_name": "Ronald", "last_name": "Martone"}
r_2 = requests.post('https://api.constantcontact.com/v2/contacts?action_by=ACTION_BY_OWNER&api_key=x', headers=headers, params=update_contact)
print(r_2.text)
You will need to change params to data
r_2 = requests.post('https://api.constantcontact.com/v2/contacts?action_by=ACTION_BY_OWNER&api_key=x', headers=headers, data=update_contact)
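Note also that the question's headers declare Content-Type: application/json, while the "query parameter … is not supported" errors show that params= put the fields into the query string. To keep the header and the body consistent, one could serialize the dict (or equivalently use requests' json= parameter); a stdlib-only sketch of the body:

```python
import json

update_contact = {"lists": [{"id": "1"}],
                  "email_addresses": [{"email_address": "yasmin1.abob19955#gmail.com"}],
                  "first_name": "Ronald",
                  "last_name": "Martone"}

# params= put these fields into the query string, which is why the API
# complained "The query parameter first_name is not supported"; a JSON
# request body avoids that entirely.
body = json.dumps(update_contact)
```

Then requests.post(url, headers=headers, data=body) sends it as the request body.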
Additionally, you can use a multipart endpoint to upload contacts as well. I have found this to be very easy especially if your contacts are in a csv file.
A sample code would look like this:
import requests as r
import csv
from datetime import datetime

url = 'https://api.constantcontact.com/v2/activities/addcontacts?api_key=<your_api_key>'
headers = {'Authorization': 'Bearer <access_token>',
           'X-Originating-Ip': '<ip>',
           'content-type': 'multipart/form-data'}
files = {'file_name': 'Book1.csv',
         'data': ('Book1.csv', open('Book1.csv', 'rb'),
                  'application/vnd.ms-excel', {'Expires': '0'}),
         'lists': ('<insert_listIds_seperated_by_commas>')}
response = r.post(url, headers=headers, files=files)
with open('CC_Upload_Response_Data.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    time_stamp = datetime.now().strftime('%m-%d-%Y %H:%M')
    row = [response, response.text, time_stamp]
    writer.writerow(row)
The headers of your csv file need to be like so: "First Name" "Last Name" "Email Address" "Custom Field 1" "Custom Field 2" and so on. You can find a complete list of column names here: http://developer.constantcontact.com/docs/bulk_activities_api/bulk-activities-import-contacts.html
The csv file that this code appends to acts like a log if you are going to schedule your .py file to run nightly. The log records the response code, the response text, and a timestamp.
Mess around with it a little and get it the way you like.

how to store python json.dumps into firebase as one payload, without firebase auto-creating indexes in front of each object

First time poster, and I'm not really a developer, so perspective is always appreciated :)
Objective:
I am attempting to PUT (or PATCH) a json.dumps(mergedFile) into Firebase as one payload, without Firebase auto-creating indexes (0, 1, etc.) in front of each object.
Problem statement:
I am submitting the following json object into the /testObject path:
[{"test1":"226.69"},{"test2":"7.48"}]
In firebase the response is stored as:
[
  {
    "testObject": {
      "0": {
        "test1": "226.69"
      },
      "1": {
        "test2": "7.48"
      }
    }
  }
]
Background:
The total # of items in the payload of the data I need to store is just over 5000.
If I parse each object via a for loop, the data is written as expected; however, this initiates a new request for each iteration of the loop and has a large overhead impact compared to just dumping one large object in one request.
Here is my Code:
import json
import requests
import xml.etree.ElementTree as ET

def get_data():
    try:
        print 'hamsters are running...'
        # OFFLINE TESTING
        sourceFile = 'response.xml'
        tree = ET.parse(sourceFile)
        root = tree.getroot()
        for symbol in root.iter('symbol'):
            company = symbol.attrib['company']
            location = symbol.attrib['location']
            destinationData = {company: location}
            mergedFile.append(destinationData)
        print 'downloading the info was a success! :)'
    except:
        print 'Attempt to download information did not complete successfully :('

def patch_data():
    try:
        print 'attempting to upload info to database...'
        data = json.dumps(mergedFile)
        print data
        try:
            req = requests.put(url, data=data, headers=headers)
            req.raise_for_status()
        except requests.exceptions.HTTPError as e:
            print e
            print req.json()
        print 'upload to database complete!'
    except:
        print 'Attempt to upload information did not complete successfully :('

if __name__ == "__main__":
    mergedFile = []
    auth = "*****"
    databaseURL = 'https://*****.firebaseio.com'
    headers = {"auth": auth, "print": "pretty"}
    # headers = {"auth": auth, "print": "pretty", "Accept": "text/event-stream"}
    requestPath = '/testObject.json?auth=' + auth
    url = databaseURL + requestPath
    get_data()
    patch_data()
I feel like it's storing an array, but I'm leveraging data = json.dumps(mergedFile) before the PUT request. Do I have a misunderstanding of how json.dumps works? Based on the output printed before the request, it looks right to me. I'm also using the requests python module... is it converting the data to an array?
Any insight anyone could provide would be greatly appreciated!
Regards,
James.
The Firebase Database stores arrays as regular key-value pairs, with the keys being numbers. So what you see is the expected behavior.
There are many reasons why Firebase recommends against storing arrays in the database. A few can be found in these links:
Best Practices: Arrays in Firebase
Proper way to store values array-like in Firebase
Firebase documentation on structuring data
Other questions about arrays in Firebase
this answer on arrays vs sets
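If the numeric indices are unwanted, the usual workaround is to upload an object instead of an array: merging the list of single-key dicts into one dict before json.dumps gives Firebase named children rather than 0, 1, …. A small sketch using the question's sample data:

```python
import json

merged_file = [{"test1": "226.69"}, {"test2": "7.48"}]

# A JSON array becomes numerically keyed children in Firebase, so
# collapse the single-key dicts into one object keyed by name:
payload = {}
for item in merged_file:
    payload.update(item)

body = json.dumps(payload)  # '{"test1": "226.69", "test2": "7.48"}'
```

PUTting body to /testObject.json would then store test1 and test2 directly under testObject, with no auto-generated indexes.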