I need to bulk load contacts to a particular list via the Constant Contact API (http://developer.constantcontact.com/docs/contacts-api/contacts-collection.html?method=POST).
I can successfully add contacts using the same JSON string as below in the API GUI site (https://constantcontact.mashery.com/io-docs; find the POST "add contact to collection" tab):
update_contact = {"lists": [{"id": "1"}],"email_addresses": [{"email_address": "yasmin1.abob19955@gmail.com"}],"first_name": "Ronald","last_name": "Martone"}
However when I run the same JSON string in my python code I get error 400 with the error message from my response object as the following:
[{"error_key":"query.param.invalid","error_message":"The query parameter first_name is not supported."},
{"error_key":"query.param.invalid","error_message":"The query parameter last_name is not supported."},{"error_key":"query.param.invalid","error_message":"The query parameter lists is not supported."},{"error_key":"query.param.invalid","error_message":"The query parameter email_addresses is not supported."}]
How can two of the same API calls produce different results? And how do I get my Python code to work?
code:
import requests
headers = {
'Authorization': 'Bearer X',
'X-Originating-Ip': '1',
'Content-Type': 'application/json'
}
update_contact = {"lists": [{"id": "1"}],"email_addresses": [{"email_address": "yasmin1.abob19955@gmail.com"}],"first_name": "Ronald","last_name": "Martone"}
r_2 = requests.post('https://api.constantcontact.com/v2/contacts?action_by=ACTION_BY_OWNER&api_key=x', headers=headers ,params = update_contact)
print(r_2.text)
You will need to change params to data. With params, requests encodes the dict onto the URL's query string (hence the "query parameter ... is not supported" errors); data sends it in the request body. Since the Content-Type header is application/json, serialize the dict first (import json):
r_2 = requests.post('https://api.constantcontact.com/v2/contacts?action_by=ACTION_BY_OWNER&api_key=x', headers=headers, data=json.dumps(update_contact))
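To see why the two calls behave differently, it can help to compare how the same dict ends up on the wire with params versus a JSON body. A stdlib-only sketch (no request is actually sent):

```python
import json
from urllib.parse import urlencode

update_contact = {
    "lists": [{"id": "1"}],
    "email_addresses": [{"email_address": "yasmin1.abob19955@gmail.com"}],
    "first_name": "Ronald",
    "last_name": "Martone",
}

# With params=, requests flattens the dict onto the URL's query string,
# which is why the API complained about "query parameter first_name":
as_query = urlencode(update_contact)

# With data=json.dumps(...) (or the json= shortcut), the same dict is sent
# as a JSON request body, matching the Content-Type header:
as_body = json.dumps(update_contact)

print(as_query[:60])
print(as_body[:60])
```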
Additionally, you can use a multipart endpoint to upload contacts. I have found this to be very easy, especially if your contacts are in a csv file.
A sample code would look like this:
import requests as r
import csv
from datetime import datetime
url = 'https://api.constantcontact.com/v2/activities/addcontacts?api_key=<your_api_key>'
# Note: no explicit content-type here; requests builds the
# multipart/form-data header (including its boundary) from files= itself.
headers = {'Authorization': 'Bearer <access_token>',
           'X-Originating-Ip': '<ip>'}
files = {'file_name': 'Book1.csv',
         'data': ('Book1.csv', open('Book1.csv', 'rb'),
                  'application/vnd.ms-excel', {'Expires': '0'}),
         'lists': '<insert_listIds_separated_by_commas>'}
response = r.post(url, headers=headers, files=files)
with open('CC_Upload_Response_Data.csv', 'a', newline='') as f:
    writer = csv.writer(f)
    time_stamp = datetime.now().strftime('%m-%d-%Y %H:%M')
    row = [response, response.text, time_stamp]
    writer.writerow(row)
The headers of your csv file need to be like so: "First Name", "Last Name", "Email Address", "Custom Field 1", "Custom Field 2", and so on. You can find a complete list of column names here: http://developer.constantcontact.com/docs/bulk_activities_api/bulk-activities-import-contacts.html
The csv file that this code appends to acts like a log if you are going to schedule your .py file to run nightly. The log records the response code and response text and adds a timestamp.
Mess around with it a little and get it the way you like.
I have a form on the front-end with multiple entries, i.e. name, email, phone, and also a file field. A FormGroup is used to group all these elements in the same form in Angular. There is also a corresponding model in Django (using Django REST Framework).
I could not manage to get the file sent to the API, even though the rest of the data is sent correctly and saved on the back-end.
First, I am able to upload the file successfully to the front-end; below I log it in the console:
Second, the object I send is something like this:
{"name":"name", "age":49, "email":"email@field.com", "file":File}
The File in the JSON is the same file object displayed in the console above.
I tested my back-end with Postman and was able to successfully save the file as well as the other data. (I believe the problem is more on the front-end side.)
The solutions I found for uploading a file in Angular used FormData (i.e. here), but they were not convenient because in those examples the form consists only of a file, whereas in my case I have a file as well as other data (a FormGroup).
Another thing I tried that did not work: putting a FormData object with the file in the "file" key of the JSON to be sent.
Also, this is how I upload the file in angular:
public file: File | null = null;
public form: FormGroup;
formData = new FormData();

ngOnInit() {
  this.form = this.fb.group({
    name: [null, [Validators.required]],
    age: [null],
    email: [null, [Validators.required]],
    file: [null]
  });
}

fileUploadedHandler(file) {
  this.file = file;
  this.formData.append("file", file, file.name);
  this.form.patchValue({ file: file });
  this.form.updateValueAndValidity();
  console.log(file);
}
Any suggestions on how to solve this?
Managed to solve the problem. First, I had to use FormData instead of FormGroup; it is also possible to have multiple fields in FormData using the append method:
this.formData.append("file",file, file.name);
this.formData.append("name",name);
this.formData.append("age",age);
I also had to revisit the HTTP headers used to submit the form to the API; this was the blocking part.
In my case I had to remove the 'Content-Type': 'application/json' from the headers (the browser then sets the multipart Content-Type, including its boundary, by itself). The new working headers were:
working_headers = new HttpHeaders({
"Accept": "*/*",
"Authorization": 'Token laksjd8654a6s56a498as5d4a6s8d7a6s5d4a',
});
I am trying to shorten a URL using the Google API but using only the requests module.
The code looks like this:
import requests
Key = "" # found in https://developers.google.com/url-shortener/v1/getting_started#APIKey
api = "https://www.googleapis.com/urlshortener/v1/url"
target = "http://www.google.com/"
def goo_shorten_url(url=target):
    payload = {'longUrl': url, "key": Key}
    r = requests.post(api, params=payload)
    print(r.text)
When I run goo_shorten_url it returns:
{
 "error": {
  "errors": [
   {
    "domain": "global",
    "reason": "required",
    "message": "Required",
    "locationType": "parameter",
    "location": "resource.longUrl"
   }
  ],
  "code": 400,
  "message": "Required"
 }
}
But the longUrl parameter is there!
What am I doing wrong?
First, confirm that the "urlshortener api v1" is enabled in the Google API Console.
Content-Type is required as a header, and the payload must be passed via data (a JSON request body), not params. The modified sample is as follows.
Modified sample :
import json
import requests
Key = "" # found in https://developers.google.com/url-shortener/v1/getting_started#APIKey
api = "https://www.googleapis.com/urlshortener/v1/url"
target = "http://www.google.com/"
def goo_shorten_url(url=target):
    headers = {"Content-Type": "application/json"}
    payload = {'longUrl': url, "key": Key}
    r = requests.post(api, headers=headers, data=json.dumps(payload))
    print(r.text)
If the above script doesn't work, use an access token instead. The scope is https://www.googleapis.com/auth/urlshortener. When using an access token, the sample script is as follows.
Sample script :
import json
import requests
headers = {
"Authorization": "Bearer " + "access token",
"Content-Type": "application/json"
}
payload = {"longUrl": "http://www.google.com/"}
r = requests.post(
"https://www.googleapis.com/urlshortener/v1/url",
headers=headers,
data=json.dumps(payload)
)
print(r.text)
Result :
{
"kind": "urlshortener#url",
"id": "https://goo.gl/#####",
"longUrl": "http://www.google.com/"
}
Added 1 :
If you use tinyurl.com instead:
import requests
URL = "http://www.google.com/"
r = requests.get("http://tinyurl.com/" + "api-create.php?url=" + URL)
print(r.text)
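One caveat with the string concatenation above: if the long URL itself contains characters such as ?, = or spaces, they should be percent-encoded before being appended. A small stdlib sketch (no request is sent):

```python
from urllib.parse import urlencode

long_url = "http://www.google.com/search?q=hello world"

# urlencode percent-encodes the long URL, so its own '?', '=', and spaces
# cannot corrupt TinyURL's query string.
api_url = "http://tinyurl.com/api-create.php?" + urlencode({"url": long_url})
print(api_url)
```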
Added 2 :
How to use Python Quickstart
You can use the Python Quickstart. If you don't have "google-api-python-client", install it first. After installing it, copy the sample script from "Step 3: Set up the sample" and save it as a Python script. There are 2 modification points:
1. Scope
Before :
SCOPES = 'https://www.googleapis.com/auth/drive.metadata.readonly'
After :
SCOPES = 'https://www.googleapis.com/auth/urlshortener'
2. Script
Before :
def main():
    """Shows basic usage of the Google Drive API.

    Creates a Google Drive API service object and outputs the names and IDs
    for up to 10 files.
    """
    credentials = get_credentials()
    http = credentials.authorize(httplib2.Http())
    service = discovery.build('drive', 'v3', http=http)
    results = service.files().list(
        pageSize=10, fields="nextPageToken, files(id, name)").execute()
    items = results.get('files', [])
    if not items:
        print('No files found.')
    else:
        print('Files:')
        for item in items:
            print('{0} ({1})'.format(item['name'], item['id']))
After :
def main():
    credentials = get_credentials()
    http = credentials.authorize(httplib2.Http())
    service = discovery.build('urlshortener', 'v1', http=http)
    resp = service.url().insert(body={'longUrl': 'http://www.google.com/'}).execute()
    print(resp)
After making the above modifications, run the sample script. You will get the short URL.
I am convinced that one CANNOT use ONLY requests to use the Google API to shorten a URL.
Below is the solution I ended up with.
It works, but it uses the Google API client library, which is OK, but I could not find much documentation or examples about it (not as much as I wanted).
To run the code, remember to install the Google API client for Python first with
pip install google-api-python-client, then:
import json
from oauth2client.service_account import ServiceAccountCredentials
from apiclient.discovery import build
scopes = ['https://www.googleapis.com/auth/urlshortener']
path_to_json = "PATH_TO_JSON"
# Get the JSON file from the Google API website
# (https://console.developers.google.com/apis/credentials), then:
# 1. Click on Create Credentials.
# 2. Select "SERVICE ACCOUNT KEY".
# 3. Create or select a Service Account and
# 4. save the JSON file.
credentials = ServiceAccountCredentials.from_json_keyfile_name(path_to_json, scopes)
short = build("urlshortener", "v1",credentials=credentials)
request = short.url().insert(body={"longUrl":"www.google.com"})
print(request.execute())
I adapted this from Google's Manual Page.
The reason it has to be so complicated (more than I expected at first, at least) is to avoid the OAuth2 authentication that requires the user (me in this case) to press a button (to confirm that my information can be used).
As the question is not very clear, this answer is divided into 4 parts.
Shortening URL Using:
1. API Key.
2. Access Token
3. Service Account
4. Simpler solution with TinyUrl.
API Key
First, confirm that the "urlshortener api v1" is enabled in the Google API Console.
Content-Type is required as a header, and the payload must be passed via data (a JSON request body), not params. The modified sample is as follows.
(This seems not to work, despite what the API manual says.)
Modified sample :
import json
import requests
Key = "" # found in https://developers.google.com/url-shortener/v1/getting_started#APIKey
api = "https://www.googleapis.com/urlshortener/v1/url"
target = "http://www.google.com/"
def goo_shorten_url(url=target):
    headers = {"Content-Type": "application/json"}
    payload = {'longUrl': url, "key": Key}
    r = requests.post(api, headers=headers, data=json.dumps(payload))
    print(r.text)
Access Token:
If the above script doesn't work, use an access token instead. The scope is https://www.googleapis.com/auth/urlshortener. When using an access token, the sample script is as follows.
This answer on Stack Overflow shows how to get an access token: Link.
Sample script :
import json
import requests
headers = {
"Authorization": "Bearer " + "access token",
"Content-Type": "application/json"
}
payload = {"longUrl": "http://www.google.com/"}
r = requests.post(
"https://www.googleapis.com/urlshortener/v1/url",
headers=headers,
data=json.dumps(payload)
)
print(r.text)
Result :
{
"kind": "urlshortener#url",
"id": "https://goo.gl/#####",
"longUrl": "http://www.google.com/"
}
Using Service Account
To avoid the user needing to accept the OAuth authentication (with a pop-up screen and all that), there is a solution that uses machine-to-machine authentication with a Service Account (as mentioned in another proposed answer).
To run this part of the code, remember to install the Google API client for Python first with pip install google-api-python-client, then:
import json
from oauth2client.service_account import ServiceAccountCredentials
from apiclient.discovery import build
scopes = ['https://www.googleapis.com/auth/urlshortener']
path_to_json = "PATH_TO_JSON"
# Get the JSON file from the Google API website
# (https://console.developers.google.com/apis/credentials), then:
# 1. Click on Create Credentials.
# 2. Select "SERVICE ACCOUNT KEY".
# 3. Create or select a Service Account and
# 4. save the JSON file.
credentials = ServiceAccountCredentials.from_json_keyfile_name(path_to_json, scopes)
short = build("urlshortener", "v1",credentials=credentials)
request = short.url().insert(body={"longUrl":"www.google.com"})
print(request.execute())
Adapted from Google's Manual Page.
Even simpler:
If you use tinyurl.com instead:
import requests
URL = "http://www.google.com/"
r = requests.get("http://tinyurl.com/" + "api-create.php?url=" + URL)
print(r.text)
First time poster, and I'm not really a developer, so perspective is always appreciated :)
Objective:
I am attempting to PUT (or PATCH) a json.dumps(mergedFile) into Firebase as one payload, without Firebase auto-creating indexes (0, 1, etc.) in front of each object.
Problem statement:
I am submitting the following json object into the /testObject path:
[{"test1":"226.69"},{"test2":"7.48"}]
In firebase the response is stored as:
[
  {
    "testObject": {
      0: {
        "test1": "226.69"
      },
      1: {
        "test2": "7.48"
      }
    }
  }
]
Background:
The total # of items in the payload of the data I need to store is just over 5000.
If I parse each object via a for loop, the data is written as expected; however, that initiates a new request for each iteration of the loop and has a large overhead impact compared to just dumping one large object in one request.
Here is my Code:
import json
import requests
import xml.etree.ElementTree as ET
def get_data():
    try:
        print 'hamsters are running...'
        # OFFLINE TESTING
        sourceFile = 'response.xml'
        tree = ET.parse(sourceFile)
        root = tree.getroot()
        for symbol in root.iter('symbol'):
            company = symbol.attrib['company']
            location = symbol.attrib['location']
            destinationData = {company: location}
            mergedFile.append(destinationData)
        print('downloading the info was a success! :)')
    except:
        print 'Attempt to download information did not complete successfully :('

def patch_data():
    try:
        print 'attempting to upload info to database...'
        data = json.dumps(mergedFile)
        print data
        try:
            req = requests.put(url, data=data, headers=headers)
            req.raise_for_status()
        except requests.exceptions.HTTPError as e:
            print e
            print req.json()
        print 'upload to database complete!'
    except:
        print 'Attempt to upload information did not complete successfully :('

if __name__ == "__main__":
    mergedFile = []
    auth = "*****"
    databaseURL = 'https://*****.firebaseio.com'
    headers = {"auth": auth, "print": "pretty"}
    # headers = {"auth": auth, "print": "pretty", "Accept": "text/event-stream"}
    requestPath = '/testObject.json?auth=' + auth
    url = databaseURL + requestPath
    get_data()
    patch_data()
I feel like it's storing an array, but I'm leveraging data = json.dumps(mergedFile) before the PUT request. Do I have a misunderstanding of how json.dumps works? Based on the output before the request, it looks good. I'm also leveraging the requests Python module... is this converting the data to an array?
Any insight anyone could provide would be greatly appreciated!
Regards,
James.
The Firebase Database stores arrays as regular key-value pairs, with the keys being numbers. So what you see is the expected behavior.
There are many reasons why Firebase recommends against storing arrays in the database. A few can be found in these links:
Best Practices: Arrays in Firebase
Proper way to store values array-like in Firebase
Firebase documentation on structuring data
Other questions about arrays in Firebase
this answer on arrays vs sets
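If the numeric keys are the problem, one workaround (a sketch of my own, not taken from the links above) is to merge the list of single-pair objects into one map before the PUT, so that each name becomes a child key instead of an array index:

```python
import json

merged_file = [{"test1": "226.69"}, {"test2": "7.48"}]

# Collapse the list into a single flat dict; Firebase then stores plain
# key-value children instead of numbered array entries.
as_map = {}
for entry in merged_file:
    as_map.update(entry)

payload = json.dumps(as_map)
print(payload)  # → {"test1": "226.69", "test2": "7.48"}
```

Note that duplicate keys across entries would overwrite each other in this sketch.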
I'm working on entity extraction using an API call to https://dandelion.eu/. I'm sending text files and automatically get a JSON file back as a response. It's not the first time I've used this service, and it has worked really well. Now I've started to send a new set of text files with the same parameters I always used, but I get this: ValueError: too many values to unpack.
Here is my code:
values={"text":" ",
"min_confidence":"0.6",
"include":"types",
"include":"abstract",
"include":"categories"
}
headers = {'X-Target-URI':'https://api.dandelion.eu',
'Host':'api.dandelion.eu',
'Connection': 'keep-alive',
'Server': 'Apache-Coyote/1.1',
'Content-Type': 'application/x-www-form-urlencoded; charset=utf-8',
}
for roots, dirs, files in os.walk(spath):  # spath is specified
    for file in files:
        if file.startswith("A0"):
            with open(file, "r") as f:
                text = f.read()
                values["text"] = " ".join(text.split())
                # api call
                url = "https://api.dandelion.eu/datatxt/nex/v1/"
                data = urllib.urlencode(values, "utf-8")
                response = requests.request("POST", url, data=data, headers=headers, params=token_api)
                content = response.json()
                print content
ValueError: too many values to unpack
Can somebody help me with this? I have always used the same code for other API calls and it worked well. I don't know what is wrong now.
The API returns more than one value.
Please refer to the API documentation and see what the return values are.
(You did not mention which API raised the error in your question.)
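As a side note on the question's code: a Python dict literal cannot hold repeated keys, so the three "include" entries silently collapse into one. A quick check (the comma-separated form at the end is only an assumption about how the API expects multiple values):

```python
values = {
    "text": " ",
    "min_confidence": "0.6",
    "include": "types",
    "include": "abstract",     # silently overwrites "types"
    "include": "categories",   # silently overwrites "abstract"
}

print(values["include"])  # → categories
print(len(values))        # → 3

# If the API accepts comma-separated values (an assumption), a single entry
# would express all three: {"include": "types,abstract,categories"}
```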
I am attempting to post a new URL to a service on ESRI (I own it) with a POST using Requests. After printing the line post_url, the JSON is updated as I want; however, when I post it, nothing happens despite getting a 200 status. Is the issue with the post or with the find/replace?
json_url = "https://www.arcgis.com/sharing/rest/content/items/serviceID?&token=XXX"
update_url = "https://www.arcgis.com/sharing/rest/content/users/USERNAME/folder/items/ServiceNumber/update?"
get_json = requests.get(json_url)
load_json = str(get_json.json())
find = "findme"
replace = "replace"
post_url = load_json.replace(replace, find)
a = requests.post(update_url, data={"url": post_url, "token": "XXXX", "f":"json"})
print a.status_code
The issue is with the post
I changed the post to this:
requests.request("POST", update_url, data={"url": post_url, "token": token, "f":"json"})
update_url needs to be the API update endpoint:
https://www.arcgis.com/sharing/rest/content/users/USERNAME/FOLDER/items/Endpoint/update?
post_url needs to be whatever you want; in my case it was a search-and-replace of the existing URL in the JSON to update, because of a server migration.
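One more caveat worth flagging in the snippet above: str(get_json.json()) produces a Python repr with single quotes, which is not valid JSON; json.dumps keeps the payload valid through the find-and-replace. A sketch with placeholder data (the hostnames are made up):

```python
import json

# Placeholder standing in for get_json.json() after a server migration.
item = {"url": "https://old-server.example.com/arcgis/rest/services/Layer"}

# str(item) would give "{'url': ...}" with single quotes -- not valid JSON.
as_json = json.dumps(item)
post_url = as_json.replace("old-server.example.com", "new-server.example.com")
print(post_url)
```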