I would like to send some POST data to the URL that will be called once the connection is made in a Twilio call. Here is my code:
import urllib, urllib2
from twilio.rest import TwilioRestClient
account = "xxx"
token = "xxx"
client = TwilioRestClient(account, token)
server_url = "http://ec2-xx.xx.xx.compute-1.amazonaws.com/"
values = dict(name='mytime',
              appt_time='2:30 PM',
              location='Arizona Location',
              client='Suwanee')
data = urllib.urlencode(values)
req = urllib2.Request(server_url, data)
call = client.calls.create(to="123456789",
                           from_="987654321",
                           url="ec2-xx.xx.xx.compute-1.amazonaws.com/hello/")
How would I pass the urlencoded data to the url as a post?
ec2-xx.xx.xx.compute-1.amazonaws.com is running Django, and this server is able to see the POST data when I send the following command:
curl -X POST -d "client=mytime+Suwanee&time=2%3A30+PM&location=Suwanee+Location&name=mytime2" "http://127.0.0.1:8000/remind/"
How do I replicate this behavior in the code snippet at the beginning? I want to use POST only (not GET).
For this, I would recommend using the requests library.
Here is an example of making a POST request with it:
>>> import requests
>>> payload = {'key1': 'value1', 'key2': 'value2'}
>>> r = requests.post("http://httpbin.org/post", data=payload)
>>> print r.text
{
...
"form": {
"key2": "value2",
"key1": "value1"
},
...
}
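Applied to the question's own data, the same call looks like this. This is a minimal Python 3 sketch; the field values and the /remind/ endpoint are taken from the curl command in the question:

```python
from urllib.parse import urlencode

# Field values copied from the curl command in the question.
payload = {
    "client": "mytime Suwanee",
    "time": "2:30 PM",
    "location": "Suwanee Location",
    "name": "mytime2",
}

# requests form-encodes a dict exactly the way curl -d / urlencode does:
print(urlencode(payload))
# client=mytime+Suwanee&time=2%3A30+PM&location=Suwanee+Location&name=mytime2

# Against a live server (same endpoint as the curl example):
# import requests
# r = requests.post("http://127.0.0.1:8000/remind/", data=payload)
# print(r.status_code, r.text)
```

Passing the dict via `data=` is what makes requests send it as a form-encoded POST body, which is the behavior the curl command demonstrates.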
I attempted to make a REST request following the documentation below, but it does not work. https://superset.apache.org/docs/rest-api
request: curl -XGET -L http://[IP:PORT]/api/v1/chart
response: {"msg":"Bad Authorization header. Expected value 'Bearer <JWT>'"}
I installed Superset both via pip and via the Helm chart (helm: https://github.com/apache/superset), but the result is the same either way.
How should I call the REST API?
Check the security section of the documentation you linked. It has the API /security/login; you can follow the JSON parameter format to get a JWT bearer token. Send that token in the header of your other API calls to Superset.
Open http://localhost:8080/swagger/v1 (assuming http://localhost:8080 is your Superset host address), then find the /security/login section. The response will look like this:
{
"access_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJmcmVzaCI6dHJ1ZSwiaWF0IjoxNjU0MzQ2OTM5LCJqdGkiOiJlZGY2NTUxMC0xMzI1LTQ0NDEtYmFmMi02MDc1MzhjZDcwNGYiLCJ0eXBlIjoiYWNjZXNzIiwic3ViIjoxLCJuYmYiOjE2NTQzNDY5MzksImV4cCI6MTY1NDM0NzgzOX0.TfjUea3ycH77xhCWOpO4LFbYHrT28Y8dnWsc1xS_IOY",
"refresh_token": "eyJ0eXAiOiJKV1QiLCJhbGciOiJIUzI1NiJ9.eyJmcmVzaCI6ZmFsc2UsImlhdCI6MTY1NDM0NjkzOSwianRpIjoiNzBiM2EyZDYtNDFlNy00ZDNlLWE0NDQtMTRiNTkyNTk4NjUwIiwidHlwZSI6InJlZnJlc2giLCJzdWIiOjEsIm5iZiI6MTY1NDM0NjkzOSwiZXhwIjoxNjU2OTM4OTM5fQ.OgcctNnO4zTDfTgtHnaEshk7u-D6wOxfxjCsjqjKYyE"
}
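Putting those two steps together in Python, as a sketch; the host, credentials, and the "db" provider are assumptions you will need to adapt:

```python
import requests

BASE = "http://localhost:8088"  # assumed Superset host; adjust to yours

def bearer_header(token):
    # The error in the question ("Expected value 'Bearer <JWT>'") tells us
    # exactly what header shape the API expects.
    return {"Authorization": f"Bearer {token}"}

def get_access_token(username, password):
    # POST credentials to /api/v1/security/login as JSON, per the docs.
    r = requests.post(f"{BASE}/api/v1/security/login",
                      json={"username": username, "password": password,
                            "provider": "db", "refresh": True})
    r.raise_for_status()
    return r.json()["access_token"]

print(bearer_header("<JWT>"))  # {'Authorization': 'Bearer <JWT>'}

# Against a live server:
# token = get_access_token("admin", "admin")
# charts = requests.get(f"{BASE}/api/v1/chart/", headers=bearer_header(token)).json()
```

With that header attached, the curl request from the question would also succeed: curl -L http://[IP:PORT]/api/v1/chart -H "Authorization: Bearer <JWT>".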
Thanks to #andrewsali's comment on this GitHub issue, I finally figured out how to access the Superset REST API from Python code.
import requests
from bs4 import BeautifulSoup
import json
def get_superset_session():
    """
    # http://192.168.100.120:8088/swagger/v1
    url = f'http://{superset_host}/api/v1/chart/'
    r = s.get(url)
    # print(r.json())
    """
    superset_host = '192.168.100.120:8088'  # replace with your own host
    username = 'YOUR_NAME'
    password = 'YOUR_PASSWORD'

    # set up a session for auth
    s = requests.Session()
    login_form = s.post(f"http://{superset_host}/login")

    # get the Cross-Site Request Forgery protection token
    soup = BeautifulSoup(login_form.text, 'html.parser')
    csrf_token = soup.find('input', {'id': 'csrf_token'})['value']

    data = {
        'username': username,
        'password': password,
        'csrf_token': csrf_token
    }

    # log in the given session
    s.post(f'http://{superset_host}/login/', data=data)
    print(dict(s.cookies))
    return s

DEMO

# s = get_superset_session()
base_url = 'http://192.168.100.120:8088'

def get_dashboards_list(s, base_url=base_url):
    """## GET List of Dashboards"""
    url = base_url + '/api/v1/dashboard/'
    r = s.get(url)
    resp_dashboard = r.json()
    for result in resp_dashboard['result']:
        print(result['dashboard_title'], result['id'])

s = get_superset_session()
# {'session': '.eJwlj8FqAzEMRP_F5z1Istay8jOLJcu0NDSwm5xK_r0uPQ7DG978lGOeeX2U2_N85VaOz1FuxVK6JIHu1QFhGuEOk5NG8qiYGkJ7rR3_Ym-uJMOzJqySeHhIG8SkNQK6GVhTdLf0ZMmG6sZGQtiQ1Gz0qYiUTVoHhohZthLXOY_n4yu_l0-VKTObLaE13i2Hz2A2rzBmhU7WkkN1cfdH9HsuZoFbeV15_l_C8v4F4nBC9A.Ypn16Q.yz4E-vz0gp3EmJwv-6tYIcOGavU'}
get_dashboards_list(s)
Thanks #Ferris for this visual solution!
To add to this, you can also create the appropriate API call with Python like the following:
import requests

api_url = "your_url/api/v1/security/login"
payload = {
    "password": "your password",
    "provider": "db",
    "refresh": True,
    "username": "your username"
}
response = requests.post(api_url, json=payload)

# the response is JSON holding both access_token and refresh_token
access_token = response.json()['access_token']

# now get a guest token
api_url_for_guesttoken = "your_url/api/v1/security/guest_token"
payload = {}
# now this is the crucial part: add the specific auth header
response = requests.post(api_url_for_guesttoken, json=payload,
                         headers={'Authorization': f"Bearer {access_token}"})
I'm trying to set cookies in the transport to execute a query using gql.
This works when I use RequestsHTTPTransport but not when I use AIOHTTPTransport.
The cookie needs to be present in the POST request. In case it's not present, the client gets redirected to the login page.
So if I'm using the requests library directly, it looks like this:
cookies = authenticate(url, username, password)
json = { 'query' : '{ queryName { id }}' }
r = requests.post(url, json=json, cookies=cookies)
This works:
from gql import gql, Client
from gql.transport.aiohttp import AIOHTTPTransport
from gql.transport.requests import RequestsHTTPTransport

cookies = authenticate(url, username, password).get_dict()
transport = RequestsHTTPTransport(url=url, timeout=900, cookies=cookies)

with Client(
    transport=transport,
    # fetch_schema_from_transport=True,
    execute_timeout=1200,
) as client:
    query = gql('''
        query {
            queryName { id }
        }
    ''')
    result = client.execute(query)
    print(result)
This redirects me to the login page.
from gql import gql, Client
from gql.transport.aiohttp import AIOHTTPTransport

cookies = authenticate(url, username, password).get_dict()
transport = AIOHTTPTransport(url=url, timeout=900, cookies=cookies)

async with Client(
    transport=transport,
    # fetch_schema_from_transport=True,
    execute_timeout=1200,
) as client:
    query = gql('''
        query {
            queryName { id }
        }
    ''')
    result = await client.execute(query)
    print(result)
I need to send the same cookies when uploading a file via GraphQL, which is why I need to use AIOHTTPTransport. How do I send cookies the same way as with requests?
Note: I tried sending them via extra_args, but that didn't work.
I would like to print a PDF version of my MediaWiki page using pdfkit.
My MediaWiki requires a valid login to see any pages.
I log in to MediaWiki using requests; this works, and I get some cookies. However, I am not able to use these cookies with pdfkit.from_url().
My python-script looks like this:
#!/usr/bin/env python2
import pdfkit
import requests
import pickle
mywiki = "http://192.168.0.4/produniswiki/"  # URL
username = 'produnis'    # Username to login with
password = 'seeeecret#'  # Login Password
## Login to MediaWiki
# Login request
payload = {'action': 'query', 'format': 'json', 'utf8': '', 'meta': 'tokens', 'type': 'login'}
r1 = requests.post(mywiki + 'api.php', data=payload)
# login confirm
login_token = r1.json()['query']['tokens']['logintoken']
payload = {'action': 'login', 'format': 'json', 'utf8': '', 'lgname': username, 'lgpassword': password, 'lgtoken': login_token}
r2 = requests.post(mywiki + 'api.php', data=payload, cookies=r1.cookies)
print(r2.cookies)
So, right here I am successfully logged in, and cookies are stored in r2.cookies.
The print() call gives:
<RequestsCookieJar[<Cookie produniswikiToken=832a1f1da165016fb9d9a107ddb218fc for 192.168.0.4/>, <Cookie produniswikiUserID=1 for 192.168.0.4/>, <Cookie produniswikiUserName=Produnis for 192.168.0.4/>, <Cookie produniswiki_session=oddicobpi1d5af4n0qs71g7dg1kklmbo for 192.168.0.4/>]>
I can save the cookies into a file:
def save_cookies(requests_cookiejar, filename):
    with open(filename, 'wb') as f:
        pickle.dump(requests_cookiejar, f)

save_cookies(r2.cookies, "cookies")
This file looks like this: http://pastebin.com/yKyCpPTW
Now I want to print a specific page to PDF using pdfkit. The man page states that cookies can be set via a cookie-jar file:
options = {
    'page-size': 'A4',
    'margin-top': '0.5in',
    'margin-right': '0.5in',
    'margin-bottom': '0.5in',
    'margin-left': '0.5in',
    'encoding': "UTF-8",
    'cookie-jar': "cookies",
    'no-outline': None
}
current_pdf = pdfkit.from_url(pdf_url, the_filename, options=options)
My problem is: with this code, the "cookies" file ends up 0 KB and the PDF states "You must be logged in to view a page...".
So my question is: how can I use requests cookies in pdfkit.from_url()?
I had the same issue and overcame it with the following:
import requests, pdfkit
# Get login cookie
s = requests.session() # if you're making multiple calls
data = {'username': 'admin', 'password': 'hunter2'}
s.post('http://example.com/login', data=data)
# Get yourself a PDF
options = {'cookie': s.cookies.items(), 'javascript-delay': 1000}
pdfkit.from_url('http://example.com/report', 'report.pdf', options=options)
Depending on how much javascript you're trying to load you might want to set the javascript-delay to something higher or lower; the default is 200ms.
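The same idea applied to the MediaWiki session from the question: pdfkit's cookie option takes (name, value) pairs, which a RequestsCookieJar can provide directly. A sketch, with an illustrative jar standing in for the r2.cookies obtained at login:

```python
import requests

# Build a jar the way r2.cookies arrives from the login step
# (one illustrative cookie; the real jar holds the four shown above).
jar = requests.cookies.RequestsCookieJar()
jar.set('produniswiki_session', 'oddicobpi1d5af4n0qs71g7dg1kklmbo')

options = {
    'page-size': 'A4',
    'encoding': 'UTF-8',
    'cookie': list(jar.items()),  # [(name, value), ...] pairs for wkhtmltopdf
}
print(options['cookie'])

# pdfkit.from_url(pdf_url, the_filename, options=options)
```

This passes the cookies on the command line instead of going through a cookie-jar file, which sidesteps the 0 KB file problem described in the question.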
I am trying to take some working code and change it from urllib2 to requests.
The original code provides basic login information (username and password) and posts the KEY and SECRET in the header of the urllib2 request. The following code is my attempt to switch to the requests module and gain some functionality for making additional API calls. I have tried dozens of combinations, and all return a code 400. Apparently, my requests code does not furnish the information needed to return a 200 response and provide the authorization token.
## Import needed modules
import urllib2, urllib, base64
import httplib
import requests
import json
## initialize variables
KEY = "7f1xxxx-3xxx-4xxx-9a7f-8be66839dede"
SECRET = "45xxxxxx-45xxx-469a-9ae9-a7927a76cfeb"
userName = "my-email#xxx.com"
passWord = "mypassword"
URL = "https://company.com/auth/token"
token = None
sessionid = None
DATA = urllib.urlencode({"grant_type": "password",
                         "username": userName,
                         "password": passWord})
base64string = base64.encodestring('%s:%s' % (KEY, SECRET)).replace('\n', '')
request = urllib2.Request(URL, DATA)
request.add_header("Authorization", "Basic %s" % base64string)
result = urllib2.urlopen(request)
token = result.read()
print token
This returns my authorization token and all is well. I can pass the token to the authorization server and have full access to the API for interacting with the database. Below is my attempt to use requests and gain the added functionality it provides.
client = requests.session()
payload = {"grant_type": "password",
           "username": userName,
           "password": passWord,
           "applicationId": KEY
           }
headers = {'content-type': 'application/json',
           "grant_type": "password",
           "username": userName,
           "password": passWord,
           'applicationsId': KEY,
           'Authorization': base64string,
           'token': token,
           'sessionid': sessionid
           }
response = client.post(URL, params=payload, headers=headers)
token = response.content
print token
{"error":"invalid_request"}
print response
<Response [400]>
If you want to use basic auth, you should use the support requests provides for it.
Your post should look like:
from requests.auth import HTTPBasicAuth

response = client.post(
    URL,
    params=payload,
    headers=headers,
    auth=HTTPBasicAuth(KEY, SECRET))
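To see that auth=HTTPBasicAuth(KEY, SECRET) produces exactly the header the urllib2 version built by hand, you can inspect a prepared request. A sketch with illustrative key/secret values:

```python
import base64

import requests
from requests.auth import HTTPBasicAuth

KEY, SECRET = "my-key", "my-secret"  # illustrative values

# Prepare (but do not send) a request with basic auth attached.
req = requests.Request(
    'POST', 'https://company.com/auth/token',
    data={'grant_type': 'password'},
    auth=HTTPBasicAuth(KEY, SECRET),
).prepare()

# The same header the original code produced with base64.encodestring:
manual = 'Basic ' + base64.b64encode(('%s:%s' % (KEY, SECRET)).encode()).decode()
print(req.headers['Authorization'] == manual)  # True
```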
A contributor to another question mentioned that some items actually needed to be in the body of the request, not in the header. I tried various combinations, and the following resolved the 400 response and accomplished my goals.
data = {"grant_type": "password",
        "username": userName,
        "password": passWord,
        "applicationId": KEY
        }
headers = {'Authorization': "Basic %s" % base64string,
           'token': token
           }
response = client.post(URL, data=data, headers=headers)
token = response.text
print token
I am trying to post some data from Dojo to a Django application. I use postData to post the data to the server.
Here is the code snippet:
var csrftokenval = dojo.cookie('csrftoken');
var selectedmoid1 = tree.getSelectedItemId();
var loadURL = '/calerts/';
dojo.rawXhrPost({
    url: loadURL,
    headers: {'X-CSRFToken': csrftokenval},
    postData: dojo.toJson({'selectedmoid': selectedmoid1, 'previousval': previousVal}),
    handleAs: "text",
    load: function(data, ioArgs){
        dojo.byId('content-main').innerHTML = data;
    },
    error: function(data, ioArgs){
    }
});
In the Django view I get the data as:
def calerts(request):
    user = request.user
    compId = int(request.session.get('USERCOMPANY_ID', '-1'))
    listCount = 25
    print '0000000000000000000000000000000 ', request.POST
    print 'post dictionary ::: ', request.POST.dict()
I know how to get the dict value from a QueryDict using the dict() method; however, in my case the print output is:
post dictionary ::: {u'{"selectedmoid":"4","previousval":"4"}': u''}
I don't understand where that final u'' comes from. I would also like to retrieve the values of selectedmoid and previousval.
You're sending the data as a raw JSON post, not a form-encoded one. So you should access request.body, not request.POST, and decode the JSON from there.
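A minimal sketch of the calerts view reading the raw JSON body; the field names come from the Dojo snippet in the question:

```python
import json

def calerts(request):
    # request.body holds the raw JSON string that Dojo posted
    data = json.loads(request.body)
    selectedmoid = data['selectedmoid']
    previousval = data['previousval']
    # ... rest of the existing view logic ...
    return selectedmoid, previousval

# The decoding step on the exact payload shown in the question:
data = json.loads('{"selectedmoid":"4","previousval":"4"}')
print(data['selectedmoid'], data['previousval'])
```

This also explains the odd dict: request.POST tried to parse the JSON string as form data, so the whole string became a single key with an empty u'' value.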