I am currently trying to connect to my instance of parse-server, created from Bitnami's AWS Parse Server image, using the ParsePy Python package.
However, the package's register() function requires a 'rest_key' parameter, which I believe to be the Parse Server instance's REST API key.
I looked for the key in the following file, which came with the image:
/home/bitnami/apps/parse/htdocs/server.js
And only found parameters labelled masterKey and fileKey. I found a similar question here, but those answers don't direct me to where I would find the key in a Bitnami parse-server image.
Any guidance would be helpful. Thanks!
Edit: The relevant portions of server.js are shared below:
var express = require('express');
var ParseServer = require('parse-server').ParseServer;
var app = express();
var api = new ParseServer({
databaseURI: "mongodb://root:WcbKujWVdiX2@127.0.0.1:27017/bitnami_parse",
cloud: "./node_modules/parse-server/lib/cloud-code/Parse.Cloud.js",
appId: "APP_ID",
masterKey: "MASTER_KEY",
fileKey: "FILE_KEY",
serverURL: "http://34.242.164.250:80/parse"
});
I tried adding a parameter for a restAPIKey before serverURL as follows:
restAPIKey: "REST_API_KEY"
But that simply leads to this error message:
Traceback (most recent call last):
File "/anaconda3/lib/python3.7/site-packages/parse_rest/connection.py", line 140, in execute
response = urlopen(request, timeout=CONNECTION_TIMEOUT)
File "/anaconda3/lib/python3.7/urllib/request.py", line 222, in urlopen
return opener.open(url, data, timeout)
File "/anaconda3/lib/python3.7/urllib/request.py", line 531, in open
response = meth(req, response)
File "/anaconda3/lib/python3.7/urllib/request.py", line 641, in http_response
'http', request, response, code, msg, hdrs)
File "/anaconda3/lib/python3.7/urllib/request.py", line 569, in error
return self._call_chain(*args)
File "/anaconda3/lib/python3.7/urllib/request.py", line 503, in _call_chain
result = func(*args)
File "/anaconda3/lib/python3.7/urllib/request.py", line 649, in http_error_default
raise HTTPError(req.full_url, code, msg, hdrs, fp)
urllib.error.HTTPError: HTTP Error 410: Parse.com has shutdown - https://parseplatform.github.io/
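For what it's worth, HTTP 410 with that message indicates the client never reached the Bitnami server at all: ParsePy defaults to the retired api.parse.com endpoint unless told otherwise. A minimal sketch, assuming the maintained ParsePy fork that reads the PARSE_API_ROOT environment variable (the key values are the placeholders from server.js, with REST_API_KEY being whatever restAPIKey was set to):

import os

# Must be set before parse_rest is imported, or the client falls back
# to the shut-down https://api.parse.com/1 endpoint (hence the 410)
os.environ["PARSE_API_ROOT"] = "http://34.242.164.250:80/parse"

from parse_rest.connection import register

# APP_ID / REST_API_KEY / MASTER_KEY as configured in server.js
register("APP_ID", "REST_API_KEY", master_key="MASTER_KEY")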
Related
Hi, I have been trying to import a dataset using a CKAN API call via Python's urllib2, following the documentation at http://docs.ckan.org/en/latest/api/
The code I am running is:
#!/usr/bin/env python
import urllib2
import urllib
import json
import pprint

dataset_dict = {
    'name': 'my_dataset_name5',
    'notes': 'A long description of my dataset',
}

data_string = urllib.quote(json.dumps(dataset_dict))

request = urllib2.Request(
    'http://<ckan server ip>/api/action/package_create')
request.add_header('Authorization', 'my api key')

response = urllib2.urlopen(request, data_string)
assert response.code == 200

response_dict = json.loads(response.read())
assert response_dict['success'] is True

created_package = response_dict['result']
pprint.pprint(created_package)
However, it gives the following error:
Traceback (most recent call last):
  File "autodatv2.py", line 26, in <module>
    response = urllib2.urlopen(request, data_string)
  File "/usr/lib64/python2.7/urllib2.py", line 154, in urlopen
    return opener.open(url, data, timeout)
  File "/usr/lib64/python2.7/urllib2.py", line 437, in open
    response = meth(req, response)
  File "/usr/lib64/python2.7/urllib2.py", line 550, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib64/python2.7/urllib2.py", line 475, in error
    return self._call_chain(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 409, in _call_chain
    result = func(*args)
  File "/usr/lib64/python2.7/urllib2.py", line 558, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 409: Conflict
I am running CKAN version 2.4 with Python 2.7.10 on an Amazon EC2 instance, and echo $HTTP_PROXY shows nothing, so I'm assuming it's not a proxy issue.
Could someone please help me resolve this issue?
CKAN is returning HTTP error 409, which could mean nearly anything: e.g. you could have a missing field, or there may already be a dataset of that name in CKAN.
There will be an error message explaining the problem in the response body, and also in the CKAN log.
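For example, with the urllib2 code above, the 409 body can be read from the exception itself, since urllib2.HTTPError objects are file-like (a minimal sketch reusing the request and data_string defined in the question):

try:
    response = urllib2.urlopen(request, data_string)
except urllib2.HTTPError as e:
    # The body carries CKAN's JSON error message explaining the 409
    print e.code
    print e.read()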
Frankly, using urllib2 is making life hard for yourself. To talk to the CKAN API in Python, at the very least use 'requests', but best practice is to use https://github.com/ckan/ckanapi, e.g.:
import ckanapi

demo = ckanapi.RemoteCKAN('http://demo.ckan.org',
                          apikey='phony-key',
                          user_agent='ckanapiexample/1.0 (+http://example.com/my/website)')
pkg = demo.action.package_create(name='my-dataset', title='not going to work')
Why does the Google AdWords API stop when calling this link:
https://adwords.google.com/api/adwords/mcm/v201502/CustomerService?wsdl
It fails with this error. Should I load some certificate first, and if so, how?
urllib2.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)>
Using Python 2.7.10.
Full source code:
create_adwords_client_without_yaml.py
Full error code:
Traceback (most recent call last):
File "C:/Users/Crezary Wagner/PycharmProjects/learn-adwords/src/examples/create_adwords_client_without_yaml.py", line 56, in <module>
CLIENT_CUSTOMER_ID)
File "C:/Users/Crezary Wagner/PycharmProjects/learn-adwords/src/examples/create_adwords_client_without_yaml.py", line 50, in main
customer = adwords_client.GetService('CustomerService').get()
File "C:\root\Python27\lib\site-packages\googleads\adwords.py", line 256, in GetService
proxy=proxy_option, cache=self.cache, timeout=3600)
File "C:\root\Python27\lib\site-packages\suds\client.py", line 115, in __init__
self.wsdl = reader.open(url)
File "C:\root\Python27\lib\site-packages\suds\reader.py", line 150, in open
d = self.fn(url, self.options)
File "C:\root\Python27\lib\site-packages\suds\wsdl.py", line 136, in __init__
d = reader.open(url)
File "C:\root\Python27\lib\site-packages\suds\reader.py", line 74, in open
d = self.download(url)
File "C:\root\Python27\lib\site-packages\suds\reader.py", line 92, in download
fp = self.options.transport.open(Request(url))
File "C:\root\Python27\lib\site-packages\suds\transport\https.py", line 62, in open
return HttpTransport.open(self, request)
File "C:\root\Python27\lib\site-packages\suds\transport\http.py", line 67, in open
return self.u2open(u2request)
File "C:\root\Python27\lib\site-packages\suds\transport\http.py", line 132, in u2open
return url.open(u2request, timeout=tm)
File "C:\root\Python27\lib\urllib2.py", line 431, in open
response = self._open(req, data)
File "C:\root\Python27\lib\urllib2.py", line 449, in _open
'_open', req)
File "C:\root\Python27\lib\urllib2.py", line 409, in _call_chain
result = func(*args)
File "C:\root\Python27\lib\urllib2.py", line 1240, in https_open
context=self._context)
File "C:\root\Python27\lib\urllib2.py", line 1197, in do_open
raise URLError(err)
urllib2.URLError: <urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)>
Python uses certificates from the system SSL certificate store to verify HTTPS connections; if there is no appropriate SSL certificate in the store, an error like this occurs.
Download the SSL certificate (open your HTTPS link in a browser and click the lock icon in the address bar > More Information > View Certificate > Details > Export) and install it on your system as described at http://windows.microsoft.com/en-us/windows/import-export-certificates-private-keys#1TC=windows-7
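As a quick check (a minimal sketch; ssl.get_default_verify_paths() exists from Python 2.7.9), you can print where this Python build looks for CA certificates:

import ssl

# Prints the default CA file/path this interpreter consults
print ssl.get_default_verify_paths()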
Not sure if that's the problem here, but it's worth checking.
Python 2.7.9 enabled certificate validation by default for HTTPS connections.
The server you're connecting to does not have a certificate that is trusted by your client; the client library should configure SSL appropriately for this use case.
Try making your request like:
requests.get('https://adwords.google.com/api/adwords/mcm/v201502/CustomerService?wsdl', verify=False)
Try this, it helped me:
import ssl
# Globally disables certificate verification -- insecure, but it works as a quick fix
ssl._create_default_https_context = ssl._create_unverified_context
I encountered this issue too. I had set up my phone using the same DNS blocklist, and the cause wasn't immediately apparent after I'd enabled the tool and resumed work on this particular project. I suggest scrutinizing your setup and verifying that there aren't any ad blockers enabled (DNS-level in my case, via NextDNS / a hosted Pi-hole). I spent hours upon hours trying out Python versions, certificates, and reinstalling things. Hope this helps someone!
import urllib2
from bs4 import BeautifulSoup  # assuming BeautifulSoup 4

hdr = {'User-Agent': 'Mozilla/5.0'}
url = "https://www.youtube.com/results?search_query=%s+%s" % (artistName, songName)
req = urllib2.Request(url, headers=hdr)
page = urllib2.urlopen(req)
soup = BeautifulSoup(page)

song_titles = []
i = 0
for SongTitle in soup.findAll('a', {'class': 'yt-uix-tile-link yt-ui-ellipsis yt-ui-ellipsis-2 yt-uix-sessionlink spf-link '}):
    song_titles.append(SongTitle.string)
    i += 1
So this is the code I'm using to search a YouTube page with a custom URL (artistName and songName come from the command line). After that, the user can choose which video to download based on the video number.
The problem is that most of the videos I try to download return the following error:
WARNING:root:ciphertag doesn't match signature type
WARNING:root:JRMOMjCoR58
Traceback (most recent call last):
File "PyKo.py", line 123, in <module>
query(sys.argv[1], sys.argv[2])
File "PyKo.py", line 118, in query
downloadSong(link_to_download, s_url)
File "PyKo.py", line 30, in downloadSong
video = pafy.new(url)
File "C:\Python27\lib\site-packages\pafy\pafy.py", line 138, in new
return Pafy(url, basic, gdata, signature, size, callback)
File "C:\Python27\lib\site-packages\pafy\pafy.py", line 1041, in __init__
self.fetch_basic()
File "C:\Python27\lib\site-packages\pafy\pafy.py", line 1087, in fetch_basic
self.dash = _extract_dash(self._dashurl)
File "C:\Python27\lib\site-packages\pafy\pafy.py", line 274, in _extract_dash
dashdata = fetch_decode(dashurl)
File "C:\Python27\lib\site-packages\pafy\pafy.py", line 91, in fetch_decode
req = g.opener.open(url)
File "C:\Python27\lib\urllib2.py", line 410, in open
response = meth(req, response)
File "C:\Python27\lib\urllib2.py", line 523, in http_response
'http', request, response, code, msg, hdrs)
File "C:\Python27\lib\urllib2.py", line 448, in error
return self._call_chain(*args)
File "C:\Python27\lib\urllib2.py", line 382, in _call_chain
result = func(*args)
File "C:\Python27\lib\urllib2.py", line 531, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
Only a very small number of videos work.
I've Googled around and found an answer to something similar that said to insert a header, so I did, but it still won't work.
Thanks in advance!
I created a new config file:
$ sudo vi ~/.boto
There I pasted my credentials (as described in the boto docs on Read the Docs):
[Credentials]
aws_access_key_id = YOURACCESSKEY
aws_secret_access_key = YOURSECRETKEY
I'm trying to check the connection:
import boto
boto.set_stream_logger('boto')
s3 = boto.connect_s3("us-east-1")
and this is the output:
2014-11-26 14:05:49,532 boto [DEBUG]:Using access key provided by client.
2014-11-26 14:05:49,532 boto [DEBUG]:Retrieving credentials from metadata server.
2014-11-26 14:05:50,539 boto [ERROR]:Caught exception reading instance data
Traceback (most recent call last):
File "/Library/Python/2.7/site-packages/boto/utils.py", line 210, in retry_url
r = opener.open(req, timeout=timeout)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 404, in open
response = self._open(req, data)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 422, in _open
'_open', req)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 382, in _call_chain
result = func(*args)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1214, in http_open
return self.do_open(httplib.HTTPConnection, req)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/urllib2.py", line 1184, in do_open
raise URLError(err)
URLError: <urlopen error timed out>
2014-11-26 14:05:50,540 boto [ERROR]:Unable to read instance data, giving up
Traceback (most recent call last):
File "/Users/user/PycharmProjects/project/untitled.py", line 8, in <module>
s3 = boto.connect_s3("us-east-1")
File "/Library/Python/2.7/site-packages/boto/__init__.py", line 141, in connect_s3
return S3Connection(aws_access_key_id, aws_secret_access_key, **kwargs)
File "/Library/Python/2.7/site-packages/boto/s3/connection.py", line 190, in __init__
validate_certs=validate_certs, profile_name=profile_name)
File "/Library/Python/2.7/site-packages/boto/connection.py", line 569, in __init__
host, config, self.provider, self._required_auth_capability())
File "/Library/Python/2.7/site-packages/boto/auth.py", line 975, in get_auth_handler
'Check your credentials' % (len(names), str(names)))
boto.exception.NoAuthHandlerFound: No handler was ready to authenticate. 1 handlers were checked. ['HmacAuthV1Handler'] Check your credentials
Why doesn't it find the credentials?
Is there something I did wrong?
Your issue is:
The string "us-east-1" you pass as the first argument is treated as the aws_access_key_id.
What you want is:
First, create a connection; note that an S3 connection has no region or location info in it.
conn = boto.connect_s3('your_access_key', 'your_secret_key')
Then, when you want to do something with a bucket, pass the region info as an argument.
from boto.s3.connection import Location
conn.create_bucket('mybucket', location=Location.USWest)
or:
conn.create_bucket('mybucket', location='us-west-1')
By default, the location is the empty string which is interpreted as the US Classic Region, the original S3 region. However, by specifying another location at the time the bucket is created, you can instruct S3 to create the bucket in that location.
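Going back to the ~/.boto setup in the question, a minimal sketch (assuming the [Credentials] section shown above is in place) passes no arguments at all and lets boto find the keys itself:

import boto

# No arguments: boto reads aws_access_key_id / aws_secret_access_key
# from ~/.boto, instead of mistaking a region string for an access key
conn = boto.connect_s3()

# No location argument: the bucket is created in the default
# US Classic Region described above
bucket = conn.create_bucket('mybucket')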
I am trying to use the python-requests package to download a massive number of files (10k+) from the web, ranging in size from several KB up to 100 MB each.
My script can run through maybe 3000 files fine, but then it suddenly hangs.
I Ctrl-C'd it and saw that it was stuck at:
r = requests.get(url, headers=headers, stream=True)
File "/Library/Python/2.7/site-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/Library/Python/2.7/site-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/Library/Python/2.7/site-packages/requests/sessions.py", line 456, in request
resp = self.send(prep, **send_kwargs)
File "/Library/Python/2.7/site-packages/requests/sessions.py", line 559, in send
r = adapter.send(request, **kwargs)
File "/Library/Python/2.7/site-packages/requests/adapters.py", line 327, in send
timeout=timeout
File "/Library/Python/2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 493, in urlopen
body=body, headers=headers)
File "/Library/Python/2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 319, in _make_request
httplib_response = conn.getresponse(buffering=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1045, in getresponse
response.begin()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 409, in begin
version, status, reason = self._read_status()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 365, in _read_status
line = self.fp.readline(_MAXLINE + 1)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 476, in readline
data = self._sock.recv(self._rbufsize)
Here is my Python code that does the download:
import os
import requests

basedir = os.path.dirname(filepath)
if not os.path.exists(basedir):
    os.makedirs(basedir)

r = requests.get(url, headers=headers, stream=True)
with open(filepath, 'wb') as f:  # 'wb' so binary files aren't mangled
    for chunk in r.iter_content(1024):
        if chunk:
            f.write(chunk)
            f.flush()
I am not sure what went wrong; if anyone has a clue, please share some insights.
Thanks.
This is not a duplicate of the question that @alfasin linked in their comment. Judging by the (limited) traceback you posted, the request itself is hanging (the first line shows it was executing r = requests.get(url, headers=headers, stream=True)).
What you should do is set a timeout and catch the exception that is raised when the request times out. Once you have the offending URL, try it in a browser or with curl to make sure it responds properly; otherwise remove it from your list of URLs to request. If you find the misbehaving URL, please update your question with it.
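A minimal sketch of that approach (urls and headers stand in for the question's own variables; note that the timeout is per socket operation, not a cap on total download time):

import requests

for url in urls:
    try:
        r = requests.get(url, headers=headers, stream=True, timeout=30)
        r.raise_for_status()
    except requests.exceptions.Timeout:
        print 'timed out, skipping: %s' % url
        continue
    except requests.exceptions.RequestException as e:
        print 'request failed: %s (%s)' % (url, e)
        continue
    # ... write r.iter_content(1024) chunks to disk as before ...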
I faced a similar situation, and it seems a bug in the requests package was causing this issue. Upgrading to requests 2.10.0 fixed it for me.
For your reference, the changelog for requests 2.10.0 shows that the embedded urllib3 was updated to version 1.15.1.
And the urllib3 release history shows that version 1.15.1 included fixes for:
Chunked transfer encoding when requesting with chunked=True. (Issue #790)
AppEngine handling of the transfer-encoding header, and a bug in Timeout defaults checking. (Issue #763)