Error while running a Python script to get pixel points from a Google Static Maps image. I got the script from Google maps - how to get building's polygon coordinates from address?
I use Python 2.7 to execute the script.
Initially the script ran without any error, but after running continuously for 3-4 hours I get the following error:
Traceback (most recent call last):
File "pyscript.py", line 19, in <module>
imgBuildings = io.imread(urlBuildings)
File "/usr/local/lib/python2.7/dist-packages/skimage/io/_io.py", line 60, in i
with file_or_url_context(fname) as fname:
File "/usr/lib/python2.7/contextlib.py", line 17, in __enter__
return self.gen.next()
File "/usr/local/lib/python2.7/dist-packages/skimage/io/util.py", line 29, in
u = urlopen(resource_name)
File "/usr/lib/python2.7/urllib2.py", line 154, in urlopen
return opener.open(url, data, timeout)
File "/usr/lib/python2.7/urllib2.py", line 435, in open
response = meth(req, response)
File "/usr/lib/python2.7/urllib2.py", line 548, in http_response
'http', request, response, code, msg, hdrs)
File "/usr/lib/python2.7/urllib2.py", line 473, in error
return self._call_chain(*args)
File "/usr/lib/python2.7/urllib2.py", line 407, in _call_chain
result = func(*args)
File "/usr/lib/python2.7/urllib2.py", line 556, in http_error_default
raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 403: Forbidden
As I am new to Python, I am not sure how to fix this. Is this some kind of cache issue?
Help very much appreciated.
I've seen this problem quite a lot, and it's due to intermittent network drops. There is a recursive trick with try/except exception handling that will ride out the outage, even if your network goes down for hours.
To explain: you attempt a download. If it fails, a recursive retry is attempted 1/4, 1/2, 1, 2, 4, 8, ... seconds later, with the wait capped at 1 hour between attempts. If you are working in a company, for instance, the network might go down over the weekend, but your code will just poll at most once an hour and then recover when the network is fixed.
import time
from skimage import io

def recursiveBuildingGetter(urlBuildings, waitTime=0.25):
    try:
        imgBuildings = io.imread(urlBuildings)
    except Exception:
        print "Warning: Failure at time %f secs for %s" % (waitTime, str(urlBuildings))
        # Double the wait after each failure, capping it at one hour.
        waitTime = waitTime * 2.0
        if waitTime > 3600.0:
            waitTime = 3600.0
        time.sleep(waitTime)
        imgBuildings = recursiveBuildingGetter(urlBuildings, waitTime)
    return imgBuildings
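At the call site, a minimal usage sketch (assuming the same urlBuildings variable as in the question's script):

imgBuildings = recursiveBuildingGetter(urlBuildings)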
I have a Google Cloud Function in Python 3.7 reading from a Pub/Sub subscription in synchronous pull mode.
After running fine once per hour for 24 hours, it threw this exception stack trace:
Traceback (most recent call last):
File "/env/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 57, in error_remapped_callable
return callable_(*args, **kwargs)
File "/env/local/lib/python3.7/site-packages/grpc/_channel.py", line 824, in __call__
return _end_unary_response_blocking(state, call, False, None)
File "/env/local/lib/python3.7/site-packages/grpc/_channel.py", line 726, in _end_unary_response_blocking
raise _InactiveRpcError(state)
grpc._channel._InactiveRpcError: <_InactiveRpcError of RPC that terminated with:
status = StatusCode.DEADLINE_EXCEEDED
details = "Deadline Exceeded"
debug_error_string = "{"created":"@1580454091.145703535","description":"Error received from peer ipv4:74.125.202.95:443","file":"src/core/lib/surface/call.cc","file_line":1056,"grpc_message":"Deadline Exceeded","grpc_status":4}"
>
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 346, in run_http_function
result = _function_handler.invoke_user_function(flask.request)
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 217, in invoke_user_function
return call_user_function(request_or_event)
File "/env/local/lib/python3.7/site-packages/google/cloud/functions/worker.py", line 210, in call_user_function
return self._user_function(request_or_event)
File "/user_code/main.py", line 39, in iteration
response = sub.pull(sub_path, MAX_MESSAGES)
File "/env/local/lib/python3.7/site-packages/google/cloud/pubsub_v1/_gapic.py", line 40, in <lambda>
fx = lambda self, *a, **kw: wrapped_fx(self.api, *a, **kw)  # noqa
File "/env/local/lib/python3.7/site-packages/google/cloud/pubsub_v1/gapic/subscriber_client.py", line 1005, in pull
request, retry=retry, timeout=timeout, metadata=metadata
File "/env/local/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 143, in __call__
return wrapped_func(*args, **kwargs)
File "/env/local/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
on_error=on_error,
File "/env/local/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
return target()
File "/env/local/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/env/local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.DeadlineExceeded: 504 Deadline Exceeded
What is this about? Is it to be expected or a result of some configuration problem? If to be expected, how should it be handled?
The documentation on pull (https://googleapis.dev/python/pubsub/latest/subscriber/api/client.html) says nothing about this being a possible exception.
I ack the messages immediately after the pull completes. I only permit one function execution at a time. I have a 600-second acknowledgement deadline. Each block of messages pulled at one time seems to be fewer than 100 messages. If this is about failing to ack a message, it seems like the error message could be much clearer.
This exception is raised by the client when there are no messages to read in the subscription. It is a known issue in the Pub/Sub library versions >= 1.0.0. If necessary, you can downgrade to version 0.45.0, where this issue was not present.
However, as a workaround you can catch the DeadlineExceeded exception and retry the operation (a minimal sketch follows the monkeypatch below). Also, based on Hemang's comment, here is a small monkeypatch that you can add to your running code, which might give you the same behavior as in version 0.45.0.
from google.cloud.pubsub_v1.gapic import subscriber_client_config as sub_config

# Raise the initial RPC timeout for the Subscriber's "messaging" methods
# (which include pull) to 25 seconds.
sub_config.config['interfaces']['google.pubsub.v1.Subscriber']['retry_params']['messaging']['initial_rpc_timeout_millis'] = 25000
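For the catch-and-retry route, a minimal sketch, assuming the same sub, sub_path, and MAX_MESSAGES names as in the question's code, and treating DeadlineExceeded as an empty pull:

from google.api_core.exceptions import DeadlineExceeded

def pull_or_none(sub, sub_path, max_messages, attempts=3):
    # A DeadlineExceeded from a synchronous pull usually just means no
    # messages arrived before the RPC deadline, so retry a few times.
    for _ in range(attempts):
        try:
            return sub.pull(sub_path, max_messages)
        except DeadlineExceeded:
            continue
    return None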
Finally, keep in mind that with synchronous pull, having many outstanding pull requests helps lower the delivery latency, which in turn can make individual pull requests take longer (and raise DeadlineExceeded errors). If latency is crucial for the application, you could consider using StreamingPull instead, as sketched below.
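A minimal StreamingPull sketch (the project and subscription names are placeholders, and the 60-second wait is an arbitrary choice):

from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
sub_path = subscriber.subscription_path("my-project", "my-subscription")

def callback(message):
    # Process the message, then ack it.
    message.ack()

# Messages are delivered to the callback on background threads.
future = subscriber.subscribe(sub_path, callback=callback)
try:
    future.result(timeout=60)
except Exception:
    future.cancel()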
Python 2.7.3
Calling an API from a Raspberry Pi 3: the API logs show the request hits the correct endpoint and returns a 200 status code, but the Python code on the Pi spits out a huge error stack. I saw in some forums that ZeroReturnError just means the connection was closed cleanly and that nothing was actually wrong, but that seems weird, since I can't actually get the response in an except block around the try.
My code is literally
import requests
response = requests.get(<URL I AM USING>, json={JSON I AM USING})
Not sure what to do.
Traceback (most recent call last):
File "music.py", line 13, in <module>
response = requests.get(url, json={'blah':{'blah':'*********'}})
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 60, in get
return request('get', url, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/api.py", line 49, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 457, in request
resp = self.send(prep, **send_kwargs)
File "/usr/lib/python2.7/dist-packages/requests/sessions.py", line 606, in send
r.content
File "/usr/lib/python2.7/dist-packages/requests/models.py", line 724, in content
self._content = bytes().join(self.iter_content(CONTENT_CHUNK_SIZE)) or bytes()
File "/usr/lib/python2.7/dist-packages/requests/models.py", line 653, in generate
for chunk in self.raw.stream(chunk_size, decode_content=True):
File "/usr/lib/python2.7/dist-packages/urllib3/response.py", line 256, in stream
data = self.read(amt=amt, decode_content=decode_content)
File "/usr/lib/python2.7/dist-packages/urllib3/response.py", line 186, in read
data = self._fp.read(amt)
File "/usr/lib/python2.7/httplib.py", line 602, in read
s = self.fp.read(amt)
File "/usr/lib/python2.7/socket.py", line 380, in read
data = self._sock.recv(left)
File "/usr/lib/python2.7/dist-packages/urllib3/contrib/pyopenssl.py", line 188, in recv
data = self.connection.recv(*args, **kwargs)
OpenSSL.SSL.ZeroReturnError
Some more searching led me to think it was a version issue.
Running sudo pip install urllib3 --upgrade on the Raspberry Pi cleared it up.
I now get a DependencyWarning about installing PySocks, but it's working correctly.
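To confirm the upgrade took effect for the interpreter that runs the script, a quick check (Python 2 syntax, matching the traceback above):

import urllib3
print urllib3.__version__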
INFO 2015-10-09 11:07:31,718 connectionpool.py:695] Starting new HTTPS connection (1): api.sandbox.braintreegateway.com
DEBUG 2015-10-09 11:07:31,724 api_server.py:277] Handled remote_socket.Resolve in 0.0028
DEBUG 2015-10-09 11:07:31,728 api_server.py:277] Handled remote_socket.CreateSocket in 0.0009
DEBUG 2015-10-09 11:07:32,049 api_server.py:277] Handled remote_socket.Connect in 0.3168
DEBUG 2015-10-09 11:07:32,055 api_server.py:272] Exception while handling service_name: "remote_socket"
method: "GetSocketOptions"
request: "\n$d15a35d7-d299-43c1-ba76-8bf4107f8850\022\006\010\001\020\003\032\000"
request_id: "aiUMNcTaLS"
Traceback (most recent call last):
File "/home/abc/Downloads/google-appengine/google_appengine/google/appengine/tools/devappserver2/api_server.py", line 247, in _handle_POST
api_response = _execute_request(request).Encode()
File "/home/abc/Downloads/google-appengine/google_appengine/google/appengine/tools/devappserver2/api_server.py", line 186, in _execute_request
make_request()
File "/home/abc/Downloads/google-appengine/google_appengine/google/appengine/tools/devappserver2/api_server.py", line 181, in make_request
request_id)
File "/home/abc/Downloads/google-appengine/google_appengine/google/appengine/api/apiproxy_stub.py", line 131, in MakeSyncCall
method(request, response)
File "/home/abc/Downloads/google-appengine/google_appengine/google/appengine/api/remote_socket/_remote_socket_stub.py", line 56, in WrappedMethod
return method(self, *args, **kwargs)
File "/home/abc/Downloads/google-appengine/google_appengine/google/appengine/api/remote_socket/_remote_socket_stub.py", line 265, in _Dynamic_GetSocketOptions
'Attempt to get blocked socket option.')
ApplicationError: ApplicationError: 5 Attempt to get blocked socket option.
DEBUG 2015-10-09 11:07:32,056 api_server.py:277] Handled remote_socket.GetSocketOptions in 0.0014
INFO 2015-10-09 11:07:32,058 views.py:570] handle_exception
INFO 2015-10-09 21:28:17,317 views.py:559] Traceback (most recent call last):
File "/home/abc/Downloads/google-appengine/google_appengine/lib/webapp2-2.5.2/webapp2.py", line 570, in dispatch
return method(*args, **kwargs)
File "/home/abc/projects/src/views.py", line 806, in get
self._callHandlingMethod(url, self.getRegexps)
File "/home/abc/projects/src/views.py", line 883, in _callHandlingMethod
function(*matched.groups())
File "/home/abc/projects/src/views.py", line 2992, in buy_get
"client_token": braintree.ClientToken.generate(),
File "/home/abc/projects/src/lib/braintree/client_token.py", line 25, in generate
return gateway.generate(params)
File "/home/abc/projects/src/lib/braintree/client_token_gateway.py", line 17, in generate
response = self.config.http().post("/client_token", params)
File "/home/abc/projects/src/lib/braintree/util/http.py", line 49, in post
return self.__http_do("POST", path, params)
File "/home/abc/projects/src/lib/braintree/util/http.py", line 66, in __http_do
status, response_body = http_strategy.http_do(http_verb, full_path, self.__headers(), request_body)
File "/home/abc/projects/src/lib/braintree/util/http.py", line 87, in http_do
timeout=self.config.timeout
File "/home/abc/projects/src/lib/requests/api.py", line 92, in post
return request('post', url, data=data, **kwargs)
File "/home/abc/projects/src/lib/requests/api.py", line 48, in request
return session.request(method=method, url=url, **kwargs)
File "/home/abc/projects/src/lib/requests/sessions.py", line 451, in request
resp = self.send(prep, **send_kwargs)
File "/home/abc/projects/src/lib/requests/sessions.py", line 557, in send
r = adapter.send(request, **kwargs)
File "/home/abc/projects/src/lib/requests/adapters.py", line 407, in send
raise ConnectionError(err, request=request)
ConnectionError: ('Connection aborted.', error(13, 'Permission denied'))
This seemed to be an HTTPS issue. I have tried different approaches:
a) https://github.com/agfor/braintree-python-appengine. It gave me the same error.
b) I thought this could be the issue described at https://urllib3.readthedocs.org/en/latest/security.html#openssl-pyopenssl, but on updating the required libraries I get stuck at a failed OpenSSL.crypto import.
Help, anyone!
It looks like Braintree is trying to use a socket option unsupported by GAE. You can see a list of supported options at https://cloud.google.com/appengine/docs/python/sockets/, which also states that attempting to get an unsupported option will raise an error.
Braintree version: 3.20.0
requests version: 2.7.0
With the help of my friend, I used the following hack in braintree/util/http.py, in the __http_do method:
from google.appengine.api import urlfetch
import logging
.....
.....
    try:
        if http_verb in ["POST", "PUT"]:
            # Route POST/PUT through GAE's urlfetch instead of requests,
            # avoiding the blocked socket option.
            result = urlfetch.fetch(url=full_path,
                                    payload=request_body,
                                    method=urlfetch.POST if http_verb == "POST" else urlfetch.PUT,
                                    headers=self.__headers())
            logging.debug('result: %r' % result)
            status = result.status_code
            response_body = result.content
            logging.debug(result.content)
        else:
            status, response_body = http_strategy.http_do(http_verb, full_path, self.__headers(), request_body)
    except Exception as e:
        .....
        .....
Using this hack, I was able to get things working. Hope this helps anybody coming across the same issue.
I am trying to use the Python requests package to download a massive number of files (10k+) from the web, each ranging in size from a few KB up to 100 MB.
My script runs through fine for maybe 3000 files, but then it suddenly hangs.
I Ctrl-C it and see it is stuck at:
r = requests.get(url, headers=headers, stream=True)
File "/Library/Python/2.7/site-packages/requests/api.py", line 55, in get
return request('get', url, **kwargs)
File "/Library/Python/2.7/site-packages/requests/api.py", line 44, in request
return session.request(method=method, url=url, **kwargs)
File "/Library/Python/2.7/site-packages/requests/sessions.py", line 456, in request
resp = self.send(prep, **send_kwargs)
File "/Library/Python/2.7/site-packages/requests/sessions.py", line 559, in send
r = adapter.send(request, **kwargs)
File "/Library/Python/2.7/site-packages/requests/adapters.py", line 327, in send
timeout=timeout
File "/Library/Python/2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 493, in urlopen
body=body, headers=headers)
File "/Library/Python/2.7/site-packages/requests/packages/urllib3/connectionpool.py", line 319, in _make_request
httplib_response = conn.getresponse(buffering=True)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1045, in getresponse
response.begin()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 409, in begin
version, status, reason = self._read_status()
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 365, in _read_status
line = self.fp.readline(_MAXLINE + 1)
File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/socket.py", line 476, in readline
data = self._sock.recv(self._rbufsize)
Here is my Python code that does the download:

import os
import requests

basedir = os.path.dirname(filepath)
if not os.path.exists(basedir):
    os.makedirs(basedir)

r = requests.get(url, headers=headers, stream=True)
with open(filepath, 'wb') as f:  # binary mode, so downloaded bytes are not mangled
    for chunk in r.iter_content(1024):
        if chunk:  # filter out keep-alive chunks
            f.write(chunk)
            f.flush()
I am not sure what went wrong; if anyone has a clue, please share some insights.
Thanks.
This is not a duplicate of the question that @alfasin linked in their comment. Judging by the (limited) traceback you posted, the request itself is hanging (the first line shows it was executing r = requests.get(url, headers=headers, stream=True)).
What you should do is set a timeout and catch the exception that is raised when the request times out, as in the sketch below. Once you have the URL, try it in a browser or with curl to ensure it responds properly; otherwise remove it from your list of URLs to request. If you find the misbehaving URL, please update your question with it.
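A minimal sketch, reusing the url and headers names from the question (the 30-second value is an arbitrary choice; with stream=True it bounds the connect and the wait between bytes, not the whole download):

import requests

try:
    r = requests.get(url, headers=headers, stream=True, timeout=30)
except requests.exceptions.Timeout:
    print "Timed out fetching %s, skipping" % url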
I faced a similar situation, and it seems a bug in the requests package was causing this issue. Upgrading to requests 2.10.0 fixed it for me.
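For example, something like sudo pip install requests==2.10.0 (assuming pip manages the Python your script runs under) should pull in the fixed version.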
For your reference, the changelog for requests 2.10.0 shows that the embedded urllib3 was updated to version 1.15.1.
And the urllib3 release history shows that version 1.15.1 included fixes for:
Chunked transfer encoding when requesting with chunked=True. (Issue #790)
Fixed AppEngine handling of transfer-encoding header and bug in Timeout defaults checking. (Issue #763)
I have the following celery task:
@task
def get_users_facebook_as_profile_icon(user_id, facebook_id):
    logger.info('Grabbing users facebook picture')
    url = "http://graph.facebook.com/%s/picture?type=large" % facebook_id
    import requests
    response = requests.get(url)
    if response.status_code != 200:
        raise Exception("Could not get facebook profile picture")
    ...
I have more after this, but I keep getting the following error:
"AssertionError('PID check failed. RNG must be re-initialized after fork(). Hint: Try Random.atfork()',)"
Task was called with args: (3246, 17500596) kwargs: {}.
The contents of the full traceback was:
Traceback (most recent call last):
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 240, in trace_task
R = retval = fun(*args, **kwargs)
File "/usr/local/lib/python2.7/dist-packages/celery/app/trace.py", line 437, in __protected_call__
return self.run(*args, **kwargs)
File "/home/ubuntu/mounzawebsite/mounza/celery_tasks/login_registration.py", line 42, in get_users_facebook_as_profile_icon
hashname = user.generate_picture_name()
File "/home/ubuntu/mounzawebsite/mounza/web/models.py", line 515, in generate_picture_name
return generate_random_name(None)
File "/home/ubuntu/mounzawebsite/mounza/web/models.py", line 40, in generate_random_name
str(random.randint(1, 99982098098908237)) +
File "/usr/lib/python2.7/dist-packages/Crypto/Random/__init__.py", line 41, in get_random_bytes
return _UserFriendlyRNG.get_random_bytes(n)
File "/usr/lib/python2.7/dist-packages/Crypto/Random/_UserFriendlyRNG.py", line 213, in get_random_bytes
return _get_singleton().read(n)
File "/usr/lib/python2.7/dist-packages/Crypto/Random/_UserFriendlyRNG.py", line 163, in read
return _UserFriendlyRNG.read(self, bytes)
File "/usr/lib/python2.7/dist-packages/Crypto/Random/_UserFriendlyRNG.py", line 122, in read
self._check_pid()
File "/usr/lib/python2.7/dist-packages/Crypto/Random/_UserFriendlyRNG.py", line 138, in _check_pid
raise AssertionError("PID check failed. RNG must be re-initialized after fork(). Hint: Try Random.atfork()")
AssertionError: PID check failed. RNG must be re-initialized after fork(). Hint: Try Random.atfork()
I tried digging into this online but was not able to find the root cause. This is the only task where this error occurs; the only difference is that I'm downloading an image from Facebook, yet I never see this issue anywhere else, including in other tasks where I download images.
The URL works perfectly if I open it in a web browser; it only fails via this task. Is there anything else that could contribute to this?
I have exhausted all attempts at fixing this :(
Here is why:
http://comments.gmane.org/gmane.comp.python.amqp.celery.user/3664
Always run the following whenever a new worker process is initialized:
Crypto.Random.atfork()
Done and done.
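A minimal sketch of wiring that up with Celery's worker_process_init signal, which fires in each freshly forked (prefork pool) worker child:

import Crypto.Random
from celery.signals import worker_process_init

@worker_process_init.connect
def reseed_crypto_rng(**kwargs):
    # Re-initialize PyCrypto's RNG in every forked worker so its
    # internal PID check passes.
    Crypto.Random.atfork()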