I wrote a program to read the unofficial binary list from "https://www.lfd.uci.edu/~gohlke/pythonlibs/" and use BeautifulSoup to produce a table of all the packages on the page, but the request keeps failing with this error:
EOF occurred in violation of protocol (_ssl.c:661)
Here is my code. I am on a Windows machine with Python 2.7.14:
import urllib2
from bs4 import BeautifulSoup

# fetch the page and print the raw HTML (the variable holds the page body, not the URL)
html = urllib2.urlopen("https://www.lfd.uci.edu/~gohlke/pythonlibs/").read()
print(html)
I could not find any reference to (_ssl.c:661) after looking around; any and all suggestions would be highly appreciated.
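For context, this error usually means the TLS handshake itself failed, often because the client offered a protocol version the server rejects. A minimal sketch of pinning TLS 1.2 explicitly (this is an assumption about the cause, not a confirmed fix; the context argument to urlopen requires Python 2.7.9+):

import ssl
import urllib2
from bs4 import BeautifulSoup

# Assumption: the server refuses older protocol versions, so pin TLS 1.2.
context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_2)
html = urllib2.urlopen("https://www.lfd.uci.edu/~gohlke/pythonlibs/",
                       context=context).read()
soup = BeautifulSoup(html, "html.parser")
print(soup.title)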
I am implementing a way to import a private key following the secure key import process described in this blog.
I got error code -1 from the keystore, and according to this code it means the root of trust is already set. I have searched all over for more detail on how to get rid of this error, but there is not much information to be found.
I followed the CTS code and successfully imported a symmetric key locally, but I am unable to modify it to import a private key.
Has anyone here hit this error before and solved it?
I'm using the python google search package to get the top 3 links for a query.
The following is the code. My list contains 1000 queries, and I'm trying to save the results into a dictionary with the query as the key and the URLs as the value.
import time
from google import search

results = {}  # query -> list of top URLs
for query in queries:  # renamed from `list`, which shadows the builtin
    time.sleep(1)
    results[query] = list(search(query, stop=3, num=3))
When I try to execute this, I get the error:
urllib2.HTTPError: HTTP Error 503: Service Unavailable
Should I use requests or some other library to avoid this error?
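For what it's worth, the 503 here typically means Google is rate-limiting the scraper, so switching libraries alone won't help. A sketch of slowing down and retrying instead (the pause argument is part of this package's search signature; the retry count and sleep times are assumptions):

import time
import urllib2
from google import search

def top_urls(query, retries=3):
    # Back off and retry when Google answers 503 (rate limiting).
    for attempt in range(retries):
        try:
            # pause= spaces out the underlying HTTP requests
            return list(search(query, stop=3, num=3, pause=5.0))
        except urllib2.HTTPError:
            time.sleep(30 * (attempt + 1))  # assumed backoff schedule
    return []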
I am trying to execute a REST POST call using the 'requests' library, and I am using Python 2.7.
Somehow the call does not get executed via the Python script, and I get an HTTPS connection error saying it cannot connect to the proxy.
I have verified that my system's proxy settings have no issue. In the same setup I am able to execute the same REST call using the Postman client and cURL.
Below is the code snippet (here 'hostname' is a private IP address, accessible internally only):
import requests
import json
import pprint
import sys

url = "https://<hostname>/rest/login-sessions"
json_data = {'authLoginDomain': '',
             'password': 'abc',
             'userName': 'xyz',
             'loginMsgAck': 'true'}
reqHeaders = {'Content-type': 'application/json',
              'X-Api-Version': '200'}
try:
    # serialize the payload, since the Content-type header promises JSON
    reqPost = requests.post(url, data=json.dumps(json_data),
                            headers=reqHeaders, verify=False)
    pprint.pprint(reqPost.json())
except requests.exceptions.Timeout:
    print "Re-Try Again"
except requests.exceptions.TooManyRedirects:
    print "Try with a different URL"
except requests.exceptions.RequestException as e:
    print e
    sys.exit(1)
The error is pasted below:
HTTPSConnectionPool(host='<hostIP>', port=443): Max retries exceeded with url: /rest/login-sessions (Caused by ProxyError('Cannot connect to proxy.', error('Tunnel connection failed: 503 Service Unavailable',)))
I tried the solutions available on the web, but none of them are working for me. I am confused because it only happens when I execute the REST call via a Python script; with cURL and Postman I am able to do the same call.
Please suggest whether I am making a mistake here, or whether a better way is available.
I want to do it with the python 'requests' library only. Also, I tried executing it with cURL via python subprocess, and I get the same error there.
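One difference between the script and cURL/Postman worth checking: requests reads the HTTP_PROXY/HTTPS_PROXY environment variables automatically, while the other tools may be configured to bypass the proxy for internal addresses. A minimal sketch of taking the proxy out of the picture (the hostname is the same placeholder as above; trust_env is a standard requests Session attribute):

import json
import requests

url = "https://<hostname>/rest/login-sessions"  # placeholder host, as above
payload = {'authLoginDomain': '', 'password': 'abc',
           'userName': 'xyz', 'loginMsgAck': 'true'}
headers = {'Content-type': 'application/json', 'X-Api-Version': '200'}

session = requests.Session()
session.trust_env = False  # ignore HTTP_PROXY / HTTPS_PROXY entirely
resp = session.post(url, data=json.dumps(payload),
                    headers=headers, verify=False)
print resp.status_code

A per-request alternative is passing proxies={'http': None, 'https': None} to requests.post, which should strip the environment proxies for just that call.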
All, I'm getting some strange behavior trying to use requests for an HTTPS call into the GitHub API:
print(requests.get('https://api.github.com/gists/bbc56a82f359eccd4bd6').text)
The output looks like a binary file was printed (no point in pasting the garbled output here).
An equivalent cURL call ("curl https://api.github.com/gists/bbc56a82f359eccd4bd6") returns the JSON response I'm expecting.
All this started after fixing a pip issue (InsecurePlatformWarning), for which a few security-related packages were installed. That fix is required for users of Python < 2.7.9; I'm on 2.7.3, since some sites recommend not touching the python build on Debian (to avoid dependency-breaking issues).
Note that this issue breaks functionality for, e.g., the github3.py python API wrapper.
Is anyone else seeing issues with requests after the upgrade? Any fixes?
This URL clearly responds differently depending on the user-agent. I could make the curl command-line response differ simply by adding -A moo/1.
You can probably get a curl-like response with requests by using a curl-like user-agent.
Or even better: just ask GitHub, or read up on their API.
I'm not seeing that behaviour here:
>>> import requests
>>> print(requests.get('https://api.github.com/gists/bbc56a82f359eccd4bd6').text)
Returns a JSON string. You could try debugging this further by changing the User-Agent of your request to match cURL's:
import requests

headers = {
    'User-Agent': 'curl/7.38.0',  # mimic the cURL client
}
url = 'https://api.github.com/gists/bbc56a82f359eccd4bd6'
response = requests.get(url, headers=headers)
print(response.text)
I've got the following snippet of code that has the audacity to tell me "FAIL to load undefined" (the nerve...). I'm trying to pass my authenticated session to a system call that uses javascript.
import requests
from requests_ntlm import HttpNtlmAuth
from subprocess import call

# THIS WORKS - 200 returned
s = requests.Session()
r = s.get("http://example.com",
          auth=HttpNtlmAuth(r'domain\MyUserName', 'password'))  # raw string for the backslash
call(["phantomjs", "yslow.js", r.url])
The issue is when "call" gets called; all I get is the following:
FAIL to load undefined.
I'm guessing that just passing the correct authenticated session should work, but the question is how to do it so that I can extract the info I want. Out of all my attempts this has been the most fruitful. Please help. Thanks!
There seem to be a couple of things going on here, so I'll address them one by one.
The subprocess module in python is meant for calling out to the system as if you were using the command line. It knows nothing about "authenticated sessions", and the command line (or shell) has no way to use a python object, like a session, to work with phantomjs.
phantomjs has had python bindings since version 1.8, so I would expect this could be made easier by using them. I have not used them, however, so I cannot tell you with certainty that they will help.
I looked at yslow's website, and there appears to be no way to pass it the content you are downloading with requests. Even then, the content would not include everything (for example, any externally hosted javascript that would be loaded by selenium/phantomjs or a browser is not loaded by requests).
yslow seems to normally just download the URL for you and perform its analysis. When the website is behind NTLM, however, it first sends the client a 401 response, which indicates to the client that it must authenticate. Further, information is sent to the client that tells it how to authenticate and provides the parameters to use when authenticating with NTLM. This is how requests_ntlm works with requests: the first request is made and generates a 401 response, then the authentication handler generates the proper header(s) and re-sends the request, which is why you see the 200 response bound to r.
yslow accepts a JSON representation of the headers you want to send, so you can try to use the headers found in r.request.headers, but I doubt they will work.
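If you want to try it anyway, a minimal sketch of that attempt (this assumes yslow.js takes a --headers option carrying a JSON object; and since NTLM authenticates the underlying connection rather than individual requests, replaying headers from a new process will most likely still fail):

import json
from subprocess import call

import requests
from requests_ntlm import HttpNtlmAuth

s = requests.Session()
r = s.get("http://example.com",
          auth=HttpNtlmAuth(r'domain\MyUserName', 'password'))

# Replay the authenticated request's headers into the yslow/phantomjs call.
headers_json = json.dumps(dict(r.request.headers))
call(["phantomjs", "yslow.js", "--headers", headers_json, r.url])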
In short, this is not a question that the people who normally follow the requests tag can help you with. Looking at the documentation for yslow, it seems that it (technically) does not support authentication of any type, though the yslow developers might argue that it supports Basic Authentication because it allows you to specify headers.