I run this code:
import requests

loginurl = 'https://auth.cbssports.com/login/index'
try:
    response = requests.get(loginurl)
except requests.exceptions.ConnectionError as e:
    print "Login URL is BAD"
    response = requests.get(loginurl)
It consistently returns this error:
requests.exceptions.ConnectionError: HTTPSConnectionPool(host='auth.cbssports.com', port=443): Max retries exceeded with url: /login/index (Caused by <class 'socket.error'>: [Errno 10054] An existing connection was forcibly closed by the remote host)
I am able to access this url manually. I can't figure out why Python can't. Is there a way to solve this?
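Since the page opens fine in a browser, one thing worth trying is sending browser-like request headers. This is only a hedged sketch, not a confirmed fix; the User-Agent string and the 10-second timeout are illustrative.
import requests

loginurl = 'https://auth.cbssports.com/login/index'
# Assumption: the server may be resetting connections that present the default
# python-requests User-Agent; a browser-like header sometimes changes that.
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)'}
try:
    response = requests.get(loginurl, headers=headers, timeout=10)
    print(response.status_code)
except requests.exceptions.ConnectionError as e:
    print("Login URL is still BAD: %s" % e)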
I'm trying to access a private wiki in Azure DevOps (I'm an admin on the project) from a Lambda in AWS.
import requests
import base64

def lambda_handler(event, context):
    # Replace with your own Azure DevOps organization name
    organization_name = "name of the organization"
    # Replace with the project name that contains the wiki
    project_name = "project name"
    # Replace with the name of the wiki
    wiki_name = "wiki name"
    # Replace with the version of the wiki to retrieve
    wiki_version = "1234"
    # Replace with your own personal access token (PAT)
    personal_access_token = "my pat"

    # Build the URL to retrieve the wiki content
    url = f"https://dev.azure.com/{organization_name}/{project_name}/_apis/wiki/wikis/{wiki_name}/{wiki_version}/content?api-version=7.0"

    # Set the authorization header with the base64-encoded PAT
    auth_header = f"Basic {base64.b64encode(f'{personal_access_token}'.encode('utf-8')).decode('utf-8')}"
    headers = {
        "Authorization": auth_header,
        "Accept": "application/json"
    }

    # Make the request to retrieve the wiki content
    response = requests.get(url, headers=headers)

    # Return the response if successful, otherwise return an error message
    if response.status_code == 200:
        return response.json()
    else:
        return {"error": f"Failed to retrieve wiki content: {response.status_code}"}
My Lambda has a security group allowing all inbound and outbound traffic, and it is in a public subnet.
The Lambda also has a very permissive role.
But I always get a
"[ERROR] ConnectionError: HTTPSConnectionPool(host='dev.azure.com', port=443): Max retries exceeded with url:xxxxx (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f405ba34790>: Failed to establish a new connection: [Errno 110] Connection timed out'))"
Not sure if the issue is in the Lambda or in Azure.
I already tried changing the permissions and the role of the AWS Lambda.
I also generated another PAT with all permissions allowed.
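One way to narrow down whether the problem is in the Lambda or in Azure is a bare reachability check from inside the handler; a minimal sketch (the host, port and 5-second timeout are just illustrative):
import socket

def lambda_handler(event, context):
    # If this also times out, the Lambda has no outbound route to the internet
    # (for example, a VPC subnet without a NAT gateway); if it connects, the
    # problem is more likely on the Azure/authentication side.
    try:
        socket.create_connection(("dev.azure.com", 443), timeout=5).close()
        return {"reachable": True}
    except OSError as exc:
        return {"reachable": False, "error": str(exc)}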
I am testing with a URL that always returns 503, and I want the script to retry the request with a backoff_factor of 1 via max_retries. Based on 8 retries, the request should theoretically take at least 64 s, but it prints the 503 response straight away, so I don't think it actually retries 8 times. I even increased backoff_factor to 20 and it still prints the 503 response instantly.
from requests.packages.urllib3.util.retry import Retry
from requests.adapters import HTTPAdapter
import requests
url = 'http://httpstat.us/503'
s = requests.Session()
retries = Retry(total=8, backoff_factor=1, status_forcelist=[ 502, 503, 504 ])
s.mount('http://', HTTPAdapter(max_retries=retries))
response = s.get(url)
print response
I have been checking other posts and tried s.mount('http://', requests.adapters.HTTPAdapter(max_retries=retries)) with a similar result.
How can I make the script really retry 8 times and confirm that it actually retries?
Please also advise whether the solution works for both HTTP and HTTPS.
When I inspect retries in the interpreter:
>>> retries
Retry(total=8, connect=None, read=None, redirect=None)
Is the input correct?
I based this on the solution in https://stackoverflow.com/a/35636367/5730859, but it does not seem to retry. Can anyone help?
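One way to confirm whether the adapter really retries is to turn on debug logging, which makes urllib3 print every request attempt; a minimal sketch (importing Retry from urllib3 directly and mounting the adapter for both schemes, since the question asks about HTTP and HTTPS):
import logging
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# At DEBUG level urllib3 logs each connection and request it makes,
# so every retry attempt shows up in the console output.
logging.basicConfig(level=logging.DEBUG)

s = requests.Session()
retries = Retry(total=8, backoff_factor=1, status_forcelist=[502, 503, 504])
adapter = HTTPAdapter(max_retries=retries)
s.mount('http://', adapter)
s.mount('https://', adapter)

try:
    response = s.get('http://httpstat.us/503')
    print(response.status_code)
except requests.exceptions.RetryError as exc:
    # If every attempt keeps returning 503, urllib3 eventually gives up and
    # requests surfaces that as a RetryError instead of returning a response.
    print("retries exhausted: %s" % exc)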
Update based on the newest comments: instead of using that urllib3 retry function, use your own:
>>> def test():
...     return requests.get("http://httpstat.us/503").status_code
...
>>> def main():
...     total_retries = 0
...     code = test()
...     while code in (502, 503, 504):
...         total_retries += 1
...         if total_retries > 8:
...             print("8 retries hit")
...             break
...         code = test()
...
>>> main()
8 retries hit
>>>
This will stop once it hits 8 retries with the 503, 504, 502 code(s).
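If the exponential wait from backoff_factor is still wanted, the manual loop can sleep between attempts. A sketch under that assumption; the function name is made up, and the delay just mirrors the rough backoff_factor * 2**attempt growth that urllib3 uses:
import time
import requests

def get_with_backoff(url, max_retries=8, backoff_factor=1):
    # Retry on 502/503/504, waiting longer after each failed attempt.
    response = requests.get(url)
    for attempt in range(max_retries):
        if response.status_code not in (502, 503, 504):
            return response
        wait = backoff_factor * (2 ** attempt)
        print("got %s, retrying in %ss (%s/%s)" % (response.status_code, wait, attempt + 1, max_retries))
        time.sleep(wait)
        response = requests.get(url)
    return response

print(get_with_backoff("http://httpstat.us/503").status_code)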
I am using Django 2.2.10 and python manage.py runsslserver to develop an HTTPS site locally.
I have written an authentication app with a view function that returns JSON data as follows:
from django.http import JsonResponse

def foobar(request):
    data = {
        'param1': "foo bar"
    }
    return JsonResponse(data)
I am calling this function in the parent project as follows:
import requests
from django.urls import reverse

def index(request):
    scheme_domain_port = request.build_absolute_uri()[:-1]
    myauth_login_links_url = f"{scheme_domain_port}{reverse('myauth:login_links')}"
    print(myauth_login_links_url)
    data = requests.get(myauth_login_links_url).json()
    print(data)
When I navigate to https://localhost:8000/myproj/index, I see the correct URL printed in the console, followed by multiple errors, culminating in the error shown in the title of this question:
HTTPSConnectionPool(host='localhost', port=8000): Max retries exceeded with url:/index (Caused by SSLError(SSLError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:852)'),))
How do I pass the SSL cert being used in my session (presumably generated by runsslserver) to the requests module?
try this:
data = requests.get(myauth_login_links_url, verify=False).json()
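If you would rather keep verification on, requests' verify parameter also accepts a path to a certificate (CA bundle) file; the path below is a placeholder for wherever the runsslserver development certificate actually lives:
# Hypothetical path; point this at the PEM certificate the dev server uses.
data = requests.get(myauth_login_links_url, verify="/path/to/development-cert.pem").json()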
We have a Django web service that uses Swisscom AppCloud's S3 solution. So far we have had no problems, but without changing anything in the application we are now seeing ConnectionError: ('Connection aborted.', error(104, 'Connection reset by peer')) errors when we try to upload files. We are using boto3 1.4.4.
Edit:
The error occurs somewhere between 10 and 30 s in. When I try from my local development machine, it works.
from django.conf import settings
from boto3 import session
from botocore.exceptions import ClientError


class S3Client(object):
    def __init__(self):
        s3_session = session.Session()
        self.s3_client = s3_session.client(
            service_name='s3',
            aws_access_key_id=settings.AWS_ACCESS_KEY,
            aws_secret_access_key=settings.AWS_SECRET_ACCESS_KEY,
            endpoint_url=settings.S3_ENDPOINT,
        )

    # ...

    def add_file(self, bucket, fileobj, file_name):
        self.s3_client.upload_fileobj(fileobj, bucket, file_name)
        url = self.s3_client.generate_presigned_url(
            ClientMethod='get_object',
            Params={
                'Bucket': bucket,
                'Key': file_name
            },
            ExpiresIn=60*24*356*10  # signed for 10 years. Should be enough..
        )
        url, signature = self._split_signed_url(url)
        return url, signature, file_name
Could this be a version problem or anything else on our side?
Edit:
I made some tests with s3cmd: I can list the buckets I have access to, but for all other commands, like listing all objects or just listing the objects in a bucket, I get Retrying failed request: / ([Errno 54] Connection reset by peer).
After some investigation I found the cause:
Swisscom's implementation of S3 is somehow not up-to-date with Amazon's. To solve the problem I had to downgrade botocore from 1.5.78 to 1.5.62.
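A sketch of pinning the working combination, assuming the dependencies are installed with pip (boto3 1.4.4 accepts a botocore in the 1.5.x range):
pip install "boto3==1.4.4" "botocore==1.5.62"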
import shutil
import requests
import json

proxy = {
    'user': 'user',
    'pass': 'password',
    'host': "test.net",
    'port': 8080
}

url = 'https://github.com/timeline.json'
response = requests.get(url, verify=True,
                        proxies={"https": "http://%(user)s:%(pass)s@%(host)s:%(port)d" % proxy})

with open(r'..\test.json', 'wb') as out_file:
    out_file.write(response.content)
print response
I'm trying to access an HTTPS link (e.g. https://github.com/timeline.json) over a proxy in an office environment using Requests.
Accessing HTTP links seems to work fine, but I get an SSL error with HTTPS.
Please suggest what's missing in the code. Thanks!
Error received:
raise SSLError(e)
requests.exceptions.SSLError: [Errno 8] _ssl.c:504: EOF occurred in violation of protocol
I am using almost the same code you provided, just with a proxy server I found, and everything works for me.
Take a look at the documentation on requests proxies. Also note that https://github.com/timeline.json has been deprecated by GitHub; try https://api.github.com/events instead.
According to the docs:
To use HTTP Basic Auth with your proxy, use the http://user:password@host/ syntax:
proxies = {
    "http": "http://user:pass@10.10.1.10:3128/",
}
Are you missing a / at the end? Give it a try.
import requests
url = 'https://api.github.com/events'
proxy = {
"http" : "http://211.162.xxx.xxx:80"
}
response = requests.get(url, verify=True, proxies=proxy)
print response.status_code
if response.status_code == requests.codes.ok:
    response.encoding = 'utf-8'
    jsontxt = response.json()
    print jsontxt
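One more detail for the original HTTPS URL: requests picks the proxy entry by the URL's scheme, so an https:// request only goes through the proxy if the proxies dict has an "https" key. A sketch with both keys and basic auth (host, port and credentials are placeholders):
import requests

proxies = {
    "http": "http://user:pass@10.10.1.10:3128/",
    "https": "http://user:pass@10.10.1.10:3128/",
}
response = requests.get('https://api.github.com/events', verify=True, proxies=proxies)
print(response.status_code)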