Trying to upload images to my AWS S3 bucket using AWS-shell

The following is my Python script to upload the images.
import boto3

s3 = boto3.resource('s3')

# List of objects for indexing: (file name, person's full name) pairs
images = [('Micheal_Jordan.jpg', 'Micheal Jordan'),
          ('Wayne_Rooney.jpg', 'Wayne Rooney')]

# Iterate through the list to upload the objects to S3
for image in images:
    # Open each file in binary mode and upload it under the index/ prefix,
    # attaching the full name as object metadata
    with open(image[0], 'rb') as file:
        obj = s3.Object('famouspersons-images', 'index/' + image[0])
        ret = obj.put(Body=file, Metadata={'FullName': image[1]})
I'm getting the following error:
botocore.exceptions.SSLError: SSL validation failed for https://s3bucketname.s3.amazonaws.com/index/Wayne_Rooney.jpg [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1129)

This error occurs because the Python SSL library can't find certificates on your local machine to verify against.
You can check whether your ca_bundle is set to something else:
python -c "from botocore.session import Session; print(Session().get_config_variable('ca_bundle'))"
If it doesn't print anything, then the default path is used. You can check the default path with:
python -c "import ssl; print(ssl.get_default_verify_paths())"
If something is printed, then it was set by the AWS_CA_BUNDLE environment variable or by aws configure set default.ca_bundle <some path> at some point in the past.
You can install certificates by following
https://gist.github.com/marschhuynh/31c9375fc34a3e20c2d3b9eb8131d8f3.
Save the script as install-cert.py and then run python install-cert.py.
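Scripts of this kind typically just copy certifi's Mozilla-derived CA bundle over the path where the ssl module expects its default bundle. A minimal sketch of the idea (not the gist's exact contents; writing the file may need elevated privileges):
import os
import ssl
import certifi

# Locate the default CA bundle path the ssl module expects.
openssl_dir, openssl_cafile = os.path.split(
    ssl.get_default_verify_paths().openssl_cafile)

# Copy certifi's bundle there so certificate verification
# finds a usable root store.
os.makedirs(openssl_dir, exist_ok=True)
with open(certifi.where(), 'rb') as src:
    bundle = src.read()
with open(os.path.join(openssl_dir, openssl_cafile), 'wb') as dst:
    dst.write(bundle)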
If this does not work out, you can try the following solutions:
Reset AWS Credentials using AWS Configure
Issue Due to Fiddler
Reset HTTP/HTTPS Proxy Related Environment Variables
Reinstall and Upgrade AWS CLI
Using AWS_CA_BUNDLE Environment Variable (see the sketch after this list)
Moving CA Certificate PEM File in the Right Folder
Verifying CA Certificate
Install certifi Python Module
Install pyopenssl Python Module
Adding Trusted Root CA Details
Adding Trusted Host
Fixing the Version of requests and urllib3 Python Modules
Fixing CA Certificate Content and Location
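As an illustration of the AWS_CA_BUNDLE entry above, both botocore's environment variable and boto3's verify argument accept a path to a CA bundle file. A minimal sketch (/path/to/cacert.pem is a placeholder for your own bundle):
import os
import boto3

# Option 1: the environment variable botocore reads for its CA bundle;
# must be set before the session/client is created.
os.environ['AWS_CA_BUNDLE'] = '/path/to/cacert.pem'

# Option 2: pass the bundle explicitly for this resource only.
s3 = boto3.resource('s3', verify='/path/to/cacert.pem')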

Related

CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://repo.anaconda.com/pkgs/free/win-64/repodata.json.bz2>

I'm setting up a Django virtual environment for the first time. I've installed the Anaconda distribution of Python on my D drive, so initially I set up the paths for Python and Conda (Scripts) manually in the advanced system settings. But now, when I create the environment using the command
conda create --name mydjang0 django
the command prompt shows an error like this:
C:\Users\AABHA GAUTAM> conda create --name mydjang0 django
Solving environment: failed
CondaHTTPError: HTTP 000 CONNECTION FAILED for url <https://repo.anaconda.com/pkgs/pro/win-64/repodata.json.bz2>
Elapsed: -
An HTTP error occurred when trying to retrieve this URL.
HTTP errors are often intermittent, and a simple retry will get you on your way.
If your current network has https://www.anaconda.com blocked, please file
a support request with your network engineering team.
SSLError(MaxRetryError('HTTPSConnectionPool(host='repo.anaconda.com', port=443): Max retries exceeded with url: /pkgs/pro/win-64/repodata.json.bz2 (Caused by SSLError("Can't connect to HTTPS URL because the SSL module is not available."))'))
If you already have a .condarc file in your root folder, update it by running this command.
conda config --set ssl_verify false
If you do not have a .condarc file, create one and then run the above command. I have added both commands below.
conda config --add channels conda-forge
conda config --set ssl_verify false
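For reference, after running both commands the resulting .condarc should look roughly like this (a sketch; your channel list may differ):
channels:
  - conda-forge
  - defaults
ssl_verify: false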
Instead of using the command prompt, use Anaconda Prompt as administrator and run the same command there. I also faced the same issue with the command prompt.
Just delete the .condarc file from C:\Users\<user>.

How to fix urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)> Error

I'm currently running Sentry in Kubernetes with automatic certificate generation using Let's Encrypt and cert-manager. When Sentry attempts to send an error to the Sentry server, the following error is thrown:
urlopen error [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:590)> (url: https://example.host.com/)
I have verified that the correct Python packages for 2.7.15 have been installed. Packages include certifi and urllib2, along with their dependencies.
Turning off TLS Verification works, but this is a last resort. Security is very important even though this is an internally hosted service.
It has been my experience that even the most up-to-date ca-certificates packages sometimes don't contain all 3 Let's Encrypt certificates. The solution(?) is to download them into the "user-controlled" certificate directory (often /usr/local/share/ca-certificates) and then re-run update-ca-certificates:
# the first one very likely is already in your chain,
# but including it here won't hurt anything
for i in isrgrootx1.pem.txt lets-encrypt-x3-cross-signed.pem.txt letsencryptauthorityx3.pem.txt
do
    curl -vko /usr/local/share/ca-certificates/`basename $i .pem.txt`.crt \
        https://letsencrypt.org/certs/$i
done
update-ca-certificates
The ideal outcome would be to do that process for every Node in your cluster, and then volume mount the actual ssl directory into the containers, so every container benefits from the latest certificates. However, I would guess just doing it in the affected containers could work, too.
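Either way, you can quickly confirm from Python that the refreshed bundle now validates your host. A minimal sketch, assuming example.host.com stands in for your Sentry host and the Debian/Ubuntu bundle path:
import requests

# Verify explicitly against the bundle that update-ca-certificates regenerates.
r = requests.get('https://example.host.com/',
                 verify='/etc/ssl/certs/ca-certificates.crt')
print(r.status_code)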
yum update ca-certificates.noarch did the trick for me.

Setting up local https network to mock amazonaws.com in docker

I have a requirement where I need to set up a spoof/mock AWS server in my local docker compose network... The requirement is to be able to test a set of microservices without letting them know that the endpoint is not actually AWS.
For example, if a microservice, which uses the AWS-SDK, tries to make a service call to create a queue, it makes a call to https://eu-west-1.queue.amazonaws.com. I have a local DNS server installed which resolves that hostname to a reverse proxy server (Traefik), which in turn resolves it to the mock server.
When the service call is made, it fails at the reverse proxy level with the error below:
traefik_1 | time="2018-10-11T15:11:28Z" level=debug msg="http: TLS handshake error from 10.5.0.7:59058: remote error: tls: unknown certificate authority"
Can anyone help me set up the system in such a way that the call is made successfully?
You're not going to be able to MITM the https api request and return a different response. You can give the SDK a different url to hit (without https, or with a self-signed cert), and then set up a proxy that forwards requests to amazon when you want them sent to amazon, and to your other service when you want to mock them.
Some random information on how to change the api request url in the javascript SDK: https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/specifying-endpoints.html (as an example)
tls: unknown certificate authority
Based on this error message, you need to update the list of trusted CAs in your environment. This needs to be done inside each image (or resulting container) that will connect to your mock service; a minimal Dockerfile sketch follows the per-distro steps below. The process varies based on the base image you select, and this question on unix.se covers many of the methods.
The Debian process:
apt-get install ca-certificates
cp cacert.pem /usr/share/ca-certificates
dpkg-reconfigure ca-certificates
The CentOS process:
cp cacert.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
The Alpine process:
apk add --no-cache ca-certificates
mkdir /usr/local/share/ca-certificates
cp cacert.pem /usr/local/share/ca-certificates/
update-ca-certificates
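Since this has to happen inside the image, one convenient option is to bake it in at build time. A minimal Dockerfile sketch for a Debian-based image, assuming cacert.pem is your mock server's CA certificate:
# Debian base assumed; cacert.pem is a placeholder for your mock CA file
FROM debian:stable
COPY cacert.pem /usr/local/share/ca-certificates/cacert.crt
RUN apt-get update \
    && apt-get install -y ca-certificates \
    && update-ca-certificates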
You are going to struggle, or have to compromise, if you try to intercept the AWS API calls without bypassing validation of the cert chain.
I suggest that you provide a Custom Endpoint to the AWS SDK Client in your NodeJS code to point to the LocalStack endpoint. This value could be passed using environment variables in your test environments.
var sqsClient = new AWS.SQS({
    endpoint: process.env.SQSCLIENT
});
Then pass the LocalStack URL into the container for test environments:
docker run mymicroservice -e SQSCLIENT='http://localstack:4576'
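If any of your microservices are Python-based, the boto3 equivalent of the snippet above is the endpoint_url argument, reusing the same SQSCLIENT environment variable (a sketch; the region is taken from the question):
import os
import boto3

# Point the SQS client at LocalStack instead of the real AWS endpoint.
sqs = boto3.client(
    'sqs',
    region_name='eu-west-1',
    endpoint_url=os.environ.get('SQSCLIENT', 'http://localstack:4576')
)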

AWS Elastic Beanstalk commands return no output

I am very new to the Amazon Web Services and have been trying a learn-by-doing approach with them.
In summary, I was trying to set up Git with the Elastic Beanstalk command line interface for my web-app. However, I wanted to use my SSH key-pair to authenticate instead of the AWS credentials (aws-access-id, secret), and in my naivety and ignorance, I just supplied this information (the SSH key files); now I can't get it to work. More specifics are stated below.
I have my project directory with Git set up so that it works. I then open the git bash window MINGW64 (I am on Windows 10) and attempt to set up eb.
$ eb init
It then tells me that my credentials are not set up and asks me for the aws-access-id and the secret. I had just set up the SSH key-pair, so I tried to enter those files; what's the harm in trying? EB failure, it turns out. Now, the instances still seem to run fine, judging by their status on the AWS console website. However, whatever I type into the bash:
$ eb init
$ eb status
$ eb deploy
$
There is no output. Not even an error. It just silently returns and awaits a new command from me.
When using the --debug option with these commands, a long list of operations is returned, ending with
botocore.parsers.ResponseParserError: Unable to parse response (no element found: line 1, column 0), invalid XML received:
b''
I thought I would be able to log out or something of the like, so that I could enter the proper credentials that I messed up at the beginning. I restarted the web-app from the AWS webpage interface and restarted my PC. No success.
Thanks in advance.
EDIT:
I also tried reinstalling awscli and awsebcli:
pip uninstall awsebcli
pip uninstall awscli
pip install awscli
pip install awsebcli --upgrade --user
Problem persists, but now there is one output (previously seen only with the --debug option):
$ eb init
ERROR: ResponseParserError - Unable to parse response (no element found: line 1, column 0), invalid XML received:
b''
$
It sounds like you have replaced the AWS credentials in your ~/.aws/credentials and/or ~/.aws/config file(s) with your SSH key. You can either fix these files manually or execute aws configure if you have the AWS CLI installed.
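For reference, a correctly formed ~/.aws/credentials file looks like this (placeholder values shown; use the access key pair from your IAM console, not SSH key material):
[default]
aws_access_key_id = <your-access-key-id>
aws_secret_access_key = <your-secret-access-key>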

Python Requests not handling missing intermediate certificate only from one machine

I'm working on a box that's running CentOS (Linux), and I'm running into the following error when trying to access a particular subdomain for work:
Traceback (most recent call last):
... # My code, relevant call is requests.get(url)
File "/usr/local/lib/python2.7/site-packages/requests/api.py", line 60, in get
return request('get', url, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/api.py", line 49, in request
return session.request(method=method, url=url, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 457, in request
resp = self.send(prep, **send_kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/sessions.py", line 569, in send
r = adapter.send(request, **kwargs)
File "/usr/local/lib/python2.7/site-packages/requests/adapters.py", line 420, in send
raise SSLError(e, request=request)
requests.exceptions.SSLError: [Errno 1] _ssl.c:504: error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
According to https://www.digicert.com/help/, the subdomain "is not sending the required intermediate certificate" (and that's the only problem DigiCert found). However, my code handles this without problem when I run it from my Mac laptop, and so do both Chrome and Safari. I'm running Python 2.7.5 on both my laptop and the linux box. I was running requests 1.2.0 on the linux box and 2.2.1 on my laptop, but I upgraded both to 2.4.3 and they still don't have the same behavior.
Also possibly relevant - the same certificate is being used with some other subdomains where the intermediate certificate is being sent, and neither my laptop nor the linux box has any problems with those, so it shouldn't be that my laptop has a root CA that the linux box doesn't have.
Does anyone know why it isn't working from my linux box and how I can fix it?
I spent a day understanding and fixing this issue completely, so I thought it would be nice to share my findings with everybody :-)! Here are my results:
It is a common flaw in SSL server configurations to provide an incomplete chain of certificates, often omitting intermediate certificates. For instance, a site I was working with did not include the common DigiCert "intermediate" certificate "DigiCert TLS RSA SHA256 2020 CA1" in the server's response.
Because this configuration flaw is common, most but not all modern browsers implement a technique called "AIA Fetching" to fix this on the fly (see e.g. https://www.thesslstore.com/blog/aia-fetching/).
Python's SSL support does not support AIA Fetching and depends on a complete chain of certificates from the server; otherwise it throws an exception, like so
SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1124)')))
There is an ongoing discussion about whether AIA Fetching should be added to Python, e.g. in this thread: https://bugs.python.org/issue18617#msg293894.
My impression is that this will remain an open issue for the foreseeable future.
Now, how can we fix that?
Install certifi, if you have not done so, or update it
pip install certifi
or
pip install certifi --upgrade
Many (but not all) Python modules can use the certificates from certifi, and certifi takes them from the Mozilla CA Certificate initiative (https://wiki.mozilla.org/CA). Basically, certifi creates a clean *.pem file from the Mozilla site and provides a lightweight Python interface for accessing that file.
Download the missing certificate as a file in PEM syntax, e.g. from https://www.digicert.com/kb/digicert-root-certificates.htm, or from a trusted browser.
Locate the certifi *.pem certificate file with
import certifi
print(certifi.where())
Note: I recommend first activating the virtual environment (e.g. conda activate <envname>) that you want to use the certificate with; the file path will differ per environment. If you apply this to your base environment, any potentially flawed certificate will put the entire SSL mechanism for all your code at risk.
Example:
/Users/username/anaconda3/envs/environment_name/lib/python3.8/site-packages/certifi/cacert.pem
Open that file in a simple text editor and insert the missing certificate at the beginning, right after the header, like so:
##
## Bundle of CA Root Certificates
##
...
-----BEGIN CERTIFICATE-----
+I2tIJLYrVJmuzHZ9bjPvXj1hJeRPG/cUJ9WIQDgLGB
Afr5yjK7tI4nhyfFK3TUqNaX3sNk+crOU6J
-----END CERTIFICATE-----
---> The block above is the additional certificate.
It is important to include the begin and end markers.
Save the file and you should be all set!
You can test that it works with the following few lines:
# Python 3
import urllib.request
import certifi
import requests
URL = 'https://www.the_url_that_caused_the_trouble.org'
print('Trying urllib.request.urlopen().')
r = urllib.request.urlopen(URL)
print(f'urllib.request.urlopen\n================\n {r.read()[:80]}')
print('Trying requests.get().')
r = requests.get(URL)
print(f'requests.get()\n================\n {r.text[:80]}')
Note: The general SSL certificates, e.g. for openssl, might be located elsewhere, so you may have to try the same approach there:
/Users/username/anaconda3/envs/environment_name/ssl
Voila!
Notes:
When you update certifi or create a new virtual environment, the changes will likely be lost, but I think that is actually good design, because it does not perpetuate a temporary security tweak to your entire system.
Naturally, the process of downloading the certificate is a potential security risk - if that download is compromised, your entire SSL chain might be, too.
The maintenance of certifi lags behind the Mozilla releases of certificates. If you want to use the most current version of the Mozilla CA bundles with certifi, you can use my script from https://github.com/mfhepp/update_certifi_certificates.
I still don't understand why it works in one place but not another, but I did find a somewhat acceptable workaround that's much better than turning off certificate verification.
According to the requests library documentation, it will use certifi if it is installed on the system. So I installed certifi
sudo pip install certifi
and then modified the .pem file it uses. You can find the file location using certifi.where():
>>> import certifi
>>> certifi.where()
'/usr/local/lib/python2.7/site-packages/certifi/cacert.pem'
I added the intermediate certificate to that .pem file, and it works now. FYI, the .pem file expects certificates to appear like this:
-----BEGIN CERTIFICATE-----
<certificate here>
-----END CERTIFICATE-----
WARNING: This is not really a solution, only a workaround. Telling your system to trust a certificate can be dangerous from a security point of view. If you don't understand certificates then don't use this workaround unless your other option is to turn off certificate verification entirely.
Also, from the requests documentation:
For the sake of security we recommend upgrading certifi frequently!
I assume that when you upgrade certifi you'll have to redo any changes you made to the file. I haven't looked at it enough to see how to make a change that won't be overwritten when certifi gets updated.
If you are on *nix and your intermediate or self-signed certificate is installed in SSL (i.e. you can hit the URL successfully from curl but not from Python), you can set the environment variable REQUESTS_CA_BUNDLE to where your ca-certificates are stored (e.g. /etc/ssl/certs/ca-certificates.crt).
Credit here.
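A minimal sketch of that approach from within Python (the bundle path is the Debian/Ubuntu default, and the URL is a placeholder):
import os

# Must be set before the request; requests reads this environment
# variable at request time when trust_env is enabled (the default).
os.environ['REQUESTS_CA_BUNDLE'] = '/etc/ssl/certs/ca-certificates.crt'

import requests

r = requests.get('https://example.host.com/')
print(r.status_code)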