SSL validation failed (ZScaler) - amazon-web-services

I'm on Windows 10 Enterprise.
The company has ZScaler installed, which is causing SSL validation failures when I attempt to connect to AWS and GitHub. The --no-verify-ssl flag forces things to work, but I'd prefer to configure the certificate properly using --ca-bundle.
AWS CLI error
$ aws s3 ls s3://MYBUCKET/folder/
SSL validation failed for https://s3.us-east-2.amazonaws.com/MYBUCKET?list-type=2&prefix=folder%2F&delimiter=%2F&encoding-type=url [Errno 2] No such file or directory
Same issue for GitHub:
$ git clone https://github.com/USERNAME/myproject.git
Cloning into 'myproject'...
fatal: unable to access 'https://github.com/USERNAME/myproject.git/': SSL certificate problem: unable to get local issuer certificate
My attempt to fix:
echo | openssl s_client -servername s3.us-east-2.amazonaws.com -connect s3.us-east-2.amazonaws.com:443 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > aws-s3-us-east-2.crt
openssl x509 -in aws-s3-us-east-2.crt -out aws-s3-us-east-2.pem
aws s3 ls s3://MYBUCKET/ --ca-bundle aws-s3-us-east-2.pem
Still results in error:
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1076)
Can I fix this myself or do I need some magical cert from our ZScaler admin?

Related

How to solve SSL error while running AWS CLI command on cmd?

I am trying to list ~26k files from a bucket, but I get an error while running the following command in cmd.
aws s3 ls s3://raster/COP30/ --recursive --endpoint-url https://opentopography.s3.sdsc.edu --no-sign-request
I am getting the following error:
fatal error: SSL validation failed for http://opentopography.s3.sdsc.edu/raster?list-type=2&prefix=COP30%2F&encoding-type=url [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1129)

(GCP) ERROR: (gcloud.compute.ssl-certificates.create) Could not fetch resource: - The SSL key is too large [duplicate]

I have created a SocketCluster Node.js app and followed the official docs to deploy the service to Google Kubernetes Engine. However, the Ingress service does not come up and complains about:
Error:googleapi: Error 400: The SSL key is too large., sslCertificateKeyTooLarge
I tried the following certificates:
A 4096-bit key certificate from Let's Encrypt
A 2048-bit key certificate created using OpenSSL
Both of them result in the same error.
Does anyone know how I can resolve this? And where do I get a proper certificate for enabling TLS?
IIRC, only RSA-2048 and ECDSA P256 keys are supported:
openssl genrsa -out PRIVATE_KEY_FILE 2048
openssl ecparam -name prime256v1 -genkey -noout -out PRIVATE_KEY_FILE
I also struggled with this error when using Let's Encrypt certificates with a 4096-bit private key on a GKE Ingress, even though creating the secret itself worked fine per [1].
I finally overcame it by editing "/etc/letsencrypt/cli.ini":
rsa-key-size = 2048
then issued a new certificate and key file and put those into the secret, roughly as sketched below.
[1] https://cloud.google.com/kubernetes-engine/docs/how-to/ingress-multi-ssl
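A sketch of those reissue-and-replace steps, assuming certbot and kubectl are installed (example.com and my-tls-secret are placeholder names):
# Reissue the Let's Encrypt certificate with a 2048-bit key (mirrors the cli.ini setting)
sudo certbot certonly --rsa-key-size 2048 -d example.com
# Recreate the TLS secret that the GKE Ingress references
kubectl delete secret my-tls-secret
kubectl create secret tls my-tls-secret \
  --cert=/etc/letsencrypt/live/example.com/fullchain.pem \
  --key=/etc/letsencrypt/live/example.com/privkey.pem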
On GCP Cloud Shell, with "openssl" and "gcloud", I tried to create a self-managed SSL certificate, first running the command below to create "myCert.crt" and "myKey.key":
openssl req -new -newkey rsa:4096 -x509 -days 365 -nodes -out myCert.crt -keyout myKey.key
Then, ran this command below to create the self-managed SSL certificate "mycert" using "myCert.crt" and "myKey.key":
gcloud compute ssl-certificates create mycert --certificate=myCert.crt --private-key=myKey.key
But I got a similar error to yours:
ERROR: (gcloud.compute.ssl-certificates.create) Could not fetch resource:
- The SSL key is too large.
So I changed "rsa:4096" to "rsa:2048" then ran the first command again:
// "4096" is changed to "2048"
openssl req -new -newkey rsa:2048 -x509 -days 365 -nodes -out myCert.crt -keyout myKey.key
Then, ran the second command again:
gcloud compute ssl-certificates create mycert --certificate=myCert.crt --private-key=myKey.key
Finally, I could create the self-managed SSL certificate "mycert":
Created
[https://www.googleapis.com/compute/v1/projects/myproject-923743/global/sslCertificates/mycert].
NAME: mycert
TYPE: SELF_MANAGED
CREATION_TIMESTAMP: 2022-01-22T07:22:26.058-08:00
EXPIRE_TIME: 2023-01-22T07:22:08.000-08:00
MANAGED_STATUS:

Certificate issue on sam deploy

I am fairly new to AWS Lambda. I am playing around with a project that I am trying to deploy using the AWS SAM CLI. Below is the command I use:
sam deploy --s3-bucket com.nilay.bucket \
--stack-name HelloWorldLambdaJava \
--capabilities CAPABILITY_IAM
This initially failed with an SSL certificate verification issue for the cloudformation.us-east-1.amazonaws.com certificate. After some googling I circumvented this by exporting the certificate to my Mac, converting it to .pem format, and creating an AWS_CA_BUNDLE variable. Now the deploy fails for another URL (s3.amazonaws.com?) with the same certificate issue. How can I add this certificate to the certificate bundle? It seems like the AWS_CA_BUNDLE variable should really take a truststore as its value, but all the documentation I see for it lists a .pem file.
The sam deploy command doesn't accept a --no-verify-ssl flag the way the AWS CLI does.
I did two things:
A) The first problem was solved for me by the following link. It was an issue with using pip and accessing AWS services.
SSL CERTIFICATE_VERIFY_FAILED in aws cli
Unfortunately, the Python requests library does not use the operating system's CA trust store: https://github.com/requests/requests/issues/2966
You have to set the REQUESTS_CA_BUNDLE and AWS_CA_BUNDLE environment variables:
https://github.com/bloomreach/s4cmd/issues/111#issuecomment-406839514
I'm accessing AWS from my corporate network. I have no issues when connecting from home on my own computer.
The solution for me is to get the root certificate used when making connections and to save it locally somewhere as a .PEM file.
Then create two local environment variables and point them to the .PEM file. Run these commands to set the environment variables (or do it manually):
setx AWS_CA_BUNDLE "C:\Users\UserX\Documents\RootCert.pem"
setx REQUESTS_CA_BUNDLE "C:\Users\UserX\Documents\RootCert.pem"
B)
The other thing I did was update the Python certifi package. I then appended the contents of the RootCert.pem that I downloaded to certifi's cacert.pem file:
C:\Python\Python38\Lib\site-packages\certifi\cacert.pem
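A minimal sketch of those two steps from a Git Bash shell, assuming the certifi that pip manages is the one the CLI uses and that RootCert.pem is in the current directory:
# Upgrade certifi to the latest bundle
pip install --upgrade certifi
# `python -m certifi` prints the path to certifi's cacert.pem; append the corporate root to it
cat RootCert.pem >> "$(python -m certifi)"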
Just to explain how you can generate the required file (typically for your corporate network).
On a PC with Git installed, use Git Bash (this also works from the VS Code Git Bash terminal). Git also installs openssl, so no worries.
In the terminal (Git Bash), type:
echo | openssl s_client -showcerts -servername s3.eu-central-1.amazonaws.com -connect s3.eu-central-1.amazonaws.com:443 2>/dev/null
then grab all parts
-----BEGIN CERTIFICATE-----
xxx
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
xxx
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
xxx
-----END CERTIFICATE-----
including the header and footer lines, so that you have the whole certificate chain, and save them to a ca-bundle.pem file.
After that, modify your AWS config file:
C:\Users\YOURNAME_HERE\.aws\config
[default]
region = eu-central-1
output = yaml
ca_bundle = C:/aws/ca-bundle.pem
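If you'd rather not copy the blocks by hand, a one-liner along these lines should produce the same ca-bundle.pem (a sketch only; /c/aws/ is the Git Bash spelling of the C:/aws/ path used in the config above):
# Dump every certificate presented by the (proxy-intercepted) endpoint and keep only the PEM blocks
echo | openssl s_client -showcerts \
  -servername s3.eu-central-1.amazonaws.com \
  -connect s3.eu-central-1.amazonaws.com:443 2>/dev/null \
  | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > /c/aws/ca-bundle.pem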

AWS CLI - [SSL : CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1056)

I am trying to use the AWS CLI to retrieve AWS Elastic Beanstalk details, but I am getting the following error.
Error message:
C:\abdul>aws elasticbeanstalk describe-environments --environment-name myenvname
SSL validation failed for https://elasticbeanstalk.us-east-1.amazonaws.com/ [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1056)
Note:
I can work without any issues when I retrieve my EC2 details:
C:\abdul>aws ec2 describe-instances --instance-ids 'i-xxxxxxxxxxxxxx'
The above command works without any issues; I get the error only when I try "elasticbeanstalk" commands.
Note:
I have all the necessary certificates required in place.
Thanks in advance.
I found my way to this post while Googling. In my case, the error message I received was:
SSL validation failed for https://ec2.us-west-2.amazonaws.com/ [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1091)
I found this blog which told me to add an Environment Variable called AWS_CA_BUNDLE whose value was a path pointing to the CA Cert file (which I had saved on my local machine after requesting it from our corporate network team). Once I added that environment variable, I was able to run my AWS CLI commands successfully!
I had the same issue. This is how I resolved it.
Run below command first
$ export REQUESTS_CA_BUNDLE=/path/to/company/certificate.crt
And then run the AWS CLI command:
aws elasticbeanstalk describe-environments --environment-name myenvname
Steps to get this working in macOS/Linux
Download the Corporate Self-Signed Certificates using OpenSSL
openssl s_client -showcerts -verify 5 -servername ec2.us-west-2.amazonaws.com -connect ec2.us-west-2.amazonaws.com:443 < /dev/null | awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="cert"a".crt"; print >out}' && for cert in *.crt; do newname=$(openssl x509 -noout -subject -in $cert | sed -n 's/^.*CN=\(.*\)$/\1/; s/[ ,.*]/_/g; s/__/_/g; s/^_//g;p').pem; mv $cert $newname; done
Create a bundle.pem by concatenating all the files fetched from the first command.
cat ec2_us-west-2_amazonaws_com.pem company_intermediate.pem company_root.pem >bundle.pem
Make it available in AWS_CA_BUNDLE environment variable.
export AWS_CA_BUNDLE=/Users/velayutham/work/corp-cert/bundle.pem
aws ec2 describe-instances --region us-west-2 ==> This should work fine now.
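To keep that setting across new terminal sessions, one option (assuming zsh; use ~/.bashrc for bash) is to persist the export:
echo 'export AWS_CA_BUNDLE=/Users/velayutham/work/corp-cert/bundle.pem' >> ~/.zshrc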

SSL CERTIFICATE_VERIFY_FAILED in aws cli

I installed the AWS CLI on a Windows Server 2008 32-bit machine.
aws --version
aws-cli/1.8.8 Python/2.7.9 Windows/2008Server
I configured the AWS CLI using my access keys.
When I run the command below to test AWS S3, I get this SSL error:
aws s3 ls
[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:581)
Please help to get rid of this basic error.
If you want to use SSL and not have to specify the --no-verify-ssl option, then you need to set the AWS_CA_BUNDLE environment variable. e.g from PowerShell:
setx AWS_CA_BUNDLE "C:\Users\UserX\Documents\RootCert.pem"
The PEM file is a saved copy of the root certificate for the AWS endpoint you are trying to connect to. To generate it, first export the certificate in DER format (For details on how to do this, see here). Then run the following command to convert to the PEM format:
openssl x509 -inform der -in "C:\Users\UserX\Documents\RootCert.der" -out RootCert.pem
If you are using PowerShell and not bash, you will need to install openssl first.
For a full list of environment variables supported by the AWS CLI, see here
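If you want to sanity-check the PEM before pointing the CLI at it, a quick test with openssl from Git Bash (endpoint and path are just examples) should end with "Verify return code: 0 (ok)":
openssl s_client -servername s3.amazonaws.com -connect s3.amazonaws.com:443 \
  -CAfile "C:/Users/UserX/Documents/RootCert.pem" < /dev/null 2>/dev/null \
  | grep "Verify return code"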
Use this option with your command:
--no-verify-ssl
Not sure if it's related to the OP's issue, but one of our devs had this problem this morning; it turned out he was using Fiddler (on Windows) to debug other issues. After stopping Fiddler (which was intercepting HTTPS traffic), the issue was resolved.
I had the same issue on Windows 10. It happened to be due to the AWS CLI not reading the internet proxy setting from the Windows registry. I fixed the error by setting the environment variables HTTP_PROXY and HTTPS_PROXY to the corporate internet proxy. Hope it helps somebody!
Mine was resolved with:
pip install awscli --force-reinstall --upgrade
I ran into a similar issue on macOS on a corporate network.
If you don't know the proxy URL, get it from your company's network administrator and configure it with the following commands.
Linux, macOS, or Unix
$ export HTTP_PROXY=http://proxy.example.com:1234
$ export HTTPS_PROXY=https://proxy.example.com:1234
Windows
$ set HTTP_PROXY=http://proxy.example.com:1234
$ set HTTPS_PROXY=https://proxy.example.com:1234
More information
I added the certificate to C:\Program Files\Amazon\AWSCLIV2\awscli\botocore\cacert.pem and it resolved the problem.
My issue was our company's VPN. It worked after I disconnected from the VPN.
AWS has already posted a clean solution for this; here it is:
Instead of hacking your system, the CLI now supports passing it a .pem file with the CA chain so it can communicate with your proxy:
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-troubleshooting.html#tshoot-certificate-verify-failed
To fix this, instruct the AWS CLI where to find your company's .pem file using the ca_bundle configuration file setting, the --ca-bundle command line option, or the AWS_CA_BUNDLE environment variable.
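For reference, the three forms mentioned there, side by side (the path is a placeholder):
aws configure set default.ca_bundle C:/aws/ca-bundle.pem   # persistent, via the config file
aws s3 ls --ca-bundle C:/aws/ca-bundle.pem                  # one-off, per command
export AWS_CA_BUNDLE=C:/aws/ca-bundle.pem                   # per shell session (setx on Windows)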
The problem is most likely caused by a corporate proxy. In my case I was running AWS CLI commands behind a proxy server and was getting a certificate error.
So to get around this I added the --no-verify-ssl flag. Though this is a bad idea, I used it as a temporary solution to get the job done until the issue was resolved by the network team.
I believe this option will have been tried already, but I'm putting it here for everyone's reference:
When you have a proxy configured on your EC2 machines and the instance is in a private subnet with an S3 VPC endpoint attached, you can get the same error.
Bypassing the proxy using no_proxy for the bucket as per https://aws.amazon.com/premiumsupport/knowledge-center/connect-s3-vpc-endpoint/
didn't help me; it was still failing with the same error.
The only catch here was that we needed to add the endpoint URL, s3.ap-southeast-2.amazonaws.com, as below, and it worked for me:
export NO_PROXY=169.254.169.254,s3.ap-southeast-2.amazonaws.com
169.254.169.254 is used to access instance role credentials in my case.
I had a similar issue and solved it by setting the proxy as follows:
$ set HTTP_PROXY=http://proxy.example.com:1234
$ set HTTPS_PROXY=https://proxy.example.com:1234
Linux:
$ export AWS_CA_BUNDLE="/data/ca-certs/ca-bundle.pem"
Windows:
PS C:\> setx AWS_CA_BUNDLE C:\data\ca-certs\ca-bundle.pem
$ aws s3 ls --ca-bundle "/data/ca-certs/ca-bundle.pem"
For me the EC2 instance date was incorrect; changing the date and time fixed the problem.
I simply rebooted the EC2 instance.
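A quick way to check and fix the clock without a reboot (a sketch, assuming Amazon Linux 2 with chrony):
date -u                          # compare against real UTC time; a large clock skew breaks TLS validation
sudo systemctl restart chronyd   # force an NTP re-sync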
When you use an AWS CLI command and receive a "[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed" error message, it is caused by the AWS CLI not trusting your proxy's certificate, for example because the proxy's certificate is self-signed, with your company set as the Certification Authority (CA). This prevents the AWS CLI from finding your company's CA root certificate in the local CA registry.
To fix this, instruct the AWS CLI where to find your company's .pem file using the ca_bundle configuration file setting, the --ca-bundle command line option, or the AWS_CA_BUNDLE environment variable.
Please refer to https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-troubleshooting.html#tshoot-certificate-verify-failed
aws configure set default.ca_bundle <your CA file>
I agree with the above answers; do the following:
1- Remove your CLI and install the latest CLI.
2- Check that the certificate exists: C:\Program Files\Amazon\AWSCLIV2\botocore\cacert.pem
3- If it doesn't exist, remove the CLI, go to C:\Program Files\, and remove the Amazon folder.
4- Install the latest CLI version; it should work.
5- Try testing with your VPN connected.
Use the following option to overcome the SSL certificate issue:
aws s3 ls --no-verify-ssl