Getting 502 Bad Gateway Error in CloudFront AWS while using SSL - amazon-web-services

I'm trying to use Cloudfront for the first time.
Here is my infrastructure:
Created an alias (CNAME in OVH) from my domain name to the CloudFront domain name
I have a CloudFront distribution deployed with the following configuration (see below)
I'm using an EC2 custom origin, on which I have installed Nginx (reverse proxy) accepting traffic on port 443 (HTTPS) and redirecting traffic from port 80 (HTTP) to 443 (HTTPS)
I'm using a valid SSL certificate (until Sept 2018) that I created using letsencrypt and uploaded to ACM (cert.pem -> Body / privkey.pem -> Private key / fullchain.pem -> Chain)
In Nginx I'm referencing the fullchain.pem and the privkey.pem
Here is my configuration
While troubleshooting I have done the following:
Used an SSL checker (https://www.sslchecker.com/sslchecker), which tells me that the certificate and the chain were found but not the root. BTW, other online SSL checkers say that everything is OK. I don't know if the issue is coming from here. Amazon ACM finds that the private key, the certificate and the chain are OK.
Used OpenSSL:
openssl s_client -connect domainname:443 -servername domainname
openssl s_client -connect domainname:443
The first command is OK but the second gives me an SSL handshake error.
However, CloudFront is not configured to allow non-SNI clients, so I think that is expected.
Checked my domain with https://www.ssllabs.com/ssltest/ for ciphers & protocols. Everything seems to be OK. I can give more details if necessary.
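A related check that shows exactly what CloudFront sees from the origin is pulling the full served chain with -showcerts (domainname is the same placeholder as in the commands above):
echo | openssl s_client -connect domainname:443 -servername domainname -showcerts 2>/dev/null | grep -c 'BEGIN CERTIFICATE'   # count of certificates served by the origin
With fullchain.pem configured in Nginx the count should be at least 2 (leaf plus intermediate); a count of 1 means only the bare certificate is being served.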
I'm still getting:
"CloudFront wasn't able to connect to the origin."
Any help please?
thank you

Related

Amazon S3: Static web hosting with HTTPS / SSL

I have an AWS S3 bucket configured for web hosting, with a custom domain name pointing to it.
I can retrieve files OK under my domain name using HTTP, but I'm having trouble getting it to work under HTTPS. I've followed all the steps below. Grateful for any help.
In my CloudFront distribution, which points at my S3 bucket, I have a Custom SSL Certificate set with the same domain name. The certificate is issued through AWS Certificate Manager.
In the browser e.g. Chrome I get:
This site can’t be reached i.removed.the.domain.name took too long to respond.
Try:
Checking the connection
Checking the proxy and the firewall
ERR_CONNECTION_TIMED_OUT
It's not a DNS lookup error -- the non-secure website works fine -- and it's not a certificate error. Do I need to do something else to enable the https version?

Getting SSL error while connecting to second-level subdomain using AWS load balancer

I am using an AWS load balancer to listen for dev.example.com and api.dev.example.com. I have added Amazon-managed certificates in the listener for both subdomains. I can connect to dev.example.com successfully, but for api.dev.example.com I am getting an SSL error. I am using the AWS default security policy (ELBSecurityPolicy-2016-08). I ran sslscan for the api.dev subdomain and got the following output:
TLS Fallback SCSV:
Connection failed - unable to determine TLS Fallback SCSV support
TLS renegotiation:
Session renegotiation not supported
TLS Compression:
OpenSSL version does not support compression
Rebuild with zlib1g-dev package for zlib support
Heartbleed:
Supported Server Cipher(s):
Unable to parse certificate
Unable to parse certificate
Unable to parse certificate
Unable to parse certificate
Certificate information cannot be retrieved.
Why is sslscan failing for the api.dev subdomain while it succeeds for the dev subdomain? How can I resolve this?
Second-level subdomains have to be listed in the SSL certificate. If you have a *.example.com wildcard certificate, the wildcard is only valid for one level. You would also need to add wildcards for the other levels, like *.dev.example.com.
This is not a limitation of AWS, it is a limitation of SSL certificates.
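A quick way to see which names the presented certificate actually covers is to dump its subject alternative names (the hostname is the one from the question):
echo | openssl s_client -connect api.dev.example.com:443 -servername api.dev.example.com 2>/dev/null | openssl x509 -noout -text | grep -A1 'Subject Alternative Name'
A *.example.com certificate will list DNS:*.example.com here, which matches dev.example.com but not api.dev.example.com.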

How to properly use Google Cloud IoT Core over HTTPS regarding to SSL certificates?

I use the Cloud IoT MQTT bridge via HTTPS as described here. I created a public/private key pair to sign the JWT token. Everything is OK except one thing.
As you can see, the origin for the service is https://cloudiotdevice.googleapis.com. This domain resolves to different addresses, and the client fails to verify some of these hosts' certificates, so some requests fail.
It's also easy to use any browser to see the issue (certificate verification failure). Just open the above link: the server will respond with 404, but the browser still shows whether or not the certificate is OK.
I collected a bunch (seven) of IP addresses and downloaded the certificates.
It is said here that if there are TLS issues it may be necessary to install root CA certs. The repo of certs is here. I downloaded the first three of these CA certs and tried different ways to install them on the system (update-ca-certificates on Ubuntu, trust anchor on Arch). I tried installing them in the browser (Chrome) as Authority Certs to see whether it stops complaining. I tried to verify the server certificates via openssl, e.g.
openssl verify -untrusted ca-bundle cert.pem
But I couldn't get it verified. What have I missed?
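A more direct end-to-end check of the served chain against the downloaded roots (assuming the Google CA bundle is saved locally as roots.pem) would be:
echo | openssl s_client -connect cloudiotdevice.googleapis.com:443 -servername cloudiotdevice.googleapis.com -CAfile roots.pem   # roots.pem is the downloaded CA bundle
A "Verify return code: 0 (ok)" line at the end means that host's chain validates against the bundle; repeating the command against each resolved IP shows which hosts fail.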

CloudFront wasn't able to connect to the origin

I had set up CloudFront correctly over HTTP. It fetched data from my website (dev.pie.video) fine. I'm now moving to HTTPS. Things are working fine at https://dev.pie.video but CloudFront is unable to serve any content.
For instance https://dev.pie.video/favicon-96x96.png works but https://d1mbpc40mdbs3p.cloudfront.net/favicon-96x96.png fails with status 502, even though my Cloudfront distribution d1mbpc40mdbs3p points to dev.pie.video.
More details if that's helpful:
d1mbpc40mdbs3p.cloudfront.net uses the default CloudFront Certificate for https
the cloudfront distribution's origin is set to work over SSL and TLS, and to use the viewer's protocol.
===== Edit 1 =====
Screenshots of the CloudFront settings: General, Origin, Behaviors.
==== Edit 2 ====
If that's helpful, the logs I'm getting from CloudFront look like:
<timestamp> SFO20 924 96.90.217.130 GET d1mbpc40mdbs3p.cloudfront.net /favicon-96x96.png 502 - <someInfoOnTheClientBrowser> 2 - Error poZyhl63JNGFk8dIIjCluGDm4dxF8EdMZFhjg82NgHGPNqcmx6ArHA== d1mbpc40mdbs3p.cloudfront.net https 494 0.002 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Error HTTP/1.1
Your origin server is incorrectly configured for SSL. CloudFront requires a valid configuration, and may be more stringent than some browsers -- so a green lock in the browser doesn't necessarily mean your SSL setup is complete and universally compatible with all clients.
$ true | openssl s_client -connect dev.pie.video:443 -showcerts
CONNECTED(00000003)
depth=0 OU = Domain Control Validated, CN = dev.pie.video
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 OU = Domain Control Validated, CN = dev.pie.video
verify error:num=27:certificate not trusted
verify return:1
depth=0 OU = Domain Control Validated, CN = dev.pie.video
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
0 s:/OU=Domain Control Validated/CN=dev.pie.video
i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
-----BEGIN CERTIFICATE-----
MIIFMzCCBBugAwIBAgIJAL96wtFpu1ZpMA0GCSqGSIb3DQEBCwUAMIG0MQswCQYD
VQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTETMBEGA1UEBxMKU2NvdHRzZGFsZTEa
MBgGA1UEChMRR29EYWRkeS5jb20sIEluYy4xLTArBgNVBAsTJGh0dHA6Ly9jZXJ0
cy5nb2RhZGR5LmNvbS9yZXBvc2l0b3J5LzEzMDEGA1UEAxMqR28gRGFkZHkgU2Vj
dXJlIENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTE2MDgwODE4MzQ0MFoX
DTE3MDgwODE4MzQ0MFowOzEhMB8GA1UECxMYRG9tYWluIENvbnRyb2wgVmFsaWRh
dGVkMRYwFAYDVQQDEw1kZXYucGllLnZpZGVvMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEAz/wT5j/zHKzmt3oRvst74Knqxc0pl3sp5imUJ7UegoxcTISm
xJC5qQiDsD0U08kAFxvXDd91jlozh4QDcfLE8N7X9fsxC7OW2pDv3ks/LO7tiCxn
gNmxjvYvOQ/vASrLHIal+oGWJNdBMB1eckV4xHCeBDDEizDneq/qvjN0M0k5hQ+/
qk7RjVhJUmFAfvhXpxXaCbVDq1d3V1iRBo3oP3SGV++bj/m55QPFfKCZqGPTiM5G
c9+8ru16EVCpvs0wCWBVxjTiOCGtrMLgvp9LOs8AN369Yk/3AynpgAI0DDhb5y8I
KEuCdbUaIg5Zo029iZz4nWRsZFd5CSwgX8tZNQIDAQABo4IBvjCCAbowDAYDVR0T
AQH/BAIwADAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDgYDVR0PAQH/
BAQDAgWgMDcGA1UdHwQwMC4wLKAqoCiGJmh0dHA6Ly9jcmwuZ29kYWRkeS5jb20v
Z2RpZzJzMS0yODIuY3JsMF0GA1UdIARWMFQwSAYLYIZIAYb9bQEHFwEwOTA3Bggr
BgEFBQcCARYraHR0cDovL2NlcnRpZmljYXRlcy5nb2RhZGR5LmNvbS9yZXBvc2l0
b3J5LzAIBgZngQwBAgEwdgYIKwYBBQUHAQEEajBoMCQGCCsGAQUFBzABhhhodHRw
Oi8vb2NzcC5nb2RhZGR5LmNvbS8wQAYIKwYBBQUHMAKGNGh0dHA6Ly9jZXJ0aWZp
Y2F0ZXMuZ29kYWRkeS5jb20vcmVwb3NpdG9yeS9nZGlnMi5jcnQwHwYDVR0jBBgw
FoAUQMK9J47MNIMwojPX+2yz8LQsgM4wKwYDVR0RBCQwIoINZGV2LnBpZS52aWRl
b4IRd3d3LmRldi5waWUudmlkZW8wHQYDVR0OBBYEFEPW+uDOOtZfUEdXuBs+960C
zQRKMA0GCSqGSIb3DQEBCwUAA4IBAQBLkLYJEc9E+IGv6pXaPCcYowJfji651Ju6
3DNzGXdyWfOXG+UVCMtPZuC9J66dID4Rc7HWzLveTPEI32z4IgtSjvRwRk9YyWVx
uCOpsP3e/Vgriwg5ds4NyrelQfshA3KaiTLohuiVEOBZgZgIwBEmwR2ZNFuL375E
uEn909zF9+sGkTbFnMm1zlqB2oh2UlSkUT3mj009vWF416W6kZQdFFFEmaI8uSmo
+Thd8HSxQytzWvB3dR4lCteiC09lkQPHU5t10tPgK9BtkLv05ICQQoDhFJmLeAcC
WNEmCcDnSHPxXjPi8kcyM6aqNofL1D0e1pYYvcpYQQDayWdY3tUh
-----END CERTIFICATE-----
---
Server certificate
subject=/OU=Domain Control Validated/CN=dev.pie.video
issuer=/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
---
No client certificate CA names sent
---
SSL handshake has read 2010 bytes and written 431 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
...clipped...
Your certificate is signed by "Go Daddy Secure Certificate Authority - G2" which is an intermediate certificate (not a root), and you don't have that intermediate certificate installed on your server -- so CloudFront reports that it is "unable" to connect, when in fact it is more accurately "unwilling" to connect, as a security precaution, because it can't verify the validity of your SSL certificate. You should see these as SSL negotiation failures in your web server's log. The connection itself is working, but CloudFront considers it invalid, and therefore unsafe to use, due to the trust issue.
Caution
If the origin server returns an expired certificate, an invalid certificate or a self-signed certificate, or if the origin server returns the certificate chain in the wrong order, CloudFront drops the TCP connection, returns HTTP error code 502, and sets the X-Cache header to Error from cloudfront.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html
Add your intermediate certificate to your server configuration, and you should be set. This should have been bundled with the cert when you downloaded it, but if not, it can be obtained from your CA, Go Daddy in this case.
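In practice the fix is usually just concatenating the leaf certificate and the CA's intermediate bundle and pointing the web server at the result. A rough sketch, with illustrative file names:
cat dev.pie.video.crt gd_bundle.crt > fullchain.crt   # leaf certificate first, then the intermediate bundle
Reload the web server, then re-test what it serves:
true | openssl s_client -connect dev.pie.video:443 -showcerts | grep -c 'BEGIN CERTIFICATE'   # should now be 2 or more instead of 1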
This is not a limitation specific to Go Daddy certificates. All CAs that follow standard practice use intermediate certificates to establish a chain of trust back to a trusted root.
See also:
https://www.godaddy.com/help/what-is-an-intermediate-certificate-868
https://certs.godaddy.com/repository
In case it helps (I am new to Lightsail):
I had a similar issue when creating a Lightsail Distribution.
TL;DR: try setting the Origin protocol policy to HTTP (since the origin is only able to serve HTTP unless you also install the SSL cert there).
DETAIL
I followed the documentation, in particular https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-creating-content-delivery-network-distribution#distribution-origin-protocol-policy
I created:
Lightsail instance (PHP bitnami image)
configured the Distribution for a dynamic site and to use HTTPS, by creating an SSL cert
created a DNS zone
configured the domain to point to the nameservers of that DNS zone
configured A + CNAME records in the DNS zone to point to the distribution
error: browser shows 502 error page
The problem I had was that "Origin protocol policy" was set to HTTPS only, although the Lightsail instance could only serve up HTTP.
I changed "Original protocol policy" to HTTP and then the page serves OK (as HTTPS).
It seems that SSL cert and HTTPS can be handled entirely by the Distribution, and do not need to be configured on the Instance (provided you set "Origin protocol policy" to HTTP).
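The same change can be made from the CLI with something along these lines (the distribution and instance names are placeholders; check aws lightsail update-distribution help for the exact parameter shape):
aws lightsail update-distribution --distribution-name my-distribution --origin name=my-php-instance,regionName=eu-west-1,protocolPolicy=http-only   # placeholder names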
So a crude high level picture, looks like:
browser <-- https --> Distribution <-- http --> Instance
Of course, the downside is that my Lightsail instance is serving pages as HTTP, to anyone who knows its static IP address...
I got the same issue. I took the steps below:
Looked at the ALB Listeners tab and checked the listener for port 443.
There were two certs, one of which had expired; the ALB was pointing to the newer one, but we were still getting the 502 error.
AWS support suggested removing the expired certificate from the 443 listener.
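For reference, the certificates attached to a listener can also be inspected and pruned from the CLI (the ARNs below are placeholders):
aws elbv2 describe-listener-certificates --listener-arn <listener-arn>
aws elbv2 remove-listener-certificates --listener-arn <listener-arn> --certificates CertificateArn=<expired-cert-arn>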
Thanks
Santosh Garole
I had a similar issue, which I fixed by not selecting the website endpoint when choosing the origin, even though the console prompts you to use it. The S3 website endpoint only speaks plain HTTP, so CloudFront cannot reach it over SSL; the regular REST endpoint (bucket-name.s3.amazonaws.com) does support HTTPS.
In my case even the CloudFront SSL certificate was not working; however, I was able to connect through the website endpoint directly, without CloudFront.
Also, I needed to set the default root object to index.html in order to get it working.
I've had this issue when using CloudFront (Amazon) on top of CloudFlare (different company). They surely have their https certificates correct?
Didn't get to the bottom of it and I just switched back to http for the origin. It was just images for a stupid ebay store and I was really only using CloudFront to obfuscate the domain underneath (because people steal image URLs on ebay).
I added a query string parameter ?a=1 and it worked, ?a=2 failed, ?a=3 worked, ?a=4 worked and ?a=8 failed again. So there was something funky going on with either CloudFront's
Still not sure what was going on, but invalidation didn't fix it, nor would I have expected it to, since I pass through query strings and changing a did not always make it work.
If you get this problem, try adding a nonsense parameter, incrementing it several times, and observing the results.
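A minimal way to run that experiment, assuming a test object on the distribution (the URL is a placeholder):
for i in 1 2 3 4 5 6 7 8; do curl -s -o /dev/null -w "a=$i -> %{http_code}\n" "https://dxxxxxxxxxxxx.cloudfront.net/image.jpg?a=$i"; done
A mix of 200s and 502s across otherwise identical requests points at connection-level behaviour rather than the object itself.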

AWS ACM wildcard ssl certificate not working on domain

I created an SSL certificate for my site using AWS Certificate Manager. The certificate is for *.example.com. I then attached this certificate to my ELB and left the instance protocol as HTTP, so SSL is only used between the client and the ELB.
I have two A records in Route 53, one for example.com and one for www.example.com. Both of these are aliased to the ELB. When I go to https://www.example.com it works perfectly. But when I go to https://example.com I get the following error in Firefox:
"example.com uses an invalid security certificate. The certificate is only valid for *.example.com Error code: SSL_ERROR_BAD_CERT_DOMAIN"
Shouldn't the certificate *.example.com work for the address example.com? Am I missing something?
EDIT May 31, 2016
Thank you to Steffen Ullrich for setting me on the right track. The problem is that when using AWS Certificate Manager (ACM) in the console (web browser), there is no option to add the alternative names. For those having the same problem, you need to use the CLI (command line interface). A quick web search for "Install AWS CLI" will give you all the information you need to complete the installation. Once the CLI is installed, you can run the ACM commands. Here is a link to the documentation:
http://docs.aws.amazon.com/cli/latest/reference/acm/request-certificate.html
The command I used was:
aws acm request-certificate --domain-name www.example.com --subject-alternative-names example.com
Once the request was approved I was able to see the SSL certificate in the ACM web interface. I installed it and everything is working like a charm now!
A certificate for *.example.com matches whatever.example.com but not example.com only. This is because the * must match a label and example.com has no label in place of the *. If you want to match both whatever.example.com and example.com you need to create a certificate which has as subject alternative names both *.example.com and example.com.
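With the CLI, a request that covers both the apex and one wildcard level looks like this (using the example domain from the question):
aws acm request-certificate --domain-name example.com --subject-alternative-names "*.example.com"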
When requesting a new certificate via the console, you can now add both *.domain.com and www.domain.com: before hitting Next, make sure you use the additional box to add the other domain name to the certificate.