AWS CloudFront OCSP stapling

I've got a problem with CloudFront and OCSP stapling.
I expect to get OCSP stapling for every request to my site hosted on S3+CloudFront.
It is required by design.
During my research, I found a similar question and this quote from the documentation:
When CloudFront receives a lot of HTTPS requests for the same domain, every server in the edge location soon has a response from the CA that it can "staple" to a packet in the SSL handshake
As I understand it:
OCSP stapling works with the default domain and does NOT work with a custom domain until the edge location receives some (unknown) number of requests;
OCSP stapling turns on automatically.
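For reference, whether a staple is actually being returned can be checked from the command line (the hostname below is a placeholder for the CloudFront domain name):
true | openssl s_client -connect example.cloudfront.net:443 -servername example.cloudfront.net -status
# a stapled handshake prints an "OCSP Response Status: successful" block; otherwise "OCSP response: no response sent"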
I see some possible but not convenient solutions here:
Create a Lambda function for the 'viewer response' event and use it to staple the response manually;
Create an EC2 instance with Nginx and use the OCSP stapling it provides (i.e., migrate away from CloudFront).
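For the Nginx option, a minimal sketch of the relevant directives (the chain path and resolver are assumptions):
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/ssl/chain.pem;   # CA chain used to validate the stapled response
resolver 8.8.8.8;                                   # so Nginx can reach the CA's OCSP responder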
Is there any easier way to force OCSP stapling for CloudFront?

Related

CORS request did not succeed - Cloudfront and ELB over HTTPS

When we access the backend API on the ELB via an HTTPS URL, it throws an error:
Cross-Origin Request Blocked: The Same Origin Policy disallows reading the remote resource at https://abcde-1xx8x3xx3x.ap-south-1.elb.amazonaws.com/api/auth/testapi. (Reason: CORS request did not succeed).
Frontend:
We are using CloudFront to deliver the frontend from S3. The CloudFront URL works and the page loads fine.
On the CloudFront side, in the Behaviors section:
Whitelisted the headers [Accept, Access-Control-Allow-Origin, Access-Control-Request-Headers, Access-Control-Request-Method, Authorization, Content-Type, Referer, X-Access-Token, X-HTTP-Method-Override, X-Requested-With]
Viewer Protocol Policy - Redirect HTTP to HTTPS
Allowed HTTP Methods - GET, HEAD, OPTIONS, PUT, POST.....
Backend:
The backend Node.js REST API is on EC2 and delivered via an internet-facing Classic Load Balancer. The EC2 instance is within a VPC. We are using the ELB URL for API testing, and the API works in Postman. Here, we have used HTTPS as the load balancer protocol and HTTP as the instance protocol. There is an SSL certificate attached to the load balancer.
On the Node.js application, we are setting headers as below:
res.setHeader('Access-Control-Allow-Origin', '*');
res.setHeader('Access-Control-Request-Method', '*');
res.setHeader('Access-Control-Allow-Methods', 'GET, POST, OPTIONS, PUT, PATCH, DELETE');
res.header('Access-Control-Allow-Headers','Content-Type, Access-Control-Allow-Headers, Authorization, X-Requested-With, Access-Control-Allow-Origin, Origin, X-HTTP-Method-Override, Accept, X-Access-Token');
res.setHeader('Access-Control-Allow-Credentials', 'true');
On the frontend code, while using the fetch API, we are setting mode: 'cors'.
Tried looking into other similar CORS issues, but nothing helped.
Can anyone suggest how to solve this CORS issue? What am I missing here? Or is there a better way to handle this in production?
This isn't actually a CORS error. It's a connectivity problem that happens to involve a CORS request.
Reason: CORS request did not succeed
The HTTP request which makes use of CORS failed because the HTTP connection failed at either the network or protocol level. The error is not directly related to CORS, but is a fundamental network error of some kind.
https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Errors/CORSDidNotSucceed
You have presumably modified the URL https://abcde-1xx8x3xx3x.ap-south-1.elb.amazonaws.com/api/auth/testapi somewhat, but note that it is impossible to use HTTPS on a URL ending with .elb.amazonaws.com. This is because it is impossible to get an SSL certificate for an amazonaws.com domain name -- that is Amazon's domain name, not your domain name. You can't get an SSL certificate for a domain you don't really control. You need your own domain, here, and an SSL certificate to match it, attached to the balancer.
You should find that by punching this API URL directly into the address bar of your browser, you also get an error. That error will not be a CORS error, and it is the actual error that you need to resolve.
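A quick way to surface that underlying error is to request the URL with curl, which won't hide the failure behind a CORS message:
curl -v https://abcde-1xx8x3xx3x.ap-south-1.elb.amazonaws.com/api/auth/testapi
# expect a TLS/certificate error (for example, a hostname mismatch), not a CORS error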
Note also that your S3 and CloudFront settings are not involved in the error you have described.

Getting 502 Bad Gateway Error in CloudFront AWS while using SSL

I'm trying to use Cloudfront for the first time.
Here is my infrastructure:
Created an alias (CNAME in OVH) from my domain name to the CloudFront domain name
I have a cloudfront distribution deployed with the following configuration (see below)
I'm using an EC2 custom origin, on which I have installed Nginx (reverse proxy) accepting traffic on port 443 (HTTPS) and redirecting traffic from port 80 (HTTP) to 443 (HTTPS)
I'm using a valid SSL certificate (until Sept 2018) that I created using Let's Encrypt and uploaded to ACM (cert.pem -> Body / privkey.pem -> Private key / fullchain.pem -> Chain)
In Nginx I'm referencing the fullchain.pem and the privkey.pem
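A minimal sketch of that Nginx TLS block, assuming the default Let's Encrypt paths (the server name and paths are placeholders):
server {
    listen 443 ssl;
    server_name domainname;
    ssl_certificate     /etc/letsencrypt/live/domainname/fullchain.pem;  # leaf cert + intermediate chain
    ssl_certificate_key /etc/letsencrypt/live/domainname/privkey.pem;
}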
Here is my CloudFront distribution configuration (screenshot omitted)
While troubleshooting I have done the following:
Used an SSL checker (https://www.sslchecker.com/sslchecker), which tells me that the certificate and the chain were found, but not the root. BTW, other online SSL checkers say that everything is OK, so I don't know if the issue is coming from here. Amazon ACM finds that the private key, the certificate and the chain are OK.
Used Openssl
openssl s_client -connect domainname:443 -servername domainname
openssl s_client -connect domainname:443
The first command is OK but the second gives me an SSL handshake error.
But CloudFront is not configured to allow non-SNI clients, so I think that is OK.
Checked my domain with https://www.ssllabs.com/ssltest/ for ciphers & protocols. Everything seems to be OK. I can give more details if necessary.
I'm still getting:
"CloudFront wasn't able to connect to the origin."
Any help please?
thank you

Redirect http:// requests to https:// on AWS API Gateway (using Custom Domains)

I'm using AWS API Gateway with a custom domain. When I try to access https://www.mydomain.com it works perfectly, but when I try http://www.mydomain.com it can't connect.
Is there a way to redirect the http -> https with the custom domain in API Gateway? If not, is there a way to get the http:// links to work just like the https:// links?
API Gateway doesn't support plain HTTP without TLS, presumably as a security feature, as well as for some practical considerations.
There is not a particularly good way to do this for APIs in general, because redirection of a POST request from HTTP to HTTPS is actually a little bit pointless -- the data has already been sent insecurely by the time the redirect is generated, unless the client has asked the server to inspect the request headers before the body is sent, with Expect: 100-continue.
You can create a CloudFront distribution, and configure it to redirect GET and HEAD requests from HTTP to HTTPS... but if you send a POST request to such a distribution, CloudFront doesn't redirect -- it just throws an error, since (as noted) such a redirection would be more harmful than helpful.
However... if your application is GET-based, then it's pretty straightforward: first, deploy your API with a Regional (not Edge-Optimized) API endpoint with a system-assigned hostname, not a custom domain.
Then, create a CloudFront distribution that uses this regional API endpoint as its origin server, and configure the CloudFront distribution's behavior to redirect HTTP to HTTPS. Associate your custom domain name with the CloudFront distribution, rather than with API Gateway directly.
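A heavily abridged sketch of the relevant pieces of the distribution config (the kind of JSON you would pass to aws cloudfront create-distribution --distribution-config); the regional endpoint, stage, and domain name are placeholders, and other required fields are omitted:
"Aliases": { "Quantity": 1, "Items": [ "www.mydomain.com" ] },
"Origins": { "Quantity": 1, "Items": [ {
    "Id": "api-regional-endpoint",
    "DomainName": "abc123.execute-api.us-east-1.amazonaws.com",
    "OriginPath": "/prod",
    "CustomOriginConfig": { "HTTPPort": 80, "HTTPSPort": 443, "OriginProtocolPolicy": "https-only" }
} ] },
"DefaultCacheBehavior": { "TargetOriginId": "api-regional-endpoint", "ViewerProtocolPolicy": "redirect-to-https" }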

CloudFront wasn't able to connect to the origin

I had set up CloudFront correctly over HTTP. It fetched data from my website (dev.pie.video) fine. I'm now moving to HTTPS. Things are working fine at https://dev.pie.video but CloudFront is unable to serve any content.
For instance https://dev.pie.video/favicon-96x96.png works but https://d1mbpc40mdbs3p.cloudfront.net/favicon-96x96.png fails with status 502, even though my Cloudfront distribution d1mbpc40mdbs3p points to dev.pie.video.
More details if that's helpful:
d1mbpc40mdbs3p.cloudfront.net uses the default CloudFront Certificate for https
the cloudfront distribution's origin is set to work over SSL and TLS, and to use the viewer's protocol.
===== Edit 1 =====
Screenshots of the CloudFront settings (General, Origin, Behaviors) omitted.
==== Edit 2 ====
If that's helpful, the logs I'm getting from CloudFront look like:
<timestamp> SFO20 924 96.90.217.130 GET d1mbpc40mdbs3p.cloudfront.net /favicon-96x96.png 502 - <someInfoOnTheClientBrowser> 2 - Error poZyhl63JNGFk8dIIjCluGDm4dxF8EdMZFhjg82NgHGPNqcmx6ArHA== d1mbpc40mdbs3p.cloudfront.net https 494 0.002 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Error HTTP/1.1
Your origin server is incorrectly configured for SSL. CloudFront requires a valid configuration, and may be more stringent than some browsers -- so a green lock in the browser doesn't necessarily mean your SSL setup is complete and universally compatible with all clients.
$ true | openssl s_client -connect dev.pie.video:443 -showcerts
CONNECTED(00000003)
depth=0 OU = Domain Control Validated, CN = dev.pie.video
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 OU = Domain Control Validated, CN = dev.pie.video
verify error:num=27:certificate not trusted
verify return:1
depth=0 OU = Domain Control Validated, CN = dev.pie.video
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
0 s:/OU=Domain Control Validated/CN=dev.pie.video
i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
-----BEGIN CERTIFICATE-----
MIIFMzCCBBugAwIBAgIJAL96wtFpu1ZpMA0GCSqGSIb3DQEBCwUAMIG0MQswCQYD
VQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTETMBEGA1UEBxMKU2NvdHRzZGFsZTEa
MBgGA1UEChMRR29EYWRkeS5jb20sIEluYy4xLTArBgNVBAsTJGh0dHA6Ly9jZXJ0
cy5nb2RhZGR5LmNvbS9yZXBvc2l0b3J5LzEzMDEGA1UEAxMqR28gRGFkZHkgU2Vj
dXJlIENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTE2MDgwODE4MzQ0MFoX
DTE3MDgwODE4MzQ0MFowOzEhMB8GA1UECxMYRG9tYWluIENvbnRyb2wgVmFsaWRh
dGVkMRYwFAYDVQQDEw1kZXYucGllLnZpZGVvMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEAz/wT5j/zHKzmt3oRvst74Knqxc0pl3sp5imUJ7UegoxcTISm
xJC5qQiDsD0U08kAFxvXDd91jlozh4QDcfLE8N7X9fsxC7OW2pDv3ks/LO7tiCxn
gNmxjvYvOQ/vASrLHIal+oGWJNdBMB1eckV4xHCeBDDEizDneq/qvjN0M0k5hQ+/
qk7RjVhJUmFAfvhXpxXaCbVDq1d3V1iRBo3oP3SGV++bj/m55QPFfKCZqGPTiM5G
c9+8ru16EVCpvs0wCWBVxjTiOCGtrMLgvp9LOs8AN369Yk/3AynpgAI0DDhb5y8I
KEuCdbUaIg5Zo029iZz4nWRsZFd5CSwgX8tZNQIDAQABo4IBvjCCAbowDAYDVR0T
AQH/BAIwADAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDgYDVR0PAQH/
BAQDAgWgMDcGA1UdHwQwMC4wLKAqoCiGJmh0dHA6Ly9jcmwuZ29kYWRkeS5jb20v
Z2RpZzJzMS0yODIuY3JsMF0GA1UdIARWMFQwSAYLYIZIAYb9bQEHFwEwOTA3Bggr
BgEFBQcCARYraHR0cDovL2NlcnRpZmljYXRlcy5nb2RhZGR5LmNvbS9yZXBvc2l0
b3J5LzAIBgZngQwBAgEwdgYIKwYBBQUHAQEEajBoMCQGCCsGAQUFBzABhhhodHRw
Oi8vb2NzcC5nb2RhZGR5LmNvbS8wQAYIKwYBBQUHMAKGNGh0dHA6Ly9jZXJ0aWZp
Y2F0ZXMuZ29kYWRkeS5jb20vcmVwb3NpdG9yeS9nZGlnMi5jcnQwHwYDVR0jBBgw
FoAUQMK9J47MNIMwojPX+2yz8LQsgM4wKwYDVR0RBCQwIoINZGV2LnBpZS52aWRl
b4IRd3d3LmRldi5waWUudmlkZW8wHQYDVR0OBBYEFEPW+uDOOtZfUEdXuBs+960C
zQRKMA0GCSqGSIb3DQEBCwUAA4IBAQBLkLYJEc9E+IGv6pXaPCcYowJfji651Ju6
3DNzGXdyWfOXG+UVCMtPZuC9J66dID4Rc7HWzLveTPEI32z4IgtSjvRwRk9YyWVx
uCOpsP3e/Vgriwg5ds4NyrelQfshA3KaiTLohuiVEOBZgZgIwBEmwR2ZNFuL375E
uEn909zF9+sGkTbFnMm1zlqB2oh2UlSkUT3mj009vWF416W6kZQdFFFEmaI8uSmo
+Thd8HSxQytzWvB3dR4lCteiC09lkQPHU5t10tPgK9BtkLv05ICQQoDhFJmLeAcC
WNEmCcDnSHPxXjPi8kcyM6aqNofL1D0e1pYYvcpYQQDayWdY3tUh
-----END CERTIFICATE-----
---
Server certificate
subject=/OU=Domain Control Validated/CN=dev.pie.video
issuer=/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
---
No client certificate CA names sent
---
SSL handshake has read 2010 bytes and written 431 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
...clipped...
Your certificate is signed by "Go Daddy Secure Certificate Authority - G2" which is an intermediate certificate (not a root), and you don't have that intermediate certificate installed on your server -- so CloudFront reports that it is "unable" to connect, when in fact it is more accurately "unwilling" to connect, as a security precaution, because it can't verify the validity of your SSL certificate. You should see these as SSL negotiation failures in your web server's log. The connection itself is working, but CloudFront considers it invalid, and therefore unsafe to use, due to the trust issue.
Caution
If the origin server returns an expired certificate, an invalid certificate or a self-signed certificate, or if the origin server returns the certificate chain in the wrong order, CloudFront drops the TCP connection, returns HTTP error code 502, and sets the X-Cache header to Error from cloudfront.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html
Add your intermediate certificate to your server configuration, and you should be set. This should have been bundled with the cert when you downloaded it, but if not, it can be obtained from your CA, Go Daddy in this case.
This is not a limitation specific to Go Daddy certificates. All CAs that follow standard practice use intermediate certificates to establish a chain of trust back to a trusted root.
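A sketch of the usual fix, assuming the Go Daddy intermediate bundle has been downloaded as gd_bundle-g2.crt (the file names and paths are assumptions):
cat dev.pie.video.crt gd_bundle-g2.crt > dev.pie.video.chained.crt
# then reference the combined file in the web server, e.g. in Nginx:
#   ssl_certificate     /etc/ssl/dev.pie.video.chained.crt;
#   ssl_certificate_key /etc/ssl/dev.pie.video.key;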
See also:
https://www.godaddy.com/help/what-is-an-intermediate-certificate-868
https://certs.godaddy.com/repository
In case it helps (I am new to Lightsail):
I had a similar issue, when creating a Lightsail Distribution.
TL;DR: try setting the Origin protocol policy to HTTP (since your origin is indeed only able to serve HTTP unless you also add the SSL cert there).
DETAIL
I followed the documentation, in particular https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-creating-content-delivery-network-distribution#distribution-origin-protocol-policy
I created:
Lightsail instance (PHP bitnami image)
configured Distribution for a dynamic site, and to use HTTPS, by creating a SSL cert
created a DNS zone
configured domain to point to the nameservers of that DNS zone
configured A + CNAME records in the DNS zone to point to the distribution
error: browser shows 502 error page
The problem I had was that "Origin protocol policy" was set to HTTPS only, although the Lightsail instance could only serve up HTTP.
I changed "Original protocol policy" to HTTP and then the page serves OK (as HTTPS).
It seems that SSL cert and HTTPS can be handled entirely by the Distribution, and do not need to be configured on the Instance (provided you set "Origin protocol policy" to HTTP).
So a crude high level picture, looks like:
browser <-- https --> Distribution <-- http --> Instance
Of course, the downside is that my Lightsail instance is serving pages as HTTP, to anyone who knows its static IP address...
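For what it's worth, I believe the same change can be made from the CLI (the distribution name, instance name and region here are placeholders):
aws lightsail update-distribution \
    --distribution-name my-distribution \
    --origin name=my-php-instance,regionName=eu-west-1,protocolPolicy=http-only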
I got the same issue. I did the steps below:
Looked at the ALB Listeners tab and checked the listener for port 443.
There were two certs, of which one was expired; the ALB was pointing to the newer one, but we were still getting the 502 error.
AWS support suggested removing the expired cert from the 443 listener.
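If you prefer the CLI to the console, the expired certificate can (I believe) be detached from the listener like this; both ARNs are placeholders:
aws elbv2 remove-listener-certificates \
    --listener-arn "$LISTENER_ARN" \
    --certificates CertificateArn="$EXPIRED_CERT_ARN"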
Thanks
Santosh Garole
I had a similar issue that I fixed by not selecting the website endpoint when selecting the origin, even though it is prompted.
In my case even the CloudFront SSL certificate was not working; however, I was able to connect through the website endpoint without CloudFront.
Also, I needed to set the default root object to index.html in order to get it working.
I've had this issue when using CloudFront (Amazon) on top of CloudFlare (different company). They surely have their https certificates correct?
Didn't get to the bottom of it, and I just switched back to HTTP for the origin. It was just images for a stupid eBay store and I was really only using CloudFront to obfuscate the domain underneath (because people steal image URLs on eBay).
I added a query string parameter: ?a=1 worked, ?a=2 failed, ?a=3 worked, ?a=4 worked and ?a=8 failed again, so there was something funky going on.
Still not sure what it was, but invalidation didn't fix it; nor would I have expected it to, since I pass through query strings and changing a did not always make it work.
If you get the problem, try adding a nonsense parameter, incrementing it several times, and observing the results.
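A quick way to run that experiment from the shell, using the hostname and path from the question above:
for i in 1 2 3 4 5 6 7 8; do
    curl -s -o /dev/null -w "a=$i -> %{http_code}\n" "https://d1mbpc40mdbs3p.cloudfront.net/favicon-96x96.png?a=$i"
done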

Haproxy to authorize traffic from AWS API Gateway

I have set up a basic API using AWS API Gateway and I would like to link my endpoints to a service I have running on an EC2 instance (using the "HTTP Proxy" integration type). I have read that in order to lock down my EC2 server so it only accepts traffic from API Gateway, I basically have one of two options:
Put the EC2 instance inside a VPC and use Lambda functions (instead of the HTTP proxy) that have VPC permissions to act as a "pass-through" for the API requests;
Create a Client Certificate within API Gateway, make my backend requests using that cert, and verify the cert on the EC2 instance.
I would like to employ a variation of #2 and, instead of verifying the cert on the EC2 service instance itself, do that verification on another instance running HAProxy. I have set up a second EC2 instance with HAProxy and have it pointed at my other instance as the backend. I have locked down my service instance so it will only take requests from the HAProxy instance. That is all working. What I have been struggling to figure out is how to verify the API Gateway client certificate (that I have generated) on the HAProxy machine. I have done tons of googling and there is surprisingly zero information on how to do this exact thing. A couple of questions:
Everything I have read seems to suggest that I need to generate SSL server certs on my HAProxy machine and use those in the config. Do I have to do this, or can I verify the AWS client cert without generating any additional certs?
The reading I have done suggests I would need to generate a CA and then use that CA to generate both the server and client certs. If I do in fact need to generate server certs (on the HAProxy machine), how can I generate them if I don't have access to the CA that Amazon used to create the gateway client cert? I only have access to the client cert itself, from what I can tell.
Any help here?
SOLUTION UPDATE
First, I had to upgrade my version of HAProxy to v1.5.14 so I could get the SSL capabilities.
I originally attempted to generate an official cert with Let's Encrypt. While I was able to get the API Gateway working with this cert, I was not able to generate a Let's Encrypt cert on the HAProxy machine that the API Gateway would accept. The issue surfaced as an "Internal server error" response from the API Gateway and as "General SSLEngine problem" in the detailed CloudWatch logs.
I then purchased a wildcard certificate from Gandi and tried this on the HAProxy machine, but initially ran into the exact same problem. However, I was able to determine that the structure of my SSL cert was not what the API Gateway wanted. I googled and found the Gandi chain here:
https://www.gandi.net/static/CAs/GandiStandardSSLCA2.pem
Then I structured my SSL file as follows:
-----BEGIN PRIVATE KEY-----
# private key I generated locally...
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
# cert from gandi...
-----END CERTIFICATE-----
# two certs from file in the above link
I saved out this new PEM file (as haproxy.pem) and used it in my HAProxy frontend bind statement, like so:
bind :443 ssl crt haproxy.pem verify required ca-file api-gw-cert.pem
The api-gw-cert.pem in the above bind statement is a file that contains a client cert that I generated in the API gateway console. Now, the HAproxy machine properly blocks any traffic coming from anywhere but the gateway.
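To confirm the lock-down from the outside, a handshake attempted without API Gateway's private key should now be rejected (the hostname is a placeholder):
true | openssl s_client -connect haproxy-host:443
# expect the handshake to fail (e.g. an "ssl handshake failure" alert), since no client
# certificate matching api-gw-cert.pem is presented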
The reading I have done suggests I would need to generate a CA and then use that CA to generate both the server and client certs.
That's one way to do it, but it is not applicable in this case.
Your HAProxy needs to be configured with a working SSL certificate signed by a trusted CA -- not the one that signed the client certificate, and not one you create. It needs to be a certificate signed by a public, trusted CA whose root certificates are in the trust store of the back-end systems at API Gateway... which should be essentially the same as what your web browser trusts, but may be a subset.
Just as your web browser will not speak SSL to a server sporting a self-signed certificate without throwing a warning that you have to bypass, the back-end of API Gateway won't negotiate with an untrusted certificate (and there's no bypass).
Suffice it to say, you need to get API Gateway talking to your HAProxy over TLS before trying to get it to use a client cert, because otherwise you are introducing too many unknowns. Note also that you can't use an AWS Certificate Manager (ACM) cert for this, because those certs only work with CloudFront and ELB, neither of which will support client certs directly.
Once the HAProxy is working with API Gateway, you need then to configure it to authenticate the client.
You need ssl and verify required in your bind statement, but you can't verify an SSL client cert without something to verify it against.
I only have access to the client cert itself, from what I can tell.
And that's all you need.
bind ... ssl ... verify required ca-file /etc/haproxy/api-gw-cert.pem.
SSL certs are essentially a trust hierarchy. The trust at the top of the tree is explicit. Normally, the CA is explicitly trusted and anything it has signed is implicitly trusted. The CA "vouches for" the certificates it signs... and for certificates it signs with the CA attribute set, which can also sign certificates under them, extending that implicit trust.
In this case, though, you simply put the client certificate in as the CA file, and then the client certificate "vouches for"... itself. A client presenting the identical certificate is trusted, and anybody else is disconnected. Having just the certificate is not enough for a client to talk to your proxy, of course -- the client also needs the matching private key, which API Gateway has.
So, consider this two separate requirements. Get API Gateway talking to your proxy over TLS first... and after that, authenticating against the client certificate is actually the easier part.
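Putting the two pieces together, a minimal sketch of what the frontend might look like (the file paths, names, and backend are assumptions):
frontend api_gateway_in
    # haproxy.pem: publicly trusted server cert + key, so API Gateway will negotiate TLS at all
    # api-gw-cert.pem: the client certificate generated in the API Gateway console
    bind :443 ssl crt /etc/haproxy/haproxy.pem verify required ca-file /etc/haproxy/api-gw-cert.pem
    default_backend app_servers

backend app_servers
    server app1 10.0.0.10:80 check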
I think you are mixing up server certs and client certs. In this instance, API Gateway is the client and HAProxy is the server. You want HAProxy to verify the client cert sent by API Gateway. API Gateway will generate the certificate for you; you just need to configure HAProxy to verify that that certificate is present in every request it processes.
I'm guessing you might be looking at this tutorial where they are telling you to generate the client cert, and then configure HAProxy to verify that cert. The "generate the cert" part of that tutorial can be skipped since API Gateway is generating the cert for you.
You just need to click the "Generate" button in API Gateway, then copy/paste the contents of the certificate it presents to you and save that as a .pem file on the HAProxy server. Now, I'm not a big HAProxy user, but taking the example from that tutorial, your HAProxy config would look something like this (note that verify required needs a ca-file to verify against):
bind 192.168.10.1:443 ssl crt ./server.pem verify required ca-file ./api-gw-cert.pem