HAProxy with multiple CA configuration - amazon-web-services

SSL newbie here, using HAProxy 1.8. I have two AWS API Gateways pointing to the same proxy server, and two client certificates generated by API Gateway itself, one assigned to each gateway.
Now I'm trying to configure the HAProxy server so that it only allows access from these two API Gateways.
When I do it for a single API Gateway, meaning I set the ca-file to a file containing one client certificate, it works just fine, but I don't know how to allow both client certificates.
So I have these files set up:
haproxy.pem, which contains:
the server cert issued by GoDaddy
the private key
the GoDaddy chain certs
api-gw.pem: the first client cert, copied from API Gateway
api-gw2.pem: the second client cert, copied from API Gateway
client-certs.crt: a concatenation of api-gw.pem and api-gw2.pem
When I bind SSL as below, with a single client cert, it works just fine:
bind :443 ssl crt /etc/haproxy/haproxy.pem verify required ca-file /etc/haproxy/api-gw.pem
or
bind :443 ssl crt /etc/haproxy/haproxy.pem verify required ca-file /etc/haproxy/api-gw2.pem
With each of the bindings above, only the corresponding API Gateway can access the proxy and the other one can't.
But when I bind as below, to allow both to access the proxy server, only the first client cert is accepted, even though the file contains both:
bind :443 ssl crt /etc/haproxy/haproxy.pem verify required ca-file /etc/haproxy/client-certs.crt
As my knowledge of certificates and SSL is limited, I'm not sure whether putting multiple client certificates into one file is even supposed to work, but from what I've read on the internet it's suggested that way... I still don't know why it wouldn't work, though.
EDIT
As Michael suggested, I put both client certs together using
cat api-gw.pem api-gw2.pem > api-gw-combo.pem
and the combo file looks like:
-----BEGIN CERTIFICATE-----
.....cert content for api-gw
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
.....cert content for api-gw2
-----END CERTIFICATE-----
but, the same as with my initial file client-certs.crt, HAProxy still accepts the first cert only.
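For reference, a quick way to sanity-check that a combined file really contains both certificates (assuming openssl is installed; the pkcs7 round-trip prints every cert in the bundle):
# Count the certs in the bundle -- should print 2:
grep -c 'BEGIN CERTIFICATE' /etc/haproxy/client-certs.crt
# Print the subject/issuer of each cert in the bundle:
openssl crl2pkcs7 -nocrl -certfile /etc/haproxy/client-certs.crt | openssl pkcs7 -print_certs -noout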

Your config (two client CA certificates in one ca-file for SSL client authentication) is working fine for me:
bind 1.2.3.4:443 ssl crt /etc/haproxy/ssl/my-domain.pem ca-file /etc/haproxy/ssl/my_client_CA_bundle.crt verify required
But maybe it works because of my specific situation:
HAProxy 2.2.x,
the my_client_CA_bundle.crt file contains two root CAs for client verification,
but both of our client-verification CAs were created using the same private key (maybe this is the important difference from your problem?).
This config was used for a smooth transition to a new client CA as the old one expires.
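For what it's worth, this is roughly how I exercised both CA paths (a sketch: the host and file names are illustrative, and it assumes you hold a cert/key pair issued by each of the bundled CAs):
# Handshake as a client of the first CA -- should be accepted:
echo | openssl s_client -connect proxy.example.com:443 -cert client1.pem -key client1.key
# Handshake as a client of the second CA -- should also be accepted:
echo | openssl s_client -connect proxy.example.com:443 -cert client2.pem -key client2.key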

Related

How to properly use Google Cloud IoT Core over HTTPS with regard to SSL certificates?

I use the Cloud IoT bridge via HTTPS (rather than MQTT) as described here. I created a public/private key pair to sign the JWT token. Everything is OK except one thing.
As you can see, the origin for the service is https://cloudiotdevice.googleapis.com, and this domain resolves to different addresses. The client fails to verify the certificates of some of these hosts, so some requests fail.
It's also easy to see the issue (certificate verification failure) in any browser. Just open the link above: the server will respond with a 404, but the browser still shows whether or not the certificate is OK.
I collected a bunch of IP addresses (seven) and downloaded their certificates.
It is said here that if there are TLS issues it may be necessary to install root CA certs; the repo of certs is here. I downloaded the first three of these CA certs and tried different ways to install them into the system (update-ca-certificates on Ubuntu, trust anchor on Arch). I tried installing them into the browser (Chrome) as authority certs to see whether it would stop complaining. I also tried to verify the server certificates via openssl, e.g.
openssl verify -untrusted ca-bundle cert.pem
But I couldn't get it verified. What have I missed?
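In case it clarifies the check: a fuller variant of that verification (a sketch; roots.pem stands for the CA bundle downloaded from the repo above) would be:
# Grab the chain the server actually presents:
openssl s_client -connect cloudiotdevice.googleapis.com:443 -showcerts </dev/null 2>/dev/null | awk '/BEGIN CERT/,/END CERT/' > chain.pem
# Keep only the first (leaf) certificate; openssl x509 reads just the first PEM block:
openssl x509 -in chain.pem -out leaf.pem
# Verify the leaf against the downloaded roots, using the rest of the chain as untrusted intermediates:
openssl verify -CAfile roots.pem -untrusted chain.pem leaf.pem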

AWS API Gateway gives 'More than one client certificate passed' error

I have set up mutual TLS authentication for my API Gateway.
I then placed my client certificate in the truststore. The file contains the client certificate plus the intermediate and root certificates (private CA).
When accessing the API Gateway with a browser (Chrome on Windows), the browser asks me to provide a client certificate. I select the same certificate that I placed in the trust store.
API Gateway reports the following in the browser:
{"message":"Invalid client certificate chain. More than one client certificate passed"}
UPDATE: I have also tried placing only the intermediate and root certs in the trust store. Same error.
UPDATE 2: The same error is also reported when accessing the API from C# code (WebClient), loading the cert either from the Windows cert store or from disk (a .pfx file).
If your trust store doesn't contain all the intermediate CA certs, then the client has to send a multi-cert chain. The TLS handshake will work fine, but somewhere there is an explicit check that disallows multi-cert chains. The status code is 400, not 403(!), and you get the "More than one client certificate passed" error.
However, if you're willing to put all the intermediate CAs in the API Gateway trust store, the server should not ask the client to send intermediate certs. The client should send only one cert in this case, and API Gateway should work.
So something is going wrong when API Gateway matches the initial client cert against the trust store: it's not finding the intermediate. I would check these things:
Make sure you use a specific version ID with the S3 link to the trust store. Otherwise it's hard to tell which version it's actually using, because the API gateway will not automatically pick up a new version as soon as you add one to S3. Maybe you're not using the trust store you think you are.
Your trust store should only include CA certs -- the root cert and intermediates. You said you put the client cert in there, so maybe that's causing an issue. Try taking it out.
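As a sketch (the bucket and file names here are made up), a CA-only trust store with a pinned S3 version would be built like this:
# The trust store should contain only the CA chain, not the client cert itself:
cat intermediateCA.pem rootCA.pem > truststore.pem
aws s3 cp truststore.pem s3://my-truststore-bucket/truststore.pem
# Note the VersionId of the new upload and reference it explicitly in the mTLS config:
aws s3api list-object-versions --bucket my-truststore-bucket --prefix truststore.pem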

CloudFront wasn't able to connect to the origin

I had set up CloudFront correctly over HTTP; it fetched data from my website (dev.pie.video) fine. I'm now moving to HTTPS. Things are working fine at https://dev.pie.video, but CloudFront is unable to serve any content.
For instance, https://dev.pie.video/favicon-96x96.png works, but https://d1mbpc40mdbs3p.cloudfront.net/favicon-96x96.png fails with status 502, even though my CloudFront distribution d1mbpc40mdbs3p points to dev.pie.video.
More details, if that's helpful:
d1mbpc40mdbs3p.cloudfront.net uses the default CloudFront certificate for HTTPS
the CloudFront distribution's origin is set to work over SSL and TLS, and to use the viewer's protocol.
===== Edit 1 =====
[Screenshots of the CloudFront settings (General, Origin, Behaviors) omitted.]
==== Edit 2 ====
If that's helpful, the logs I'm getting from CloudFront look like:
<timestamp> SFO20 924 96.90.217.130 GET d1mbpc40mdbs3p.cloudfront.net /favicon-96x96.png 502 - <someInfoOnTheClientBrowser> 2 - Error poZyhl63JNGFk8dIIjCluGDm4dxF8EdMZFhjg82NgHGPNqcmx6ArHA== d1mbpc40mdbs3p.cloudfront.net https 494 0.002 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 Error HTTP/1.1
Your origin server is incorrectly configured for SSL. CloudFront requires a valid configuration, and may be more stringent than some browsers -- so a green lock in the browser doesn't necessarily mean your SSL setup is complete and universally compatible with all clients.
$ true | openssl s_client -connect dev.pie.video:443 -showcerts
CONNECTED(00000003)
depth=0 OU = Domain Control Validated, CN = dev.pie.video
verify error:num=20:unable to get local issuer certificate
verify return:1
depth=0 OU = Domain Control Validated, CN = dev.pie.video
verify error:num=27:certificate not trusted
verify return:1
depth=0 OU = Domain Control Validated, CN = dev.pie.video
verify error:num=21:unable to verify the first certificate
verify return:1
---
Certificate chain
0 s:/OU=Domain Control Validated/CN=dev.pie.video
i:/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
-----BEGIN CERTIFICATE-----
MIIFMzCCBBugAwIBAgIJAL96wtFpu1ZpMA0GCSqGSIb3DQEBCwUAMIG0MQswCQYD
VQQGEwJVUzEQMA4GA1UECBMHQXJpem9uYTETMBEGA1UEBxMKU2NvdHRzZGFsZTEa
MBgGA1UEChMRR29EYWRkeS5jb20sIEluYy4xLTArBgNVBAsTJGh0dHA6Ly9jZXJ0
cy5nb2RhZGR5LmNvbS9yZXBvc2l0b3J5LzEzMDEGA1UEAxMqR28gRGFkZHkgU2Vj
dXJlIENlcnRpZmljYXRlIEF1dGhvcml0eSAtIEcyMB4XDTE2MDgwODE4MzQ0MFoX
DTE3MDgwODE4MzQ0MFowOzEhMB8GA1UECxMYRG9tYWluIENvbnRyb2wgVmFsaWRh
dGVkMRYwFAYDVQQDEw1kZXYucGllLnZpZGVvMIIBIjANBgkqhkiG9w0BAQEFAAOC
AQ8AMIIBCgKCAQEAz/wT5j/zHKzmt3oRvst74Knqxc0pl3sp5imUJ7UegoxcTISm
xJC5qQiDsD0U08kAFxvXDd91jlozh4QDcfLE8N7X9fsxC7OW2pDv3ks/LO7tiCxn
gNmxjvYvOQ/vASrLHIal+oGWJNdBMB1eckV4xHCeBDDEizDneq/qvjN0M0k5hQ+/
qk7RjVhJUmFAfvhXpxXaCbVDq1d3V1iRBo3oP3SGV++bj/m55QPFfKCZqGPTiM5G
c9+8ru16EVCpvs0wCWBVxjTiOCGtrMLgvp9LOs8AN369Yk/3AynpgAI0DDhb5y8I
KEuCdbUaIg5Zo029iZz4nWRsZFd5CSwgX8tZNQIDAQABo4IBvjCCAbowDAYDVR0T
AQH/BAIwADAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwDgYDVR0PAQH/
BAQDAgWgMDcGA1UdHwQwMC4wLKAqoCiGJmh0dHA6Ly9jcmwuZ29kYWRkeS5jb20v
Z2RpZzJzMS0yODIuY3JsMF0GA1UdIARWMFQwSAYLYIZIAYb9bQEHFwEwOTA3Bggr
BgEFBQcCARYraHR0cDovL2NlcnRpZmljYXRlcy5nb2RhZGR5LmNvbS9yZXBvc2l0
b3J5LzAIBgZngQwBAgEwdgYIKwYBBQUHAQEEajBoMCQGCCsGAQUFBzABhhhodHRw
Oi8vb2NzcC5nb2RhZGR5LmNvbS8wQAYIKwYBBQUHMAKGNGh0dHA6Ly9jZXJ0aWZp
Y2F0ZXMuZ29kYWRkeS5jb20vcmVwb3NpdG9yeS9nZGlnMi5jcnQwHwYDVR0jBBgw
FoAUQMK9J47MNIMwojPX+2yz8LQsgM4wKwYDVR0RBCQwIoINZGV2LnBpZS52aWRl
b4IRd3d3LmRldi5waWUudmlkZW8wHQYDVR0OBBYEFEPW+uDOOtZfUEdXuBs+960C
zQRKMA0GCSqGSIb3DQEBCwUAA4IBAQBLkLYJEc9E+IGv6pXaPCcYowJfji651Ju6
3DNzGXdyWfOXG+UVCMtPZuC9J66dID4Rc7HWzLveTPEI32z4IgtSjvRwRk9YyWVx
uCOpsP3e/Vgriwg5ds4NyrelQfshA3KaiTLohuiVEOBZgZgIwBEmwR2ZNFuL375E
uEn909zF9+sGkTbFnMm1zlqB2oh2UlSkUT3mj009vWF416W6kZQdFFFEmaI8uSmo
+Thd8HSxQytzWvB3dR4lCteiC09lkQPHU5t10tPgK9BtkLv05ICQQoDhFJmLeAcC
WNEmCcDnSHPxXjPi8kcyM6aqNofL1D0e1pYYvcpYQQDayWdY3tUh
-----END CERTIFICATE-----
---
Server certificate
subject=/OU=Domain Control Validated/CN=dev.pie.video
issuer=/C=US/ST=Arizona/L=Scottsdale/O=GoDaddy.com, Inc./OU=http://certs.godaddy.com/repository//CN=Go Daddy Secure Certificate Authority - G2
---
No client certificate CA names sent
---
SSL handshake has read 2010 bytes and written 431 bytes
---
New, TLSv1/SSLv3, Cipher is ECDHE-RSA-AES128-GCM-SHA256
Server public key is 2048 bit
...clipped...
Your certificate is signed by "Go Daddy Secure Certificate Authority - G2" which is an intermediate certificate (not a root), and you don't have that intermediate certificate installed on your server -- so CloudFront reports that it is "unable" to connect, when in fact it is more accurately "unwilling" to connect, as a security precaution, because it can't verify the validity of your SSL certificate. You should see these as SSL negotiation failures in your web server's log. The connection itself is working, but CloudFront considers it invalid, and therefore unsafe to use, due to the trust issue.
Caution
If the origin server returns an expired certificate, an invalid certificate or a self-signed certificate, or if the origin server returns the certificate chain in the wrong order, CloudFront drops the TCP connection, returns HTTP error code 502, and sets the X-Cache header to Error from cloudfront.
http://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/SecureConnections.html
Add your intermediate certificate to your server configuration, and you should be set. It should have been bundled with the cert when you downloaded it; if not, it can be obtained from your CA -- GoDaddy, in this case.
This is not a limitation specific to Go Daddy certificates. All CAs that follow standard practice use intermediate certificates to establish a chain of trust back to a trusted root.
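For example, on an nginx origin the fix is usually just concatenating the intermediate bundle onto the server cert (the paths and GoDaddy bundle file name below are illustrative):
# Serve the leaf followed by the intermediate(s); order matters:
cat dev.pie.video.crt gd_bundle-g2-g1.crt > /etc/nginx/ssl/fullchain.crt
# After pointing ssl_certificate at the combined file and reloading, re-test;
# the chain should now contain more than one cert and no "unable to verify" error:
echo | openssl s_client -connect dev.pie.video:443 -showcerts 2>/dev/null | grep -c 'BEGIN CERTIFICATE'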
See also:
https://www.godaddy.com/help/what-is-an-intermediate-certificate-868
https://certs.godaddy.com/repository
In case it helps (I am new to Lightsail):
I had a similar issue when creating a Lightsail distribution.
TL;DR: try setting the origin protocol policy to HTTP (since your origin is indeed only able to serve HTTP, unless you also add the SSL cert there).
DETAIL
I followed the documentation, in particular https://lightsail.aws.amazon.com/ls/docs/en_us/articles/amazon-lightsail-creating-content-delivery-network-distribution#distribution-origin-protocol-policy
I created:
a Lightsail instance (PHP Bitnami image)
configured the distribution for a dynamic site and for HTTPS, by creating an SSL cert
created a DNS zone
pointed the domain at the nameservers of that DNS zone
configured A + CNAME records in the DNS zone to point to the distribution
Error: the browser shows a 502 error page.
The problem I had was that "Origin protocol policy" was set to HTTPS only, although the Lightsail instance could only serve HTTP.
I changed "Origin protocol policy" to HTTP, and then the page was served OK (as HTTPS).
It seems that the SSL cert and HTTPS can be handled entirely by the distribution, and do not need to be configured on the instance (provided you set "Origin protocol policy" to HTTP).
So a crude high level picture, looks like:
browser <-- https --> Distribution <-- http --> Instance
Of course, the downside is that my Lightsail instance serves pages over plain HTTP to anyone who knows its static IP address...
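If you prefer the CLI, something like this should be the equivalent of the console change above (the distribution and origin names are hypothetical):
# Switch the distribution's origin protocol policy to HTTP-only:
aws lightsail update-distribution --distribution-name my-distribution --origin name=my-php-instance,regionName=us-east-1,protocolPolicy=http-only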
I got the same issue. I took the steps below:
Looked at the ALB Listeners tab and checked the listener on port 443.
There were two certs, one of which was expired; the ALB was pointing to the newer one, but we were still getting the 502 error.
AWS support suggested removing the expired cert from the 443 listener.
Thanks, Santosh Garole
I had a similar issue, which I fixed by not selecting the website endpoint when selecting the origin, even though you are prompted to use it.
In my case even the CloudFront SSL certificate was not working; however, I was able to connect through the website endpoint without CloudFront.
I also needed to set the default root object to index.html in order to get it working.
I've had this issue when using CloudFront (Amazon) on top of CloudFlare (a different company). Surely they have their HTTPS certificates correct?
I didn't get to the bottom of it and just switched back to HTTP for the origin. It was just images for a stupid eBay store, and I was really only using CloudFront to obfuscate the domain underneath (because people steal image URLs on eBay).
I added a query string parameter: ?a=1 worked, ?a=2 failed, ?a=3 worked, ?a=4 worked, and ?a=8 failed again. So there was something funky going on with either CloudFront's caching or CloudFlare's.
I'm still not sure what was going on, but invalidation didn't fix it, nor would I have expected it to, since I pass through query strings and changing a did not always make it work.
If you hit this problem, try adding a nonsense parameter, incrementing it several times, and observing the results.
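A quick loop like this makes the pattern easy to see (URL taken from the question above; adjust for your distribution):
# Probe the same object with a changing dummy query parameter and log the status codes:
for a in 1 2 3 4 5 6 7 8; do curl -s -o /dev/null -w "a=$a -> %{http_code}\n" "https://d1mbpc40mdbs3p.cloudfront.net/favicon-96x96.png?a=$a"; done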

Haproxy to authorize traffic from AWS API Gateway

I have set up a basic API using AWS API Gateway and I would like to link my endpoints to a service I have running on an EC2 instance (using the "HTTP Proxy" integration type). I have read that in order to lock down my EC2 server so it only accepts traffic from the API Gateway, I basically have one of two options:
Stick the EC2 instance behind a VPC and use Lambda functions (instead of the HTTP proxy) that have VPC permissions to act as a "pass-through" for the API requests
Create a Client Certificate within API Gateway, make my backend requests using that cert, and verify the cert on the EC2 instance.
I would like to employ a variation of #2: instead of verifying the cert on the EC2 service instance itself, I would like to do that verification on another instance running HAProxy. I have set up a second EC2 instance with HAProxy pointed at my other instance as the backend, and I have locked down my service instance so it only takes requests from the HAProxy instance. That is all working. What I have been struggling to figure out is how to verify the AWS Gateway client certificate (that I have generated) on the HAProxy machine. I have done tons of googling and there is surprisingly zero information on how to do this exact thing. A couple of questions:
Everything I have read seems to suggest that I need to generate SSL server certs on my HAProxy machine and use those in the config. Do I have to do this, or can I verify the AWS client cert without generating any additional certs?
The reading I have done suggests I would need to generate a CA and then use that CA to generate both the server and client certs. If I do in fact need to generate server certs (on the HAProxy machine), how can I generate them when I don't have access to the CA Amazon used to create the gateway client cert? As far as I can tell, I only have access to the client cert itself.
Any help here?
SOLUTION UPDATE
First, I had to upgrade my version of HAProxy to v1.5.14 so I could get the SSL capabilities.
I originally attempted to generate an official cert with Let's Encrypt. While I was able to get the API Gateway working with this cert, I was not able to generate a Let's Encrypt cert on the HAProxy machine that the API Gateway would accept. The issue surfaced as an "Internal server error" response from the API Gateway and as a "General SSLEngine problem" in the detailed CloudWatch logs.
I then purchased a wildcard certificate from Gandi and tried it on the HAProxy machine, but initially ran into the exact same problem. However, I was able to determine that the structure of my SSL cert was not what the API Gateway wanted. I googled and found the Gandi chain here:
https://www.gandi.net/static/CAs/GandiStandardSSLCA2.pem
Then I structured my SSL file as follows:
-----BEGIN PRIVATE KEY-----
# private key I generated locally...
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
# cert from gandi...
-----END CERTIFICATE-----
# two certs from file in the above link
I saved out this new PEM file (as haproxy.pem) and used it in my HAProxy frontend bind statement, like so:
bind :443 ssl crt haproxy.pem verify required ca-file api-gw-cert.pem
The api-gw-cert.pem in the bind statement above is a file containing the client cert that I generated in the API Gateway console. Now the HAProxy machine properly blocks any traffic coming from anywhere but the gateway.
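One caveat when testing: since only API Gateway holds the client cert's private key, about the only check you can run yourself is the negative one (the hostname below is a placeholder):
# With verify required and no client cert presented, the handshake should be rejected
# (typically with a handshake failure alert):
echo | openssl s_client -connect my-haproxy-host.example.com:443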
The reading I have done suggests I would need to generate a CA and then use that CA to generate both the server and client certs.
That's one way to do it, but it is not applicable in this case.
Your HAProxy needs to be configured with a working SSL certificate signed by a trusted CA -- not the one that signed the client certificate, and not one you create. It needs to be a certificate signed by a public, trusted CA whose root certificates are in the trust store of the back-end systems at API Gateway... which should be essentially the same as what your web browser trusts, but may be a subset.
Just as your web browser will not speak SSL to a server sporting a self-signed certificate without throwing a warning that you have to bypass, the back-end of API Gateway won't negotiate with an untrusted certificate (and there's no bypass).
Suffice it to say, you need to get API Gateway talking to your HAProxy over TLS before trying to get it to use a client cert, because otherwise you are introducing too many unknowns. Note also that you can't use an Amazon Certificate Manager cert for this, because those certs only work with CloudFront and ELB, neither of which will support client certs directly.
Once the HAProxy is working with API Gateway, you need then to configure it to authenticate the client.
You need ssl and verify required in your bind statement, but you can't verify an SSL client cert without something to verify it against.
I only have access to the client cert itself, from what I can tell.
And that's all you need.
bind ... ssl ... verify required ca-file /etc/haproxy/api-gw-cert.pem.
SSL certs are essentially a trust hierarchy. The trust at the top of the tree is explicit. Normally, the CA is explicitly trusted and anything it has signed is implicitly trusted. The CA "vouches for" the certificates it signs... and for certificates it signs with the CA attribute set, which can also sign certificates under them, extending that implicit trust.
In this case, though, you simply put the client certificate in as the CA file, and then the client certificate "vouches for"... itself. A client presenting the identical certificate is trusted, and anybody else is disconnected. Having just the certificate is not enough for a client to talk to your proxy, of course -- the client also needs the matching private key, which API Gateway has.
So, consider this two separate requirements. Get API Gateway talking to your proxy over TLS first... and after that, authenticating against the client certificate is actually the easier part.
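Putting the two requirements together, a minimal HAProxy frontend sketch (the backend name and address are placeholders) might look like:
frontend api_gw_in
    # Server-side TLS: the publicly trusted cert, key, and chain in one PEM file
    # Client auth: accept only the exact cert API Gateway presents (self-vouching ca-file)
    bind :443 ssl crt /etc/haproxy/haproxy.pem verify required ca-file /etc/haproxy/api-gw-cert.pem
    default_backend service
backend service
    server app1 10.0.0.10:8080 check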
I think you are mixing up server certs and client certs. In this instance, API Gateway is the client and HAProxy is the server. You want HAProxy to verify the client cert sent by API Gateway. API Gateway will generate the certificate for you; you just need to configure HAProxy to verify that that certificate is present in every request it processes.
I'm guessing you might be looking at this tutorial where they are telling you to generate the client cert, and then configure HAProxy to verify that cert. The "generate the cert" part of that tutorial can be skipped since API Gateway is generating the cert for you.
You just need to click the "Generate" button in API Gateway, then copy/paste the contents of the certificate it presents you and save that as a .pem file on the HAProxy server. Now, I'm not a big HAProxy user, but taking the example from that tutorial, I think your HAProxy config would look something like:
bind 192.168.10.1:443 ssl crt ./server.pem verify required

Add SSL to a Node.js endpoint: NGINX vs Endpoint

I need to add SSL to several Node.js services, each of which listens on its own port, and which NGINX maps to our public "api" domain.
Due to the release of a new security policy, all services must now be enforced to accept SSL connections only.
Since I'm not used to working with SSL certificates, it's not clear to me what the advantage would be of setting up SSL on NGINX and having NGINX proxy-pass to an http:// connection, versus making each real Node.js endpoint an SSL server itself (and proxy-passing to https://).
I guess that with the NGINX solution I could reuse the same SSL cert by adding it to our "api" domain, while each separate SSL Node server would need its own cert.
It's also not clear to me whether, in a production environment like this, I should be using self-signed certificates (since the endpoints are only reached through other services) or CA-trusted certificates, exactly as for a public domain.
What am I missing in these considerations?
I assume the NGINX is public-facing and the Node.js services are internal (i.e. not accessed directly by public web users).
You would only secure the connection between the public web and your NGINX. The transport between NGINX and the Node.js services is internal and doesn't need to be secured; encrypting it is a big waste of CPU.
For the NGINX you buy a certificate from a valid certificate authority. For internal services you may use self-signed certificates (i.e. certificates generated by your own internal certificate authority), but as said above, you shouldn't need to use SSL internally.
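To make that concrete, a minimal sketch of the NGINX side (the server name, ports, and cert paths are placeholders) could look like:
server {
    listen 443 ssl;
    server_name api.example.com;
    # One public CA-signed cert for the api domain, reused for every service behind it:
    ssl_certificate     /etc/nginx/ssl/api.example.com.fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/api.example.com.key;
    # TLS terminates here; the hop to each Node.js service stays plain HTTP:
    location /service-a/ { proxy_pass http://127.0.0.1:3001/; }
    location /service-b/ { proxy_pass http://127.0.0.1:3002/; }
}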