I'm looking to serve HTTPS downloads that are authenticated in both directions using mTLS, with both ends presenting certificates issued by a private CA. The purpose of this is securing OTA updates of an embedded device (I need to identify and authorize both ends before downloading a FW image, and PKI + mTLS is a very workable solution). A human being with a browser will never interact with this.
Google Cloud Functions terminates TLS by serving a public, Google-issued HTTPS certificate. I can't figure out how to make GCF serve HTTPS using a custom certificate (or how to authorize incoming HTTPS requests only if the client certificate is signed by my private CA). Is that even possible? If so, can anyone point me to the right document or example?
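For context, the device side of what I'm after would look roughly like this with Python requests, presenting a client certificate issued by the private CA and trusting only that CA for the server (paths and the URL below are placeholders):

```python
# Sketch of the device-side OTA download (all paths and the URL are placeholders).
import requests

resp = requests.get(
    "https://updates.example.internal/fw/latest.bin",  # hypothetical OTA endpoint
    cert=("device.crt", "device.key"),  # client cert + key issued by the private CA
    verify="private-ca.pem",            # only trust servers whose cert chains to the private CA
    timeout=30,
)
resp.raise_for_status()

with open("firmware.bin", "wb") as f:
    f.write(resp.content)
```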
Client certificates are not supported.
Can someone please advise whether what I'm trying to do is possible? Apologies in advance: I know a lot more about AWS than Azure, and I can't find any guidance online, nor can I get around it by just setting up services and 'giving it a go'.
I want to send SSL-secured subdomain traffic from AWS, where our primary domain is hosted, to Azure, where some dependent services and resources are hosted. We want to use AWS ACM for SSL management/renewals, removing any dependency on third parties or Azure for this if at all possible.
I am able to set up a CloudFront distribution with an origin of an Azure Storage Account endpoint:
xxx.blob.core.windows.net
With an alternate domain name of a subdomain of the desired URL:
xxx.xxx.co.uk
I can secure this with a wildcard ACM SSL, and the resultant images are all secure.
I have also set up a static web app, applied a custom domain to it of:
xxx.xxx.co.uk
And with the appropriate DNS/CF I can make traffic to that Azure SWA secure.
Is it possible to do the same with Azure App Gateway? Everything that I, or the developers working in Azure (a third party), have tried does not work; we mostly end up with 502 errors, depending on the configuration. With some CF/DNS configurations I can get through to the correct resources/services, but only by bypassing an SSL warning.
Would adding a port 80/non-https listener for our subdomain on the App Gateway work?
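In case it helps with diagnosis, here is a quick Python sketch (the hostname is a placeholder) to dump the certificate an endpoint is actually serving; if its subject/SAN doesn't match the subdomain, that would explain the SSL warning above:

```python
# Dump the certificate an endpoint actually serves. Hostname is a placeholder.
import ssl

pem = ssl.get_server_certificate(("xxx.xxx.co.uk", 443))
print(pem)  # paste into any X.509 decoder and compare the subject/SAN with the subdomain
```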
I have a React.js web app deployed via Google Firebase Hosting. I also have an Express REST API deployed via AWS EC2. So far I have been unable to get the React app to interact with the Express API because the API is using HTTP. I tried to get all the SSL/cert stuff figured out to enable HTTPS on the backend, but it seems like it will not work because the cert is not signed by a Certificate Authority.
Is there any workaround or other solution here? Thank you in advance.
A web browser will not accept a self-signed SSL certificate. In order to generate a legitimate SSL certificate you must first own a domain name.
You need to purchase a domain and point your domain or subdomain at the EC2 instance. Then you need to create an SSL certificate that actually matches that domain name or subdomain, using a certificate authority like Let's Encrypt whose certificates are accepted by modern web browsers.
Finally you will need to use that domain name in your API calls.
You could place a Load Balancer, or CloudFront distribution, or AWS API Gateway, in front of the EC2 server, at which point you could use a free AWS ACM SSL certificate.
If you don't want to purchase a domain name, you could still place CloudFront or API Gateway in front of the server and use their default endpoint which will also provide SSL.
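If you go the ACM route, requesting the certificate is a single call. This is only a sketch (the domain name is a placeholder); you still need to add the DNS validation record and attach the resulting cert to the CloudFront distribution, load balancer, or API Gateway custom domain in front of the instance:

```python
# Sketch: request a free ACM certificate for the subdomain, validated via DNS.
# The domain is a placeholder; CloudFront requires the cert to live in us-east-1.
import boto3

acm = boto3.client("acm", region_name="us-east-1")
resp = acm.request_certificate(
    DomainName="api.example.com",
    ValidationMethod="DNS",
)
print(resp["CertificateArn"])  # create the CNAME validation record ACM gives you, then attach this ARN
```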
First of all, I'm in no way an expert at security or networking, so any advice would be appreciated.
I'm developing an iOS app that communicates with an API hosted on an AWS EC2 Linux machine.
The API is deployed using **FastAPI + Docker**.
Currently, I'm able to communicate with my remote API using HTTP requests to my server's public IP address (after opening port 80 for TCP) and transfer data between the client and my server.
One of my app's features requires sending a private cookie from the client to the server.
Since having the cookie allows potential attackers to make requests on behalf of the client, I intend to transfer the cookie securely with HTTPS.
I have several questions:
Will implementing HTTPS for my server solve my security issue? Is that the right approach?
The FastAPI "Deploy with Docker" docs recommend this article for implementing TLS for the server (using Docker Swarm Mode and Traefik).Is that guide relevant for my use-case?
In that article, it says "Define a server name using a subdomain of a domain you own." Do I really need to own a domain to implement HTTPS? Can't I just keep using the server's IP address to communicate with it?
Thanks!
> Will implementing HTTPS for my server solve my security issue? Is that the right approach?

With HTTP, all traffic between your clients and the EC2 instance is in plain text. With HTTPS the traffic is encrypted, so it is secure.
FastAPI "Deploy with Docker"
Sadly can't comment on the article.
> Do I really need to own a domain to implement HTTPS?

Yes. Publicly trusted SSL certificates are only issued for domain names that you own; you can't get a certificate for a domain that isn't yours.
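As for the cookie itself: once TLS is in place, the FastAPI side of receiving it is straightforward. A minimal sketch follows (the endpoint path and cookie name are made up); also mark the cookie Secure and HttpOnly wherever it is originally set, so it only ever travels over HTTPS:

```python
# Minimal sketch of the server receiving the private cookie over HTTPS.
# Endpoint path and cookie name are made up for illustration.
from typing import Optional

from fastapi import Cookie, FastAPI

app = FastAPI()

@app.get("/profile")
def read_profile(session_token: Optional[str] = Cookie(default=None)):
    if session_token is None:
        return {"authenticated": False}
    # ... validate session_token against your session store here ...
    return {"authenticated": True}
```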
I have made a Flask application to use only as an API. I have hosted it on AWS using nginx and gunicorn. I intend to use the API to run my Android application. There is a part in the application where I have to download something using Android Download Manager, but it only downloads things hosted on HTTPS domains. So I want to make my application HTTPS instead of HTTP. But every tutorial shows me a way that requires a purchased domain. I don't have much information on it yet, but I can't get an SSL certificate from Amazon without a purchased domain name (which is pointless for an API). I just want to know how I can do this. How can I make my nginx server listen to HTTPS requests?
> I have hosted it on AWS using nginx and gunicorn.

I think you need a domain name to get SSL on AWS.
Getting a certificate without a domain name is not allowed in AWS.
One part of HTTPS is encryption; the other part is identity verification. What you're asking for is impossible, since you are required to prove ownership of a domain name, and without that no certificate authority will sign a certificate. You cannot have a publicly valid certificate if it's self-signed. ACM (Amazon Certificate Manager), an AWS service, will not allow you to create a certificate without a valid domain name.
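You can see the identity-verification half in action from any client that uses a standard trust store; a quick sketch (the IP is a placeholder) of what happens against a self-signed cert:

```python
# Any client using a public trust store rejects a self-signed certificate.
# Android's DownloadManager behaves the same way, which is why you need a
# certificate from a real CA (and therefore a domain name). The IP is a placeholder.
import requests

try:
    requests.get("https://203.0.113.10/api/health", timeout=10)
except requests.exceptions.SSLError as exc:
    print("TLS verification failed, as expected:", exc)
```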
I have set up a basic API using AWS API Gateway and I would like to link my endpoints to a service I have running on an EC2 instance (using the "HTTP Proxy" integration type). I have read that in order to lock down my EC2 server so it only accepts traffic from the API Gateway, I basically have one of two options:

1. Stick the EC2 instance behind a VPC and use Lambda functions (instead of the HTTP proxy) that have VPC permissions to act as a "pass through" for the API requests.
2. Create a client certificate within API Gateway, make my backend requests using that cert, and verify the cert on the EC2 instance.
I would like to employ a variation of #2 and, instead of verifying the cert on the EC2 service instance itself, do that verification on another instance running HAProxy. I have set up a second EC2 instance with HAProxy and have that pointed at my other instance as the backend. I have locked down my service instance so it will only take requests from the HAProxy instance. That is all working. What I have been struggling to figure out is how to verify the API Gateway client certificate (that I have generated) on the HAProxy machine. I have done tons of googling and there is surprisingly zero information on how to do this exact thing. A couple of questions:
Everything I have read seems to suggest that I need to generate SSL server certs on my HAProxy machine and use those in the config. Do I have to do this, or can I verify the AWS client cert without generating any additional certs?
The reading I have done suggests I would need to generate a CA and then use that CA to generate both the server and client certs. If I do in fact need to generate server certs (on the HAProxy machine), how can I generate them if I don't have access to the CA that Amazon used to create the gateway client cert? I only have access to the client cert itself, from what I can tell.
Any help here?
SOLUTION UPDATE
First, I had to upgrade my version of HAProxy to v1.5.14 so I could get the SSL capabilities.
I originally attempted to generate an official cert with Let's Encrypt. While I was able to get the API Gateway working with this cert, I was not able to generate a Let's Encrypt cert on the HAProxy machine that the API Gateway would accept. The issue surfaced as an "Internal server error" response from the API Gateway and as "General SSLEngine problem" in the detailed CloudWatch logs.
I then purchased a wildcard certificate from Gandi and tried this on the HAProxy machine, but initially ran into the exact same problem. However, I was able to determine that the structure of my SSL cert file was not what the API Gateway wanted. I googled and found the Gandi chain here:
https://www.gandi.net/static/CAs/GandiStandardSSLCA2.pem
Then I structured my SSL file as follows:
-----BEGIN PRIVATE KEY-----
# private key I generated locally...
-----END PRIVATE KEY-----
-----BEGIN CERTIFICATE-----
# cert from Gandi...
-----END CERTIFICATE-----
# the two chain certs from the file in the above link, each kept inside its own BEGIN/END CERTIFICATE block...
I saved out this new PEM file (as haproxy.pem) and used it in my HAProxy frontend bind statement, like so:
bind :443 ssl crt haproxy.pem verify required ca-file api-gw-cert.pem
The api-gw-cert.pem in the above bind statement is a file that contains the client cert I generated in the API Gateway console. Now, the HAProxy machine properly blocks any traffic coming from anywhere but the gateway.
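As a sanity check of the lockdown, any client that doesn't present the client certificate should now be rejected at the TLS layer (only API Gateway holds the matching private key, so only this negative test is possible from your own machine). A quick Python sketch, with the hostname as a placeholder:

```python
# Confirm the HAProxy front end rejects clients that don't present the API Gateway
# client certificate. Hostname is a placeholder.
import socket
import ssl

hostname = "proxy.example.com"
ctx = ssl.create_default_context()

try:
    with socket.create_connection((hostname, 443), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            # With TLS 1.3 the rejection may only surface on the first read/write.
            tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + hostname.encode() + b"\r\n\r\n")
            tls.recv(1024)
            print("connection accepted: the client cert is NOT being enforced")
except (ssl.SSLError, ConnectionResetError) as exc:
    print("connection rejected, as expected:", exc)
```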
> The reading I have done suggests I would need to generate a CA and then use that CA to generate both the server and client certs.

That's one way to do it, but it is not applicable in this case.
Your HAProxy needs to be configured with a working SSL certificate signed by a trusted CA -- not the one that signed the client certificate, and not one you create. It needs to be a certificate signed by a public, trusted CA whose root certificates are in the trust store of the back-end systems at API Gateway... which should be essentially the same as what your web browser trusts, but may be a subset.
Just as your web browser will not speak SSL to a server sporting a self-signed certificate without throwing a warning that you have to bypass, the back-end of API Gateway won't negotiate with an untrusted certificate (and there's no bypass).
Suffice it to say, you need to get API Gateway talking to your HAProxy over TLS before trying to get it to use a client cert, because otherwise you are introducing too many unknowns. Note also that you can't use an Amazon Certificate Manager cert for this, because those certs only work with CloudFront and ELB, neither of which will support client certs directly.
Once the HAProxy is working with API Gateway, you need then to configure it to authenticate the client.
You need ssl and verify required in your bind statement, but you can't verify an SSL client cert without something to verify it against.
> I only have access to the client cert itself, from what I can tell.

And that's all you need.
bind ... ssl ... verify required ca-file /etc/haproxy/api-gw-cert.pem.
SSL certs are essentially a trust hierarchy. The trust at the top of the tree is explicit. Normally, the CA is explicitly trusted and anything it has signed is implicitly trusted. The CA "vouches for" the certificates it signs... and for certificates it signs with the CA attribute set, which can also sign certificates under them, extending that implicit trust.
In this case, though, you simply put the client certificate in as the CA file, and then the client certificate "vouches for"... itself. A client presenting the identical certificate is trusted, and anybody else is disconnected. Having just the certificate is not enough for a client to talk to your proxy, of course -- the client also needs the matching private key, which API Gateway has.
So, consider this two separate requirements. Get API Gateway talking to your proxy over TLS first... and after that, authenticating against the client certificate is actually the easier part.
I think you are mixing up server certs and client certs. In this instance API Gateway is the client, and HAProxy is the server. You want HAProxy to verify the client cert sent by API Gateway. API Gateway will generate the certificate for you; you just need to configure HAProxy to verify that the certificate is present in every request it processes.
I'm guessing you might be looking at this tutorial where they are telling you to generate the client cert, and then configure HAProxy to verify that cert. The "generate the cert" part of that tutorial can be skipped since API Gateway is generating the cert for you.
You just need to click the "Generate" button in API Gateway, then copy/paste the contents of the certificate it presents you and save that as a .pem file on the HAProxy server. Now I'm not a big HAProxy user, but I think taking the example from that tutorial your HAProxy config would look something like:
bind 192.168.10.1:443 ssl crt ./server.pem verify required ca-file ./api-gw-cert.pem