AWS Load Balancers and Strict Transport Security Header - amazon-web-services

I am supposed to send a response from my web service with a Strict-Transport-Security (STS) header, but the service itself sits behind an AWS ALB, which terminates SSL and sends the traffic on via plain HTTP. This seems to be a common scenario and is likely not limited to AWS; many load balancers can terminate SSL, as this is a very useful feature.
I have read through some messages from people who have already had this issue and have not seen any answers that are anything other than a workaround. It seems to me to be a catch-22 situation: the LB doesn't send the header because doing so isn't within its remit (according to one response from AWS support), and the target web server can't add it because that header is only honored on HTTPS responses, which the web server never processes!
So my question is: is the STS header really that important if my web service can only respond on an HTTPS endpoint (no HTTP endpoint, not even a redirect)? Or is it still vulnerable to things like MITM attacks?
thanks in advance
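For what it's worth, one common pattern (a sketch, not an answer from this thread) is for the backend to add the header itself whenever the load balancer reports that the original request arrived over HTTPS: the ALB passes response headers through unchanged, and the browser receives the header over the TLS connection it holds with the ALB, so it is honored. A minimal WSGI sketch, assuming the ALB's standard `X-Forwarded-Proto` header:

```python
# Minimal WSGI middleware sketch: add Strict-Transport-Security when the
# original request reached the load balancer over HTTPS. The ALB sets
# X-Forwarded-Proto on the forwarded (plain-HTTP) request.

def hsts_middleware(app):
    def wrapped(environ, start_response):
        def sr(status, headers, exc_info=None):
            # Only trust this header if it can't be spoofed past your ALB.
            if environ.get("HTTP_X_FORWARDED_PROTO") == "https":
                headers = list(headers) + [
                    ("Strict-Transport-Security",
                     "max-age=31536000; includeSubDomains")
                ]
            return start_response(status, headers, exc_info)
        return app(environ, sr)
    return wrapped
```

The same idea applies in any framework: key the header off the forwarded-proto value rather than off the (always-HTTP) local socket.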

Related

Application Load Balancer Authorization Header not passed through

I currently have an API on API Gateway (REST) that has a single proxy endpoint hooked up to an HTTP proxy integration. I have a cognito authorizer that authorizes incoming JWTs issued by Cognito and then if valid it forwards the request along to our ECS instance via an Application Load Balancer.
The project running in that instance requires the Authorization header to be present for authorization purposes. The problem is that the header is not forwarded to the container. After much debugging, we determined that the header was going missing when the ALB was forwarding the request to the container (previously this question was asking about API Gateway, because I assumed that's where things were going wrong). Other custom headers can go through, but not "Authorization".
Does anyone have any experience persisting the Authorization header using ALB? I'm very new to ALB so still learning how to build these projects.
If you're passing an Authorization header, it will be remapped to X-Amzn-Remapped-Authorization by Amazon API Gateway REST APIs.
For more information, see this guide.
We actually had two rules on the ALB: one redirecting the API call from port 80 to port 443, then a forward rule to the container. We discovered that the header went missing at the redirect rule, so we eliminated it and added a listener on port 80 that forwards the call straight to the ECS task.
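If the API Gateway remapping described above is in play, the backend can defensively accept either header name. A hypothetical helper (the plain dict of headers is an assumption; adapt it to your framework's header object, which is usually case-insensitive):

```python
# Sketch: read the credential whether it arrives as "Authorization" or under
# the name API Gateway remaps it to. Assumes header names are already
# normalized to this exact casing.

REMAPPED = "X-Amzn-Remapped-Authorization"

def get_authorization(headers):
    # Prefer the original header; fall back to the remapped one.
    return headers.get("Authorization") or headers.get(REMAPPED)
```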

Best way to implement HTTPS for API hosted on AWS ec2 machine

First of all, I'm in no way an expert at security or networking, so any advice would be appreciated.
I'm developing an iOS app that communicates with an API hosted on an AWS EC2 Linux machine.
The API is deployed using **FastAPI + Docker**.
Currently, I'm able to communicate with my remote API using HTTP requests to my server's public IP address (after opening port 80 for TCP) and transfer data between the client and my server.
One of my app's features requires sending a private cookie from the client to the server.
Since having the cookie allows potential attackers to make requests on behalf of the client, I intend to transfer the cookie securely with HTTPS.
I have several questions:
Will implementing HTTPS for my server solve my security issue? Is that the right approach?
The FastAPI "Deploy with Docker" docs recommend this article for implementing TLS for the server (using Docker Swarm Mode and Traefik). Is that guide relevant for my use case?
In that article, it says to "define a server name using a subdomain of a domain you own". Do I really need to own a domain to implement HTTPS? Can't I just keep using the server's IP address to communicate with it?
Thanks!
Will implementing HTTPS for my server solve my security issue? Is that the right approach?
With HTTP, all traffic between your clients and the EC2 instance travels in plain text, so anyone on the network path can read it, including your cookie. With HTTPS the traffic is encrypted in transit, which is exactly the protection you need here.
FastAPI "Deploy with Docker"
Sadly can't comment on the article.
Do I really need to own a domain to implement HTTPS?
Yes. Publicly trusted SSL/TLS certificates are issued for domain names you control; you can't get a certificate for a domain that isn't yours, and certificates for bare IP addresses are generally not an option.
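Separately, once HTTPS is in place, it's worth marking the private cookie from the question as Secure and HttpOnly, so browsers refuse to send it over plain HTTP or expose it to page scripts. A stdlib-only sketch (the cookie name is illustrative):

```python
# Sketch: build a Set-Cookie value for a private session cookie.
# Secure   -> browser only sends it over HTTPS
# HttpOnly -> not readable from JavaScript
# SameSite -> limits cross-site sending (Python 3.8+ supports this attribute)
from http.cookies import SimpleCookie

def build_session_cookie(value):
    c = SimpleCookie()
    c["session"] = value
    c["session"]["secure"] = True
    c["session"]["httponly"] = True
    c["session"]["samesite"] = "Lax"
    return c["session"].OutputString()
```

Your web framework will have its own flags for the same attributes; the point is that HTTPS alone doesn't stop the cookie leaking over a later plain-HTTP request unless Secure is set.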

Using Application Load Balancer with HTTPS

This is the first time that I am using load balancer... I have spent quite a bit of time going through documentation and I am still quite confused.
I want to host my website. My website supports HTTPS only. I want to put my backend servers behind an Application Load Balancer.
I am using AWS' default VPC, I have created an ALB (myALB) and installed my SSL certificate on it. I have also created 2 EC2 instances (myBackEndServer1 & myBackEndServer2).
Questions:
1. Should the communication between the backend servers and myALB be through HTTP or HTTPS?
2. I have created an HTTPS listener on myALB; do I also need an HTTP listener on myALB? What I want is to redirect any HTTP request to HTTPS (I believe this should happen on myALB).
3. I want to use external ID login (using Facebook). I have set up Facebook login to work with HTTPS only. Does the communication between Facebook and my backend servers go through myALB? I mean, I either need HTTPS on my backend servers, or the communication with Facebook should go through myALB.
I would appreciate any general advice.
You can use both HTTP and HTTPS listeners.
Yes, you can achieve that with an ALB. You can add a rule saying that any request coming in on port 80 is redirected to port 443 on a permanent basis. Check out the rules documentation for ALB.
If you make a request from your instances to Facebook, whether your communication is encrypted depends on Facebook, because in that case you are the client. However, if you set up a webhook, Facebook is now the client, and to communicate with you it will use your load balancer's DNS name. Due to point 2 in this list, Facebook will then be forced to use TLS.
I'm not sure I fully understood your third question, but here's something you may also find useful: ALB has a feature that authenticates users with Cognito. The documentation explicitly says that your EC2 instances can be abstracted away from any authentication, including federated login with Facebook ID, Google ID, or similar. Never tried it myself, though.
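For point 2, the permanent 80-to-443 redirect is expressed as a listener default action. A sketch of that action payload in the shape the elbv2 API expects (e.g. passed as `DefaultActions` to boto3's `modify_listener`); the `#{...}` strings are the API's own substitution variables, not elisions:

```python
# Sketch: default action for the port-80 listener that makes the ALB answer
# every HTTP request with a permanent redirect to HTTPS on port 443, keeping
# the original host, path, and query string.
HTTP_TO_HTTPS_REDIRECT = {
    "Type": "redirect",
    "RedirectConfig": {
        "Protocol": "HTTPS",
        "Port": "443",
        "Host": "#{host}",         # keep original host
        "Path": "/#{path}",        # keep original path
        "Query": "#{query}",       # keep original query string
        "StatusCode": "HTTP_301",  # permanent, as the answer suggests
    },
}
```

The same redirect can be configured from the console by editing the HTTP listener's default rule.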

How to enable websockets on AWS Cloudfront

I have an Akka HTTP server running on an AWS EC2 Auto Scaling cluster. This cluster has an ELB Application Load Balancer in front. In addition to the ALB, we have a CloudFront distribution that is set up to serve static files.
We're facing an issue where all WebSocket connection requests from browsers to the backend fail with an HTTP 400 "Expected UpgradeToWebsocket header" error.
Upon further investigation, we found that clients are able to connect directly to the load balancer, but any connection request via CloudFront fails. Eventually I came across this page in the AWS CloudFront documentation, which says that CloudFront strips out any "Upgrade" headers, which might be the reason clients are unable to connect.
To work around this issue, I enabled the "forward all headers" option (which disables caching), but it still didn't work. Moreover, I couldn't find any option to selectively disable CloudFront caching or to bypass CloudFront altogether for certain URLs.
How do I work around this issue and ensure that WebSockets work through CloudFront? Or is this just not supported?
CloudFront is not the right solution for WebSockets, as it is optimized for caching static web pages, whereas WebSocket traffic is mostly dynamic. ELB, on the other hand, supports both plain WebSockets (ws://) and secure WebSockets (wss://), and it can be configured to handle the SSL handshake. However, you need to configure it with TCP listeners in order to keep the HTTP/HTTPS connection open while the server is transmitting. Here's how it's done:
1. Click "Create load balancer" in the EC2 load balancers tab.
2. Select "Classic Load Balancer". You need that in order to do plain TCP.
3. Define the source and destination protocols (choose TCP for plain WebSockets).
4. If you're doing secure WebSockets, you also need to choose a certificate.
5. Configure health checks, add instances, and press "Create". Define the CNAME and you're all set.
Note that if you select "HTTP" or "HTTPS" as the source protocol, the load balancer will at some point return a 408 error (timeout), since it isn't designed to keep the connection open for long. That's the reason we chose TCP.
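The listener setup from the steps above can also be sketched as data in the shape the classic elb API expects (e.g. passed as `Listeners` to boto3's `create_load_balancer`); the certificate ARN and ports are placeholders:

```python
# Sketch: Classic Load Balancer listeners that pass WebSocket traffic through
# as raw TCP, so the connection upgrade survives the hop to the instances.
LISTENERS = [
    {   # plain web sockets (ws://): straight TCP pass-through
        "Protocol": "TCP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "TCP",
        "InstancePort": 8080,
    },
    {   # secure web sockets (wss://): ELB terminates TLS, forwards plain TCP
        "Protocol": "SSL",
        "LoadBalancerPort": 443,
        "InstanceProtocol": "TCP",
        "InstancePort": 8080,
        # placeholder certificate ARN -- substitute your own
        "SSLCertificateId": "arn:aws:acm:REGION:ACCOUNT:certificate/EXAMPLE",
    },
]
```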
Update
CloudFront announced support for Websockets on 2018-11-20.
CloudFront supports WebSocket connections globally with no required additional configuration. All CloudFront distributions have built-in WebSocket protocol support, as long as the client and server also both support the protocol.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-working-with.websockets.html
The client is responsible for re-establishing any websocket connection that is lost.
CloudFront does not currently support web sockets.
Certain headers are stripped from requests even if you try to configure CloudFront to forward them. These are indicated in the table on the page you mentioned by "CloudFront removes the header" and Caching Based on Header Values Is Supported = "No".
From the AWS Forums:
Rest assured that the right people are aware of this feature request.
— Richard@AWS (2015-06-06)
https://forums.aws.amazon.com/thread.jspa?messageID=723375

Are SOAP Security headers "per connection"?

I know too little SOAP theory and need some help.
Imagine a web service and a client. There is also a gateway (facing the internet) through which requests have to be relayed.
The client authenticates with the gateway using a client certificate (transport security).
The gateway, in turn, uses message credentials to authenticate with the web service.
My question: Is it reasonable that the gateway, after getting the response from the web service, forwards the Security header to the client?
I'm thinking that it "feels" like the Security header should apply to the GW -> web service link only, since the client didn't use any message security in its request. Am I right or wrong?
You're talking about completely different layers of the network stack. Whether you encrypt your transport with HTTPS has absolutely nothing to do with whether or not you wish to protect your message payload with WS-Security.
Two good articles on WS-Security (at least from a Microsoft/.Net perspective):
http://msdn.microsoft.com/en-us/library/ms788756%28v=vs.110%29.aspx
http://msdn.microsoft.com/en-us/library/ms977327.aspx
In answer to your question: if you have a SOAP Security header, then you ARE using WS-Security, and the client IS passing it to your web service link. Typically, this is transparent to both your client code and your server code; it's handled by the "middleware" in your .NET libraries.