I have a REST API application running on two EC2 instances, and I used an AWS Classic Load Balancer in front of it for a long time. The clients of the REST API rely on the response headers (such as Location).
I know that HTTP header names are case-insensitive by definition; however, some clients unfortunately ignore this and check the headers in a case-sensitive way (e.g. they expect Location to start with an upper-case L).
Recently I switched to an AWS Application Load Balancer, and now I see that it transforms all response header names to lower case; as a result, clients fail to handle the responses properly.
I have a couple of questions here:
Is this expected behavior of the Application Load Balancer?
Is there a way to configure it to return the headers exactly as the application built them?
This is expected behavior of the ALB: HTTP/2 lowercases all header names, and ALBs support HTTP/2. Unfortunately, you can't modify how the ALB manipulates headers.
Update: See the comments below. My statement that the ALB lowercases the request headers due to its support for HTTP/2 may not be accurate.
This was causing our broken clients to fail when we switched from a TCP ELB to an HTTPS ELB.
While we fix our clients, we have temporarily disabled the new ELB's HTTP/2 support, which comes enabled by default.
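For what it's worth, the durable fix is on the client side: treat header names case-insensitively. As a minimal illustration (the thread's actual clients aren't shown, and the URL and payload are placeholders), the Fetch API's Headers object already does this by spec:

```ts
// The Headers object is case-insensitive by spec, so reading "Location"
// works no matter how the ALB cased the header name on the wire.
const res = await fetch("https://api.example.com/things", {
  method: "POST",
  redirect: "manual", // keep the creation/redirect response instead of following it
  body: JSON.stringify({ name: "demo" }),
});

// Both lookups return the same value, even if the wire header is "location".
console.log(res.headers.get("Location"));
console.log(res.headers.get("location"));
```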
My requirement:
Prevent non-Twilio access to my ALB-managed application using Cloudflare.
My restrictions:
Due to the nature of Twilio's cloud design, it is not possible to whitelist access down to a fixed set of IPs, because a request could come from a wide pool of addresses.
Possible solution:
Twilio suggests a couple of options at https://www.twilio.com/docs/usage/security, but I don't know how to use any of these methods to allow only Twilio traffic. Also, any validation must be applied only to the /api path of my site.
Further Info:
The underlying application is written in PHP.
I would prefer a CloudFlare solution over changing code in the application.
A possible approach could be:
Use Cloudflare Firewall Rules to check for the presence of the X-Twilio-Signature header on your API path (as a first, basic check), and block requests that do not have it.
Use a Cloudflare Worker, configured on your API path. The worker code can read X-Twilio-Signature and the request data, and use the procedure described in the Twilio documentation to validate it. If it matches, forward the request to your load balancer; if it doesn't, return an error to the caller (see the sketch after this list).
Also, make sure your origin server only accepts traffic from Cloudflare to prevent direct tampering.
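A minimal Worker sketch of that validation, assuming the auth token is stored as a Worker secret named TWILIO_AUTH_TOKEN and that request.url matches the public URL Twilio signed; the scheme follows Twilio's documented one (HMAC-SHA1 over the full URL plus the alphabetically sorted POST parameters, base64-encoded):

```ts
export default {
  async fetch(request: Request, env: { TWILIO_AUTH_TOKEN: string }): Promise<Response> {
    const signature = request.headers.get("X-Twilio-Signature");
    if (!signature) return new Response("Forbidden", { status: 403 });

    // Rebuild the signed payload: full URL, then sorted POST params (name + value).
    let payload = request.url;
    if (request.method === "POST") {
      const form = await request.clone().formData();
      for (const key of [...form.keys()].sort()) {
        payload += key + String(form.get(key));
      }
    }

    // HMAC-SHA1 over the payload with the account's auth token, base64-encoded.
    const key = await crypto.subtle.importKey(
      "raw",
      new TextEncoder().encode(env.TWILIO_AUTH_TOKEN),
      { name: "HMAC", hash: "SHA-1" },
      false,
      ["sign"],
    );
    const mac = await crypto.subtle.sign("HMAC", key, new TextEncoder().encode(payload));
    const expected = btoa(String.fromCharCode(...new Uint8Array(mac)));

    if (expected !== signature) return new Response("Forbidden", { status: 403 });
    return fetch(request); // Signature valid: pass the request through to the ALB origin.
  },
};
```

In production you would also want a constant-time string comparison for the signature check, but the simple equality keeps the sketch readable.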
I am supposed to send a response from my web service with an STS (Strict-Transport-Security) header, but the service itself sits behind an AWS ALB, which terminates SSL and forwards the traffic via plain HTTP. This seems to be a common scenario and is likely not limited to AWS; many LBs can terminate SSL, as this is a very useful feature.
I have read through some messages from people who have already had this issue and have not seen any answers that are anything other than a workaround. It seems to me to be a catch-22: the LB doesn't send the header because that isn't within its remit (according to one response from AWS support), and the target web server can't add it because that header may only be added to HTTPS responses, which the web server never processes.
So my question is: is the STS header really that important if my web service can only respond on an HTTPS endpoint (no HTTP endpoint, not even a redirect)? Or is it still vulnerable to things like MITM attacks?
Thanks in advance.
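For reference, the usual workaround looks like this: the application emits the header itself, keyed off the X-Forwarded-Proto header the ALB sets. A minimal sketch, assuming a Node/Express backend (the question doesn't name a stack, and the port is a placeholder):

```ts
import express from "express";

const app = express();

// Trust the ALB so req.secure reflects X-Forwarded-Proto rather than the
// plain-HTTP hop between the ALB and this server.
app.set("trust proxy", true);

app.use((req, res, next) => {
  // Only emit HSTS when the original, client-facing connection was HTTPS,
  // since the header is only meaningful over secure transport.
  if (req.secure) {
    res.setHeader("Strict-Transport-Security", "max-age=31536000; includeSubDomains");
  }
  next();
});

app.get("/", (_req, res) => res.send("ok"));
app.listen(8080);
```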
Setup: Play Framework application deployed on Amazon EC2 instances via ECS, Elastic Load Balancer in front. I want to allow only HTTPS requests for the application.
I found several ways to use HTTPS with Play, but what are the pros and cons, or which one is best practice for a (dockerized) Play app?
Enable HTTPS directly within Play (with -Dhttps.port or https.port in the config file).
Set up a front-end web server (e.g. Nginx) and let it handle the HTTP->HTTPS rewrite (example).
Implement a request filter in Play and redirect the requests within the application (as described here).
I'm not so keen on the first option, as I would have to manage the certificates separately on each instance, but I listed it for the sake of completeness.
One advantage I can see in the third approach is that the system architecture is simpler than in the second option and requires less configuration. Are there any disadvantages (e.g. performance) to using the third approach?
If you are using a load balancer, then you should request a free SSL certificate from AWS Certificate Manager (ACM) and attach that certificate to the load balancer.
To enable HTTP-to-HTTPS redirects, you simply need to check the X-Forwarded-Proto header that the load balancer passes to the server; if it is http, return a 301 redirect to the https URL. The article you linked covers this part, and a minimal sketch follows.
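The question is about Play, but the redirect logic is the same in any stack; here it is as an Express-style sketch for illustration (route and port are made up):

```ts
import express from "express";

const app = express();

// Behind an ELB, the original scheme arrives in X-Forwarded-Proto.
app.use((req, res, next) => {
  if (req.headers["x-forwarded-proto"] === "http") {
    // Permanent redirect to the same URL over HTTPS.
    return res.redirect(301, `https://${req.headers.host}${req.originalUrl}`);
  }
  next();
});

app.get("/", (_req, res) => res.send("secure")); // placeholder route
app.listen(9000);
```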
I have an Akka HTTP server running on an AWS EC2 auto-scaling cluster with an Application Load Balancer (ALB) in front. In addition to the ALB, we have a CloudFront distribution that is set up to serve static files.
We're facing an issue where all WebSocket connection requests from browsers to the backend fail with an "HTTP 400 Expected UpgradeToWebsocket header" error.
Upon further investigation, we found that clients are able to connect directly to the load balancer, but any connection request via CloudFront fails. Eventually I came across this page in the AWS CloudFront documentation, which says that CloudFront strips out any 'Upgrade' headers, which might be the reason clients are unable to connect.
To work around this, I enabled the "forward all headers" option (which disables caching), but it still didn't work. Moreover, I couldn't find any option to selectively disable CloudFront caching or bypass CloudFront altogether for certain URLs.
How do I work around this issue and ensure that WebSockets work through CloudFront? Or is this just not supported?
CloudFront is not the right solution for WebSockets: it is optimized for caching static web pages, whereas WebSocket traffic is mostly dynamic. ELB, on the other hand, supports both plain WebSockets (ws://) and secure WebSockets (wss://), and it can even be configured to handle the SSL handshake. However, you need to configure it with TCP listeners in order to keep the connection open while the server is transmitting. Here's how it's done:
Click "Create load balancer" in the EC2 load balancers tab
Select "Classic Load Balancer". You need that in order to do a simple TCP
Define the source and destination protocols (Choose TCP for plain web sockets) :
4. If you're doing doing secure web sockets you need to choose a certificate, like this:
5. Configure health checks, add instances and press "Create". Define the CNAME and you're all set.
Note that if you select "HTTP" or "HTTPS" as the source protocol, the load balancer will at some point return a 408 (timeout) error, since it's not designed to keep the connection open for long. That's the reason we chose TCP.
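If you prefer to script those console steps, here's a sketch using the AWS SDK for JavaScript's classic ELB client; the load balancer name, certificate ARN, and subnet ID are placeholders:

```ts
import {
  ElasticLoadBalancingClient,
  CreateLoadBalancerCommand,
} from "@aws-sdk/client-elastic-load-balancing";

const client = new ElasticLoadBalancingClient({ region: "us-east-1" });

async function createWebSocketElb(): Promise<void> {
  // TCP front and back keeps WebSocket connections open; the SSL listener
  // terminates wss:// at the ELB while still speaking plain TCP to the instances.
  await client.send(
    new CreateLoadBalancerCommand({
      LoadBalancerName: "ws-elb", // placeholder name
      Listeners: [
        { Protocol: "TCP", LoadBalancerPort: 80, InstanceProtocol: "TCP", InstancePort: 8080 },
        {
          Protocol: "SSL",
          LoadBalancerPort: 443,
          InstanceProtocol: "TCP",
          InstancePort: 8080,
          SSLCertificateId: "arn:aws:iam::123456789012:server-certificate/example", // placeholder
        },
      ],
      Subnets: ["subnet-0abc1234"], // placeholder
    }),
  );
}

createWebSocketElb().catch(console.error);
```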
Update
CloudFront announced support for Websockets on 2018-11-20.
CloudFront supports WebSocket connections globally with no required additional configuration. All CloudFront distributions have built-in WebSocket protocol support, as long as the client and server also both support the protocol.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/distribution-working-with.websockets.html
The client is responsible for re-establishing any websocket connection that is lost.
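A minimal reconnecting-client sketch using the browser WebSocket API (URL and backoff values are illustrative):

```ts
// CloudFront (or any intermediary) may drop idle connections, so the client
// re-dials with a capped exponential backoff.
function connect(url: string, retryMs = 1000): void {
  const ws = new WebSocket(url);

  ws.onopen = () => console.log("connected");
  ws.onmessage = (event) => console.log("message:", event.data);
  ws.onclose = () => {
    setTimeout(() => connect(url, Math.min(retryMs * 2, 30_000)), retryMs);
  };
}

connect("wss://d1234example.cloudfront.net/socket"); // placeholder distribution
```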
CloudFront does not currently support WebSockets.
Certain headers are stripped from requests even if you try to configure CloudFront to forward them. These are indicated in the table on the page you mentioned by "CloudFront removes the header" and Caching Based on Header Values Is Supported = "No".
From the AWS Forums:
Rest assured that the right people are aware of this feature request.
— Richard@AWS (2015-06-06)
https://forums.aws.amazon.com/thread.jspa?messageID=723375
I have some RESTful APIs deployed on AWS, mostly on Elastic Beanstalk.
My company is gradually adopting a Microservices architecture, and, therefore, I want to start managing these APIs in a more professional and automated way. Hence, I want to adopt some kind of API Manager to provide standard functionalities such as routing and discovery.
In addition, I wish to use such an API Manager to expose some of my APIs to the Internet. The manager would be exposed to the Internet through SSL only and should require some sort of authentication from external consumers before routing their requests to the internal APIs. For my use case, a simple API key in the Authorization header of every request would suffice.
I'm currently considering two products as API Managers: WSO2 and Kong. The latter is a somewhat new open-source project hosted on GitHub.
In all the deployment scenarios I am considering, the API Managers would have to be deployed on AWS EC2 instances. Moreover, they would have to be deployed in at least two different availability zones and behind an Elastic Load Balancer (ELB) to provide high availability to the managed APIs.
Most of my APIs adhere to the HATEOAS constraints. Therefore, many of their JSON responses contain links to other resources, which must be built dynamically based on the original request.
For instance:
If a user sent a request from the Internet through the exposed API Manager, the URL would look like:
https://apimanager.mycompany.com/accounts/123
As a result, the user should receive a JSON response containing an Account resource with a link to, let's say, a Subscription resource.
The link URL should be based on the protocol, host and port of the original request, and, therefore, would look like: https://apimanager.mycompany.com/subscriptions/789.
In order to meet the dynamic link generation requirements mentioned above, my APIs rely on the X-Forwarded-Proto, X-Forwarded-Host and X-Forwarded-Port HTTP headers. These should contain the protocol (http or https), the host name and the port used by the consumer in the original request, in spite of how many proxies the request passed through.
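As a concrete illustration of that link building, here is an Express-style handler sketch; only the header names come from the question, while the route, port, and resource shape are made up:

```ts
import express from "express";

const app = express();

app.get("/accounts/:id", (req, res) => {
  // Scheme, host and port as seen by the original client, per X-Forwarded-*.
  const proto = (req.headers["x-forwarded-proto"] as string) ?? "http";
  const host = (req.headers["x-forwarded-host"] as string) ?? req.headers.host;
  const port = req.headers["x-forwarded-port"] as string | undefined;

  // Omit the port when it is the default one for the scheme.
  const isDefault =
    (proto === "https" && port === "443") || (proto === "http" && port === "80");
  const base = `${proto}://${host}${port && !isDefault ? `:${port}` : ""}`;

  res.json({
    id: req.params.id,
    // e.g. https://apimanager.mycompany.com/subscriptions/789
    links: { subscription: `${base}/subscriptions/789` },
  });
});

app.listen(8080);
```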
However, I noticed that when requests pass through ELBs, the X-Forwarded-Proto and X-Forwarded-Port headers are changed to values that refer to the last ELB the request passed through, instead of the values that were in the original request.
For instance: if the original request hits the API Manager via HTTPS, the Manager forwards the request to the internal API via HTTP; thus, when the request hits the second ELB, the ELB changes the X-Forwarded-Proto header to "http". As a result, the original "https" value of the X-Forwarded-Proto header is lost, and the API is unable to build proper links with the "https" protocol in the URLs.
Apparently, ELBs can't be configured to behave in any other way. I couldn't find any setting that could affect this behavior in AWS's documentation.
Moreover, there doesn't seem to be any better alternative to AWS's ELBs. If I chose another product such as HAProxy, or did the load balancing through the API Manager itself, I would have to install it on a regular EC2 instance, thereby creating a single point of failure.
I'm including an informal diagram to better convey my point of view.
Furthermore, I couldn't find any relevant discussion about deployment scenarios for WSO2 or Kong that would address these matters in any way. It's not clear to me how these products should relate to AWS's ELBs.
Comments from others with similar environments will be very welcome.
Thank you.
Interesting question/challenge - I'm not aware of a way to configure an Elastic Load Balancer's X-Forwarded-* header behavior. However, you might be able to work around this by leveraging ELB's different listener types for the two supported network layers of the OSI Model:
TCP/SSL Listener without Proxy Protocol
Rather than using an HTTP listener (OSI layer 7), which makes sense for terminating SSL etc., you could just use the non-intrusive TCP/SSL listener (OSI layer 4) for your internal load balancers; see Protocols:
When you use TCP (layer 4) for both front-end and back-end connections, your load balancer forwards the request to the back-end instances without modifying the headers. [...] [emphasis mine]
I haven't tried this, but would expect the X-Forwarded-* headers added by the external HTTP/HTTPS load balancer to be passed through unmodified by the internal TCP/SSL load balancer in this scenario.
TCP/SSL Listener with Proxy Protocol
Alternatively, you could also leverage the more advanced/recent Proxy Protocol Support for Your Load Balancer right away, see the introductory blog post Elastic Load Balancing adds Support for Proxy Protocol for more on this:
Until today, ELB allowed you to obtain the client's IP address only if you used HTTP(S) load balancing, which adds this information in the X-Forwarded-For headers. Since X-Forwarded-For is used in HTTP headers only, you could not obtain the client's IP address if the ELB was configured for TCP load balancing. Many of you told us that you wanted similar functionality for TCP traffic, so we added support for Proxy Protocol. It simply prepends a human readable header with the client's connection information to the TCP data sent to your server. [...] Proxy Protocol is useful when you are serving non-HTTP traffic. Alternatively, you can use it if you are sending HTTPS requests and do not want to terminate the SSL connection on the load balancer. [...]
Unlike the X-Forwarded-* headers, proxy protocol handling can be enabled and disabled. On the flip side, your backend layers might not facilitate the proxy protocol out of the box and may need to be adapted accordingly.
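For a sense of what that backend adaptation involves, here is a sketch of reading the (human-readable) Proxy Protocol v1 line in a Node TCP server; the port is illustrative, and production code should buffer until the full CRLF-terminated line has arrived:

```ts
import * as net from "node:net";

net
  .createServer((socket) => {
    socket.once("data", (chunk) => {
      const text = chunk.toString("ascii");
      // Example of the prepended line: "PROXY TCP4 198.51.100.22 203.0.113.7 35646 80\r\n"
      if (text.startsWith("PROXY ")) {
        const line = text.slice(0, text.indexOf("\r\n"));
        const [, family, srcAddr, dstAddr, srcPort, dstPort] = line.split(" ");
        console.log(`client ${srcAddr}:${srcPort} -> ${dstAddr}:${dstPort} (${family})`);
        // The original application payload begins right after the CRLF.
      }
      socket.end();
    });
  })
  .listen(8080);
```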