How can I identify incoming requests from EKS LoadBalancer on app level? - amazon-web-services

I have a k8s service defined as type: LoadBalancer, which provisions an external load balancer. Can I identify at the application level that an incoming request was routed through that LoadBalancer?
Are there any guaranteed HTTP headers? Can I define custom headers for that service that would be added to all incoming requests?

If your internal ingress uses nginx as the ingress controller, you can configure it to add a custom header that marks traffic as having passed through the ingress.
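For example, a minimal sketch assuming the community ingress-nginx controller, using its configuration-snippet annotation to inject a marker header into every proxied request (the header name X-From-Ingress and the service name my-app are made up for illustration; newer ingress-nginx releases may also require snippet annotations to be enabled by the cluster admin):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app
  annotations:
    # Added to the generated nginx location block: sets a marker header
    # on every request proxied to the backend pods.
    nginx.ingress.kubernetes.io/configuration-snippet: |
      proxy_set_header X-From-Ingress "true";
spec:
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app
                port:
                  number: 80

The application can then treat any request carrying X-From-Ingress as having come through the ingress; a request that hits a pod directly would not have it.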

The ELB guide says:
Application Load Balancers and Classic Load Balancers add X-Forwarded-For, X-Forwarded-Proto, and X-Forwarded-Port headers to the request.
Have you tried using those?
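For illustration, a request that has passed through an ALB or Classic Load Balancer will typically arrive at the application with headers along these lines (values are hypothetical):

X-Forwarded-For: 203.0.113.45
X-Forwarded-Proto: https
X-Forwarded-Port: 443

Their presence is a rough indicator that the request came via the load balancer, but keep in mind that any client can set these headers itself, so don't treat them as a security boundary.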

Related

target group for multiple containers load balancer AWS

I have 3 containers deployed on ECS, and traffic is distributed by an Application Load Balancer. Swagger on each individual container can be accessed via e.g. 52.XX.XXX.XXX/swagger.
I need the services to be accessible via, for example:
52.XX.XXX.XXX/users/swagger
52.XX.XXX.XXX/posts/swagger
52.XX.XXX.XXX/comments/swagger
I've tried adding the following load balancer rules:
PATH /users* or /users/
PATH /posts* or /posts/
PATH /comments* or /comments/
I get a 404 error when I visit the load balancer DNS, for example myapp-lb-4283349.us-east-2.elb.amazonaws.com/users/swagger.
You can't achieve that with the AWS load balancer alone. An ALB can choose a target group based on path patterns, but it does not rewrite paths; it simply forwards the incoming request, path included, to the target.
Your services need to respond on 52.XX.XXX.XXX/users/swagger, 52.XX.XXX.XXX/posts/swagger, etc. themselves in order for the load balancer forwarding to work. You can't rewrite traffic at the load balancer like this:
LB_URL/users/swagger -> IP/swagger
Adding or stripping the /users/ part is not something the load balancer can do for you. Update the applications themselves to listen on the prefixed routes you want.
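As a rough sketch, the path rules themselves can be created like this with the AWS CLI (the listener and target group ARNs and the priority are placeholders), but again, the service behind the users target group must itself serve /users/swagger for this to return anything but a 404:

aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:us-east-2:123456789012:listener/app/myapp-lb/... \
  --priority 10 \
  --conditions Field=path-pattern,Values='/users/*' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:us-east-2:123456789012:targetgroup/users-tg/...

Repeat with /posts/* and /comments/* pointing at their respective target groups.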

Mapping different https subdomains to different ECS containers on same application load balancer?

I'm using the Docker ECS integration to deploy an app and a webservice it depends on. Both should be running over HTTPS, at different subdomains.
My problem is that both need to run over HTTPS, but since the ECS Docker integration only created one load balancer, it looks like I can only configure it to forward HTTPS traffic to one target group. Is there a way to get this to work?
Yes, when you add the HTTPS listener to the load balancer, set the default rule to forward to one of the target groups (probably the main web app). But then go back to the list of listeners and click "View/edit rules". You can then add a Host Header rule for each additional service. The host header just equals the domain name, including subdomain, e.g. service-A.example.com. That way one HTTPS listener can handle every subdomain on the same application load balancer. The documentation for this is here.
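A rough sketch of one such rule as a CloudFormation resource, assuming a shared HTTPS listener and a target group for service A (all names are placeholders):

ServiceARule:
  Type: AWS::ElasticLoadBalancingV2::ListenerRule
  Properties:
    ListenerArn: !Ref HttpsListener               # the single HTTPS (443) listener
    Priority: 10
    Conditions:
      - Field: host-header
        Values:
          - service-a.example.com                 # requests for this subdomain...
    Actions:
      - Type: forward
        TargetGroupArn: !Ref ServiceATargetGroup  # ...go to service A's target group

Anything that matches no rule falls through to the listener's default action (your main web app).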
Note: If HTTP and HTTPS listeners aren't available, you may have accidentally created a network load balancer. This happens when at least one of your services exposes a port other than 80 or 443. To force it to be an application load balancer (which will let you forward HTTP and HTTPS traffic), your docker-compose config needs to look like this:
test:
  image: mycompany/webapp
  ports:
    - target: 8080
      x-aws-protocol: http

Gatsby site serving on EC2 with pm2 node with aws classic load balancer needs https

I am running a Gatsby site in development mode as a dev server on EC2, with a load balancer pointing from port 80 to 8000. I have set up a CNAME on my domain's DNS to point to the load balancer, and this works fine. However, I need to display this page as an iframe in sanity.io as a web preview, and that requires HTTPS.
I've read through https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html and most of it is pretty straightforward.
What I have done so far: I created an HTTPS 443 listener on the load balancer and added HTTPS 443 to the security group. I have successfully issued a certificate for the subdomain I am using with AWS and attached it to the load balancer listener.
Gatsby has an article about custom certs for development mode here: https://www.gatsbyjs.org/docs/local-https/#custom-key-and-certificate-files. What I am looking for is the cert file, the authority file, and the key file in order to pass the command below.
Where in AWS Certificate Manager do I find these files? I think that is the last piece I need to get HTTPS working; correct me if I am wrong.
Thanks ahead of time.
gatsby develop --https --key-file ../relative/path/to/key.key --cert-file ../relative/path/to/cert.crt --ca-file ../relative/path/to/ca.crt
This is the process I used to request my certificate, and it says it's issued:
https://docs.aws.amazon.com/acm/latest/userguide/gs-acm-request-public.html
But how do I use it with the custom https command with gatsby?
There is an export option, but it says it's only for private keys. Do I need to create a private key so I can export the files I need?
Do I even need to run HTTPS on Gatsby's side? I watched a video using Apache, and no change was made to the Apache server to get HTTPS working with the load balancer.
Here is a screenshot of my load balancer listener
Here is an image of my security groups
If I run --https for gatsby develop, it breaks my site; I can no longer visit it via the load balancer or on port 8000. So I'm not sure what to do here.
I would suggest not to encrypt the connection between your ELB and the EC2 instances. If your EC2 instances are not publicly reachable, but only through the load balancer instead, it is best practice to terminate the SSL connection on the load balancer. No need to encrypt HTTP requests inside an AWS VPC (i.e. between ELB and target instances).
You can create a load balancer that listens on both the HTTP (80) and HTTPS (443) ports. If you specify that the HTTPS listener sends requests to the instances on port 80, the load balancer terminates the requests and communication from the load balancer to the instances is not encrypted. [1]
There is some discussion (e.g. on Kevin Burke's blog) about whether it is necessary to encrypt traffic inside a VPC. [2] However, most people are probably not doing it.
What this means for you: keep the same instance protocol for your targets as before, HTTP on port 8000, for both listeners. Do not set up SSL for your Gatsby service; use a plain HTTP server config instead. No changes are needed on the ELB targets when SSL is terminated on the load balancer.
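For reference, a hedged sketch of what that listener pair looks like when created with the AWS CLI (the load balancer name and certificate ARN are placeholders); the console setup described above should amount to the same thing:

aws elb create-load-balancer-listeners \
  --load-balancer-name my-gatsby-lb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=8000" \
              "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=8000,SSLCertificateId=arn:aws:acm:us-east-1:123456789012:certificate/example"

The key part is InstanceProtocol=HTTP: TLS stops at the ELB, and the Gatsby dev server keeps listening on plain HTTP on port 8000.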
References
[1] https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-create-https-ssl-load-balancer.html
[2] https://acloud.guru/forums/aws-certified-security-specialty/discussion/-Ld2pfsORD6ns5dDK5Y7/tlsssl-termination?answer=-LecNy4QX6fviP_ryd7x

How can I get the HTTP protocol version from AWS load balancer

I'm trying to build a reverse proxy behind an AWS Classic Load Balancer, and I want to use the Via header.
As I understand Via, I should add the protocol version (e.g. HTTP/1.1, or just 1.1) to indicate the protocol version of the upstream client's request.
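For example, something like this, where 1.1 is the protocol version of the request my proxy received (the pseudonym is arbitrary):

Via: 1.1 my-reverse-proxy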
However, I don't see anywhere in the AWS documentation that the load balancer will pass that information on to my EC2 instances. Indeed, there is apparently no such thing as an X-Forwarded-Proto-Version header.
So how can I know the protocol version from behind an AWS Classic Load Balancer? What about the new Application Load Balancer? If that will do it where the Classic will not, I can upgrade.
EDIT (Apr 2019): We have since upgraded to an Application Load Balancer (ALB) for other reasons, and I can confirm that the ALB doesn't send the Via header, nor any other header that carries the client-to-ALB HTTP version. But ALB (and CLB) are proxies, so shouldn't they include the Via header?

Routing based on request headers (using AWS Application Load Balancer)

A Layer 7 load balancer is more sophisticated and more powerful. It inspects packets, has access to HTTP and HTTPS headers, and (armed with more information) can do a more intelligent job of spreading the load out to the target.
https://aws.amazon.com/blogs/aws/new-aws-application-load-balancer/
I understand the AWS Application Load Balancer has access to the HTTP(S) request headers, but I can only see how to route via the path. Can someone explain how I can route based on the User-Agent header? If it's not possible, please suggest an alternative AWS method.
As of 2017-05-26, ALB does not have header-based routing. An update on 2017-04-05 added host-based routing, so it currently supports only path-based and host-based routing. You can visit here for the latest AWS information.
If you want to route based on headers, there are currently no options in ALB.
You have to add an additional layer, such as a proxy or nginx servers.
The flow can be something like this:
The client calls https://example.com
example.com's DNS is configured to point at the ALB
The ALB has a target group attached to it containing nginx instances. The nginx instances route to the respective load balancer based on the header information (e.g. if customerId is 123, route to ELB 1, else route to ELB 2)
The two ELBs have different EC2 instances attached to them
But I've heard AWS is working on routing requests based on headers.
For anyone looking now: as of March 27, 2019, ALBs support routing based on HTTP headers other than the Host header.
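A hedged sketch of such a rule with the AWS CLI (ARNs and the match pattern are placeholders); note that header conditions need the JSON form of --conditions rather than the shorthand syntax:

aws elbv2 create-rule \
  --listener-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:listener/app/my-alb/... \
  --priority 5 \
  --conditions '[{"Field":"http-header","HttpHeaderConfig":{"HttpHeaderName":"User-Agent","Values":["*Mobile*"]}}]' \
  --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/mobile-tg/...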