Invalid X-Forwarded-For header with Cloudflare and ingress-nginx

I'm having some trouble getting the correct IP headers. I am using the following proxy setup:
Cloudflare -> Amazon NLB -> Ingress-nginx (k8s)
My ingress-nginx has the following config:
config:
  use-forwarded-headers: "true"
  real-ip-header: "CF-Connecting-IP"
  forwarded-for-header: "CF-Connecting-IP"
  set-real-ip-from: "0.0.0.0/0"
  proxy-buffer-size: "16k"
  proxy-buffers-number: "8"
For some reason the x-real-ip header is correct, but the x-forwarded-for header is not:
REMOTE ADDR: 127.0.0.1
X FORWARDED FOR: 10.0.102.38 <- Wrong
X-REAL-IP: xx.xxx.xxx.xxx <- Correct
The ingress-nginx load balancer (NLB) has:
External Traffic Policy: Local
as per the docs, and the following annotations:
service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: "3600"
service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
service.beta.kubernetes.io/aws-load-balancer-ssl-cert: xxxx
service.beta.kubernetes.io/aws-load-balancer-ssl-ports: https
service.beta.kubernetes.io/aws-load-balancer-type: nlb
To be on the safe side I also set externalTrafficPolicy: Local on the app's service, but to no avail.
Enabling use-proxy-protocol: "true" in the config breaks the app (probably because of Cloudflare):
2021-09-19 12:42:17
" while reading PROXY protocol, client: x.x.x.x, server: 0.0.0.0:80
Any help would be appreciated.

Cloudflare supports the X-Forwarded-For header, so you could try configuring:
config:
  [...]
  forwarded-for-header: "X-Forwarded-For"
You seem to be able to read CF-Connecting-IP correctly in your setup, so X-Forwarded-For should also be available in the request coming from Cloudflare.
For more information you can refer to the support article.
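For reference, here is a minimal sketch of how the resulting configuration could look as the controller's ConfigMap. The ConfigMap name and namespace below are assumptions based on a default ingress-nginx install; adjust them to match your deployment. The keys are the ones from your question, with only forwarded-for-header changed as suggested:

apiVersion: v1
kind: ConfigMap
metadata:
  name: ingress-nginx-controller   # assumed default name
  namespace: ingress-nginx         # assumed default namespace
data:
  use-forwarded-headers: "true"
  # keep reading the real client IP from Cloudflare's header...
  real-ip-header: "CF-Connecting-IP"
  # ...but build X-Forwarded-For from the standard header Cloudflare also sends
  forwarded-for-header: "X-Forwarded-For"
  set-real-ip-from: "0.0.0.0/0"

With this, X-Forwarded-For should be rebuilt from the header Cloudflare sends, while X-Real-IP keeps coming from CF-Connecting-IP.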

Related

Nginx ingress controller of type internal nlb giving 400 "The plain HTTP request was sent to HTTPS port" error

I have installed the nginx ingress controller inside an EKS cluster, fronted by an internal NLB.
The ingress controller created a network load balancer with listeners on 80 and 443.
With a plain TCP listener on port 443 we can't attach an SSL cert; only when I use the TLS listener type am I able to add an SSL cert from AWS ACM.
Now the issue: I am trying to expose a frontend application through this NLB nginx ingress controller.
When the NLB listener port is 443 (TCP), I can access the application, but the browser complains about the SSL cert (the fake Kubernetes cert). When I change the listener from 443 to TLS in the NLB, it throws the error 400 "The plain HTTP request was sent to HTTPS port".
Like many solutions out there suggest, I tried changing the targetPort from https: https to https: http, but with that I get the same kind of failure: "The page isn't working, ERR_TOOMANY_REQUESTS".
Could anyone help me resolve this issue?
Any ideas or suggestions would be highly appreciated.
To resolve the SSL certificate issue and the 400 "The plain HTTP request was sent to HTTPS port" error, you may need to modify your ingress configuration so that the ingress listens for HTTPS traffic on port 443. This can be done by adding the following annotations to your ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/secure-backends: "true"
  name: example
  namespace: example
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example
            port:
              name: https
  tls:
  - hosts:
    - example.com
    secretName: example-tls
In the example above, nginx.ingress.kubernetes.io/ssl-redirect tells the ingress to redirect HTTP traffic to HTTPS, and nginx.ingress.kubernetes.io/secure-backends tells the ingress to encrypt the traffic between the ingress and the backend services (on recent ingress-nginx versions this annotation has been replaced by nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"). secretName references the Kubernetes TLS secret holding the certificate for example.com.
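Separately, if TLS is terminated at the NLB itself (the TLS listener type mentioned in the question), the controller's Service usually needs to send the decrypted traffic to the plain-HTTP port of ingress-nginx. Below is a hedged sketch of what that Service could look like, assuming a stock ingress-nginx install; the name and the certificate ARN are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx-controller   # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: nlb
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:...   # placeholder ACM ARN
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: http
  - name: https
    port: 443
    targetPort: http   # TLS ends at the NLB, so 443 forwards to the controller's HTTP port

Note that with TLS ending in front of nginx, nginx itself only ever sees plain HTTP, so an ssl-redirect on the Ingress can loop forever; that may be what the ERR_TOOMANY_REQUESTS symptom described above was.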

Translate nginx ingress rule with snippet to Istio

I have an nginx ingress controller that I expose services with, and we plan to change to Istio for ingress traffic.
I have an ingress rule that contains a snippet:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~* "^/" {
        proxy_pass "https://127.0.0.1";
        proxy_set_header Host $http_x_forwarded_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_connect_timeout 10s;
        proxy_send_timeout 120s;
        proxy_read_timeout 120s;
        client_max_body_size 300m;
      }
  name: foo
spec:
  ingressClassName: bar
  rules:
  - host: foo.bar
  tls:
  - hosts:
    - foo.bar
This ingress copies http_x_forwarded_host into the Host header and forwards the request.
Any ideas on how to convert this rule to Istio?
Thanks.
Marco
Welcome to SO!
It should theoretically be doable with the Istio building blocks below:
Use regex-based rewrites:
nginx.ingress.kubernetes.io/rewrite-target => EnvoyFilter on the HTTP_ROUTE object
(an example can be found on GitHub here)
Forward 'X-Forwarded-For'/'X-Real-IP' headers to the upstream host:
If your application needs to know the real client IP address, use the Gateway Network Topology (Alpha) feature.
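For the basic routing part of that Ingress, a minimal Gateway/VirtualService pair could look like the sketch below. The names, TLS secret, and backend service are assumptions; the dynamic Host-header copy from $http_x_forwarded_host would still need an EnvoyFilter as mentioned above:

apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: foo-gateway              # hypothetical name
spec:
  selector:
    istio: ingressgateway        # the default Istio ingress gateway
  servers:
  - hosts:
    - foo.bar
    port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      credentialName: foo-bar-cert   # hypothetical TLS secret
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: foo
spec:
  hosts:
  - foo.bar
  gateways:
  - foo-gateway
  http:
  - route:
    - destination:
        host: foo-svc            # hypothetical backend service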
Remark:
The source manifest you attached seems to suffer from a known issue in the latest nginx ingress controller, which shows up as the following error in my environment:
Error from server (BadRequest): error when creating "STDIN": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request:
-------------------------------------------------------------------------------
Error: exit status 1
2021/06/21 11:05:45 [emerg] 851#851: invalid number of arguments in "proxy_set_header" directive in /tmp/nginx-cfg063051389:453
nginx: [emerg] invalid number of arguments in "proxy_set_header" directive in /tmp/nginx-cfg063051389:453
nginx: configuration file /tmp/nginx-cfg063051389 test failed

Istio - Terminate TLS at AWS NLB

I'm using EKS and the latest Istio installed via Helm. I'm trying to implement TLS based on a wildcard cert we have for our domain in AWS Certificate Manager. I'm running into a problem where the connection between the client and the NLB works, with TLS being terminated there, but the NLB can't talk to the Istio LB over the secure port. In the AWS console I can rewrite the forwarding rules to forward traffic from port 443 to the standard Istio HTTP target, but I can't find a way to do this via code. I'm trying to avoid all click-ops. Here are my Helm overrides for the gateway:
gateways:
  istio-ingressgateway:
    serviceAnnotations:
      service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:XXXXXXXXXXXXXXXXXX:certificate/XXXXXXXXXXXXXXXXXXXX"
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
So what I'm expecting to occur here is:
Client:443 --> NLB:443 --> Istio Gateway:80
but what I end up with is
Client:443 --> NLB:443 --> Istio Gateway:443
Does anyone have any thoughts on how to get this to work via code? Alternatively, if someone can point me to a resource on getting TLS communication between the NLB and Istio working, I'm good with that too.
Probably what is happening is that if you terminate TLS on the load balancer, it won't carry SNI to the target group. I had the exact same issue and ended up solving it by setting the host as '*' on the ingress Gateway and then specifying the hosts on the different VirtualServices (as recommended here and also in Istio's official docs).
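A minimal sketch of that arrangement, with placeholder names (the Gateway keeps hosts: '*' while each VirtualService pins its own host):

apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app                   # placeholder
spec:
  hosts:
  - app.example.com              # the concrete host; the Gateway itself stays on '*'
  gateways:
  - istio-system/my-gateway      # placeholder <namespace>/<name> of the wildcard Gateway
  http:
  - route:
    - destination:
        host: my-app-svc         # placeholder backend service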
Your service annotations are already correct; what is missing is to change the Istio gateway's port 443 protocol to HTTP:
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: http-gateway-external
  namespace: istio-ingress
spec:
  selector:
    istio: gateway-external
  servers:
  - hosts:
    - '*'
    port:
      name: http
      number: 80
      protocol: HTTP
  - hosts:
    - '*'
    port:
      name: https
      number: 443
      protocol: HTTP # Change from HTTPS to HTTP

Pointing domain to Amazon ALB - Kubernetes

I have a cluster created using eksctl and valid certificates created under ACM. I used the DNS method to verify domain ownership, and it completed successfully.
Below are the details I see when executing kubectl get ing:
NAME                   HOSTS         ADDRESS                                                   PORTS   AGE
eks-learning-ingress   my-host.com   b29c58-production-91306872.us-east-1.elb.amazonaws.com    80      18h
When I access the application using https://b29c58-production-91306872.us-east-1.elb.amazonaws.com, it loads the application with a security warning, because that is not the domain name for which the certificates were created. When I try https://my-host.com, I get a timeout.
I have 2 questions:
1) I am using a CNAME to point my domain to the AWS ELB. The values I added for the CNAME are
name: my-host.com, points to: b29c58-production-91306872.us-east-1.elb.amazonaws.com. Is this correct?
2) Below is my ingress resource definition; maybe I am missing something, as requests are not coming in to the application:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: eks-learning-ingress
  namespace: production
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/certificate-arn: arn:dseast-1:255982529496:sda7-a148-2d878ef678df
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}, {"HTTP": 8080}, {"HTTPS": 8443}]'
  labels:
    app: eks-learning-ingress
spec:
  rules:
  - host: my-host.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: eks-learning-service
          servicePort: 80
Any help would be really great. Thanks.
My go-to solution is using an A-record in Route 53. Instead of adding an IP, you select the "alias" option and select your load balancer. Amazon will handle the rest.
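If you want to keep this out of the console as well, here is a hedged CloudFormation sketch of such an alias record. The logical name is a placeholder, and note that AliasTarget's HostedZoneId must be the load balancer's regional canonical zone ID, not your own hosted zone's ID:

Resources:
  EksLearningAliasRecord:             # placeholder logical name
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneName: my-host.com.    # your public hosted zone (note the trailing dot)
      Name: my-host.com
      Type: A
      AliasTarget:
        DNSName: b29c58-production-91306872.us-east-1.elb.amazonaws.com
        HostedZoneId: Z35SXDOTRQ7X7K  # canonical hosted zone ID for ELBs in us-east-1; verify for your region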
I think you have a port mismatch. https:// will use port 443, not port 80, but your ingress appears to be accepting requests on port 80 rather than 443.
If 443 was configured I'd expect to see it listed under ports as 80, 443
Can you verify with telnet or nc or curl -vvvv that your ingress is actually accepting requests on port 443? If it is, check the response body reported by curl - it should give you some indication as to why the request is not propagating downwards to your service.
We use nginx-ingress, so unfortunately I can't look at a local ingress config and compare it to yours.

Cannot access airflow web server via AWS load balancer HTTPS because airflow redirects me to HTTP

I have an Airflow web server configured on EC2; it listens at port 8080.
I have an AWS ALB (Application Load Balancer) in front of the EC2 instance, listening at HTTPS 80 (facing the internet), with the instance target port facing HTTP 8080.
I cannot browse https://<airflow link> from a browser because the Airflow web server redirects me to http://<airflow link>/admin, which the ALB does not listen at.
If I browse https://<airflow link>/admin/airflow/login?next=%2Fadmin%2F from a browser, then I see the login page, because this link does not redirect me.
My question is how to change Airflow so that when browsing https://<airflow link>, the Airflow web server redirects me to https://..., not http://...,
so that the AWS ALB can process the request.
I tried to change base_url in airflow.cfg from http://localhost:8080 to https://localhost:8080, according to the answer below, but I do not see any difference with my change...
Anyway, how do I access https://<airflow link> via the ALB?
Since they're using Gunicorn, you can configure the forwarded_allow_ips value as an environment variable instead of having to use an intermediary proxy like Nginx.
In my case I just set FORWARDED_ALLOW_IPS = * and it's working perfectly fine.
In ECS you can set this in the webserver task configuration if you're using one Docker image for all the Airflow tasks (webserver, scheduler, worker, etc.).
Finally I found a solution myself.
I introduced an nginx reverse proxy between the ALB and the Airflow web server, i.e.:
https request -> ALB:443 -> nginx proxy:80 -> web server:8080. I make the nginx proxy tell the Airflow web server that the original request was https, not http, by adding the HTTP header "X-Forwarded-Proto https".
The nginx server is co-located with the web server. I set its config in /etc/nginx/sites-enabled/vhost1.conf (see below), and I deleted the /etc/nginx/sites-enabled/default config file.
server {
    listen 80;
    server_name <domain>;
    index index.html index.htm;

    location / {
        proxy_pass_header Authorization;
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
    }
}
User user389955's own solution is probably the best approach, but for anyone looking for a quick fix (or wanting a better idea of what is going on), this seems to be the culprit.
In the following file (the Python distro may differ):
/usr/local/lib/python3.5/dist-packages/gunicorn/config.py
The following section prevents forwarded-for headers from anything other than localhost:
class ForwardedAllowIPS(Setting):
    name = "forwarded_allow_ips"
    section = "Server Mechanics"
    cli = ["--forwarded-allow-ips"]
    meta = "STRING"
    validator = validate_string_to_list
    default = os.environ.get("FORWARDED_ALLOW_IPS", "127.0.0.1")
    desc = """\
        Front-end's IPs from which allowed to handle set secure headers.
        (comma separate).

        Set to ``*`` to disable checking of Front-end IPs (useful for setups
        where you don't know in advance the IP address of Front-end, but
        you still trust the environment).

        By default, the value of the ``FORWARDED_ALLOW_IPS`` environment
        variable. If it is not defined, the default is ``"127.0.0.1"``.
        """
Changing from 127.0.0.1 to specific IPs (or * if the IPs are unknown) will do the trick.
At this point, I haven't found a way to set this parameter from within the Airflow config itself. If I find a way, I will update my answer.
We solved this problem in my team by adding an HTTP listener to our ALB that redirects all HTTP traffic to HTTPS (so we have an HTTP listener AND an HTTPS listener). Our Airflow webserver tasks still listen on port 80 for HTTP traffic, but this HTTP traffic is only in our VPC, so we don't care. The connection from the browser to the load balancer is always HTTPS, or HTTP that gets rerouted to HTTPS, and that's what matters.
Here is the Terraform code for the new listener:
resource "aws_lb_listener" "alb_http" {
load_balancer_arn = aws_lb.lb.arn
port = 80
protocol = "HTTP"
default_action {
type = "redirect"
redirect {
port = "443"
protocol = "HTTPS"
status_code = "HTTP_301"
}
}
}
Or if you're an AWS console kinda person, here's how you set up the default action for the listener:
[screenshot: configuring the listener's default redirect action in the console]
I think you have everything working correctly. The redirect you are seeing is expected as the webserver is set to redirect from / to /admin. If you are using curl, you can pass the flag -L / --location to follow redirects and it should bring you to the list of DAGs.
Another good endpoint to test on is https://<airflow domain name>/health (with no trailing slash, or you'll get a 404!). It should return "The server is healthy!".
Be sure you have https:// in the base_url under the webserver section of your airflow config.
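If editing airflow.cfg on the instance is awkward, the same webserver setting can also be supplied through Airflow's standard AIRFLOW__SECTION__KEY environment-variable override. A sketch, with a placeholder domain, in the same Name/Value style used for ECS task definitions elsewhere in this thread:

# e.g. in the Environment block of a container/task definition
- Name: AIRFLOW__WEBSERVER__BASE_URL
  Value: https://airflow.example.com   # placeholder domain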
Digging into the gunicorn documentation: it seems to be possible to pass any command line argument (when gunicorn command is called) via the GUNICORN_CMD_ARGS environment variable.
So what I'm trying out is setting GUNICORN_CMD_ARGS=--forwarded-allow-ips=* since all the traffic will come to my instance from the AWS ALB... I guess the wildcard could be replaced with the actual IP of the ALB as seen by the instance, but that'll be the next step...
Since I'm running on ECS, I'm passing it as:
- Name: GUNICORN_CMD_ARGS
  Value: --forwarded-allow-ips=*
in the Environment of my task's container definition.
PS: from the docs, this possibility was added as of gunicorn 19.7; for comparison, Airflow 1.10.9 seems to be on gunicorn 19.10, so you're good to go with any (more or less) recent version of Airflow!
I encountered this issue too when using the official Apache Airflow Helm chart (version 1.0.0).
Problem
Originally I had configured the webserver service with type LoadBalancer.
webserver:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:1234512341234:certificate/231rc-r12c3h-1rch3-1rch3-rc1h3r-1r3ch
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
This resulted in the creation of a classic elastic load balancer.
This mostly worked, but when I clicked on the Airflow logo (which links to https://my-domain.com), I'd get redirected to http://my-domain.com/home, which failed because the load balancer was configured to use HTTPS only.
Solution
I resolved this by installing the AWS Load Balancer Controller on my EKS cluster and then configuring ingress.
The ingress-related portion of the chart config looks like this:
ingress:
  enabled: true
  web:
    host: my-airflow-address.com
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/subnets: subnet-01234,subnet-01235,subnet-01236
      alb.ingress.kubernetes.io/scheme: internal # if in private subnets
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
webserver:
  service:
    type: NodePort
Notes
It might be possible to configure the webserver to use an ALB instead of a classic ELB and configure it to handle the HTTP routing, but I have not tested it.
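If anyone wants to try that, a hedged sketch of the extra annotations for the same chart config, assuming AWS Load Balancer Controller v2.2 or later (which supports the ssl-redirect action):

ingress:
  enabled: true
  web:
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTP": 80}, {"HTTPS": 443}]'
      alb.ingress.kubernetes.io/ssl-redirect: "443"   # redirect HTTP to HTTPS at the ALB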