EKS ALB Ingress Controller configure add_header - amazon-web-services

I currently have a Kubernetes cluster on AWS (EKS).
For the ingress I have an ingress controller deployed.
I have a deployment with a pod that runs two containers: a PHP container and an Nginx container. The Nginx container only acts as a proxy, and I would like to remove it.
Currently the nginx.conf has the following, which I don't know how to pass to the ALB Ingress:
if ($request_method = 'POST') {
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'DNT, User-Agent, X-Requested-With, If-Modified-Since, Cache-Control, Content-Type, Range';
    add_header 'Access-Control-Expose-Headers' 'Content-Length, Content-Range';
}
I don't know whether it is possible to pass these add_header directives to the ALB Ingress. Does anyone know if it can be done, or whether it is instead necessary to install an Nginx Ingress Controller?
Thanks

I think this will help with your question: https://gitanswer.com/how-to-config-cors-with-alb-go-aws-load-balancer-controller-485142972
Because the ALB Ingress Controller only provisions an ALB that routes traffic to your Service, it cannot add response headers like that. As you said, working with the Nginx Ingress Controller will solve your problem.
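If you do move to the NGINX Ingress Controller, the same CORS behaviour can usually be expressed with ingress-nginx annotations instead of a hand-written add_header block. A minimal sketch, assuming a hypothetical Ingress/Service named php-app and host example.com (note that, unlike the if ($request_method = 'POST') guard above, these annotations enable CORS handling for the whole Ingress):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: php-app
  annotations:
    # ingress-nginx CORS annotations roughly equivalent to the add_header block above
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
    nginx.ingress.kubernetes.io/cors-allow-headers: "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range"
    nginx.ingress.kubernetes.io/cors-expose-headers: "Content-Length,Content-Range"
spec:
  ingressClassName: nginx
  rules:
  - host: example.com        # hypothetical host
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: php-app    # hypothetical Service fronting the PHP container
            port:
              number: 80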

Related

ECS Fargate docker internal networking with service discovery not working

I am running a few microservices on ECS Fargate. I need these docker containers to be able to communicate with each other. Right now both containers are simply running NGINX to get a proof of concept up and running.
I have set up a private DNS namespace and created a service for each docker container, e.g. service1.local and service2.local.
I then created the ECS services and linked the service discovery, each ECS service now has the .local namespace attached.
Right now both ECS services are in a public subnet with public IPs and are reachable individually. However, reaching service B from service A is not possible, i.e. the two containers cannot communicate with each other.
I get the following error: service1.local could not be resolved (5: Operation refused).
I attached an EC2 instance to the same VPC and subnets and was able to ping each service via service1.local and service2.local, curl them, etc., so service discovery is working as expected in that situation.
Right now security groups allow traffic from all ports and all IPs (just whilst testing)
Below is a barebones copy of my NGINX config for service1:
server {
    listen 80;
    server_name localhost;
    resolver ${PRIVATE_DNS_RESOLVER} valid=10s;
    set $serverHost http://service2.local:8000;

    location /status {
        access_log off;
        add_header 'Content-Type' 'application/json';
        return 200 '{"status":"OK"}';
    }

    # Proxy to server container
    location /api {
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass $serverHost$request_uri;
    }

    # Route for client
    location / {
        root /usr/share/nginx/html;
        index index.html;
        try_files $uri $uri/ /index.html;
    }

    # Route all errors back to index.html for react router to handle
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
}
Hitting the public IP of this service followed by /api is what should proxy to the second service via service2.local
tl;dr
Both containers are accessible via public IPs.
Both containers are accessible from within the VPC using the service discovery private namespace.
Container 1 cannot resolve the hostname of container 2.
The VPC has DNS hostnames enabled.
The resolver here is one of the NS records from the private namespace. Setting it to 127.0.0.1 or 127.0.0.11 gave the same error.

Django behind NGINX reverse proxy and AWS Application Load Balancer doesn't get HTTPS forwarded from client in HTTP_X_FORWARDED_PROTO

I'm running Django on Gunicorn behind a NGINX reverse proxy, and an AWS Application Load Balancer. The ALB has 2 listeners. The HTTPS listener forwards to a Target Group in port 80, and the HTTP listener redirects to HTTPS.
The ALB thus connects with an AWS ECS Cluster, which runs a task with 2 containers: one for the Django app, and another for the NGINX that acts as a reverse proxy for the Django app. Here's the configuration for the NGINX reverse proxy:
upstream django {
    server web:8000;
}

server {
    listen 80;
    listen [::]:80;

    location / {
        proxy_pass http://django;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        proxy_set_header X-Forwarded-Port $server_port;
        proxy_ssl_server_name on;
        proxy_redirect off;
    }
}
This configuration ensures that whenever the client tries to hit the app with an HTTP request, they get redirected to HTTPS. And everything works fine, with one exception: in Django, when I run request.is_secure() I get False instead of True as expected, and if I run request.build_absolute_uri() I get http://mywebsite.com and not https://mywebsite.com as expected.
I already tried adding the following lines to settings.py:
USE_X_FORWARDED_HOST = True
SECURE_PROXY_SSL_HEADER = ('HTTP_X_FORWARDED_PROTO', 'https')
as explained in the documentation, but it doesn't work. Whenever I inspect request.META (or the raw request.headers), I'm seeing 'HTTP_X_FORWARDED_PROTO': 'http' (and the equivalent raw 'X-Forwarded-Proto': 'http') instead of https as expected. The stack is correctly forwarding 'HTTP_X_FORWARDED_HOST': 'mywebsite.com' from the client, but the scheme is being ignored.
Can anyone help me identify what I'm doing wrong and how to fix it? Thanks
With a Classic ELB you specify the "instance port" (see the listeners tab), and that controls the protocol sent downstream to nginx. In that scenario it is common to attach an SSL cert to the 443 listener but send plain HTTP down port 80 to nginx; the port 80 listener also sends HTTP. In that setup, where only HTTP is coming in from the load balancer, it is your job to inspect the X-Forwarded-Proto header and perform a permanent redirect to HTTPS, because the Classic ELB could not redirect HTTP to HTTPS itself. With the Application Load Balancer (ALB) I believe you can redirect HTTP to HTTPS, and you can speak HTTPS to nginx on 443 if you want.
In your specific case it seems like nginx is only listening on port 80, so the load balancer is only sending it plain HTTP. Check your instance port and protocol. Note also that the nginx config above sets X-Forwarded-Proto to $scheme; since nginx receives plain HTTP, that overwrites the https value the ALB already put in that header, which is why Django sees http. Passing the load balancer's own header value through (e.g. $http_x_forwarded_proto) instead of $scheme preserves the original scheme.

Translate nginx ingress rule with snippet to Istio

I have an nginx ingress controller and expose services with it; we plan to change to Istio for ingress traffic.
I have an ingress rule that contains a snippet:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$1
    nginx.ingress.kubernetes.io/server-snippet: |
      location ~* "^/" {
        proxy_pass "https://127.0.0.1";
        proxy_set_header Host $http_x_forwarded_host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_connect_timeout 10s;
        proxy_send_timeout 120s;
        proxy_read_timeout 120s;
        client_max_body_size 300m;
      }
  name: foo
spec:
  ingressClassName: bar
  rules:
  - host: foo.bar
  tls:
  - hosts:
    - foo.bar
This ingress copies http_x_forwarded_host into Host and sends it on to nginx ingress. Is there any idea how to convert this rule to Istio?
Thanks.
Marco
Welcome to SO!
It should theoretically be doable with the Istio building blocks below:
Use regex based rewrites
nginx.ingress.kubernetes.io/rewrite-target => EnvoyFilter to HTTP_ROUTE object
(example to be found on github here)
Forward 'X-Forwarded-For/X-Real-IP' headers to upstream host
If your application needs to know the real client IP address use the Gateway Network Topology (Alpha) feature.
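To make the mapping a bit more concrete, below is a partial VirtualService sketch (hypothetical names foo-gateway and foo-service; it only covers the routing and timeout parts, while the regex rewrite and copying X-Forwarded-Host into Host would still rely on the EnvoyFilter approach mentioned above):
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: foo
spec:
  hosts:
  - foo.bar
  gateways:
  - foo-gateway             # hypothetical Istio Gateway terminating TLS for foo.bar
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: foo-service   # hypothetical backing Service
        port:
          number: 443
    timeout: 120s           # roughly covers proxy_send_timeout / proxy_read_timeout
    # The rewrite-target regex and copying $http_x_forwarded_host into Host are not
    # plain VirtualService fields; they need the EnvoyFilter mentioned above, and the
    # real client IP is handled via the Gateway Network Topology feature instead.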
Remark:
The source manifest you attached seems to be suffering from a known issue with the latest nginx ingress controller, which shows up as the following error in my environment:
Error from server (BadRequest): error when creating "STDIN": admission webhook "validate.nginx.ingress.kubernetes.io" denied the request:
-------------------------------------------------------------------------------
Error: exit status 1
2021/06/21 11:05:45 [emerg] 851#851: invalid number of arguments in "proxy_set_header" directive in /tmp/nginx-cfg063051389:453
nginx: [emerg] invalid number of arguments in "proxy_set_header" directive in /tmp/nginx-cfg063051389:453
nginx: configuration file /tmp/nginx-cfg063051389 test failed

How to solve 502 Bad Gateway errors with Elastic Load Balancer and EC2/Nginx for HTTPS requests?

I'm running into '502 Bad Gateway' issues for HTTPS requests when using AWS Elastic Load Balancer (Application type) in front of EC2 instances running Nginx. Nginx is acting as a reverse proxy on each instance for a waitress server serving up a python app (Pyramid framework). I'm trying to use TLS termination at the ELB so that the EC2 instances are only dealing with HTTP. Here's the rough setup:
Client HTTPS request > ELB (listening on 443, forwarding to 80 on backend) > Nginx listening on port 80 (on Ec2 instance) > forwarded to waitress/Pyramid (on same ec2 instance)
When I make requests on HTTPS I get the 502 error. However, when I make regular HTTP requests I get a response as expected (same setup as above except ELB is listening on port 80).
Some additional info:
ELB health checks are working.
All VPC/Security groups are configured correctly (I believe).
I'm using an AWS certificate on the ELB using the standard setup/walkthrough on AWS.
I SSH'd into the EC2 instance, and in the Nginx access log it looks like the HTTPS requests are still encrypted? Or some encoding issue?
And here's nginx.conf on the EC2 instance:
#user nobody;
worker_processes 1;

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;
    access_log /etc/nginx/access.log;
    sendfile on;

    # Configuration containing list of application servers
    upstream app_servers {
        server 127.0.0.1:6543;
    }

    server {
        listen 80;
        server_name [MY-EC2-SERVER-NAME];

        # Proxy connections to the application servers
        # app_servers
        location / {
            proxy_pass http://app_servers;
            proxy_redirect off;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
Ok I figured it out (I'm a dummy). I had two listeners set up on the ELB, one for 80 and one for 443, which was correct. The listener for 80 was set up correctly to forward to backend (Nginx) port 80 over HTTP as expected. The 443 listener was INCORRECTLY configured to send to port 80 on the backend over HTTPS. I updated the 443 listener to use the same rule as the 80 listener (i.e. listen on 443 but send to backend 80 over HTTP) and it worked. Disregard y'all.

Cannot access airflow web server via AWS load balancer HTTPS because airflow redirects me to HTTP

I have an airflow web server configured at EC2, it listens at port 8080.
I have an AWS ALB(application load balancer) in front of the EC2, listen at https 80 (facing internet) and instance target port is facing http 8080.
I cannot browse to https://< airflow link > from the browser because the airflow web server redirects me to http://< airflow link >/admin, which the ALB does not listen on.
If I browse to https://< airflow link >/admin/airflow/login?next=%2Fadmin%2F, then I see the login page, because this link does not redirect me.
My question is how to change airflow so that when I visit https://< airflow link >, the airflow web server redirects me to https://..., not http://..., so that the AWS ALB can process the request.
I tried to change base_url in airflow.cfg from http://localhost:8080 to https://localhost:8080, according to the answer below, but I do not see any difference with my change.
Anyway, how do I access https://< airflow link > through the ALB?
Since they're using Gunicorn, you can configure the forwarded_allow_ips value as an environment variable instead of having to use an intermediary proxy like Nginx.
In my case I just set FORWARDED_ALLOW_IPS = * and it's working perfectly fine.
In ECS you can set this in the webserver task configuration if you're using one docker image for all the Airflow tasks (webserver, scheduler, worker, etc.).
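For example, in the container definition of the webserver task this might look like the following environment entry (a minimal sketch in the same style as the GUNICORN_CMD_ARGS snippet further down; the quotes keep the * from being misinterpreted by YAML):
- Name: FORWARDED_ALLOW_IPS
  Value: "*"   # trust X-Forwarded-* headers from any front end (the ALB in this setup)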
Finally I found a solution myself.
I introduced an nginx reverse proxy between the ALB and the airflow web server, i.e.:
HTTPS request -> ALB:443 -> nginx proxy:80 -> web server:8080. I make the nginx proxy tell the airflow web server that the original request was HTTPS, not HTTP, by adding the HTTP header "X-Forwarded-Proto: https".
The nginx server is co-located with the web server, and I put its config in /etc/nginx/sites-enabled/vhost1.conf (see below). Besides that, I deleted the /etc/nginx/sites-enabled/default config file.
server {
    listen 80;
    server_name <domain>;
    index index.html index.htm;

    location / {
        proxy_pass_header Authorization;
        proxy_pass http://localhost:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
    }
}
User user389955's own solution is probably the best approach, but for anyone looking for a quick fix (or wanting a better idea of what is going on), this seems to be the culprit.
In the following file (python distro may differ):
/usr/local/lib/python3.5/dist-packages/gunicorn/config.py
The following section prevents forwarded headers from being trusted from anything other than localhost:
class ForwardedAllowIPS(Setting):
    name = "forwarded_allow_ips"
    section = "Server Mechanics"
    cli = ["--forwarded-allow-ips"]
    meta = "STRING"
    validator = validate_string_to_list
    default = os.environ.get("FORWARDED_ALLOW_IPS", "127.0.0.1")
    desc = """\
        Front-end's IPs from which allowed to handle set secure headers.
        (comma separate).

        Set to ``*`` to disable checking of Front-end IPs (useful for setups
        where you don't know in advance the IP address of Front-end, but
        you still trust the environment).

        By default, the value of the ``FORWARDED_ALLOW_IPS`` environment
        variable. If it is not defined, the default is ``"127.0.0.1"``.
        """
Changing this from 127.0.0.1 to specific IPs (or * if the IPs are unknown) will do the trick.
At this point, I haven't found a way to set this parameter from within airflow config itself. If I find a way, will update my answer.
We solved this problem in my team by adding an HTTP listener to our ALB that redirects all HTTP traffic to HTTPS (so we have an HTTP listener AND an HTTPS listener). Our Airflow webserver tasks still listen on port 80 for HTTP traffic, but this HTTP traffic is only in our VPC so we don't care. The connection from browser to the load balancer is always HTTPS or HTTP that gets rerouted to HTTPS and that's what matters.
Here is the Terraform code for the new listener:
resource "aws_lb_listener" "alb_http" {
load_balancer_arn = aws_lb.lb.arn
port = 80
protocol = "HTTP"
default_action {
type = "redirect"
redirect {
port = "443"
protocol = "HTTPS"
status_code = "HTTP_301"
}
}
}
Or if you're more of an AWS console kind of shop, you can configure the same default action (redirect to HTTPS 443) on the HTTP listener from the console.
I think you have everything working correctly. The redirect you are seeing is expected as the webserver is set to redirect from / to /admin. If you are using curl, you can pass the flag -L / --location to follow redirects and it should bring you to the list of DAGs.
Another good endpoint to test on is https://<airflow domain name>/health (with no trailing slash, or you'll get a 404!). It should return "The server is healthy!".
Be sure you have https:// in the base_url under the webserver section of your airflow config.
Digging into the gunicorn documentation: it seems to be possible to pass any command line argument (when gunicorn command is called) via the GUNICORN_CMD_ARGS environment variable.
So what I'm trying out is setting GUNICORN_CMD_ARGS=--forwarded-allow-ips=* since all the traffic will come to my instance from the AWS ALB... I guess the wildcard could be replaced with the actual IP of the ALB as seen by the instance, but that'll be next step...
Since I'm running on ECS, I'm passing it as:
- Name: GUNICORN_CMD_ARGS
  Value: --forwarded-allow-ips=*
in the Environment of my task's container definition.
PS: from the docs, this possibility was added as of gunicorn 19.7; for comparison, Airflow 1.10.9 seems to be on gunicorn 19.10, so you're good to go with any (more or less) recent version of Airflow!
I encountered this issue too when using the official apache airflow helm chart (version 1.0.0).
Problem
Originally I had configured the webserver service with type LoadBalancer.
webserver:
  service:
    type: LoadBalancer
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-west-2:1234512341234:certificate/231rc-r12c3h-1rch3-1rch3-rc1h3r-1r3ch
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: tcp
This resulted in the creation of a classic elastic load balancer.
This mostly worked but when I clicked on the airflow logo (which links to https://my-domain.com), I'd get redirected to http://my-domain.com/home which failed because the load balancer was configured to use HTTPS only.
Solution
I resolved this by installing the AWS Load Balancer Controller on my EKS cluster and then configuring ingress.
The ingress-related portion of the chart config looks like this:
ingress:
  enabled: true
  web:
    host: my-airflow-address.com
    annotations:
      kubernetes.io/ingress.class: alb
      alb.ingress.kubernetes.io/subnets: subnet-01234,subnet-01235,subnet-01236
      alb.ingress.kubernetes.io/scheme: internal # if in private subnets
      alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS":443}]'
webserver:
  service:
    type: NodePort
Notes
It might be possible to configure the webserver to use an ALB instead of classic ELB and configure it to handle the HTTP routing, but I have not tested it.