So I have a setup like this:
AWS NLB (forwards) --> Istio --> Nginx pod
Now, I'm trying to implement rate limiting at the Istio layer. I followed this link. However, I can still call the API more times than I configured. Looking into it further, I logged the X-Forwarded-For header in nginx, and it's empty.
So, how do I get the client IP in Istio when I'm using an NLB? The NLB forwards the client IP, but how? In a header?
EDITS:
Istio Version: 1.2.5
istio-ingressgateway is configured as type NodePort.
According to AWS documentation about Network Load Balancer:
A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration.
...
When you create a target group, you specify its target type, which determines whether you register targets by instance ID or IP address. If you register targets by instance ID, the source IP addresses of the clients are preserved and provided to your applications. If you register targets by IP address, the source IP addresses are the private IP addresses of the load balancer nodes.
There are two ways of preserving the client IP address when using an NLB:
1. The NLB preserves the client IP address in the source address
when you register targets by instance ID.
So the client IP address is only available in that specific NLB configuration. You can read more about target groups in the AWS documentation.
2. Proxy Protocol headers.
Proxy Protocol can be used to send additional data, such as the source IP address, in a header, even if you register targets by IP address.
You can follow the AWS documentation for a guide and examples of how to configure Proxy Protocol.
To enable Proxy Protocol using the console:
1. Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
2. On the navigation pane, under LOAD BALANCING, choose Target Groups.
3. Select the target group.
4. Choose Description, Edit attributes.
5. Select Enable proxy protocol v2, and then choose Save.
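As a sketch of the second option: if the NLB is provisioned by Kubernetes itself (rather than created by hand in front of a NodePort gateway, as in the question), Proxy Protocol v2 can be requested through Service annotations. The manifest below is an illustrative assumption, not the poster's actual setup; the Istio/Envoy gateway must also be configured to accept Proxy Protocol before the client IP can be recovered.

```yaml
# Hypothetical Service manifest: only applies when the NLB is created by
# the AWS cloud provider for a Service of type LoadBalancer.
apiVersion: v1
kind: Service
metadata:
  name: istio-ingressgateway
  namespace: istio-system
  annotations:
    # Ask AWS for a Network Load Balancer instead of a Classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # Enable Proxy Protocol v2 on all target groups
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    istio: ingressgateway
  ports:
  - name: http2
    port: 80
    targetPort: 80
```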
Related
I have an unmanaged instance group with 2 VM instances in it, with external IP addresses of, let's say, 1.2.3.4 and 1.2.3.5. After that, I created an external TCP load balancer for this instance group (as the backend service). After creating this load balancer, I received the frontend IP address of the load balancer (which I assume is the IP address of the forwarding rule); let's say this IP address is 5.6.7.8. Now, when we create a load balancer we need to create health checks and a firewall rule to allow those health checks to reach each VM. Hence, I created an ingress allow rule to port 80 (by the way, everything here is port 80; that's the only port I use) with source IPv4 ranges 209.85.204.0/22, 209.85.152.0/22, and 35.191.0.0/16, where these ranges come from Google's documentation.
Now, the load balancer declares that the backend service is healthy. So then, I wanted to make a firewall rule for my VMs (instance group) that only allows ingress from the frontend IP of the load balancer, that is, an ingress allow rule with source IPv4 range 5.6.7.8/32 (again, port 80) to my VMs, thinking that it would work. However, when I enter that IP address in my browser, it does not "redirect" to the respective VMs (1.2.3.4 and 1.2.3.5). It only works if I put 0.0.0.0/0 as the source IPv4 range. Hence, it is kind of useless to have two firewall rules (one for health checks, one for the forwarding rule).
The reason I want to do this is that I only want my VMs to receive ingress from the load balancer's frontend IP address, such that if I put 1.2.3.4 or 1.2.3.5 in my browser it will not connect. It connects if and only if I use 5.6.7.8.
Is this achievable?
Thank you in advance!!
Edit: All resources are in the same region and zone!
According to the doc, the firewall rule must allow the following source ranges:
130.211.0.0/22
35.191.0.0/16
Also, you can read this doc. The IP 5.6.7.8 is not the source IP that the load balancer uses to send traffic to your backend. Traffic from the LB to your backend comes from the same ranges used by health checks:
35.191.0.0/16 and 130.211.0.0/22.
Suggestions:
You might use tcpdump to see which IPs are sending traffic to your VM.
Tag the backend instances "application", and create a firewall rule with the target tag "application" and a source IP range covering the allowed clients and the Google health check IP ranges.
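The suggested rule could be sketched, for example, as a Deployment Manager resource (the resource name and network are assumptions; the "application" tag and port 80 follow the setup described above):

```yaml
# Sketch only: allow Google health-check/proxy ranges to reach tagged VMs.
resources:
- name: allow-lb-and-health-checks
  type: compute.v1.firewall
  properties:
    network: global/networks/default   # assumes the default network
    direction: INGRESS
    targetTags: ["application"]        # applied to the backend VMs
    sourceRanges:
    - 130.211.0.0/22   # Google health-check range
    - 35.191.0.0/16    # Google health-check range
    allowed:
    - IPProtocol: TCP
      ports: ["80"]
```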
I have a running Web server on Google Cloud. It's a Debian VM serving a few sites with low-ish traffic, but I don't like Cloudflare. So, Cloud CDN it is.
I created a load balancer with static IP.
I did all the items from the guides I've found. But when it comes time to add an origin to Cloud CDN, no load balancer is available because it's "unhealthy", as seen by hovering over the yellow triangle on the LB status page: "1 backend service is unhealthy".
At this point, the only option is to choose Create a Load Balancer.
I've created several load balancers with different attributes, thinking that might be the issue, but no luck. They all get the "1 backend service is unhealthy" tag and thus are unavailable.
---Edit below---
During LB creation, I don't see anything that would let the LB know about the VM, except during certificate issuance (see below). Nowhere does it ask for any field that would point to the VM.
I created another LB just now, and here are its settings. It finishes, then it's marked unhealthy.
Type
HTTP(S) Load Balancing
Internet facing or internal only?
From Internet to my VMs
(my VM is not listed in backend services, so I create one... is this the problem?)
Create backend service
Backend type: Instance group
Port numbers: 80,443
Enable Cloud CDN: checked
Health check: create new: https, check /
Simple host and path rule: checked
New Frontend IP and port
Protocol: HTTPS
IP: v4, static reserved and issued
Port: 443
Certificate: Create New: Create Google-managed certificate, mydomain.com and www.mydomain.com
The load balancer's unhealthy state could mean that its health check probe is unable to reach your backend service (your Debian VM in this case).
If your backend service looks good, there is likely a problem with your firewall configuration.
Check whether your firewall rules allow the health check probes' IP address ranges.
Refer to the document below for more detailed information:
Required firewall rule
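For example, assuming the default network and the HTTPS health check described in the question, a rule allowing the probe ranges could be sketched as a Deployment Manager resource (the resource name and ports are illustrative):

```yaml
# Sketch only: let Google health-check probes reach the backend VM.
resources:
- name: allow-health-check-probes
  type: compute.v1.firewall
  properties:
    network: global/networks/default   # assumed network
    direction: INGRESS
    sourceRanges:
    - 130.211.0.0/22   # Google health-check range
    - 35.191.0.0/16    # Google health-check range
    allowed:
    - IPProtocol: TCP
      ports: ["80", "443"]   # the health check above probes / over HTTPS
```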
I recently started working with AWS CloudFormation and encountered the resource DhcpOptions.
Can anyone tell me, when I would need to include this resource in my template?
Thank you.
The use of the resource AWS::EC2::DHCPOptions [1] is optional.
By default, AWS creates this resource for you and associates it with your VPC using an AWS::EC2::VPCDHCPOptionsAssociation resource [2].
The setting is used to configure the (AWS-provided) DHCP server in your VPC. The attributes you assign to the resource are translated into DHCP options which are passed to any of your EC2 instances once they request the information via the DHCP protocol. For example, the
CloudFormation property DomainNameServers is translated into the Domain Name Server option (see Section 3.8 in RFC 2132 [3]). For an overview of all existing BOOTP and DHCP options, see the IETF document on the website of IANA [4] or the respective Wikipedia article [5].
AWS defines the DHCP option set in their docs [6] as:
The Dynamic Host Configuration Protocol (DHCP) provides a standard for passing configuration information to hosts on a TCP/IP network. The options field of a DHCP message contains the configuration parameters. Some of those parameters are the domain name, domain name server, and the netbios-node-type.
You can configure DHCP options sets for your virtual private clouds (VPC).
When to use the DHCPOptions resource?
As far as I know, the DHCP options of a VPC are often modified in order to change the network's default DNS server. If you have your clients configured to obtain the DNS servers via DHCP (which is standard for Amazon Linux and Amazon Linux 2), you can change their DNS servers by creating a new DHCP options set and setting a custom domain-name-servers option.
That is of course only one aspect of network configuration. By changing other options, such as the ntp-servers option, you can modify other configuration values which are sent over your VPC network when requested by DHCP clients (such as EC2 instances).
Note: Once you create an AWS::EC2::DHCPOptions resource, do not forget to assign it to a VPC using the AWS::EC2::VPCDHCPOptionsAssociation resource [2].
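A minimal CloudFormation sketch of the two resources together might look like this (the VPC logical ID MyVPC and the option values are placeholders, not anything from your template):

```yaml
# Sketch only: custom DHCP options plus the association that activates them.
Resources:
  MyDhcpOptions:
    Type: AWS::EC2::DHCPOptions
    Properties:
      DomainName: example.internal        # placeholder domain
      DomainNameServers:
        - AmazonProvidedDNS               # keep the Amazon DNS server
      NtpServers:
        - 169.254.169.123                 # Amazon Time Sync Service
  MyDhcpOptionsAssociation:
    Type: AWS::EC2::VPCDHCPOptionsAssociation
    Properties:
      VpcId: !Ref MyVPC                   # assumed VPC resource
      DhcpOptionsId: !Ref MyDhcpOptions
```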
What is the default DHCP option set?
The docs [6] state:
When you create a VPC, we automatically create a set of DHCP options and associate them with the VPC. This set includes two options: domain-name-servers=AmazonProvidedDNS, and domain-name=domain-name-for-your-region.
That is for the domain-name-servers option:
AmazonProvidedDNS is an Amazon DNS server, and this option enables DNS for instances that need to communicate over the VPC's Internet gateway. The string AmazonProvidedDNS maps to a DNS server running on a reserved IP address at the base of the VPC IPv4 network range, plus two. For example, the DNS Server on a 10.0.0.0/16 network is located at 10.0.0.2.
References
[1] https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-dhcp-options.html
[2] https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-vpc-dhcp-options-assoc.html
[3] https://www.rfc-editor.org/rfc/rfc2132
[4] https://www.iana.org/assignments/bootp-dhcp-parameters/bootp-dhcp-parameters.xhtml
[5] https://en.wikipedia.org/wiki/Dynamic_Host_Configuration_Protocol#Options
[6] https://docs.aws.amazon.com/vpc/latest/userguide/VPC_DHCP_Options.html
I have my React website hosted on AWS over HTTPS using a Classic Load Balancer and CloudFront, but I now need to have port 1234 opened as well. When I currently browse my domain on port 1234, the page cannot be displayed. The reason I want port 1234 opened is that this is where my Node.js web server is running for React to communicate with.
I tried adding port 1234 to my load balancer listener settings, although it made no difference. Noticeably, the load balancer health check panel seems to only have one value, which is currently HTTP:80/index.html. I assume the load balancer can listen on ports 80 and 1234 (even though it can only perform a health check on one port)?
Do I need to use action groups or something else to open up the port? Please help; any advice is much appreciated.
Many thanks,
Load balancer settings
Infrastructure
I am using the following
EC2 (free tier) with the two code projects installed (React website and node server on the same machine in different directories)
Certificate created (using Certificate Manager)
I have created a CloudFront distribution and verified it using email. My certificate was selected in CloudFront as the custom SSL certificate
I have a Classic Load Balancer (the instance points to my only EC2) and the status is InService. When I visit the load balancer's DNS name I see my React website. The load balancer listens on HTTP port 80. I've added port 1234, but this didn't help
Note:
Please note this project is to learn AWS, React and Node.js, so if things look strange please point them out
EC2 instance screenshot
Security group screenshot
Load balancer screenshot
Target group screenshot
An attempt to register a target group
Thank you for having clarified your architecture.
I would keep CloudFront out of the game for now and make sure your setup works with just the load balancer. Once everything is configured correctly, you can easily add CloudFront as a next step. In general, for all things in IT, it is easier to build a simple system that works and increase complexity one step at a time than to debug a complex system that does not work.
The idea is to have an Application Load Balancer with two listeners, one for the web (TCP 80) and one for the API (TCP 1234). The ALB will have two target groups (one for each port on your EC2 instance), and you will create listener rules to forward the correct port to the correct target group. Please read "Application Load Balancer components" to understand how ALBs work.
Here are a couple of things to check:
be sure you have two listeners and two target groups on your Application Load Balancer
the load balancer must be in a security group allowing TCP 80 and TCP 1234 from anywhere (0.0.0.0/0) (let's say SG-001)
the EC2 instance must be in a security group allowing TCP connections on ports 1234 (for the API) and 80 (for the web site) only from source SG-001 (just the load balancer)
After having written all this, I realise you are using a Classic Load Balancer. This should work as well; just be sure your EC2 instance has the correct security group (two rules, one for each port)
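As a sketch in CloudFormation of the two-listener ALB layout described above (the logical IDs MyVPC, MyALB and MyInstance are placeholders, not resources from the question):

```yaml
# Sketch only: one target group and one listener per port on the same instance.
Resources:
  WebTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Port: 80
      Protocol: HTTP
      VpcId: !Ref MyVPC
      Targets:
        - Id: !Ref MyInstance
  ApiTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      Port: 1234
      Protocol: HTTP
      VpcId: !Ref MyVPC
      Targets:
        - Id: !Ref MyInstance
          Port: 1234
  WebListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref MyALB
      Port: 80
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref WebTargetGroup
  ApiListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref MyALB
      Port: 1234
      Protocol: HTTP
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref ApiTargetGroup
```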
I have an API Gateway which sends requests via a VPC link to a Network Load Balancer (NLB), which then forwards them to the target instance. Per the AWS documentation, when the target type is instance, the source IP is passed unaltered to the target instance, but when targets are registered by IP address, the NLB's IP address is used instead. However, even though the target group is set to instance, I am still getting the NLB's IP address.
If you need the source IP, you can map the context variable context.identity.sourceIp to an integration header (docs). You will be able to access this header in your server.
The docs for the NLB are referring to Proxy Protocol v2 support, which will allow you to get the source IP of a connection to an NLB. This requires running a web server with Proxy Protocol enabled (Squid and Nginx have a flag to enable this). With respect to VPC links, this IP is not the same as the source IP of a request to your server, since the NLB actually sees connections from API Gateway; enabling this on the NLB will return internal IP addresses of API Gateway.
In Swagger it'll look like:
...
"requestParameters" : {
"integration.request.header.x-source-ip" : "context.identity.sourceIp",
}
...