AWS: Source IP not visible at instance through Network Load Balancer

I have an API Gateway that sends requests via a VPC link to a Network Load Balancer (NLB), which then forwards them to the target instance. As per the AWS documentation, when the target group's target type is instance, the client source IP is preserved and passed to the target instance, but when the target type is IP address, the target sees the NLB's IP address. However, even though the target group is set to instance, I am still getting the NLB's IP address.

If you need the source IP, you can map the context variable context.identity.sourceIp to an integration request header (see the API Gateway mapping docs). You will then be able to access this header in your server.
The NLB docs are referring to Proxy Protocol v2 support, which lets you get the source IP of the connection made to the NLB. This requires running a web server with Proxy Protocol enabled (Squid and Nginx both have a flag for this). With respect to VPC links, that IP is not the source IP of the original request to your server, since the NLB actually sees connections from API Gateway; enabling Proxy Protocol on the NLB will therefore return internal API Gateway IP addresses.
In Swagger it will look like this:
...
"requestParameters": {
    "integration.request.header.x-source-ip": "context.identity.sourceIp"
}
...
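For illustration, reading that header on the instance could look roughly like this (a minimal sketch assuming a Flask backend; the /whoami route, the port, and the x-source-ip header name are just examples matching the mapping above):

from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/whoami")
def whoami():
    # API Gateway injects the mapped header; fall back to the socket peer,
    # which behind a VPC link / NLB will usually be an internal address.
    client_ip = request.headers.get("x-source-ip", request.remote_addr)
    return jsonify({"client_ip": client_ip})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)  # port is a placeholder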

Related

Firewall rules and external TCP Load Balancers in GCP

I have an unmanaged instance group that has 2 VM instances in it, with external IP addresses of, let's say, 1.2.3.4 and 1.2.3.5. After that, I created an external TCP load balancer for this instance group (as the backend service). After creating this load balancer, I received the frontend IP address of the load balancer (which I assume is the IP address of the forwarding rule); let's say this IP address is 5.6.7.8. Now, when we create a load balancer we need to create health checks and a firewall rule to allow those health checks to reach each VM. Hence, I created an ingress allow firewall rule to port 80 (by the way, everything here is port 80; that's the only port I use) with source IPv4 ranges 209.85.204.0/22, 209.85.152.0/22, and 35.191.0.0/16, where these IPv4 ranges come from Google's documentation page.
Now, the load balancer reports that the backend services are healthy. So then, I wanted to make a firewall rule for my VMs (instance group) that only allows ingress from the frontend IP of the load balancer, that is: ingress, allow, source IPv4 range 5.6.7.8/32 (again port 80) to my VMs, thinking that it would work. However, when I enter that IP address in my browser, it does not "redirect" to the respective VMs (that is, 1.2.3.4 and 1.2.3.5). It only works if I put 0.0.0.0/0 as the source IPv4 range. Hence, having two firewall rules (one for health checks, one for the forwarding rule) is kind of useless.
The reason I want to do this is that I only want my VMs to receive ingress from the load balancer's frontend IP address, so that if I put 1.2.3.4 or 1.2.3.5 in my browser it will not connect. It should connect if and only if I put 5.6.7.8.
Is this achievable?
Thank you in advance!!
Edit: All resources are in the same region and zone!
According to the doc, the firewall rule must allow the following source ranges:
130.211.0.0/22
35.191.0.0/16
Also, you can read this doc. The IP 5.6.7.8 is not the source IP that the load balancer uses when sending traffic to your backends. Traffic from the LB to your backends comes from the same ranges used by the health checks:
35.191.0.0/16 and 130.211.0.0/22.
Suggestions:
You might use tcpdump to see which IP actually sends traffic to your VM.
Tag the backend instances "application", and create a firewall rule with the target tag "application" and source IP ranges consisting of the allowed clients plus the Google health check ranges.
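If you prefer to create that rule programmatically, a rough sketch with the google-api-python-client looks like this (assumes application default credentials are configured; "my-project" and the "default" network are placeholders):

from googleapiclient import discovery

compute = discovery.build("compute", "v1")

firewall_body = {
    "name": "allow-lb-and-health-checks",
    "network": "global/networks/default",
    "direction": "INGRESS",
    "allowed": [{"IPProtocol": "tcp", "ports": ["80"]}],
    # Google health check / load balancer ranges, plus any client ranges you allow.
    "sourceRanges": ["130.211.0.0/22", "35.191.0.0/16"],
    # Only instances tagged "application" receive this rule.
    "targetTags": ["application"],
}

compute.firewalls().insert(project="my-project", body=firewall_body).execute()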

VPC SSL/HTTPS environment

I have the following VPC setup with AWS Elastic Beanstalk:
Web app public Load Balancer pointed to by my domain (proxied through Cloudflare), with EC2 instances in a private subnet.
Private internal API Load Balancer with inbound access granted to EC2 instances above via Security Group
Database within the private subnet, accessible by EC2 instances behind the API Load Balancer.
I would like to enable end-to-end HTTPS; AWS has good documentation here (https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/configuring-https-endtoend.html).
I have followed this, albeit with my free Cloudflare domain certs. This seemed OK until I got the following error: 'SELF_SIGNED_CERT_IN_CHAIN' when my web app tries to connect to the internal API via https://internal-aweseb-dns.amazonaws.com (the DNS name of the internal API Load Balancer).
Questions
Is this the correct way to get end-to-end HTTPS? And
how do I resolve the above error (returned by Node.js)?
Thanks
In the end I came to this conclusion: I don't need end-to-end HTTPS when my instances are in a private subnet, because:
Once HTTPS is terminated at the Load Balancer, the internal requests are over HTTP but are not over the public internet. The requests cannot be seen by anyone outside the AWS network.
The data I am transmitting is not overly sensitive (just emails and user preferences), so there is no compliance or regulatory reason to enforce end-to-end HTTPS in a private network.
There is a small performance hit when using HTTPS, as an SSL handshake must occur on every new connection.
I have additional security via Security Groups, only allowing internal traffic that originates from the Load Balancer (see the sketch below).
There are many suggestions that would guide you to configure your application to ignore the certificate when connecting via HTTPS... but that defeats the whole point of HTTPS (a secure, encrypted connection). You may as well just use HTTP instead.
After much research and discussion with AWS, I think using HTTP over an internal network is secure enough for 99% of use cases and is pretty standard in a lot of setups, so unless you actually need end-to-end encryption for your use case, I would advise doing this instead.
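For reference, locking the instances' security group down to traffic coming from the load balancer's own security group might look roughly like this with boto3 (all IDs below are placeholders, not values from this setup):

import boto3

ec2 = boto3.client("ec2")

# Allow HTTP into the instances only when the source is the load balancer's
# security group, rather than opening the port to a CIDR range.
ec2.authorize_security_group_ingress(
    GroupId="sg-0aaa1111bbbb2222c",  # placeholder: the instances' security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "UserIdGroupPairs": [
                {"GroupId": "sg-0ddd3333eeee4444f"}  # placeholder: the load balancer's security group
            ],
        }
    ],
)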
Hope this helps.

Cannot connect to AWS Transfer S3 SFTP server - might need to set security group

I'm trying to set up an SFTP server managed by AWS that has a fixed IP address which external clients can whitelist in a firewall. Based on this FAQ this is what I should do:
You can enable fixed IPs for your server endpoint by selecting the VPC endpoint for your server and choosing the internet-facing option. This will allow you to attach Elastic IPs (including BYO IPs) to your server’s endpoint, which is assigned as the endpoint’s IP address
So I followed the official instructions here under "Creating an Internet-Facing Endpoint for Your SFTP Server". The creation settings look like this:
The result looks like this:
Compare with the result screenshot from the docs:
My result is almost the same, except that in the "Endpoint Configuration" table the last column says "Private IPv4 Address" instead of "Public". That's the first red flag. I have no idea why it's a private address; it doesn't look like one, it's the address of the Elastic IP that I created, and the endpoint DNS name s-******.server.transfer.eu-west-1.amazonaws.com resolves to that IP address on my local machine.
If I ping the endpoint or the IP address, it doesn't work:
451 packets transmitted, 0 received, 100% packet loss, time 460776ms
If I try connecting with sftp or ssh it hangs for a while before failing:
ssh: connect to host 34.****** port 22: Connection timed out
Connection closed
The other potential problem is security groups:
At this point, your endpoint is assigned the selected VPC's default security group. To associate additional security groups or change existing ones, visit the Security Groups section of the Amazon VPC console (https://console.aws.amazon.com/vpc/).
These instructions don't make sense to me because there's nowhere in the Security Groups interface that I can assign a group to another entity such as a transfer server. And there's nowhere in the transfer server configuration that mentions security groups. How do I set a new security group?
I tried changing the security group of the Network Interface of the Elastic IP, but I got a permission error even though I'm an administrator. Apparently I don't actually own ENIs? In any case I don't know if this is the right path.
The solution was to find the endpoint that was created for the server in the "Endpoints" section of the VPC console. The security groups of the endpoint can be edited.
The "Private IPv4 address" seems to be irrelevant.
The default security group controls access to the internet-facing endpoint for the new SFTP server in a VPC. Adjust the default security group's ingress rules for the VPC selected for the SFTP server, or whitelist the exact IP address connecting to the SFTP endpoint in the default security group.
If the admin says ho hum, create a second VPC for the SFTP server if isolation is absolutely necessary, and fiddle with the default security group in the new, isolated VPC.
Link:
Creating an Internet-Facing Endpoint for Your SFTP Server
Happy transferring!

Client IP address in Istio

So I have a setup like this:
AWS NLB (forwards) --> Istio --> Nginx pod
Now, I'm trying to implement rate limiting at the Istio layer. I followed this link. However, I can still request the API more times than what I configured. Looking into it further, I logged the X-Forwarded-For header in nginx, and it's empty.
So, how do I get the client IP in Istio when I'm using an NLB? The NLB forwards the client IP, but how? In a header?
EDITS:
Istio Version: 1.2.5
istio-ingressgateway is configured as type NodePort.
According to AWS documentation about Network Load Balancer:
A Network Load Balancer functions at the fourth layer of the Open Systems Interconnection (OSI) model. It can handle millions of requests per second. After the load balancer receives a connection request, it selects a target from the target group for the default rule. It attempts to open a TCP connection to the selected target on the port specified in the listener configuration.
...
When you create a target group, you specify its target type, which determines whether you register targets by instance ID or IP address. If you register targets by instance ID, the source IP addresses of the clients are preserved and provided to your applications. If you register targets by IP address, the source IP addresses are the private IP addresses of the load balancer nodes.
There are two ways of preserving the client IP address when using an NLB:
1. The NLB preserves the client IP address in the source address when you register targets by instance ID.
So the client IP address is only available in that specific NLB configuration. You can read more about target groups in the AWS documentation.
2. Proxy Protocol headers.
Proxy Protocol can be used to send additional data, such as the source IP address, in a header, even if you register targets by IP address.
You can follow the AWS documentation for a guide and examples of how to configure Proxy Protocol.
To enable Proxy Protocol using the console
Open the Amazon EC2 console at https://console.aws.amazon.com/ec2/.
On the navigation pane, under LOAD BALANCING, choose Target Groups.
Select the target group.
Choose Description, Edit attributes.
Select Enable proxy protocol v2, and then choose Save.
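If you prefer scripting it, the same attribute can be set with boto3, roughly like this (the target group ARN and region are placeholders):

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Enable Proxy Protocol v2 on the NLB target group.
elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef",
    Attributes=[{"Key": "proxy_protocol_v2.enabled", "Value": "true"}],
)

Keep in mind that whatever terminates the connection behind the NLB (the Istio ingress gateway / Envoy, or nginx) must also be configured to parse Proxy Protocol, otherwise connections will be rejected as malformed.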

How to avoid the configuration error while using AWS API Gateway with VPC Link? [closed]

I have created the VPC link using the Network Load Balancer (NLB) as per the AWS documentation and attached it to the API Gateway resource/method. But it throws "Internal Server Error" when accessing the "Invoke URL", and displays this error while testing: "Execution failed due to configuration error: There was an internal error while executing your request".
Procedure I followed:
1) Created Network Load Balancer :
Load Balancer Scheme: Internal
Load Balancer Protocol / port : TCP / 80
Availability Zone : Created VPC with CIDR "10.0.0.0/16" and public subnet with CIDR "1XX.XX.0.0/16".
Target Group : Protocol / Port / Target Type - TCP / 80 / Instance
No Target Registration.
Launched NLB.
2) Created VPC Link in API Gateway using the newly created NLB.
3) Created new API :
Method : Get
Integration Type : VPC Link
Use Proxy Integration : True
VPC Link : ${stageVariables.vpcLinkId}
Endpoint URL : "My ec2 instance URL with port" (Ex: http://ec2-XX-XXX-XXX-XXX.compute-1.amazonaws.com:3000)
Created API resource.
4) Deployed the selected API using the "Deploy API" action and newly created stage.
5) Configured the "vpcLinkId" in the "Stage Variables" section.
Now if I hit the "Invoke URL", the web page displays " {"message": "Internal server error"} ".
Note: If I use the same EC2 URL with "Integration Type: HTTP", the "Invoke URL" works. The same is not working with the VPC link.
Error:
Other Points Worth Noting:
The EC2 instance's security group allows all TCP ports.
The EC2 instance was launched by using ECS / ECR (Docker container).
Enabled CloudWatch logs on the API Gateway stage, but they produce nothing.
I'm happy to provide additional information, if required.
EDIT 1
Based on jny's input I have changed the API Gateway endpoint to the NLB and added my EC2 instance as a target in the NLB. I'm still facing the same issue. The images below show all the configuration I have done.
Load Balancer Config:
Load Balancer Target Group settings:
Target Group Port Settings:
Here I have set the health check port to 3000, as my application (Node) listens on port 3000.
Enabled ports 80 and 3000 in the security group.
API Gateway Settings:
Finally, I changed the API Gateway endpoint to the NLB.
Result of the same:
I'm still not sure what mistake I'm making here.
I was also getting a 500 Internal Server Error. I then added an inbound rule to the EC2 security group allowing HTTP from the CIDR of the VPC subnet, and now I am able to access the API through the NLB.
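A rough sketch of that inbound rule with boto3 (the security group ID is a placeholder; 10.0.0.0/16 is the VPC CIDR from the question, and port 3000 is the application port mentioned in the edit):

import boto3

ec2 = boto3.client("ec2")

# Allow HTTP and the application port from inside the VPC, so the NLB
# (which has no security group of its own) can reach the instance.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # placeholder: the EC2 instance's security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 80,
            "ToPort": 80,
            "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "NLB / VPC link traffic"}],
        },
        {
            "IpProtocol": "tcp",
            "FromPort": 3000,
            "ToPort": 3000,
            "IpRanges": [{"CidrIp": "10.0.0.0/16", "Description": "App port behind the NLB"}],
        },
    ],
)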
Your NLB is missing inbound permissions to the EC2 instance (in its security group) for port 80. But since an NLB does not have a security group (it does have permanent IPs), you will have to find its IP addresses and add them directly to the security group of the EC2 instance.
Here's how you can find the IPs of your NLB: https://docs.aws.amazon.com/elasticloadbalancing/latest/network/target-group-register-targets.html#target-security-groups.
You did it correctly, but maybe this will help someone:
My mistake was using HTTPS for the endpoint URL in API Gateway. It must be HTTP.
Correct:
http://myLoadBalancer.elb.us-east-1.amazonaws.com
The text field was too short to show the whole URL, so I didn't notice it.
The issue got resolved after using the same port for the NLB, EC2, ECS, etc.