I have a GCP setup with an external HTTPS load balancer and backend services with a serverless NEG. In front of the load balancer sits another cloud WAF. My requirement is to block all IP ranges (except the cloud WAF's) with a Cloud Armor security policy, but that applies only at layer 7. When I port-scan the load balancer IP, ports 80 and 443 are open to everyone at layers 3 and 4. Is there any security rule for layer 3 and layer 4?
I have configured a Cloud Armor IP-blocking security policy.
The GCLB has a set of ports open by default: https://cloud.google.com/load-balancing/docs/https#open_ports
How are you implementing the IP allow/deny rule? Are you inspecting a header containing the client IPs, which would imply L7 rules? Generally, with an IP ACL rule, Cloud Armor looks at the connecting IP and issues the block at a lower layer rather than using L7 rules.
Taking a step back, what is the concern you are trying to mitigate?
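For reference, an allow-only-the-WAF policy of the kind the question describes can be sketched with gcloud as below. The IP range and backend-service name are placeholders; substitute your WAF provider's published egress ranges. Note that because the external HTTPS LB terminates TCP at the Google edge, ports 80/443 will still appear open to a port scan even with this policy in place; the deny is applied per request.

```shell
# Sketch: Cloud Armor policy that allows only the upstream WAF's ranges.
# 203.0.113.0/24 and my-backend-service are placeholders.
gcloud compute security-policies create waf-only-policy \
    --description "Allow only the cloud WAF to reach the backend"

# Allow the WAF's egress ranges at a high priority.
gcloud compute security-policies rules create 1000 \
    --security-policy waf-only-policy \
    --src-ip-ranges "203.0.113.0/24" \
    --action allow

# 2147483647 is the default rule; flip it to deny everything else.
gcloud compute security-policies rules update 2147483647 \
    --security-policy waf-only-policy \
    --action deny-403

# Attach the policy to the serverless-NEG backend service.
gcloud compute backend-services update my-backend-service \
    --security-policy waf-only-policy --global
```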
with this flow:
external world --> AWS API Gateway ---> VPC Link ---> Network Load Balancer ---> my single EC2 instance
How can I configure the AWS Network Load Balancer so that:
Requests to https://myapp.com are routed to port 80 of my EC2 instance.
Requests to https://myapp.com/api/* are routed to port 3000 of my EC2 instance.
Currently I have configured only one listener on the NLB, which listens on port 80, and all traffic from the API Gateway is routed to port 80 of my EC2 instance.
I have found that in an Application Load Balancer you can configure "Rules" that map paths to different ports: Path based routing in AWS ALB to single host with multiple ports
Is this available with NLB?
This is not possible with the Network Load Balancer, because it operates at a level of the network stack that has no concept of paths.
The NLB operates on Layer 4 and supports the TCP and UDP protocols. These essentially create a connection between ports on two machines that allows data to flow between them.
Paths, as in HTTP(S) paths, are a higher-layer concept belonging to the HTTP protocol (Layer 7). They're not available to the NLB because it can only act on data that's guaranteed to be available at Layer 4.
You can use an Application Load Balancer as the target for your Network Load Balancer and then configure the path-based rules there, because the ALB is a Layer 7 load balancer and understands HTTP.
Here is a blog detailing this: Application Load Balancer-type Target Group for Network Load Balancer
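If you go the ALB-behind-NLB route, the setup might look roughly like this with the AWS CLI. All names and ARNs below are placeholders; the `alb` target type is the feature described in the linked blog.

```shell
# Sketch: register an ALB as the target of an NLB, then do
# path-based routing on the ALB. Names/ARNs are placeholders.
aws elbv2 create-target-group \
    --name my-alb-target \
    --target-type alb \
    --protocol TCP --port 80 \
    --vpc-id vpc-0123456789abcdef0

# The ALB's ARN is registered as the single target.
aws elbv2 register-targets \
    --target-group-arn <alb-target-group-arn> \
    --targets Id=<application-load-balancer-arn>

# On the ALB listener, route /api/* to the target group
# whose targets receive traffic on port 3000.
aws elbv2 create-rule \
    --listener-arn <alb-listener-arn> \
    --priority 10 \
    --conditions Field=path-pattern,Values='/api/*' \
    --actions Type=forward,TargetGroupArn=<port-3000-target-group-arn>
```

The listener's default action would forward everything else to the port-80 target group.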
What I am trying to figure out is how I can connect a TCP load balancer with an HTTP/HTTPS load balancer in GCP.
I have installed Kong on a GKE cluster, and it creates a TCP load balancer.
Now, if I have multiple GKE clusters with Kong, they will each have their own TCP load balancer.
From a user perspective I would then need DNS load balancing, which I don't think is always reliable.
So I'm trying to figure out whether I can use Cloud CDN, NEGs, and/or an HTTP/HTTPS load balancer as a front end for Kong's TCP load balancers.
Is this possible, and are there any alternatives? Thanks!
There are several options depending on what you are trying to do and your needs, but if you must run Kong inside each GKE cluster and handle your SSL certificates yourself, then:
TCP Proxy LB
(optional) You can deploy GKE NodePort services instead of a LoadBalancer service for your Kong deployment. Since you are trying to unify all your Kong services, having an individual load balancer exposed to the public internet per cluster can work, but you will pay for every extra external IP address you use.
You can manually deploy a TCP Proxy load balancer that uses the same GKE instance groups and port as your NodePort / current load balancer (behind the scenes). You would need to set up a backend for each GKE cluster node pool you are currently using, across all the GKE clusters where your Kong service is deployed.
HTTP(S) LB
You can use NodePorts, or reuse your current load balancer setup as backends (as with the TCP Proxy LB), with the addition of NEGs if you want to use those.
You would need to deploy and maintain this manually, but you can also configure your SSL certificates here (if you plan to serve HTTPS), since client TLS termination happens at the load balancer.
The advantage is that you can leave SSL certificate renewal to GCP (once configured), and you can also use Cloud CDN to reduce latency and costs; as of today, that feature is only available with the HTTP(S) LB.
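A rough gcloud sketch of the HTTP(S) LB option, assuming Kong is exposed on NodePort 30080 in each cluster. All names, zones, and instance groups are placeholders; each cluster's node instance group is added as a backend via a named port.

```shell
# Sketch: global HTTP(S) LB in front of the instance groups backing
# each GKE cluster's Kong NodePort. Names are placeholders.
gcloud compute health-checks create http kong-hc --port 30080

# The instance group must expose the NodePort as a named port.
gcloud compute instance-groups set-named-ports <gke-cluster-1-instance-group> \
    --named-ports kong:30080 --zone us-central1-a

gcloud compute backend-services create kong-backend \
    --protocol HTTP --port-name kong \
    --health-checks kong-hc --global \
    --enable-cdn   # optional: Cloud CDN, HTTP(S) LB only

# Repeat add-backend for each cluster's instance group.
gcloud compute backend-services add-backend kong-backend \
    --instance-group <gke-cluster-1-instance-group> \
    --instance-group-zone us-central1-a --global

gcloud compute url-maps create kong-lb --default-service kong-backend

gcloud compute target-https-proxies create kong-https-proxy \
    --url-map kong-lb --ssl-certificates <managed-cert>

gcloud compute forwarding-rules create kong-fr \
    --global --target-https-proxy kong-https-proxy --ports 443
```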
In the GCP Cloud Armor documentation, it is mentioned here that both HTTP(S) and TCP load balancers are supported. But I am unable to add a TCP load balancer as a target in Cloud Armor; it doesn't appear in the target list.
According to the Google document you attached, it is the DDoS attack protection service that supports both HTTP(S) and TCP load balancers.
Since you mention "add TCP load balancer as a target in Cloud Armor", you have presumably tried to create a security policy.
However, security policies do not yet support TCP load balancers as targets.
Refer to the Google document that describes Cloud Armor's security policy requirements here.
I have configured my IP in the security group on the EC2 instance, but I am getting a 504 Gateway Timeout error.
When I open it to the world, i.e. 0.0.0.0/0, it works well.
I checked my IP address on the EC2 instance using "who am i", and it matches the one in the security group.
Please suggest how to make it work only for my machine.
I have followed the steps mentioned on
possible to whitelist ip for inbound communication to an ec2 instance behind an aws load balancer?
This is how my inbound rule for the security group looks.
Type: All traffic | Protocol: All | Port range: All | Source: 123.201.54.223/32 | Description: Dev Security Rule
Security groups will not let you restrict access on a machine-by-machine basis, only by IP ranges and by other security groups. For example, if you limit ingress by IP, any other machine sharing that same public IP address (usually on the same network/access point) will also be allowed in, not just your machine.
If you are using a load balancer, then it is the load balancer that should have access to your instance via its security group, and your access by IP should be controlled in the load balancer's security group. So apply the rule you quoted (at least to begin with!) to the LB's security group, not the instance's security group.
In the security groups of the instance(s) behind the load balancer, allow ingress only from the load balancer's security group; there is no need for an IP-address ingress rule (unless, say, you want to allow SSH access from specific IPs, or the instances need to talk to a database instance).
A 504 Gateway Timeout means your LB was unable to communicate with the target instance, while you were able to reach the LB.
"All traffic All All 123.201.54.223/32 Dev Security Rule" will only allow traffic from your IP, not from the load balancer.
You do not need your IP in the EC2 instance's security group. You have to allow traffic from the LB, e.g. from your VPC range 10.0.0.0/16 (or, better, from the LB's security group).
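A sketch of that chaining with the AWS CLI (the security-group IDs and ports are placeholders): the instance accepts traffic only from the load balancer's security group, and only the LB's security group is restricted to your IP.

```shell
# Instance SG: ingress on the app port from the LB's SG, not from an IP.
# sg-instance1111 and sg-loadbal2222 are placeholder IDs.
aws ec2 authorize-security-group-ingress \
    --group-id sg-instance1111 \
    --protocol tcp --port 80 \
    --source-group sg-loadbal2222

# LB SG: ingress on 443 from your own /32 only.
aws ec2 authorize-security-group-ingress \
    --group-id sg-loadbal2222 \
    --protocol tcp --port 443 \
    --cidr 123.201.54.223/32
```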
HTTP 504: Gateway Timeout
Description: Indicates that the load balancer closed a connection because a request did not complete within the idle timeout period.
Cause 1: The application takes longer to respond than the configured idle timeout.
Solution 1: Monitor the HTTPCode_ELB_5XX and Latency metrics. If there is an increase in these metrics, it could be due to the application not responding within the idle timeout period. For details about the requests that are timing out, enable access logs on the load balancer and review the 504 response codes in the logs generated by Elastic Load Balancing. If necessary, you can increase your capacity or increase the configured idle timeout so that lengthy operations (such as uploading a large file) can complete. For more information, see Configure the Idle Connection Timeout for Your Classic Load Balancer and How do I troubleshoot Elastic Load Balancing high latency.
Cause 2: Registered instances closing the connection to Elastic Load Balancing.
Solution 2: Enable keep-alive settings on your EC2 instances and make sure that the keep-alive timeout is greater than the idle timeout settings of your load balancer.
(Source: ts-elb-errorcodes-http504)
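If Cause 1 applies, the Classic ELB idle timeout (60 seconds by default) can be raised from the AWS CLI; a sketch, with the load balancer name as a placeholder:

```shell
# Raise the Classic ELB idle timeout to 120s so slow responses are not
# cut off with a 504. Keep the backend's keep-alive timeout above this.
aws elb modify-load-balancer-attributes \
    --load-balancer-name my-classic-elb \
    --load-balancer-attributes '{"ConnectionSettings":{"IdleTimeout":120}}'
```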
I am trying to use Google Cloud NAT with a set of Compute Engine VMs in their own subnet, so that all of the servers make requests to customer websites from a single static IP address. Unfortunately, when I add these VMs to a TCP/SSL Proxy LB, they don't appear to be using the NAT, which I believe is configured correctly.
I have tried configuring both the TCP Proxy LB and an HTTP(S) LB along with Cloud NAT, and when I try to make an egress HTTP request it times out. Ingress via the LB is working properly. The VM instances do not have external IPs, which is a requirement for Cloud NAT.
I expect HTTP requests to hit the server, and the web server to make outbound HTTP requests via Cloud NAT, so that other servers need only whitelist a single IP address (a static IP assigned to the Cloud NAT gateway).
I'm trying to understand why you would need Cloud NAT in this scenario, since a TCP/SSL proxy load balancer will connect to the backends over a private connection and the backends won't be exposed to the Internet. Configuring just the TCP/SSL proxy would be enough for your scenario, in my opinion.
The following official documentation explains my point:
Backend VMs for HTTP(S), SSL Proxy, and TCP Proxy load balancers do not need external IP addresses themselves, nor do they need Cloud NAT to send replies to the load balancer. HTTP(S), SSL Proxy, and TCP Proxy load balancers communicate with backend VMs using their primary internal IP addresses.
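That covers replies to the load balancer. If the VMs still need to originate their own outbound requests from one static IP (the requirement stated in the question), a minimal Cloud NAT sketch looks like this; the names, region, and subnet are placeholders:

```shell
# Sketch: Cloud NAT for egress from VMs without external IPs,
# pinned to one reserved static address so customers can whitelist it.
gcloud compute addresses create nat-egress-ip --region us-central1

gcloud compute routers create nat-router \
    --network my-vpc --region us-central1

gcloud compute routers nats create nat-config \
    --router nat-router --region us-central1 \
    --nat-custom-subnet-ip-ranges my-subnet \
    --nat-external-ip-pool nat-egress-ip
```

Cloud NAT only handles traffic that leaves via the VPC's default internet route, which is consistent with ingress through the proxy LB continuing to work independently.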