We see a lot of scans and attempts to hack our various external ingresses in GCP, and the majority of these come from outside the U.S. The thing is, we don't serve anyone outside of maybe 5 U.S. states, and I'd like to know how to allow ingress only from IPs located inside the U.S. How can I create a firewall rule that does this in GCP, and is that even possible? Google searches asking this question yield nothing, not even anyone asking this question. Netflix and Hulu seem to have no problems doing this; can we do it too?
The closest thing GCP has to prevent attacks like DDoS or hacking attempts is Cloud Armor. Google Cloud Armor delivers defense at scale against infrastructure and application Distributed Denial of Service (DDoS) attacks, using Google's global infrastructure and security systems. Cloud Armor works in conjunction with the Global HTTP(S) Load Balancer and provides a layer of protection for the applications running on your backend servers. You cannot use it without a Global HTTP(S) Load Balancer.
To limit traffic and protect the HTTPS Load Balancer, you can configure Cloud Armor security policies, which are made up of rules that allow or deny traffic from the IP addresses or ranges defined in each rule. You can create Google Cloud Armor security policies with IP deny lists and allow lists that restrict unauthorized access to the HTTP(S) Load Balancer from the internet. It's worth mentioning that in GCP, firewall rules are defined at the VPC level and are not coupled to the HTTPS Load Balancer. Additionally, you can find more information in GCP's Best Practices for DDoS Protection and Mitigation on Google Cloud Platform.
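As a rough sketch of what such a policy could look like for your "U.S. only" case (the policy and backend-service names are placeholders, and geo-based rules assume your Cloud Armor tier supports the `origin.region_code` expression):

```bash
# Create a Cloud Armor security policy (name is a placeholder).
gcloud compute security-policies create us-only-policy \
    --description="Allow traffic originating in the US only"

# Allow requests whose source IP geolocates to the US.
gcloud compute security-policies rules create 1000 \
    --security-policy=us-only-policy \
    --expression="origin.region_code == 'US'" \
    --action=allow

# Change the default rule (lowest priority) to deny everything else.
gcloud compute security-policies rules update 2147483647 \
    --security-policy=us-only-policy \
    --action=deny-403

# Attach the policy to the backend service behind your global HTTP(S) LB.
gcloud compute backend-services update my-backend-service \
    --security-policy=us-only-policy \
    --global
```

You could equally build the allow rules from explicit IP ranges with `--src-ip-ranges`, as the IP allow/deny lists mentioned above.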
I'm not sure if this will answer your question or not, but by combining multiple GCP services such as Cloud Armor, Memorystore, and Cloud Run you can dynamically configure a security policy that serves your purpose, adding functionality similar to fail2ban to a cloud environment.
I set up a tutorial for this, and you can find it here: hodo.dev/posts/post-39-gcp-fail2ban-cloud-armor
Related
I have one GCP HTTPS function. This function will be invoked either by another GCP function (via Pub/Sub) or from an external application.
I want my function to be accessible from these 2 sources only. By default, my function's ingress setting is "Allow All Traffic".
How can I achieve this so that my function is accessible only from one specific external IP? I am a beginner in cloud technology, so I may have missed something.
Filtering by IP address is not the recommended way. Using authentication (with the IAM service checking the authorization) is a much better solution. Your IPs can change, you might use a VPN, and so on. The network can change, but not your identity.
When you use Cloud Functions' native authentication filtering (with the IAM service) and make your Cloud Function private, anyone can still send requests to it from the internet. BUT, before reaching your Cloud Function, the traffic is checked by the GFE (Google Front End), which verifies the presence and validity of the authentication token and the IAM permission.
ONLY if all the conditions are met is your Cloud Function invoked. I sense the fear in your comment that anyone could invoke your function and it would cost a lot. With private functions, only authorized traffic is routed to your function (and therefore billed). All the bad traffic is discarded by Google for you.
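As a minimal sketch of that IAM-based approach (function name, region, and service account are placeholders):

```bash
# Make the function private by removing any public invoker binding, if present.
gcloud functions remove-iam-policy-binding my-function \
    --region=us-central1 \
    --member="allUsers" \
    --role="roles/cloudfunctions.invoker"

# Grant the invoker role only to the identity used by the other GCP
# function / external application (placeholder service account).
gcloud functions add-iam-policy-binding my-function \
    --region=us-central1 \
    --member="serviceAccount:caller-sa@my-project.iam.gserviceaccount.com" \
    --role="roles/cloudfunctions.invoker"
```

The caller then sends a Google-signed identity token for that service account in the Authorization header of each request.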
If you really want to enforce your pattern, you can do the following (a rough sketch of the commands is included below):
set the ingress to "internal and cloud load balancing"
Add a VPC connector to your other GCP function with the egress set to ALL (to route all the traffic through your VPC)
Create an HTTPS load balancer with a serverless NEG that references your Cloud Function
Activate Cloud Armor on your load balancer with a rule that allows only the IPs that you want.
Heavy, boring, and expensive (you have to pay for the load balancer, about $14 per month) for nothing more than an identity check. Prefer the first solution ;)
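For completeness, a rough sketch of the load-balancer route described in the steps above (all names, regions, and IPs are placeholders; these are flags for already-deployed functions, so runtime/source flags are omitted, and details can differ between 1st and 2nd gen functions):

```bash
# 1. Restrict the target function's ingress to internal traffic and the LB.
gcloud functions deploy my-function \
    --trigger-http \
    --ingress-settings=internal-and-gclb \
    --region=us-central1

# 2. On the calling function, route all egress through your VPC connector.
gcloud functions deploy caller-function \
    --trigger-topic=my-topic \
    --vpc-connector=my-connector \
    --egress-settings=all \
    --region=us-central1

# 3. Create a serverless NEG that points at the target function
#    (used as the backend of the HTTPS load balancer).
gcloud compute network-endpoint-groups create my-function-neg \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-function-name=my-function

# 4. Add a Cloud Armor rule that allows only the one external IP,
#    attached to the LB's backend service.
gcloud compute security-policies rules create 1000 \
    --security-policy=my-policy \
    --src-ip-ranges="203.0.113.10/32" \
    --action=allow
```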
I am developing a solution where a Java application hosted on GKE needs to make an outbound HTTP call to a Cloud Function deployed in a different GCP project. The GKE cluster operates on a shared network that has firewall rules for the CIDR ranges in that shared network.
For example - GKE cluster & Application deployed under GCP Project A, wishes to invoke a Serverless GCP Function deployed to project B.
There are a number of firewall rules configured on the shared network that the GKE cluster operates on, causing my HTTP call to time out, because the HTTP trigger URL does not map to an allowed CIDR range (in that shared network).
What have I tried?
I have lightly investigated one or two solutions which make use of Cloud NAT & Cloud Router to proxy the HTTP call to the Cloud Function trigger endpoint, but I am wondering if there are any other, simpler suggestions. The address range for Cloud Functions is massive, so allowing that entire range is out of the question.
I was thinking about maybe deploying the Cloud Function into the same VPC and applying ingress restrictions to it; would that allow the HTTP trigger to exist within the allowed IP range?
Thanks in advance
Serverless VPC Access is a GCP solution specifically designed to achieve what you want. The communication between the serverless environment and the VPC is done through internal IP addresses and is therefore never exposed to the Internet.
For your specific infrastructure, you would need to follow the guide Connecting to a Shared VPC network.
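A minimal sketch of that setup, assuming placeholder project IDs, connector, subnet, and region names (the exact subnet sizing and IAM prerequisites are covered in the guide):

```bash
# In the project that hosts the Cloud Function (project B), create a
# Serverless VPC Access connector backed by a subnet of the Shared VPC
# owned by the host project.
gcloud compute networks vpc-access connectors create shared-vpc-connector \
    --region=us-central1 \
    --subnet=serverless-connector-subnet \
    --subnet-project=HOST_PROJECT_ID \
    --project=PROJECT_B_ID

# Attach the connector to the function and keep its ingress internal, so it
# is reachable from the Shared VPC (e.g. the GKE workloads) but not from
# the public internet.
gcloud functions deploy my-function \
    --trigger-http \
    --vpc-connector=shared-vpc-connector \
    --ingress-settings=internal-only \
    --region=us-central1 \
    --project=PROJECT_B_ID
```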
Is GCP Firewall able to allow ingress traffic based on a specific domain name?
I've googled it and didn't find any results on this.
All I know is it can allow or deny based on IP address.
A network firewall typically acts at the packet level, and since network packets don't carry information about the domain, the standard GCP VPC firewall will not let you do that.
What you are looking for is an Application Firewall (or Layer 7 Firewall). Google Cloud has another service called Cloud Armor that has WAF (Web Application Firewall) capabilities. I think that by using Cloud Armor and load balancers you might be able to do what you want.
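For example, a minimal sketch of a Layer 7 rule (policy name and domain are placeholders, and this assumes that what you want is to match on the HTTP Host header):

```bash
# Allow only requests whose Host header matches the given domain; combine
# this with a default deny rule on the policy so everything else is blocked.
gcloud compute security-policies rules create 1000 \
    --security-policy=l7-policy \
    --expression="request.headers['host'] == 'app.example.com'" \
    --action=allow
```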
AWS has AWS Shield for free, and they seem pretty similar. Right now DDoS protection is the most important reason to go cloud for me, so this may be the deciding factor.
Cloud Armor is not free; you can check out its pricing here, and it's not integrated for free with other GCP products. Looking at the AWS documentation, it seems to be the equivalent of "AWS Shield Advanced".
However, just by using the Google Cloud infrastructure, you are protected by the Google Front End if you use HTTP(S) Load Balancing. This seems to be similar to what AWS offers in their "Shield Standard" tier, which appears to be the free tier as well.
This document here contains more information about what measures you can take in GCP to mitigate and protect yourself from DDoS attacks.
Perhaps the most relevant segments for your question are these:
DDoS Protection by enabling Proxy-based Load Balancing
When you enable HTTP(S) Load Balancing or SSL Proxy Load Balancing, Google infrastructure mitigates and absorbs many Layer 4 and below attacks, such as SYN floods, IP fragment floods, port exhaustion, etc.
[...]
Protection by Google Frontend infrastructure
With Google Cloud Global Load Balancing, the frontend infrastructure, which terminates user traffic, automatically scales to absorb certain types of attacks (e.g., SYN floods) before they reach your compute instances.
So GCP Load Balancing protects you by default from common attacks, while Cloud Armor extends this by allowing you to create and set policies for more complex/targeted DDoS attacks on your services.
We have several microservices on AWS ECS. We have a single ALB, which has a different target group for each microservice. We want to expose some endpoints externally, while some endpoints are just for internal communication.
The problem is that if we put our load balancer in a public subnet, then we are exposing all registered endpoints externally. If we move the load balancer to a private subnet, we have to use some sort of proxy in the public subnet, which requires additional infra/cost and a custom implementation of all the security concerns like DDoS protection, etc.
What possible approaches can we take, or does AWS provide some sort of out-of-the-box solution for this?
I would strongly recommend running 2 ALBs for this. Sure, it will cost you more (not double, because the traffic costs won't be doubled), but it's much more straightforward to have an internal load balancer and an external load balancer. Work hours cost money too! Running 2 ALBs will be the least admin effort and probably the cheapest overall.
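A minimal sketch of that two-ALB setup (subnet and security-group IDs are placeholders):

```bash
# Internet-facing ALB in the public subnets for the public endpoints.
aws elbv2 create-load-balancer \
    --name public-alb \
    --scheme internet-facing \
    --subnets subnet-pub-a subnet-pub-b \
    --security-groups sg-public

# Internal ALB in the private subnets for service-to-service traffic.
aws elbv2 create-load-balancer \
    --name internal-alb \
    --scheme internal \
    --subnets subnet-priv-a subnet-priv-b \
    --security-groups sg-internal
```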
Check out WAF. It stands for Web Application Firewall and is available as an AWS service. Follow these steps as guidance (a rough CLI sketch is included below):
Create a WAF ACL.
Add "String and regex matching" condition for your private endpoints.
Add "IP addresses" condition for your IP list/range that are allowed to access private endpoints.
Create a rule in your ACL to Allow access if both conditions above are met.
Assign your ALB to the WAF ACL.
UPDATE:
In this case you have to use an external-facing ALB in a public subnet, as mentioned by Dan Farrell in the comment below.
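A rough sketch of the steps above using the newer WAFv2 CLI (names, ARNs, and the 203.0.113.0/24 range are placeholders; the rule that ties the private-path match to the IP set is supplied as JSON and is only outlined here):

```bash
# IP set containing the addresses allowed to reach the private endpoints.
aws wafv2 create-ip-set \
    --name allowed-internal-ips \
    --scope REGIONAL \
    --ip-address-version IPV4 \
    --addresses 203.0.113.0/24

# Create the web ACL; the rules JSON (not shown) restricts the private URI
# paths to the IP set above, as described in the steps.
aws wafv2 create-web-acl \
    --name private-endpoint-acl \
    --scope REGIONAL \
    --default-action Allow={} \
    --visibility-config SampledRequestsEnabled=true,CloudWatchMetricsEnabled=true,MetricName=privateEndpointAcl \
    --rules file://rules.json

# Associate the web ACL with the ALB (placeholder ARNs).
aws wafv2 associate-web-acl \
    --web-acl-arn arn:aws:wafv2:...:webacl/private-endpoint-acl/... \
    --resource-arn arn:aws:elasticloadbalancing:...:loadbalancer/app/my-alb/...
```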
I would suggest doing it like this:
one internal ALB
one target group per microservice, as limited by ECS.
one Network Load Balancer (NLB), with one IP-based target group.
The IP-based target group will contain the internal ALB's IP addresses. Since the private IP addresses of an ALB are not static, you will need to set up a CloudWatch scheduled rule with this Lambda function (forked from the AWS documentation and modified to work on public endpoints as well):
https://github.com/talal-shobaita/populate-nlb-tg-withalb/
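A minimal sketch of the NLB side of this (the VPC ID, port, and the ALB's current private IPs are placeholders that the Lambda above would keep up to date):

```bash
# IP-based target group that will hold the internal ALB's private IPs.
aws elbv2 create-target-group \
    --name internal-alb-ips \
    --protocol TCP \
    --port 443 \
    --vpc-id vpc-0123456789abcdef0 \
    --target-type ip

# Register the ALB's current private IPs (normally done by the Lambda).
aws elbv2 register-targets \
    --target-group-arn arn:aws:elasticloadbalancing:...:targetgroup/internal-alb-ips/... \
    --targets Id=10.0.1.15 Id=10.0.2.23
```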
Both ALB and NLB are scalable and protected from DDoS by AWS. AWS WAF is another great tool that can be attached directly to your ALB listener for extended protection.
Alternatively, you can wait for AWS to support multiple target group registrations per service; it is already on their roadmap:
https://github.com/aws/containers-roadmap/issues/104
This is how we eventually solved it.
Two LBs: one in a private and one in a public subnet.
Some APIs are meant to be public, so they are exposed directly through the public LB.
For the private API endpoints that needed to be exposed, we added a proxy in the public LB and routed those particular paths from the public LB to the private LB through this proxy.
These days, API Gateway is the best way to do this. You can have your API serve a number of different endpoints while exposing only the public ones via API Gateway, which proxies back to the API.
I don't see it mentioned yet, so I'll note that we use AWS Cloud Map for internal routing and an ALB for "external" (in our case simply intra-/inter-VPC) communication. I didn't read it in depth, but I think this article describes it.
AWS Cloud Map is a managed solution that lets you map logical names to the components/resources for an application. It allows applications to discover the resources using one of the AWS SDKs, RESTful API calls, or DNS queries. AWS Cloud Map serves registered resources, which can be Amazon DynamoDB tables, Amazon Simple Queue Service (SQS) queues, any higher-level application services that are built using EC2 instances or ECS tasks, or using a serverless stack.
...
Amazon ECS is tightly integrated with AWS Cloud Map to enable service discovery for compute workloads running in ECS. When you enable service discovery for ECS services, it automatically keeps track of all task instances in AWS Cloud Map.
You want to look at AWS Security Groups.
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.
Even more specific to your use-case though might be their doc on ELB Security Groups. These are, as you may expect, security groups that are applied at the ELB level rather than the Instance level.
Using security groups, you can specify who has access to which endpoints.
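As a minimal sketch (group IDs, ports, and the CIDR range are placeholders), you could allow one listener only from a specific external range and another only from an internal security group:

```bash
# Allow HTTPS to the ELB only from a specific external CIDR range.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaaaaaaaaaaaaaaa \
    --protocol tcp \
    --port 443 \
    --cidr 203.0.113.0/24

# Allow an internal-only port solely from instances in another security group.
aws ec2 authorize-security-group-ingress \
    --group-id sg-0aaaaaaaaaaaaaaaa \
    --protocol tcp \
    --port 8443 \
    --source-group sg-0bbbbbbbbbbbbbbbb
```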