I have one GCP HTTPS function. This function will be invoked either by another GCP function (via Pub/Sub) or by an external application.
I want my function to be accessible from these two sources only. By default, my function's ingress setting is "Allow all traffic".
How can I achieve this so that my function is accessible only from one specific external IP? I am a beginner in cloud technology, so I may have missed something.
Filtering by IP address is not the recommended way. Using authentication (with the IAM service checking authorization) is a much better solution. Your IPs can change, you can use a VPN, and so on. The network can change; your identity does not.
When you use Cloud Functions' native authentication (with the IAM service) and make your Cloud Function private, anyone can still reach it from the internet. BUT before the traffic reaches your Cloud Function, it is checked by GFE (Google Front End), which verifies the presence and validity of the authentication token and the IAM permission.
ONLY if all the conditions are met is your Cloud Function invoked. I sense the fear in your comment that anyone could invoke your function and it would cost a lot. With private functions, only authorized traffic is routed to your function (and therefore billed). All the bad traffic is discarded by Google for you.
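To make the caller side concrete, here is a rough sketch of invoking a private function with an ID token (the function URL is a placeholder, and the caller is assumed to run with a service account that holds the roles/cloudfunctions.invoker role on the target function):

```python
# Minimal sketch: calling a private (IAM-protected) Cloud Function with an ID token.
# Assumes the runtime has credentials for a service account that holds the
# roles/cloudfunctions.invoker role on the target function; the URL is a placeholder.
import google.auth.transport.requests
import google.oauth2.id_token
import requests

FUNCTION_URL = "https://REGION-PROJECT.cloudfunctions.net/my-function"  # placeholder

def call_private_function(payload: dict) -> requests.Response:
    # Mint a Google-signed ID token whose audience is the function URL.
    auth_request = google.auth.transport.requests.Request()
    id_token = google.oauth2.id_token.fetch_id_token(auth_request, FUNCTION_URL)

    # GFE validates this token and the caller's IAM permission before the
    # function is invoked (and billed).
    return requests.post(
        FUNCTION_URL,
        json=payload,
        headers={"Authorization": f"Bearer {id_token}"},
        timeout=30,
    )
```

The same pattern works from the other GCP function and from your external application, as long as each runs with a service account that has the invoker role.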
If you really want to enforce your pattern, you can do the following:
set the ingress to "internal and cloud load balancing"
Add a VPC connector to your other GCP function with the egress set to ALL (to route all the traffic through your VPC)
Create an HTTPS load balancer with a serverless NEG that references your Cloud Function
Activate Cloud Armor on your load balancer with a rule that allows only the IP that you want (see the sketch below).
Heavy, boring, and expensive (you have to pay for the load balancer, about $14 per month) for nothing more than an identity check. Prefer the first solution ;)
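For completeness, if you do go the Cloud Armor route, here is a rough sketch of the IP allowlist policy from the last step, using the google-cloud-compute Python client (the project ID and source IP are placeholders; you would still attach the policy to the backend service behind your load balancer):

```python
# Sketch: a Cloud Armor policy that allows one source IP and denies the rest.
# Assumes the google-cloud-compute client library; the project and IP are placeholders.
from google.cloud import compute_v1

def create_allowlist_policy(project: str = "my-project") -> None:
    client = compute_v1.SecurityPoliciesClient()

    allow_rule = compute_v1.SecurityPolicyRule(
        priority=1000,
        action="allow",
        description="Allow the one external IP we trust",
        match=compute_v1.SecurityPolicyRuleMatcher(
            versioned_expr="SRC_IPS_V1",
            config=compute_v1.SecurityPolicyRuleMatcherConfig(
                src_ip_ranges=["203.0.113.10/32"],  # placeholder IP
            ),
        ),
    )
    deny_rule = compute_v1.SecurityPolicyRule(
        priority=2147483647,  # default rule, evaluated last
        action="deny(403)",
        description="Deny everything else",
        match=compute_v1.SecurityPolicyRuleMatcher(
            versioned_expr="SRC_IPS_V1",
            config=compute_v1.SecurityPolicyRuleMatcherConfig(src_ip_ranges=["*"]),
        ),
    )

    policy = compute_v1.SecurityPolicy(
        name="allow-single-ip",
        rules=[allow_rule, deny_rule],
    )
    operation = client.insert(project=project, security_policy_resource=policy)
    operation.result()  # wait for the policy to be created
```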
I am developing a solution where a Java application hosted on GKE needs to make an outbound HTTP call to a Cloud Function deployed in a different GCP project. The GKE cluster operates on a shared network that has firewall rules for the CIDR ranges in that network.
For example: a GKE cluster and application deployed under GCP project A wish to invoke a serverless GCP function deployed to project B.
There are a number of firewall rules configured on the shared network the GKE cluster operates on, causing my HTTP call to time out, as the HTTP trigger URL does not fall within an allowed CIDR range (in that shared network).
What have I tried?
I have lightly investigated one or two solutions which make use of Cloud NAT & Router to proxy the HTTP call to the Cloud Function trigger endpoint, but I am wondering if there are any other, simpler suggestions? The address range for Cloud Functions is massive, so allowing that range is out of the question.
I was thinking about maybe deploying the Cloud Function into the same VPC and applying ingress restrictions to it; would that allow the HTTP trigger to exist within the allowed IP range?
Thanks in advance
Serverless VPC Access is a GCP solution specifically designed to achieve what you want. The communication between the serverless environment and the VPC is done through an internal IP address, and is therefore never exposed to the Internet.
For your specific infrastructure, you would need to follow the guide Connecting to a Shared VPC network.
I've been searching on Google and keep getting referred to the VPC documentation https://cloud.google.com/vpc-service-controls/docs/set-up-private-connectivity, but I don't think this will solve my problem. I'm trying to limit the IP addresses accessing my webhook function on GCP, and I need to use API Gateway (Apigee isn't an option at the moment for me). Any advice would be great!
If API Gateway isn't a requirement, I propose this solution:
Update the ingress setting of your function to internal_and_cloud_load_balancing so that it accepts only traffic from your VPCs and load balancers
Then create an external HTTPS load balancer with a serverless NEG that points to your Cloud Function
Add Cloud Armor policies on your load balancer to filter source IPs.
We see a lot of scans and attempts to hack our various external ingresses in GCP, and the majority of these come from outside the U.S. The neat thing is that we don't service anyone outside maybe 5 U.S. states, and I'd like to know how to only allow ingress from IPs located inside the U.S. How can I create a firewall rule that does this in GCP? Is that even possible? Google searches asking this question yield nothing, not even anyone asking this question. Netflix and Hulu seem to have no problems doing this; can we do it too?
The closest thing GCP has for preventing attacks like DDoS or hacking attempts is Cloud Armor. Google Cloud Armor delivers defense at scale against infrastructure and application Distributed Denial of Service (DDoS) attacks, using Google’s global infrastructure and security systems. Cloud Armor works in conjunction with the Global HTTP(S) LB and provides a layer of protection for your applications running on the backend servers. You cannot use it without a Global HTTP(S) LB.
To limit traffic and protect the HTTPS LB, you can configure Cloud Armor security policies, which are made up of rules that allow or prohibit traffic from IP addresses or ranges defined in the rule. You can create Google Cloud Armor security policies with IP deny lists and allow lists that restrict unauthorized access to the HTTP(S) Load Balancer from the internet. It’s worth mentioning that in GCP, firewall rules are specified at the VPC level and are not coupled with the HTTPS Load Balancer. Additionally, you can find more information in GCP's Best Practices for DDoS Protection and Mitigation on Google Cloud Platform.
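Since the question is specifically about allowing only U.S. traffic: Cloud Armor's rules language also exposes the source country as origin.region_code, so a geo-based allow rule is possible. A rough sketch with the google-cloud-compute Python client (the project and policy names are placeholders, and the policy's default rule is assumed to be a deny):

```python
# Sketch: add a rule to an existing Cloud Armor policy that allows only US traffic.
# Assumes the google-cloud-compute client and an existing policy ("us-only-policy"
# and "my-project" are placeholders). Cloud Armor's rules language exposes the
# source country as origin.region_code.
from google.cloud import compute_v1

def add_us_only_rule(project: str = "my-project",
                     policy_name: str = "us-only-policy") -> None:
    client = compute_v1.SecurityPoliciesClient()

    rule = compute_v1.SecurityPolicyRule(
        priority=1000,
        action="allow",
        description="Allow traffic originating in the US",
        match=compute_v1.SecurityPolicyRuleMatcher(
            expr=compute_v1.Expr(expression="origin.region_code == 'US'"),
        ),
    )

    # With the policy's default rule set to deny, everything outside the US is
    # dropped at the edge before it reaches your backends.
    operation = client.add_rule(
        project=project,
        security_policy=policy_name,
        security_policy_rule_resource=rule,
    )
    operation.result()
```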
I’m not sure if this will answer your question or not, but there is a way to combine multiple GCP services like Cloud Armor, Memorystore, and Cloud Run so that you can dynamically configure a security policy that serves your purpose, adding functionality similar to fail2ban to a cloud environment.
I set up a tutorial for this and you can find it here: hodo.dev/posts/post-39-gcp-fail2ban-cloud-armor
I have provisioned an AWS API Gateway and created a Lambda function to connect to an external REST API. The API Gateway & Lambda are not in a VPC, so the egress IP address is random. The challenge I have is that the external REST API is behind a firewall, which requires the IP address or subnet of the Lambda to be whitelisted.
I have looked at the AWS IP address page (below); however, there is no explicit mention of either API Gateway or Lambda.
https://docs.aws.amazon.com/general/latest/gr/aws-ip-ranges.html#filter-json-file
Has anyone come across this before and found a resolution to it? For the purposes of this solution, I cannot put the API Gateway & Lambdas in a VPC.
Any help would be greatly appreciated!
API Gateway seems to be irrelevant to this discussion. If I understand your question, you're trying to make API requests from a Lambda function to a remote API server and you want those requests to originate from a known IP address so that you can whitelist that IP at the remote server.
First thing I would say is don't use IP whitelisting; use authenticated API requests instead.
If that's not possible then use VPC with an Internet Gateway (IGW). Create a NAT and an Elastic IP, launch the Lambda into a private subnet of that VPC, and route the subnet's non-local traffic to the NAT. Then whitelist the NAT's Elastic IP on the remote API server. Examples here and here.
I know that you said you "cannot put [...] Lambdas in a VPC", but if you don't then you have no control over the originating IP address.
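For reference, a rough sketch of that wiring with boto3 (all resource IDs are placeholders; in practice you would probably do this with CloudFormation or Terraform rather than ad hoc API calls):

```python
# Sketch: give Lambdas in a private subnet a fixed egress IP via a NAT gateway.
# All resource IDs below are placeholders; this only shows the wiring described
# above, not a complete deployment script.
import boto3

ec2 = boto3.client("ec2")

# 1. Allocate an Elastic IP; this is the address you whitelist at the remote API.
eip = ec2.allocate_address(Domain="vpc")

# 2. Create a NAT gateway in a PUBLIC subnet of the VPC, backed by that Elastic IP.
nat = ec2.create_nat_gateway(
    SubnetId="subnet-0public000000000",          # placeholder public subnet
    AllocationId=eip["AllocationId"],
)
nat_id = nat["NatGateway"]["NatGatewayId"]
ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

# 3. Route the PRIVATE subnet's non-local traffic through the NAT gateway.
#    The Lambda function is attached to this private subnet.
ec2.create_route(
    RouteTableId="rtb-0private00000000",          # placeholder private route table
    DestinationCidrBlock="0.0.0.0/0",
    NatGatewayId=nat_id,
)

print("Whitelist this IP at the remote API:", eip["PublicIp"])
```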
It is frustrating that the only way to ensure a Lambda function uses a static IP, without a hack, is to put the Lambda inside a VPC and create a NAT with an Elastic IP, as many other answers nicely explain.
However, NATs cost around $40 per month in regions that I am familiar with, even with minimal traffic. This may be cost prohibitive for certain use cases, such as if you need multiple dev/test/staging environments.
One possible workaround (which should be used with caution) is to create a micro EC2 instance with an Elastic IP (which gives the static IP address), then your choice of proxy/routing so you can make HTTP calls by tunnelling from the Lambda function through the EC2 instance (e.g., SSH from the Lambda function into the EC2 instance, then curl from the EC2 instance to the endpoint that is protected by an allowlist).
This adds a few extra hoops to jump through and could introduce security vulnerabilities that should be mitigated (e.g., beware of storing SSH keys or passwords inside a Lambda function, and ensure security groups are tight), but I wanted to post this as a possible workaround for any devs who need a cost-effective way to connect to an endpoint that enforces allowlist rules.
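Here is a rough sketch of the SSH-and-curl variant with paramiko (the host, user, key path, and target URL are all placeholders, and in real use the private key should come from something like Secrets Manager rather than the deployment package):

```python
# Sketch: Lambda tunnels through a small EC2 instance with an Elastic IP so the
# outbound call originates from that static address. Host, key, and URL are placeholders.
import json
import paramiko

PROXY_HOST = "203.0.113.25"          # Elastic IP of the micro EC2 instance (placeholder)
PROXY_USER = "ec2-user"
TARGET_URL = "https://api.example.com/resource"  # endpoint behind the IP allowlist

def lambda_handler(event, context):
    # Load the private key from a secure source (placeholder path here).
    key = paramiko.RSAKey.from_private_key_file("/tmp/proxy_key.pem")

    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # pin the host key in real use
    client.connect(PROXY_HOST, username=PROXY_USER, pkey=key, timeout=10)
    try:
        # Run curl on the EC2 instance, so the request comes from its Elastic IP.
        _, stdout, stderr = client.exec_command(f"curl -s {TARGET_URL}")
        body = stdout.read().decode()
        error = stderr.read().decode()
    finally:
        client.close()

    if error:
        raise RuntimeError(f"Proxy call failed: {error}")
    return {"statusCode": 200, "body": json.dumps({"response": body})}
```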
We are running several microservices on AWS ECS. We have a single ALB with a different target group for each microservice. We want to expose some endpoints externally, while other endpoints are just for internal communication.
The problem is that if we put our load balancer in a public subnet, we expose all registered endpoints externally. If we move the load balancer to a private subnet, we have to use some sort of proxy in the public subnet, which requires additional infrastructure/cost and a custom implementation of security concerns like DDoS protection, etc.
What possible approaches can we take, or does AWS provide some sort of out-of-the-box solution for this?
I would strongly recommend running 2 ALBs for this. Sure, it will cost you more (not double, because the traffic costs won't be doubled), but it's much more straightforward to have an internal load balancer and an external load balancer. Work hours cost money too! Running 2 ALBs will be the least admin and probably the cheapest overall.
Check out WAF. It stands for Web Application Firewall and is available as an AWS service. Follow these steps as guidance:
Create a WAF ACL.
Add "String and regex matching" condition for your private endpoints.
Add "IP addresses" condition for your IP list/range that are allowed to access private endpoints.
Create a rule in your ACL to Allow access if both conditions above are met.
Associate your ALB with the WAF ACL (see the sketch below).
UPDATE:
In this case you have to use an external-facing ALB in a public subnet, as mentioned by Dan Farrell in the comment below.
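A rough sketch of those steps with boto3 and the current WAFv2 API (the classic "conditions" wording above maps to statements here; note the rule is written as a block of non-allowlisted traffic to the private paths with a default allow, which achieves the same effect; all names, paths, IP ranges, and ARNs are placeholders):

```python
# Sketch of the steps above using the WAFv2 API. Names, scope, ARNs, paths,
# and IP ranges are all placeholders.
import boto3

wafv2 = boto3.client("wafv2")

# IP set for the addresses allowed to reach the private endpoints.
ip_set = wafv2.create_ip_set(
    Name="internal-callers",
    Scope="REGIONAL",                      # REGIONAL scope is used for ALBs
    IPAddressVersion="IPV4",
    Addresses=["203.0.113.0/24"],          # placeholder allowlist
)

# Web ACL: block requests to /internal/* unless they come from the IP set.
acl = wafv2.create_web_acl(
    Name="private-endpoint-acl",
    Scope="REGIONAL",
    DefaultAction={"Allow": {}},
    VisibilityConfig={
        "SampledRequestsEnabled": True,
        "CloudWatchMetricsEnabled": True,
        "MetricName": "private-endpoint-acl",
    },
    Rules=[
        {
            "Name": "block-internal-paths-from-outside",
            "Priority": 0,
            "Action": {"Block": {}},
            "VisibilityConfig": {
                "SampledRequestsEnabled": True,
                "CloudWatchMetricsEnabled": True,
                "MetricName": "block-internal-paths",
            },
            "Statement": {
                "AndStatement": {
                    "Statements": [
                        {   # request targets a private endpoint...
                            "ByteMatchStatement": {
                                "SearchString": b"/internal/",
                                "FieldToMatch": {"UriPath": {}},
                                "TextTransformations": [
                                    {"Priority": 0, "Type": "NONE"}
                                ],
                                "PositionalConstraint": "STARTS_WITH",
                            }
                        },
                        {   # ...and does NOT come from the allowlisted IPs
                            "NotStatement": {
                                "Statement": {
                                    "IPSetReferenceStatement": {
                                        "ARN": ip_set["Summary"]["ARN"]
                                    }
                                }
                            }
                        },
                    ]
                }
            },
        }
    ],
)

# Attach the Web ACL to the ALB (placeholder load balancer ARN).
wafv2.associate_web_acl(
    WebACLArn=acl["Summary"]["ARN"],
    ResourceArn="arn:aws:elasticloadbalancing:REGION:ACCOUNT:loadbalancer/app/my-alb/ID",
)
```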
I would suggest doing it like this:
one internal ALB
one target group per microservice, as limited by ECS.
one Network Load Balancer (NLB), with one IP-based target group.
The IP-based target group will contain the internal ALB's IP addresses. Since the private IP addresses of an ALB are not static, you will need to set up a CloudWatch scheduled rule with this Lambda function (forked from the AWS documentation and modified to work on public endpoints as well); a simplified sketch of what it does is shown after the link:
https://github.com/talal-shobaita/populate-nlb-tg-withalb/
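A much-simplified sketch of what that scheduled Lambda does (the linked function is more complete and also deregisters stale IPs; the ALB name and target group ARN below are placeholders):

```python
# Rough sketch: look up the internal ALB's current private IPs and register them
# in the NLB's IP-based target group. The ALB name and target group ARN are
# placeholders; deregistration of stale IPs is omitted here.
import boto3

ec2 = boto3.client("ec2")
elbv2 = boto3.client("elbv2")

ALB_NAME = "my-internal-alb"                                  # placeholder
TARGET_GROUP_ARN = "arn:aws:elasticloadbalancing:REGION:ACCOUNT:targetgroup/alb-ips/ID"

def sync_alb_ips(event=None, context=None):
    # The ALB's private IPs live on its network interfaces, whose description
    # contains "ELB app/<alb-name>/...".
    enis = ec2.describe_network_interfaces(
        Filters=[{"Name": "description", "Values": [f"ELB app/{ALB_NAME}/*"]}]
    )["NetworkInterfaces"]
    alb_ips = [eni["PrivateIpAddress"] for eni in enis]

    # Register those IPs as targets of the NLB's IP target group.
    elbv2.register_targets(
        TargetGroupArn=TARGET_GROUP_ARN,
        Targets=[{"Id": ip, "Port": 80} for ip in alb_ips],
    )
    return alb_ips
```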
Both ALB and NLB are scalable and protected from DDoS by AWS. AWS WAF is another great tool that can be attached directly to your ALB for extended protection.
Alternatively, you can wait for AWS to support multiple target group registrations per service; it is already on their roadmap:
https://github.com/aws/containers-roadmap/issues/104
This is how we eventually solved it.
Two load balancers, one in a private and one in a public subnet.
Some APIs are meant to be public, so they are exposed directly through the public LB.
For the private API endpoints that need to be exposed, we added a proxy behind the public LB and routed those particular paths from the public LB to the private LB through this proxy.
These days API Gateway is the best way to do this. You can have your API serve a number of different endpoints while exposing only the public ones via API Gateway, which proxies back to the API.
I don't see it mentioned yet, so I'll note that we use AWS Cloud Map for internal routing and an ALB for "external" (in our case simply intra/inter-VPC) communication. I didn't read it in depth, but I think this article describes it.
AWS Cloud Map is a managed solution that lets you map logical names to the components/resources for an application. It allows applications to discover the resources using one of the AWS SDKs, RESTful API calls, or DNS queries. AWS Cloud Map serves registered resources, which can be Amazon DynamoDB tables, Amazon Simple Queue Service (SQS) queues, any higher-level application services that are built using EC2 instances or ECS tasks, or using a serverless stack.
...
Amazon ECS is tightly integrated with AWS Cloud Map to enable service discovery for compute workloads running in ECS. When you enable service discovery for ECS services, it automatically keeps track of all task instances in AWS Cloud Map.
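As a rough sketch of the internal-routing side, assuming ECS service discovery has registered the tasks in a Cloud Map namespace (the namespace and service names are placeholders):

```python
# Minimal sketch of internal routing with Cloud Map: a service looks up another
# service's registered instances by logical name instead of a fixed IP.
# The namespace and service names are placeholders.
import boto3

sd = boto3.client("servicediscovery")

def resolve_service(service: str = "orders-service",
                    namespace: str = "internal.local") -> list:
    # Returns the attributes Cloud Map keeps for each healthy registered instance
    # (for ECS service discovery these include the task's private IP, and the
    # port when one was registered).
    response = sd.discover_instances(
        NamespaceName=namespace,
        ServiceName=service,
        HealthStatus="HEALTHY",
    )
    return [
        f"{inst['Attributes']['AWS_INSTANCE_IPV4']}:"
        f"{inst['Attributes'].get('AWS_INSTANCE_PORT', '80')}"
        for inst in response["Instances"]
    ]
```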
You want to look at AWS Security Groups.
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.
Even more specific to your use-case though might be their doc on ELB Security Groups. These are, as you may expect, security groups that are applied at the ELB level rather than the Instance level.
Using security groups, you can specify who has access to which endpoints.
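For example, a minimal boto3 sketch of restricting an ALB's security group so only a known CIDR range can reach it on HTTPS (the group ID and CIDR are placeholders):

```python
# Sketch: allow only a known CIDR range to reach the ALB's security group on 443.
# The group ID and CIDR below are placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",          # placeholder: the ALB's security group
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [
                {
                    "CidrIp": "203.0.113.0/24",   # placeholder allowed range
                    "Description": "Office/VPN range allowed to reach the ALB",
                }
            ],
        }
    ],
)
```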