Firewall issue - egress from GKE to Cloud Function HTTP Trigger - google-cloud-platform

I am developing a solution in which a Java application hosted on GKE makes an outbound HTTP call to a Cloud Function deployed under a different GCP project. The GKE cluster operates on a shared VPC network that has firewall rules for the CIDR ranges in that shared network.
For example: the GKE cluster and application are deployed under GCP project A and need to invoke a serverless Cloud Function deployed to project B.
A number of firewall rules configured on the shared network that the GKE cluster operates on cause my HTTP call to time out, as the HTTP trigger URL does not fall within an allowed CIDR range (in that shared network).
What have I tried?
I have briefly investigated one or two solutions that use Cloud NAT and Cloud Router to proxy the HTTP call to the Cloud Function trigger endpoint, but I am wondering whether there are other, simpler suggestions. The address range for Cloud Functions is massive, so allowlisting that entire range is out of the question.
I was also thinking about deploying the Cloud Function into the same VPC and applying ingress restrictions to it; would that put the HTTP trigger inside the allowed IP range?
Thanks in advance

Serverless VPC Access is a GCP solution specifically designed to achieve what you want. The communication between the serverless environment and the VPC happens through an internal IP address and is therefore never exposed to the internet.
For your specific infrastructure, you would need to follow the guide Connecting to a Shared VPC network.
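Once the private path is in place, the call from the GKE pod is an ordinary HTTPS request, and the failure mode tells you which layer is blocking it. Below is a minimal Python sketch of such a probe; the trigger URL is hypothetical:

    # pip install requests -- minimal connectivity probe; the URL is hypothetical
    import requests

    TRIGGER_URL = "https://europe-west1-project-b.cloudfunctions.net/my-function"

    try:
        # With ingress restricted to internal traffic, this succeeds only when the
        # request originates from the VPC (or Shared VPC) the function is attached to.
        resp = requests.get(TRIGGER_URL, timeout=10)
        # A 403/404 here typically means the request reached Google's front end but
        # was rejected by the function's ingress settings or IAM, not by a firewall.
        print("reached Google front end:", resp.status_code)
    except requests.exceptions.ConnectionError as exc:
        # A timeout or connection error usually means an egress firewall rule on
        # the shared network is dropping the traffic before it leaves the VPC.
        print("blocked before reaching the function:", exc)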

Related

GCP Https Function - Ingress settings

I have one GCP HTTPS function. This function will be invoked either by another GCP function (via Pub/Sub) or by an external application.
I want my function to be accessible from these two sources only. By default, my function's ingress setting is "Allow all traffic".
How can I make my function accessible only from one specific external IP? I am a beginner in cloud technology, so I may have missed something.
Filtering by IP address is not the recommended way. Using authentication (with the IAM service checking authorization) is a much better solution. Your IPs can change, you might go through a VPN, and so on: the network can change, but your identity does not.
When you use Cloud Functions' native authentication filtering (with the IAM service) and make your Cloud Function private, anyone can still reach it from the internet. BUT, before the traffic reaches your Cloud Function, it is checked by the GFE (Google Front End), which verifies the presence and validity of the authentication token and the IAM permission.
ONLY if all these conditions are met is your Cloud Function invoked. I sense the fear in your comment that anyone could invoke your function and run up a large bill. With private functions, only authorized traffic is routed to your function (and therefore billed); all the bad traffic is discarded by Google for you.
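For illustration, here is a minimal Python sketch of such an authenticated call, assuming the caller runs as a service account holding the Cloud Functions Invoker role; the function URL is hypothetical:

    # pip install google-auth requests
    import google.auth.transport.requests
    import google.oauth2.id_token
    import requests

    # Hypothetical trigger URL of the private function.
    FUNCTION_URL = "https://europe-west1-my-project.cloudfunctions.net/my-function"

    # Fetch an ID token with the function URL as audience, using the ambient
    # service account credentials (e.g. on GKE, GCE, or another function).
    auth_request = google.auth.transport.requests.Request()
    token = google.oauth2.id_token.fetch_id_token(auth_request, FUNCTION_URL)

    # The GFE checks this token and the caller's IAM permission before the
    # function is invoked (and billed); unauthenticated traffic never reaches it.
    response = requests.get(
        FUNCTION_URL,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    print(response.status_code, response.text)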
If you really want to enforce your pattern, you can do the following:
Set the ingress to "Internal and Cloud Load Balancing".
Add a VPC connector to your other GCP function with egress set to ALL (to route all its traffic through your VPC).
Create an HTTPS load balancer with a serverless NEG that references your Cloud Function.
Activate Cloud Armor on your load balancer with a rule that allows only the IPs that you want.
Heavy, boring, and expensive (you have to pay for the load balancer, about $14 per month) for nothing more than an identity check. Prefer the first solution ;)

Does Cloud Run with a VPC Connector send all original outbound traffic through the connector?

With fully managed Cloud Run connected to a VPC through a Serverless VPC Access connector, does all outbound traffic from Cloud Run go through that connector, or only traffic destined for RFC 1918 addresses?
If only for private IPs, how can I configure Cloud Run to send all of its outbound requests into the VPC?
(Note: with Cloud Functions there is an option to route either all traffic or only private IPs through the connector - reference.)
For now, the Serverless VPC Access connector for Cloud Run is still in beta, and some network features will be added in the future, including egress control.
The goal right now is to develop an implementation identical to the one Cloud Functions has, so it makes sense to quote that doc. Unfortunately, there is no ETA for it to be implemented. We encourage you to follow the release notes.
When reading that particular doc, keep in mind that it is the Cloud Run service that is fully managed, not the VPC Access connector. With that said, we can tell that not all the traffic would go through the VPC for now, only the traffic destined for private addresses.
Hope this is helpful! :)

cross-region k8s inter-cluster communication in GCP

I am looking for a way to access services/applications in a remote k8s cluster (C2) hosted in a different region (R2) from a client application in my current cluster (C1, in region R1).
The server application needs to be load-balanced (an FQDN is preferred over an IP).
Communication must go over the private network, with no internet hop.
I tried using an internal LB for C2, which didn't work; I later realized it is a regional product.
Moreover, it seems the same constraint also applies to VPC peering.
Please suggest how to achieve this.
You can't use any internal GCP LB across regions, as they are regional products. However, you may be able to use an NGINX internal ingress, as it may not be limited to the same region.
Otherwise, you can follow the guide Creating VPC-native clusters using Alias IPs, which can allow you to call pods directly (see the sketch below). It does not offer built-in load balancing, but it is an alternative.
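For illustration, since VPC-native pod ranges are routable anywhere in the VPC (a GCP VPC is a global resource), a client in C1 can reach a pod in C2 by its alias IP directly; a minimal Python sketch, with a hypothetical pod IP and port:

    # pip install requests -- minimal sketch; pod IP and port are hypothetical
    import requests

    # In a VPC-native (alias IP) cluster, pod IPs come from a secondary range
    # that is routable across the whole VPC, including from other regions.
    POD_IP = "10.52.3.17"

    resp = requests.get(f"http://{POD_IP}:8080/healthz", timeout=5)
    print(resp.status_code)

    # Caveats: pod IPs change when pods are rescheduled, and there is no load
    # balancing here; you need your own discovery/balancing layer on top.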
Finally, if you need to use the internal load balancers, you can create a VPN tunnel between the two regions and create a route that forces traffic through the gateway. Traffic coming through the tunnel will look regional to the ILB, but this configuration is more expensive, and with more moving parts there is a higher chance of failure.

How to expose API endpoints from a private AWS ALB

We have several microservices on AWS ECS behind a single ALB, with a different target group for each microservice. We want to expose some endpoints externally while keeping other endpoints for internal communication only.
The problem is that if we put our load balancer in a public subnet, we expose all registered endpoints externally. If we move the load balancer to a private subnet, we have to use some sort of proxy in the public subnet, which requires additional infrastructure/cost and a custom implementation of security concerns such as DDoS protection.
What approaches are possible, or does AWS provide some sort of out-of-the-box solution for this?
I would strongly recommend running two ALBs for this. Sure, it will cost you more (though not double, because the traffic costs won't double), but it's much more straightforward to have an internal load balancer and an external load balancer. Work hours cost money too! Running two ALBs will take the least administration and is probably the cheapest overall.
Check out WAF. It stands for Web Application Firewall and is available as an AWS service. Follow these steps as guidance (a rough code sketch follows the list):
Create a WAF ACL.
Add a "String and regex matching" condition for your private endpoints.
Add an "IP addresses" condition for the IP list/range that is allowed to access the private endpoints.
Create a rule in your ACL to allow access only if both conditions above are met.
Associate your ALB with the WAF ACL.
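The steps above use the classic WAF console terms; as a rough sketch, the same effect with boto3 against the newer WAFv2 API could look like this (all names, ARNs, and the /internal path prefix are hypothetical, and the rule is expressed as "block /internal unless the source IP is allowed"):

    # pip install boto3 -- rough WAFv2 sketch; names, ARNs and paths are hypothetical
    import boto3

    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    # IP set holding the addresses allowed to reach the private endpoints.
    ip_set = wafv2.create_ip_set(
        Name="internal-callers",
        Scope="REGIONAL",              # REGIONAL scope is what ALBs use
        IPAddressVersion="IPV4",
        Addresses=["203.0.113.0/24"],  # example allowed range
    )

    # Web ACL with one rule: block requests whose path starts with /internal
    # unless they originate from the allowed IP set.
    acl = wafv2.create_web_acl(
        Name="private-endpoints-acl",
        Scope="REGIONAL",
        DefaultAction={"Allow": {}},   # everything else passes through
        Rules=[{
            "Name": "block-internal-from-outside",
            "Priority": 0,
            "Statement": {"AndStatement": {"Statements": [
                {"ByteMatchStatement": {
                    "SearchString": b"/internal",
                    "FieldToMatch": {"UriPath": {}},
                    "PositionalConstraint": "STARTS_WITH",
                    "TextTransformations": [{"Priority": 0, "Type": "NONE"}],
                }},
                {"NotStatement": {"Statement": {
                    "IPSetReferenceStatement": {"ARN": ip_set["Summary"]["ARN"]},
                }}},
            ]}},
            "Action": {"Block": {}},
            "VisibilityConfig": {"SampledRequestsEnabled": True,
                                 "CloudWatchMetricsEnabled": True,
                                 "MetricName": "blockInternal"},
        }],
        VisibilityConfig={"SampledRequestsEnabled": True,
                          "CloudWatchMetricsEnabled": True,
                          "MetricName": "privateEndpointsAcl"},
    )

    # Attach the web ACL to the ALB.
    wafv2.associate_web_acl(
        WebACLArn=acl["Summary"]["ARN"],
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                    "loadbalancer/app/my-alb/0123456789abcdef",
    )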
UPDATE:
In this case you have to use an internet-facing ALB in a public subnet, as mentioned by Dan Farrell in the comment below.
I would suggest doing it like this:
one internal ALB
one target group per microservice, as limited by ECS.
one Network Load Balancer (NLB), with one IP-based target group.
The IP-based target group will hold the internal ALB's IP addresses. As the private IP addresses of an ALB are not static, you will need to set up a CloudWatch scheduled rule with this Lambda function (forked from the AWS documentation and modified to work on public endpoints as well; a sketch of the core registration call follows below):
https://github.com/talal-shobaita/populate-nlb-tg-withalb/
Both ALB and NLB are scalable and protected from DDoS by AWS. AWS WAF is another great tool that can be attached directly to your ALB listener for extended protection.
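For illustration, the core of what that scheduled Lambda does is keep the NLB's IP target group in sync with the ALB's current private IPs. A minimal sketch of the registration step, assuming the IPs were already discovered (the target group ARN and addresses are hypothetical):

    # pip install boto3 -- minimal sketch; ARN and IPs are hypothetical
    import boto3

    elbv2 = boto3.client("elbv2")

    NLB_TARGET_GROUP_ARN = (
        "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
        "targetgroup/alb-ips/0123456789abcdef"
    )

    # Private IPs of the internal ALB, e.g. resolved from its DNS name.
    # They change over time, which is why this runs on a schedule.
    current_alb_ips = ["10.0.1.25", "10.0.2.31"]

    # Register the current IPs into the NLB's IP-based target group; stale
    # addresses should be removed the same way with deregister_targets().
    elbv2.register_targets(
        TargetGroupArn=NLB_TARGET_GROUP_ARN,
        Targets=[{"Id": ip, "Port": 443} for ip in current_alb_ips],
    )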
Alternatively, you can wait for AWS to support multiple target group registrations per service; it is already on their roadmap:
https://github.com/aws/containers-roadmap/issues/104
This is how we eventually solved it:
Two LBs: one in a private and one in a public subnet.
Some APIs are meant to be public, so they are exposed directly through the public LB.
Some private API endpoints also need to be exposed, so we added a proxy behind the public LB and routed those particular paths from the public LB to the private LB through that proxy.
These days, API Gateway is the best way to do this. Your API can serve a number of different endpoints while only the public ones are exposed via API Gateway, which proxies back to the API.
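As a rough illustration of that pattern, boto3 can quick-create an HTTP API whose default route proxies to a backend; in practice you would define only the public routes, and a private backend would need a VPC link integration rather than the public URL assumed here:

    # pip install boto3 -- quick-create sketch; the backend URL is hypothetical
    import boto3

    apigw = boto3.client("apigatewayv2")

    # Quick-create an HTTP API: this sets up a $default route that proxies
    # every request to the target; restrict exposure by defining explicit
    # routes for just the public endpoints instead.
    api = apigw.create_api(
        Name="public-api",
        ProtocolType="HTTP",
        Target="https://backend.example.com",
    )
    print("invoke URL:", api["ApiEndpoint"])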
I don't see it mentioned yet, so I'll note that we use AWS Cloud Map for internal routing and an ALB for "external" (in our case simply intra/inter-VPC) communication. I didn't read it in depth, but I think this article describes it.
AWS Cloud Map is a managed solution that lets you map logical names to the components/resources for an application. It allows applications to discover the resources using one of the AWS SDKs, RESTful API calls, or DNS queries. AWS Cloud Map serves registered resources, which can be Amazon DynamoDB tables, Amazon Simple Queue Service (SQS) queues, any higher-level application services that are built using EC2 instances or ECS tasks, or using a serverless stack.
...
Amazon ECS is tightly integrated with AWS Cloud Map to enable service discovery for compute workloads running in ECS. When you enable service discovery for ECS services, it automatically keeps track of all task instances in AWS Cloud Map.
You want to look at AWS Security Groups.
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.
Even more specific to your use case, though, might be their doc on ELB security groups. These are, as you may expect, security groups that are applied at the ELB level rather than the instance level.
Using security groups, you can specify who has access to which endpoints.
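For illustration, a minimal boto3 sketch that restricts an ELB security group so only one CIDR range can reach an internal-only listener port (the group ID, port, and range are hypothetical):

    # pip install boto3 -- minimal sketch; group ID, port and CIDR are hypothetical
    import boto3

    ec2 = boto3.client("ec2")

    # Allow only one office/VPN range to reach the internal listener port;
    # with no broader rule, everyone else is denied by default.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 8443,
            "ToPort": 8443,
            "IpRanges": [{"CidrIp": "203.0.113.0/24",
                          "Description": "internal callers only"}],
        }],
    )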

Amazon Elastic Beanstalk internal and internet access

We’re trying to create a setup of multiple APIs via AWS Elastic Beanstalk (AEB). The reason we have chosen AEB is that it provides seamless deployment and scaling for the applications we deploy, without the need to manually create load balancers (LBs) and scaling rules. We would very much like to keep it this way, as we are planning on launching a (large) number of applications and APIs.
However, we’re facing a number of challenges with AEB.
First and foremost, some of the APIs need to communicate internally, and low latency is a core requirement for us. In order to utilize internal network communication in AEB, we have been “forced” to:
Allocate a VPC in Amazon
Deploy each application to this VPC - each behind its own internal LB
Now, when using the Elastic Beanstalk URLs, the APIs are able to resolve the internal IP of another API's LB; the latency is thus eliminated and all is good - the APIs can communicate with one another.
However, this spawns another issue for us:
Some of these “internally” allocated APIs (remember, they’re behind an internal LB in a VPC) must also be accessible from the internet.
We still haven’t found a way to make the internal LBs internet-accessible (while keeping their ability to also act as internal LBs), so any help on this matter is greatly appreciated.
Each application should be on a subnet within the VPC.
Update the ACL and the ELB security group to allow external access.
AWS Elastic Load Balancing Inside of a Virtual Private Cloud
Also, this question on SO contains relevant information: Amazon ELB in VPC