Is it not possible to run GCP Cloud Functions (or any GCP serverless compute resources, for that matter) inside private networks? Are they always on shared capacity and public networks? Am I missing something? Don't confuse this with egress: I know it is possible to access private networks from serverless resources, but is it possible to limit access to the functions at the network level? AFAIK you can do this with Lambdas on AWS and with App Service on Azure (although on Azure it was expensive, since you need to move away from shared capacity).
A Cloud Function has two-way traffic: ingress and egress.
Ingress: you can allow traffic from the internet, or restrict it to your project's VPC or your VPC Service Controls perimeter.
Egress: by default, traffic is routed directly to the internet. You can use a Serverless VPC Access connector to:
either route only private IP ranges (RFC 1918) to the VPC,
or route all traffic to the VPC.
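As a rough sketch (the function, runtime, and connector names here are placeholders), both directions can be configured at deploy time:

    # Restrict ingress to the VPC and send only RFC 1918 egress through the connector
    gcloud functions deploy my-function \
        --runtime=nodejs16 \
        --trigger-http \
        --ingress-settings=internal-only \
        --vpc-connector=my-connector \
        --egress-settings=private-ranges-only

Setting --egress-settings=all instead routes all outbound traffic through the VPC.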
Is it possible to access GCP PaaS (App Engine, Cloud Functions, Cloud Run) internally (through a VPC)?
I see in this doc: https://cloud.google.com/vpc/docs/configure-serverless-vpc-access
"Serverless VPC Access only allows requests to be initiated by the serverless environment. Requests initiated by a VM must use the external address of your serverless service—see Private Google Access for more information."
But what I'm searching for is something like "Serverless VPC Access allows in/out requests".
There are two directions: in and out.
Requests TO the serverless app
You can use ingress controls with Cloud Functions and Cloud Run services: you can require that only connections coming from your VPC (or your VPC Service Controls perimeter) reach your serverless app. With App Engine you have firewall rules instead, but they don't work with private IPs.
Requests FROM the serverless app
Here you want to reach a private resource exposed only on your VPC through a private IP. With Cloud Run, Cloud Functions, and App Engine, you can attach a Serverless VPC Access connector to achieve this.
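Creating such a connector is a one-liner; a minimal sketch, assuming a VPC named my-vpc and a free /28 range:

    # The /28 range must not overlap any existing subnet in the VPC
    gcloud compute networks vpc-access connectors create my-connector \
        --region=europe-west1 \
        --network=my-vpc \
        --range=10.8.0.0/28

The connector is then referenced at deploy time with --vpc-connector.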
EDIT 1
With your firewall appliance deployed on Google Cloud, App Engine isn't the ideal product for this. Indeed, with App Engine you can't restrict ingress: it always accepts traffic from the internet, even if you already have a component (here, your appliance) on the Google Cloud network with a private IP.
One solution (to be tested; it depends on the appliance's capabilities) is to use Cloud NAT with a reserved static IP and route all traffic from the subnet where the appliance is deployed through it.
Then, on App Engine, you can set a firewall rule that accepts traffic only from this reserved static IP.
The latency will increase with all these layers...
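A minimal sketch of the reservation and the firewall rule, with placeholder names and addresses (the NAT setup itself follows the standard Cloud NAT guide):

    # Reserve a regional static external IP for Cloud NAT
    gcloud compute addresses create appliance-nat-ip --region=europe-west1
    # Allow only that IP through the App Engine firewall
    # (203.0.113.10 stands in for the reserved address; priority 100 is arbitrary)
    gcloud app firewall-rules create 100 \
        --action=ALLOW \
        --source-range=203.0.113.10
    # Flip the default rule (priority 2147483647) to deny everything else
    gcloud app firewall-rules update 2147483647 --action=DENY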
Goal
I'm using Google Cloud Functions for running my application logic and need to associate egress network traffic with an IP address in region europe-north1.
The problem
Cloud Functions are not available in region europe-north1, so I need to keep them in some other, supported region. If I set up an external static IP address in region europe-north1, Cloud Function egress doesn't seem to work at all anymore; the network traffic is blocked.
What I've tried
My Cloud Functions run just fine in region europe-west1, and I have been able to associate egress traffic with a static IP address in region europe-west1 successfully by following this GCP guide. This is what the setup looks like:
[Cloud Function] -- [Serverless VPC Access connector]
|
[VPC network]
|
[Cloud NAT gateway] -- [Cloud Router] -- [static IP address]
In this setup all the resources are in the same region, europe-west1, but that's not what I want: I need the static IP in region europe-north1.
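For reference, the working europe-west1 setup above corresponds roughly to the following commands; every name is a placeholder, and the connector is assumed to exist already:

    # Static IP, router, and NAT, all in europe-west1
    gcloud compute addresses create egress-ip --region=europe-west1
    gcloud compute routers create egress-router \
        --network=my-vpc --region=europe-west1
    gcloud compute routers nats create egress-nat \
        --router=egress-router --region=europe-west1 \
        --nat-external-ip-pool=egress-ip \
        --nat-all-subnet-ip-ranges
    # Route all function egress through the connector
    # (updating an existing function; first deploys also need a runtime and trigger)
    gcloud functions deploy my-function \
        --region=europe-west1 \
        --vpc-connector=egress-connector \
        --egress-settings=all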
If I get a static IP address in region europe-north1, the Cloud Router and the Cloud NAT gateway need to be in that region too, and then egress is blocked.
I've tried both VPC network dynamic routing modes, regional and global, to no avail.
Serverless VPC Access connectors are supported in europe-north1, but they can't be attached to Cloud Functions in other regions (and Cloud Functions aren't supported in that region).
I even started setting up a forward proxy on a VM instance but soon realised it's likely not going to work with HTTPS traffic...
How can I route the traffic from Cloud Functions to a static IP in europe-north1?
Put Extensible Service Proxy V2 (ESPv2) in front of your function. Cloud Run is now available in Hamina (europe-north1), so this is a very convenient way to expose your API from Finland.
https://cloud.google.com/endpoints/docs/openapi/set-up-cloud-functions-espv2
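A sketch of the deployment, with placeholder names (the linked guide has the full flow, including wiring the Endpoints config into the ESPv2 image):

    # Deploy the ESPv2 image to Cloud Run in europe-north1
    gcloud run deploy my-api-gateway \
        --image=gcr.io/endpoints-release/endpoints-runtime-serverless:2 \
        --platform=managed \
        --region=europe-north1 \
        --allow-unauthenticated
    # The OpenAPI spec's x-google-backend points at the Cloud Function's URL
    gcloud endpoints services deploy openapi-functions.yaml \
        --project=my-project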
I am trying to connect to a Cloud Function with an HTTP trigger. It has an ingress rule to allow only internal traffic, and I want to access it from another function running in a different project.
I tried creating VPCs in both projects and peering them. In the Cloud Functions I am using a VPC connector for egress, but I still am not able to access it.
Is there a direct way to access a Cloud Function running in, say, project-A from a Cloud Function running in project-B using the network settings?
P.S. Due to some constraints I cannot use a Shared VPC.
You can't achieve this today. When you set up VPC peering, you define an extra hop in the routes to reach the other VPC.
The problem is that when you call your Cloud Function, you don't call it by its IP but by its DNS name.
Therefore the request never uses your VPC peering to reach the other VPC and, through it, the Cloud Function. It resolves the public DNS name, just as any external system would, and is therefore blocked by the internal-only ingress rule.
Why can't we have multiple network interfaces on a single VPC (which has multiple subnets) in GCP, whereas it is possible in AWS and Azure?
I came across a problem where I had to set up multiple network interfaces in GCP, but in my case all the subnets were in a single VPC network. I read the GCP documentation and learned that in GCP it is not possible to have multiple network interfaces in a single VPC network: to use multiple network interfaces, each one must be in a different VPC network, whereas it's the complete opposite in AWS and Azure.
In AWS, all network interfaces must be in the same VPC; you cannot attach a network interface from another VPC.
In an Azure vNet, all network interfaces must be in the same vNet; you cannot attach a network interface from another vNet.
Of course, a VPC in Google Cloud is a little different from AWS: for example, Azure vNets and AWS VPCs are regional in nature, whereas a GCP VPC is global, and there are several other differences as well.
I was just curious about this limitation I ran into in GCP.
Your assumption is wrong. You cannot attach more than one network interface to the same subnet, but you can attach interfaces to different subnets in the same VPC.
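For illustration, the gcloud syntax for attaching two interfaces at instance creation looks like this; the zone and subnet names are placeholders, and which subnet/VPC combinations are accepted is exactly the point in question above:

    # Two NICs on one VM, each in a different subnet
    gcloud compute instances create multi-nic-vm \
        --zone=us-central1-a \
        --network-interface=subnet=subnet-a \
        --network-interface=subnet=subnet-b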
We have several microservices on AWS ECS, with a single ALB that has a different target group for each microservice. We want to expose some endpoints externally while keeping other endpoints for internal communication only.
The problem is that if we put our load balancer in a public subnet, we expose all registered endpoints externally. If we move the load balancer to a private subnet, we have to run some sort of proxy in the public subnet, which requires additional infrastructure/cost and a custom implementation of all the security concerns like DDoS protection.
What approaches could we take, or does AWS provide some sort of out-of-the-box solution for this?
I would strongly recommend running two ALBs for this. Sure, it will cost you more (though not double, because the traffic costs won't be doubled), but it's much more straightforward to have an internal load balancer and an external load balancer. Work hours cost money too! Running two ALBs will be the least admin effort and probably the cheapest overall.
Check out AWS WAF. It stands for Web Application Firewall and is available as an AWS service. Follow these steps as guidance (a sketch of the final step follows the list):
Create a WAF ACL.
Add a "String and regex matching" condition for your private endpoints.
Add an "IP addresses" condition for the IP list/range that is allowed to access the private endpoints.
Create a rule in your ACL to allow access only when both conditions above are met.
Associate your ALB with the WAF ACL.
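Note that the steps above describe the classic WAF console; with the current WAFv2 CLI, the final association step looks roughly like this (both ARNs are placeholders):

    # Attach the web ACL to the ALB
    aws wafv2 associate-web-acl \
        --web-acl-arn arn:aws:wafv2:eu-west-1:123456789012:regional/webacl/my-acl/abc123 \
        --resource-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:loadbalancer/app/my-alb/abc123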
UPDATE:
In this case you have to use an internet-facing ALB in a public subnet, as mentioned by Dan Farrell in the comment below.
I would suggest doing it like this:
one internal ALB
one target group per microservice, as limited by ECS
one Network Load Balancer (NLB) with one IP-based target group
The IP-based target group will hold the internal ALB's IP addresses. As the private IP addresses of an ALB are not static, you will need to set up a CloudWatch scheduled rule that triggers this Lambda function (forked from the AWS documentation and modified to work on public endpoints as well):
https://github.com/talal-shobaita/populate-nlb-tg-withalb/
Both ALB and NLB are scalable and protected from DDoS by AWS; AWS WAF is another great tool that can be attached directly to your ALB listener for extended protection.
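The target group wiring looks roughly like this with the AWS CLI (all IDs and ARNs are placeholders; the Lambda above keeps the registered IPs current):

    # IP-type target group for the NLB
    aws elbv2 create-target-group \
        --name alb-ip-targets \
        --protocol TCP --port 80 \
        --vpc-id vpc-0123456789abcdef0 \
        --target-type ip
    # Register the internal ALB's current private IPs
    aws elbv2 register-targets \
        --target-group-arn arn:aws:elasticloadbalancing:eu-west-1:123456789012:targetgroup/alb-ip-targets/abc123 \
        --targets Id=10.0.1.10 Id=10.0.2.10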
Alternatively, you can wait for AWS to support multiple target group registrations per service; it is already on their roadmap:
https://github.com/aws/containers-roadmap/issues/104
This is how we eventually solved it:
two LBs, one in a private and one in a public subnet.
Some APIs are meant to be public, so they are exposed directly through the public LB.
For the private API endpoints that need to be exposed, we added a proxy behind the public LB and routed those particular paths from the public LB to the private LB through this proxy.
These days API Gateway is the best way to do this. You can have your API serve a number of different endpoints while exposing only the public ones via API Gateway and proxying back to the API.
I don't see it mentioned yet, so I'll note that we use AWS Cloud Map for internal routing and an ALB for "external" (in our case simply intra/inter-VPC) communication. I haven't read it in depth, but I think this article describes it.
AWS Cloud Map is a managed solution that lets you map logical names to the components/resources for an application. It allows applications to discover the resources using one of the AWS SDKs, RESTful API calls, or DNS queries. AWS Cloud Map serves registered resources, which can be Amazon DynamoDB tables, Amazon Simple Queue Service (SQS) queues, any higher-level application services that are built using EC2 instances or ECS tasks, or using a serverless stack.
...
Amazon ECS is tightly integrated with AWS Cloud Map to enable service discovery for compute workloads running in ECS. When you enable service discovery for ECS services, it automatically keeps track of all task instances in AWS Cloud Map.
You want to look at AWS Security Groups.
A security group acts as a virtual firewall for your instance to control inbound and outbound traffic.
For each security group, you add rules that control the inbound traffic to instances, and a separate set of rules that control the outbound traffic.
Even more specific to your use case, though, might be the documentation on ELB security groups. These are, as you might expect, security groups applied at the ELB level rather than the instance level.
Using security groups, you can specify who has access to which endpoints.
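For instance, a rule that admits only callers inside the VPC's address range might look like this (the group ID and CIDR are placeholder values):

    # Allow HTTPS only from the VPC's own address range
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp \
        --port 443 \
        --cidr 10.0.0.0/16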