Serverless VPC connector, Google APIs, traffic routing and best practices [closed] - google-cloud-platform

Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 5 months ago.
Working for a Findata company, we have strict compliance requirements to be considered "safe" to work with.
I took some time reading about the Serverless VPC Access connector specifically, and it raised two main questions.
Here's an architecture diagram that may help answer Question 2.
Question 1
I understand that when creating a Serverless VPC connector, you can connect to any private IP present in the same VPC. For instance, a Cloud Run app can connect to a Cloud SQL instance through its private IP.
What I am still wondering is how it works when using Google Cloud APIs. For instance, take a Cloud Run app that consumes data from BigQuery.
Knowing that we can configure egress traffic to be routed either entirely through the connector, or only for requests to private IPs:
If we route all traffic through the VPC connector then, from my tests, it reaches the BigQuery API only if Private Google Access is enabled on the subnet associated with the connector.
So in that case the traffic definitely goes through the VPC. The (big?) downside is that it consumes your connector's bandwidth, right? And if the app scales up, bandwidth consumption increases accordingly.
My question there is:
To avoid this overhead, does the "Route only requests to private IPs through the VPC connector" option also use the private network to reach Google APIs? Or does that traffic go over the Internet?
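As far as I can tell from the docs, the "private IPs only" setting routes only destinations in RFC 1918 (and similar private) ranges through the connector, and Google API endpoints, including the Private Google Access VIPs such as 199.36.153.8/30 for private.googleapis.com, fall outside those ranges. A quick sketch with Python's ipaddress module illustrates the distinction (the Cloud SQL address is illustrative):

```python
import ipaddress

# A Cloud SQL private IP inside the VPC: RFC 1918, so the
# "private IPs only" egress setting sends it via the connector.
cloud_sql_ip = ipaddress.ip_address("10.0.0.5")

# A private.googleapis.com VIP used by Private Google Access:
# not an RFC 1918 address, so with "private IPs only" this
# traffic would not match the connector's routing.
pga_vip = ipaddress.ip_address("199.36.153.8")

print(cloud_sql_ip.is_private)  # True  -> routed via the connector
print(pga_vip.is_private)       # False -> not a private-range destination
```

So the question boils down to whether that non-private Google API traffic still stays on Google's network when it bypasses the connector.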
Question 2
For us, connectors are expensive. We were thinking about how to deploy them (if they are required at all, which depends on the answer to Question 1).
From what I know, for expensive network setups (like sharing a Cloud Interconnect link), people tend to create a host project that manages all the networking and share it using Shared VPC.
My question there is:
Is the same pattern worth considering for Serverless VPC connectors? Is it better to create a few big connectors shared across multiple serverless services, or many small ones?
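One cost angle worth modeling: each connector has a minimum instance count (two, last I checked), so many small connectors each pay that floor, while a few shared ones amortize it. A back-of-the-envelope sketch with purely illustrative prices and throughput, not current GCP pricing or limits:

```python
# Compare a few large shared connectors vs. many small per-service ones.
# All numbers below are illustrative assumptions.

INSTANCE_THROUGHPUT_MBPS = 100   # assumed throughput per connector instance
COST_PER_INSTANCE_HOUR = 0.01    # assumed; check real GCP pricing

def connector_cost_per_hour(instances: int) -> float:
    return instances * COST_PER_INSTANCE_HOUR

def connector_throughput_mbps(instances: int) -> int:
    return instances * INSTANCE_THROUGHPUT_MBPS

# Option A: one shared connector sized for 10 services (10 instances)
shared = connector_cost_per_hour(10)

# Option B: 10 per-service connectors at the minimum size (2 instances each)
per_service = connector_cost_per_hour(2) * 10

print(f"shared: ${shared:.2f}/h, per-service: ${per_service:.2f}/h")
```

With these assumed numbers, the per-service layout costs twice as much for the same fleet, purely because of the per-connector minimum size; the trade-off is that a shared connector's bandwidth is contended by all services using it.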

Related

Multiple VPCs with same CIDR blocks to connect to Redis VPC [closed]

Closed 28 days ago.
I'm trying to make a setup on AWS with the following structure (very much simplified):
Where we have different environments/stages, each with the exact same setup, accessing a global environment. It uses VPC peering and works for the first environment. I specifically chose CIDR 11.0.0.0/16 for the global VPC and CIDR 10.0.0.0/16 for the non-global environment so that VPC peering is possible, as the CIDR blocks do not collide.
When a new developer joins the team, an exact copy of the non-global environment is set up. While the global and non-global CIDRs do not collide, the CIDRs of the sibling environments now collide, causing an error when trying to update the global environment's route table.
How does one go about this?
I guess we could rework the non-global environments' CIDRs to something like 10.1.0.0/16, 10.2.0.0/16, and so on. However, we would rather not do this, so that we don't have to manually maintain a map of which environment has which CIDR.
If there is an even better way to connect to the Redis cluster from other VPCs, I would love to hear it as well.
FYI: this is the setup for the devs; a similar setup will be made for production (albeit with only one peered connection, between Redis and the production application, and no other stages).
It is not possible to peer with multiple VPCs with the same CIDR range.
Also, please note that an 11.x.x.x network is NOT an appropriate range, since it uses public IP addresses. You should always use private network address ranges (see "Private network" on Wikipedia) to avoid conflicting with real IP addresses on the Internet.
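The renumbering the asker is reluctant to do by hand can be generated deterministically instead of maintained as a manual map. A sketch using Python's ipaddress module (the global range and environment count are illustrative):

```python
import ipaddress

# A private range for the global VPC, replacing the public 11.0.0.0/16.
global_vpc = ipaddress.ip_network("10.100.0.0/16")

def env_cidr(index: int) -> ipaddress.IPv4Network:
    """Deterministically assign 10.<index>.0.0/16 to environment #index."""
    return ipaddress.ip_network(f"10.{index}.0.0/16")

envs = [env_cidr(i) for i in range(1, 4)]  # dev environments 1..3

# VPC peering requires non-overlapping ranges; verify up front.
for i, a in enumerate(envs):
    assert not a.overlaps(global_vpc)
    for b in envs[i + 1:]:
        assert not a.overlaps(b)

print([str(n) for n in envs])  # ['10.1.0.0/16', '10.2.0.0/16', '10.3.0.0/16']
```

Because each environment's CIDR is a pure function of its index, nothing needs to be tracked manually; the index just has to be unique per environment.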

How to block a particular IP from making requests to my EC2 instance? [closed]

Closed last month.
There is a bot that keeps scanning my site for security vulnerabilities, and it has ended up DDoSing me about 4 times per day.
I want to block this IP: 31.220.61.65
My architecture is very simple:
I have one EC2 instance with an Elastic IP serving a website. (Of course, it has a security group attached as well, allowing port 443.)
Every couple of hours, the IP above (and others) makes around 300 requests per minute, crashing my modest server and leaving the real users without service.
How can I block requests from particular IPs from reaching the EC2 instance in my simple architecture?
You can use Network ACLs.
If your EC2 instance is behind a load balancer or CloudFront distribution you can add a Web Application Firewall and block the IP via a WAF rule.
The easiest and quickest way to block IPs for such attacks is to block them at the NACL level:
Go to the subnet of your EC2 instance.
Open the Network ACL associated with that subnet.
On the NACL detail page, add a Deny rule for IP 31.220.61.65/32, with a rule number lower than your allow rules (NACL rules are evaluated in ascending order, and the first match wins).
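The same deny rule can also be created programmatically. A sketch using Boto3 (the NACL ID is illustrative, and actually applying the rule requires AWS credentials, so the API call is kept in a separate helper):

```python
def nacl_deny_rule(ip: str, rule_number: int = 90) -> dict:
    """Keyword arguments for EC2.Client.create_network_acl_entry().

    NACL rules are evaluated in ascending rule-number order, so the
    deny rule must use a lower number than the existing allow rules.
    """
    return {
        "NetworkAclId": "acl-0123456789abcdef0",  # illustrative ID
        "RuleNumber": rule_number,
        "Protocol": "-1",          # all protocols
        "RuleAction": "deny",
        "Egress": False,           # inbound rule
        "CidrBlock": f"{ip}/32",
    }

def apply_rule(ip: str) -> None:
    # Requires boto3 installed and AWS credentials configured.
    import boto3
    ec2 = boto3.client("ec2")
    ec2.create_network_acl_entry(**nacl_deny_rule(ip))

rule = nacl_deny_rule("31.220.61.65")
print(rule["CidrBlock"])  # 31.220.61.65/32
```

One caveat: NACLs are stateless, so a deny rule here blocks the traffic before it ever reaches the instance, which is exactly what you want for this kind of scanner.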

Access Google Cloud Run through the VPN [closed]

Closed 4 months ago.
I am trying to hide my Cloud Run application behind a VPN.
What I already did:
I created an instance in my VPC (vpn-gateway)
Configured a VPN there
Configured routing to allow access to all machines in the VPC
Created a VPC connector as a bridge between Cloud Run and my VPC
Set 'Route all traffic through the VPC connector' on Cloud Run
Routed traffic for 216.239.0.0/16 (aka Cloud Run) through the VPN
At this point it works well.
Unfortunately, I have 3 projects (prod/testing/dev), and I'd like to have one VPN for all of them.
I decided to add a Shared VPC on top of all my projects, and it seems that hosts from the host (parent) project cannot access Cloud Run in a service project.
What is the best practice for hiding Cloud Run applications from 3rd-party users?
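For reference, the piece usually paired with the routing steps above is the service's ingress setting: with ingress restricted to internal traffic, the service is only reachable from the VPC (or Shared VPC), regardless of who knows its URL. A sketch of the relevant annotation in a Cloud Run service manifest; the service and image names are illustrative:

```yaml
# Cloud Run service manifest sketch: restrict ingress to internal traffic.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: my-app                               # illustrative name
  annotations:
    run.googleapis.com/ingress: internal     # only VPC-originated traffic
spec:
  template:
    spec:
      containers:
        - image: gcr.io/my-project/my-app    # illustrative image
```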

Can we use Unmanaged Instance group for external HTTPS load balancer in GCP [closed]

Closed 2 years ago.
I am planning to create an external HTTPS load balancer on GCP, but in my use case there are no identically configured VMs, and I need to make sure it is a highly available (HA) setup.
Since there are no identical VMs, I am planning to go with an unmanaged instance group (for the backend service configuration in the LB), but the GCP documentation mentions that unmanaged instance groups are not suitable for HA.
Can you help me figure out which approach to choose for this use case?
Thanks in advance...
Why are your VMs not identical?
If you put them in the same unmanaged instance group attached to a backend service, the same request may end up on any of the VMs, so they need to serve the same content for the same request.
Can you give an example of your use case?
If, for example, you need more VMs to respond to requests depending on the load, you need a single image that can be provisioned on multiple VMs.
Another solution would be to have a different backend service for each unique VM type, so that every unique VM type would be in its own managed instance group.
I don't think there is any way besides managed instance groups to achieve HA with VM instances on GCP.
Unmanaged means not managed! And that means a lot:
No health check: if your VM is down, slow, or unstable, the instance group does nothing about it.
No scalability: when usage increases or decreases, the instance group won't create or delete instances.
No instance rollout: when you have a new version of your VM (with patches, new app versions, ...), the instance group won't ensure a rollout without downtime.
Because it's not managed, you have to handle all of these things yourself. Not impossible, but a lot of work. Or switch to a managed instance group, even if your group has only 1 VM!
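That advice, a managed instance group even for a single VM, can be sketched in Terraform; all names, zones, and values here are illustrative, and the instance template resource is assumed to exist elsewhere:

```hcl
# Sketch: a managed instance group of size 1 with autohealing,
# one per unique VM type.
resource "google_compute_health_check" "app" {
  name = "app-hc"
  http_health_check {
    port = 80
  }
}

resource "google_compute_instance_group_manager" "app" {
  name               = "app-mig"
  base_instance_name = "app"
  zone               = "us-central1-a"
  target_size        = 1   # even a single VM benefits from autohealing

  version {
    instance_template = google_compute_instance_template.app.id
  }

  auto_healing_policies {
    health_check      = google_compute_health_check.app.id
    initial_delay_sec = 300
  }
}
```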
I would like to give some suggestions on how to make your infra setup reliable:
Get a global HTTPS LB.
Even though you have different VM configs, still use managed instance groups as the backends.
Get an SSL cert for your domain, if one is used, and make it secure too.
I tried using a different backend service for each unique VM type (so every unique VM type is in its own managed instance group), set up the HTTPS load balancer, and it worked!

Alternatives for hosting a simple Slack App (AWS is too expensive)? [closed]

We don't allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 2 years ago.
I've been developing a small Slack application for my team. It's a very simple app to help organize projects. I wrote it in Python and used AWS Lambda (one of the Slack API hosting recommendations) to host it. As the usage of this app will be very occasional, I thought the AWS Free Tier could handle it for a while. But I was surprised to discover that, while Lambda has a free tier, I need to configure a NAT Gateway that costs $0.045 per hour to get anything useful out of it.
I'm very disappointed by this. I can't justify the cost of this NAT Gateway for such a small and simple application (which will be used by 5-10 people maximum, and only sometimes). Is there a workaround that I could use (I've heard about NAT instances)?
EDIT: I created a NAT instance and tried using it with my app. The thing is, Slack is now throwing a Timeout Reached error (Slack expects a response within 3000 ms). So, are NAT instances slower than NAT Gateways?
NAT instances provide Internet connectivity, via network address translation, for EC2 instances located in private subnets. They are not related to API Gateway or Lambda functions.
NAT Instances
API Gateway does not have a per-hour cost unless you configure caching, which is probably not necessary for your use case. More details are needed to be sure.
API Caching
Note: You can call your Lambda functions directly from your Python code if you do not need all of the features of API Gateway.
Boto3 Lambda.Client
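A minimal sketch of calling a Lambda function directly with Boto3, as the note above suggests. The function name and payload are illustrative, and the boto3 import is deferred into the helper that needs credentials, so the argument-building part can be shown on its own:

```python
import json

def build_invoke_args(function_name: str, payload: dict) -> dict:
    """Build the keyword arguments for Lambda.Client.invoke()."""
    return {
        "FunctionName": function_name,
        "InvocationType": "RequestResponse",  # synchronous call
        "Payload": json.dumps(payload).encode("utf-8"),
    }

def invoke_slack_handler(payload: dict) -> dict:
    # Requires boto3 installed and AWS credentials configured.
    # "my-slack-app" is an illustrative function name.
    import boto3
    client = boto3.client("lambda")
    response = client.invoke(**build_invoke_args("my-slack-app", payload))
    return json.loads(response["Payload"].read())

args = build_invoke_args("my-slack-app", {"text": "hello"})
print(args["InvocationType"])  # RequestResponse
```

Calling the function this way bypasses API Gateway entirely, which also sidesteps its latency; the Slack 3000 ms deadline then depends only on the Lambda cold start and your function's own runtime.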