Multiple VPCs with same CIDR blocks to connect to Redis VPC [closed] - amazon-web-services

Closed. This question is not about programming or software development. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 28 days ago.
I'm trying to make a setup on AWS with the following structure (very much simplified):
Where we have different environments/stages with the same exact setup accessing a global environment. It utilizes VPC Peering and works for the first environment. I have specifically chosen CIDR 11.0.0.0/16 for the global VPC and CIDR 10.0.0.0/16 for the non-global environment so that VPC peering is possible as the CIDR blocks do not collide.
When adding a new developer to the team, an exact copy of the non-global environment is set up. While the global and non-global CIDRs do not collide, the CIDRs of the sibling environments now collide, creating an error when trying to update the global environment's route table.
How does one go about this?
I guess we could rework the non-global environments' CIDRs to something like 10.1.0.0/16, 10.2.0.0/16, and so on. However, we would very much like to avoid this, so that we don't have to manually maintain a map of which environments have which CIDRs.
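If you do end up renumbering the sibling environments, the per-environment CIDR doesn't have to be tracked by hand; it can be derived deterministically from an environment index. A small stdlib sketch (the 10.N.0.0/16 scheme here just mirrors the renumbering idea above):

```python
import ipaddress

# CIDR of the global (Redis) VPC from the setup above
GLOBAL_CIDR = ipaddress.ip_network("11.0.0.0/16")

def env_cidr(index: int) -> ipaddress.IPv4Network:
    """Derive the /16 for developer environment number `index` (1-255)."""
    if not 1 <= index <= 255:
        raise ValueError("index must be between 1 and 255")
    net = ipaddress.ip_network(f"10.{index}.0.0/16")
    if net.overlaps(GLOBAL_CIDR):
        raise ValueError(f"{net} overlaps the global VPC")
    return net

print(env_cidr(1))  # 10.1.0.0/16
print(env_cidr(2))  # 10.2.0.0/16
```

With this, the only per-environment state you need is a small integer, which most provisioning tools can derive from a counter or a workspace name.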
If there is an even better way to connect to the Redis cluster from other VPCs, I would love to hear it as well.
FYI: This is the setup for the devs and a similar setup will be made for production (albeit only with one peered connection between Redis and the production application - no other stages).

It is not possible to peer with multiple VPCs that have the same CIDR range.
Also, please note that an 11.x.x.x network is NOT an appropriate range, since it uses public IP addresses. You should always use Private network addresses - Wikipedia to avoid conflicting with actual IP addresses on the Internet.


How to block a particular IP from making requests to my EC2 instance? [closed]

Closed last month.
There is a bot that keeps scanning my site for security vulnerabilities, and it has ended up DDoSing me about 4 times per day.
I want to block this IP: 31.220.61.65
My architecture is very simple:
I have one EC2 instance with an Elastic IP serving a website. (Of course it has a security group attached as well, allowing port 443.)
Every couple of hours the IP above (and others) makes around 300 requests per minute, crashing my modest server and leaving the real users without service.
How can I block requests from particular IPs from reaching the EC2 instance in my simple architecture?
You can use Network ACLs.
If your EC2 instance is behind a load balancer or CloudFront distribution, you can add a Web Application Firewall and block the IP via a WAF rule.
The easiest and quickest way to block IPs for such attacks is to block them at the NACL level.
Go to the subnet of your EC2 instance.
Select the Network ACL tab.
Click the NACL link to open its detail page.
Add a Deny rule for IP 31.220.61.65/32.
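The same steps can be scripted with boto3. This is a sketch: the NACL ID below is a placeholder, AWS credentials are assumed to be configured, and you must pick a rule number lower than your existing ALLOW rules, since NACL rules are evaluated in ascending order.

```python
def block_ip(nacl_id: str, ip: str, rule_number: int = 50):
    """Add an inbound DENY entry for a single IP to a Network ACL."""
    if not 1 <= rule_number <= 32766:
        raise ValueError("NACL rule numbers must be between 1 and 32766")
    import boto3  # assumes AWS credentials and region are configured

    ec2 = boto3.client("ec2")
    return ec2.create_network_acl_entry(
        NetworkAclId=nacl_id,
        RuleNumber=rule_number,  # evaluated before higher-numbered ALLOW rules
        Protocol="-1",           # all protocols
        RuleAction="deny",
        Egress=False,            # inbound rule
        CidrBlock=f"{ip}/32",
    )

# Hypothetical NACL ID; find yours on the subnet's Network ACL tab:
# block_ip("acl-0123456789abcdef0", "31.220.61.65")
```

Keep in mind a NACL allows at most a few dozen rules, so for a rotating set of attacker IPs the WAF approach scales better.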

Access Google Cloud Run through the VPN [closed]

Closed 4 months ago.
I am trying to hide my Cloud Run application behind a VPN.
What I already did:
Created a box (vpn-gateway) in the VPC
Configured the VPN there
Configured routing to allow access to all machines in the VPC
Created a VPC Connector as a bridge between Cloud Run and my VPC
Set 'Route all traffic through the VPC Connector' on Cloud Run
Routed traffic for 216.239.0.0/16 (aka Cloud Run) through the VPN
At this point it works well.
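As a quick sanity check on the last routing step, you can verify that a destination address actually falls inside the 216.239.0.0/16 block being sent over the VPN (stdlib-only sketch; the example IPs are illustrative):

```python
import ipaddress

# The block routed through the VPN in the last step above
CLOUD_RUN_RANGE = ipaddress.ip_network("216.239.0.0/16")

def routed_over_vpn(ip: str) -> bool:
    """True if this destination IP would match the VPN route."""
    return ipaddress.ip_address(ip) in CLOUD_RUN_RANGE

print(routed_over_vpn("216.239.32.10"))  # True: goes through the VPN
print(routed_over_vpn("8.8.8.8"))        # False: follows the default route
```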
Unfortunately, I have 3 projects (prod/testing/dev). And I'd like to have one VPN for all projects.
I decided to add a Shared VPC on top of all my projects, and it seems that hosts from the host (parent) project cannot access Cloud Run in a service project.
What is the best practice for hiding Cloud Run applications from 3rd-party users?

Serverless VPC connector, Google APIs, traffic routing and best practices [closed]

Closed 5 months ago.
Working for a Findata company, we have strict compliance requirements to be considered "safe" to work with.
I took some time reading about the Serverless VPC connector specifically, and it raised two main questions.
Here's an architecture diagram that may help answer question 2.
Question 1
I understand that when creating a Serverless VPC connector, you can connect to any private IP present in the same VPC. For instance, a Cloud Run app can connect to a Cloud SQL instance through its private IP.
What I am still wondering is how it works when using Google Cloud APIs. For instance, let's take a Cloud Run app that consumes data from BigQuery.
Knowing that we can configure egress traffic to be routed like so:
If we route all traffic through the VPC connector, from my tests it will reach the BigQuery API only if the subnet associated with the connector has Private Google Access enabled.
So here it's going through the VPC for sure. The (big?) downside is that it consumes your connector's bandwidth, right? Also, if the app scales up, bandwidth consumption will increase.
My question there is:
To avoid this overhead, does the 'Route only requests to private IPs through the VPC connector' option also use the private network? Or does it go over the Internet to reach Google APIs?
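To make the two egress settings concrete, here is a rough stdlib model of the routing decision (an approximation for illustration, not GCP's actual implementation). The crux of the question is that Google API endpoints resolve to public IPs, so under the private-ranges-only setting they would not match the connector route:

```python
import ipaddress

# RFC 1918 private ranges, which "private-ranges-only" egress matches
RFC1918 = [ipaddress.ip_network(n)
           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]

def goes_through_connector(dest_ip: str, egress: str) -> bool:
    """Approximate which destinations use the Serverless VPC connector."""
    if egress == "all-traffic":
        return True
    # 'private-ranges-only': only RFC 1918 destinations use the connector
    ip = ipaddress.ip_address(dest_ip)
    return any(ip in net for net in RFC1918)

print(goes_through_connector("10.12.0.5", "private-ranges-only"))      # True: e.g. a Cloud SQL private IP
print(goes_through_connector("142.250.80.106", "private-ranges-only")) # False: a public Google IP
```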
Question 2
For us, connectors are expensive. We were thinking about how to deploy them (if required; it actually depends on the answer to question 1).
From what I know, for expensive network setups (like sharing a Cloud Interconnect link), people tend to create a Host Project that manages all the networking and share it using Shared VPC.
My question there is:
Is this something to consider for Serverless VPC connectors as well? Is it better to create a few big ones shared across multiple serverless services, or a lot of small ones?

Can we use Unmanaged Instance group for external HTTPS load balancer in GCP [closed]

Closed 2 years ago.
I am planning to create an external HTTPS load balancer on GCP, but in my use case there are no identically configured VMs, and I need to make sure it is a highly available (HA) setup.
Since there are no identical VMs, I am planning to go with an unmanaged instance group (for the backend-service configuration in the LB), but the GCP documentation mentions that unmanaged instance groups are not suitable for HA.
Can you help me figure out which approach to choose for this use case?
Thanks in advance.
Why are your VMs not identical?
If you put them in the same unmanaged instance group attached to a backend service, the same request may end up on any of the VMs, so they need to serve the same content for the same request.
Can you give an example of your use case?
If, for example, you need more VMs to respond to requests depending on the load, you need a single image that can be provisioned on multiple VMs.
Another solution would be to have a different backend service for each unique VM type, so that every unique VM type is in its own managed instance group.
I don't think there is another way besides managed instance groups to achieve HA with VM instances in GCP.
Unmanaged means not managed! And that means a lot:
No health check: if your VM is down, slow, or unstable, nothing is done by the instance group.
No scalability: if usage increases or decreases, the instance group won't create or delete instances.
No instance rollout: when you have a new version of your VM (with patches, new app versions, ...), the instance group won't ensure a rollout without downtime.
Because it's not managed, you have to handle all of these things yourself. Not impossible, but a lot to do. Or switch to a managed instance group, even if your group contains only 1 VM!
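Whichever group type you pick, the load balancer's backend service still needs a health check target on each VM. A minimal sketch of such an endpoint in Python (the port and /healthz path are arbitrary choices, to be matched in the health check configuration):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Serves 200 on /healthz so the LB health check can probe the VM."""

    def do_GET(self):
        if self.path == "/healthz":
            body = b"ok"
            self.send_response(200)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

# To run on a VM:
# HTTPServer(("0.0.0.0", 8080), HealthHandler).serve_forever()
```

With a managed instance group, the same health check can additionally drive autohealing, which is exactly the part an unmanaged group won't do for you.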
I would like to give some suggestions on how to make your infra setup reliable:
Use a global HTTPS LB.
Even though you have different VM configs, still use managed instance groups as backends.
Get an SSL cert for your domain, if one is used, and make it secure too.
I have tried using a different backend service for each unique VM type, so that every unique VM type was in its own managed instance group, set up the HTTPS load balancer, and it worked!

Purpose of AWS Client VPN Client CIDR Range?

Originally asked on the AWS forums but I get the sense I won't hear back for quite some time, so I'm also posing my questions here:
I recently set up a Client VPN based on this guide. When connected I'm successfully able to access the internet as well as resources in a private subnet, so at this point I have a basic understanding of how all the parts fit together, except for one: the Client CIDR range. This concept gave me so much trouble that I think it stretched out the time-to-build by 2 days because of all the thrashing I did trying to connect it to the other concepts Client VPN involves. But it bugs me when I don't fully understand a thing so I have some questions about it:
Does the Range benefit at all from being in the same CIDR range as the VPC it's a part of, assuming it doesn't overlap with target network(s)? Why or why not?
Why does the Range need to be at least a /22, while target networks can be as small as a /27? Doesn't that imply 2^5 more clients could be attempting to access a resource in a VPC than there are available addresses in a given subnet?
In setting up security groups for the private subnet I noticed that I had to use rules based on the CIDR range of the target subnet client connections landed in, rather than the Client CIDR range - why is that?
As you can probably tell from my questions, I'm not a network administrator. I'm trying to understand that world at the same time I'm trying to spin up useful infrastructure. My guess is the answers to these questions are blindingly obvious to someone with experience in that area, but I just don't get it.
Here are my attempts at clarification:
So the range shouldn't overlap the VPC CIDR supernet (or individual subnets within the VPC), or you may get routing conflicts. So I'm not sure what you are referring to; can you provide your configuration?
From what I can tell, the /16 to /22 range is a limit rather than a technical restriction, probably because AWS hasn't had a chance to add a feature that would allow more options. I'm assuming you want a smaller range? In Azure P2S VPN there is no such restriction; their minimum pool is a /29.
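The arithmetic behind question 2 checks out; a stdlib sketch (the concrete ranges here are just example values with the prefix lengths from the question):

```python
import ipaddress

client_range = ipaddress.ip_network("10.100.0.0/22")  # the /22 minimum from the question
target_subnet = ipaddress.ip_network("10.0.1.0/27")   # the smallest target network

print(client_range.num_addresses)   # 1024 addresses
print(target_subnet.num_addresses)  # 32 addresses
print(client_range.num_addresses // target_subnet.num_addresses)  # 32, i.e. 2**5
```

So yes, up to 2^5 times more clients can hold addresses than fit in a /27 target subnet; in practice clients only consume target-subnet addresses indirectly via the ENIs Client VPN creates there, not one per client.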
SGs are applied to resources such as EC2 instances, not to VPCs directly, but in the inbound rules you can specify CIDRs directly, so I'm not sure what you are referring to... do you have a specific example you could share?