We have resources in several different regions (ap-east-1, us-west-2, eu-north-1), and I want them to communicate over a private network, probably using VPCs. Could you please tell me how to configure the VPCs, route tables, and subnets?
You need to set up VPC peering between the VPCs in the different regions (inter-Region VPC peering), with one peering connection per VPC pair. With peering, traffic always stays on the global AWS backbone and never traverses the public internet, which reduces exposure to threats such as common exploits and DDoS attacks.
How to set it up is explained in detail in the AWS documentation.
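For illustration, here is a minimal sketch of that setup with boto3, assuming one peering connection between two of the Regions; all VPC, route table, and CIDR values are placeholders, and the same steps would be repeated for each additional Region pair:

```python
import boto3

requester = boto3.client("ec2", region_name="us-west-2")
accepter = boto3.client("ec2", region_name="eu-north-1")

# 1. Request an inter-Region peering connection from us-west-2 towards eu-north-1.
peering = requester.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",        # requester VPC in us-west-2 (placeholder)
    PeerVpcId="vpc-bbbb2222",    # accepter VPC in eu-north-1 (placeholder)
    PeerRegion="eu-north-1",
)["VpcPeeringConnection"]
pcx_id = peering["VpcPeeringConnectionId"]

# 2. Wait until the request is visible in the peer Region, then accept it there.
accepter.get_waiter("vpc_peering_connection_exists").wait(
    VpcPeeringConnectionIds=[pcx_id]
)
accepter.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# 3. Route each VPC's traffic for the peer CIDR over the peering connection.
requester.create_route(
    RouteTableId="rtb-aaaa1111",            # placeholder
    DestinationCidrBlock="10.1.0.0/16",     # eu-north-1 VPC CIDR (placeholder)
    VpcPeeringConnectionId=pcx_id,
)
accepter.create_route(
    RouteTableId="rtb-bbbb2222",            # placeholder
    DestinationCidrBlock="10.0.0.0/16",     # us-west-2 VPC CIDR (placeholder)
    VpcPeeringConnectionId=pcx_id,
)
# Security groups / NACLs still need to allow the traffic on both sides.
```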
From the information presented, the best option in your case, as Marcin suggested, is to use VPC peering. Bear in mind that you can't peer VPCs whose CIDR blocks overlap.
Just for completeness, there is another option for cross-connecting VPCs from different accounts and regions: Transit Gateway. In your case, since you would also need to set up Transit Gateway peering between the regions, it may be a bit too complex.
VPC peering was originally a Region-specific service, but it now also supports inter-Region peering.
If you want to establish communication between VPCs across regions without using the public internet, you can also explore the "VPC Endpoint" and "VPC Endpoint Services" (AWS PrivateLink) options. Please note that some of these services are not free-tier eligible, in case you are experimenting.
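If you go the endpoint route, the consumer side boils down to one call. Here is a minimal sketch with boto3, where the service name, VPC, subnet, and security group IDs are placeholders (the provider side is sketched in the PrivateLink discussion further down):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")   # assumed region

# Create an interface endpoint in the consumer VPC that points at the
# provider's endpoint service.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-consumer111",                                                # placeholder
    ServiceName="com.amazonaws.vpce.us-west-2.vpce-svc-0123456789abcdef0",  # placeholder
    SubnetIds=["subnet-consumer1"],                                         # placeholder
    SecurityGroupIds=["sg-consumer1"],                                      # placeholder
)
```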
The way my setup works is that I have 'development', 'staging' and 'production' environments, all sitting in separate VPCs, and I want to create a Client VPN endpoint to allow engineers access to the internals of all these environments (mostly the databases).
However, I can't decide how to approach this. My first idea was to create a single VPC which peers into all the other VPCs. This would make building the resources in Terraform easier, as the VPN can be completely separated out.
My other option would be to have the VPN connection go into the development VPC, for example, and from there the development VPC peers into the production and staging VPCs; however, I really don't like this approach.
As Steve mentioned in the comments, if you want to centralize your networking setup, for example:
A single (or multiple) AWS Site-to-Site VPN connection with many VPCs
A single (or multiple) Direct Connect (DX) connection with many VPCs
A single AWS Client VPN endpoint to many VPCs
and more
then the answer is AWS Transit Gateway. This service also helps if your VPCs sit under different AWS accounts.
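As a rough sketch of what that looks like with boto3 (not the blog post's exact setup): one Transit Gateway and one VPC attachment per environment. All IDs are placeholders, and the Client VPN and cross-account sharing pieces are left out for brevity.

```python
import boto3

ec2 = boto3.client("ec2", region_name="eu-west-1")   # assumed region

# One Transit Gateway acting as the hub.
tgw = ec2.create_transit_gateway(
    Description="shared VPN / connectivity hub",
    Options={
        "DefaultRouteTableAssociation": "enable",
        "DefaultRouteTablePropagation": "enable",
    },
)["TransitGateway"]
tgw_id = tgw["TransitGatewayId"]
# In practice, wait until the gateway's state is 'available' before attaching VPCs.

# One VPC attachment per environment (development, staging, production).
vpcs = {
    "vpc-dev00001": ["subnet-dev-a", "subnet-dev-b"],   # placeholders
    "vpc-stg00001": ["subnet-stg-a", "subnet-stg-b"],
    "vpc-prd00001": ["subnet-prd-a", "subnet-prd-b"],
}
for vpc_id, subnet_ids in vpcs.items():
    ec2.create_transit_gateway_vpc_attachment(
        TransitGatewayId=tgw_id,
        VpcId=vpc_id,
        SubnetIds=subnet_ids,
    )

# Each VPC's route tables then need routes for the other environments' CIDRs
# (and for the Client VPN range) pointing at the Transit Gateway, e.g.:
# ec2.create_route(RouteTableId="rtb-dev00001",
#                  DestinationCidrBlock="10.2.0.0/16",   # staging CIDR, placeholder
#                  TransitGatewayId=tgw_id)
```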
For your use case, AWS has published a blog post with a detailed architecture and use cases for your reference:
https://aws.amazon.com/blogs/networking-and-content-delivery/using-aws-client-vpn-to-scale-your-work-from-home-capacity/
I need to set up a connection between VPCs. My plan was VPC peering, but the customer asks for PrivateLink because they heard it is the secure way. I am mostly concerned about the performance overhead of PrivateLink. What I understood (maybe wrongly) is that in the PrivateLink architecture there is an extra NLB. Doesn't this introduce latency because of the extra network hop?
VPC peering and PrivateLink serve two different purposes.
VPC peering enables you to connect two VPCs in the same way you would connect two local networks together, or remote networks using a VPN. VPC peering allows network traffic to flow from one VPC to the other. For example, you can SSH from an instance in VPC A into an instance in VPC B.
PrivateLink is used to expose individual services of yours in VPC A to VPC B, but it does not allow the free flow of network traffic from VPC A to VPC B. For example, let's say you've developed a very cool application for image segmentation. The application, its databases, and the other resources it requires are all in VPC A. Now a friend comes along who would like to use your application, but the friend is in VPC B. Since your application is private and not exposed to the internet, one way for your friend to use the app would be to expose it through PrivateLink. For this you create an NLB in front of your application, and your friend gets a network interface in VPC B through which they can access your private application in VPC A.
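A minimal sketch of the provider side of that example with boto3, assuming the NLB in front of the application already exists; the ARN and account IDs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")   # assumed region

# VPC A: expose the application behind the existing NLB as an endpoint service.
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=[
        # placeholder ARN of the NLB in front of the application
        "arn:aws:elasticloadbalancing:us-west-2:111111111111:loadbalancer/net/app-nlb/0123456789abcdef"
    ],
    AcceptanceRequired=True,   # the provider approves each connection request
)["ServiceConfiguration"]

# Allow the friend's account to request a connection to the service.
ec2.modify_vpc_endpoint_service_permissions(
    ServiceId=svc["ServiceId"],
    AddAllowedPrincipals=["arn:aws:iam::222222222222:root"],   # placeholder account
)

# In VPC B, the friend then creates an interface endpoint with
# create_vpc_endpoint(..., ServiceName=svc["ServiceName"]), which gives them the
# network interface in their VPC through which they reach the application.
```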
Based on this and your question, there is no clear answer, as the two options are used for different purposes. I would suggest clarifying exactly what your or your customer's requirements are.
But generally, both will be equally fast. The AWS docs say the following about VPC peering:
AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and does not rely on a separate piece of physical hardware. There is no single point of failure for communication or a bandwidth bottleneck.
Another example from the AWS docs is here:
Example: Services Using AWS PrivateLink and VPC Peering
Edit: based on @Michael's comment.
I have been reading and watching everything I can [1] related to designing highly available VPCs. I have a couple of questions. For a typical 3-tier application (web, app, db) that needs HA within a single region, it looks like you need to do the following:
Create one public subnet in each AZ.
Create one web, app, and db private subnet in each AZ.
Ensure your web, app and db EC2 instances are split evenly between AZs (for this post assume the DBs are running hot/hot and the apps are stateless).
Use an ALB / Auto Scaling to distribute load across the web tier. From what I read, ALBs provide HA across AZs within the same region.
Utilize Internet gateways to provide a target route for Internet traffic.
Use NAT gateways to SRC NAT the private subnet VMs so they can get out to the Internet.
With this approach, do you need to deploy one internet gateway and one NAT gateway per AZ? If you only deploy one, what happens when you have an AZ outage? Are these services AZ-aware? (I can't find a good answer to this question.) Any and all feedback (glad to RTFM) is welcomed!
Thank you,
- Mick
[1] Last two resources I reviewed
Deploying production grade VPCs
High Availability Application Architectures in Amazon VPC
You need a NAT gateway in each AZ, as its redundancy is limited to a single AZ. Here is a snippet from the official documentation:
Each NAT gateway is created in a specific Availability Zone and
implemented with redundancy in that zone.
You need just a single internet gateway per VPC, as it is redundant across AZs and is a VPC-level resource. Here is a snippet from the internet gateway official documentation:
An internet gateway is a horizontally scaled, redundant, and highly
available VPC component that allows communication between instances in
your VPC and the internet. It therefore imposes no availability risks
or bandwidth constraints on your network traffic.
Here is a highly available architecture diagram showing a NAT gateway per AZ and the internet gateway as a VPC-level resource:
Image source: https://aws.amazon.com/quickstart/architecture/vpc/
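A condensed sketch of that layout with boto3: a single, VPC-level internet gateway plus one NAT gateway and one private route table per AZ. The VPC, subnet IDs, and AZ names are placeholders, and the public route tables (0.0.0.0/0 to the internet gateway) are omitted for brevity.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region
vpc_id = "vpc-0abc"                                   # placeholder

# A single, VPC-level internet gateway.
igw_id = ec2.create_internet_gateway()["InternetGateway"]["InternetGatewayId"]
ec2.attach_internet_gateway(InternetGatewayId=igw_id, VpcId=vpc_id)

# Per AZ: a NAT gateway in that AZ's public subnet, plus a private route table
# that sends internet-bound traffic to that NAT gateway.
azs = {
    "us-east-1a": {"public": "subnet-pub-a",
                   "private": ["subnet-web-a", "subnet-app-a", "subnet-db-a"]},
    "us-east-1b": {"public": "subnet-pub-b",
                   "private": ["subnet-web-b", "subnet-app-b", "subnet-db-b"]},
}
for az, subnets in azs.items():
    eip = ec2.allocate_address(Domain="vpc")
    nat = ec2.create_nat_gateway(SubnetId=subnets["public"],
                                 AllocationId=eip["AllocationId"])
    nat_id = nat["NatGateway"]["NatGatewayId"]
    ec2.get_waiter("nat_gateway_available").wait(NatGatewayIds=[nat_id])

    rtb_id = ec2.create_route_table(VpcId=vpc_id)["RouteTable"]["RouteTableId"]
    ec2.create_route(RouteTableId=rtb_id,
                     DestinationCidrBlock="0.0.0.0/0",
                     NatGatewayId=nat_id)
    for subnet_id in subnets["private"]:
        ec2.associate_route_table(RouteTableId=rtb_id, SubnetId=subnet_id)
```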
Q1: Is a hub-and-spoke model with VPC peering better compared to using a shared VPC? See the tenancy design in AWS below; we are trying to bring a similar structure.
Q2: Is there any native service, virtual appliance (firewall), or feature available to route traffic between spokes without spoke-to-spoke peering?
Q3: Cross-account access – is it possible to have cross-account access?
Q4: Do we use subnets to create zones in GCP? (Refer to the diagram above.)
Q5: Is there randomization of the zones in GCP as in AWS (zone a in one account would be zone b in another)?
Q2: Is there any native service, virtual appliance (firewall), or feature available to route traffic between spokes without spoke-to-spoke peering?
No; VPC peering is not transitive in nature. This means that if VPC A is peered with VPC B and VPC A is peered with VPC C, VPC B cannot see or send ICMP traffic to VPC C.
Q3: Cross-account access – is it possible to have cross-account access?
Yes, as long as peering is established between all communicating accounts.
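On the AWS side this is plain cross-account VPC peering. A minimal sketch with boto3, assuming credentials for both accounts; the profile names, IDs, and account number are placeholders:

```python
import boto3

# Assumed named profiles for the two accounts; region and all IDs are placeholders.
account_a = boto3.Session(profile_name="account-a").client("ec2", region_name="us-east-1")
account_b = boto3.Session(profile_name="account-b").client("ec2", region_name="us-east-1")

# Account A requests the peering, naming account B as the peer owner.
pcx = account_a.create_vpc_peering_connection(
    VpcId="vpc-aaaa1111",         # VPC in account A (placeholder)
    PeerVpcId="vpc-bbbb2222",     # VPC in account B (placeholder)
    PeerOwnerId="222222222222",   # account B's ID (placeholder)
)["VpcPeeringConnection"]

# Account B accepts it.
account_b.accept_vpc_peering_connection(
    VpcPeeringConnectionId=pcx["VpcPeeringConnectionId"]
)
# Routes for the peer CIDR must still be added in both VPCs' route tables, and
# this has to be repeated for every pair of accounts that needs to communicate.
```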
Q4: Do we use subnets to create zones in GCP? (Refer to the diagram above.)
Availability Zones in AWS and zones in GCP are comparable. Subnets further slice the VPC into dedicated areas for inbound/outbound traffic management and resource placement.
Q5: Is there randomization of the zones in GCP as in AWS (zone a in one account would be zone b in another)?
Yes, the zones are randomized in GCP as well; zone names do not always map to fixed or known physical locations.
Q1: Is a hub-and-spoke model with VPC peering better compared to using a shared VPC? See the tenancy design in AWS below; we are trying to bring a similar structure.
Regarding Q1, the merits of each solution depend on which features matter for your case, and that will be different for each situation.
There are two approaches for this hub-and-spoke architecture: shared VPC and peered VPC.
Shared VPC [1] allows one organization to connect resources from multiple projects to a common VPC network, so that they can communicate with each other securely and efficiently using internal IPs from that network. There will be:
A host project
One or more other service projects attached to it
VPC Network Peering [2] allows private connectivity across two VPC networks which may belong to one or multiple projects or organizations.
[1] https://cloud.google.com/vpc/docs/shared-vpc
[2] https://cloud.google.com/vpc/docs/vpc-peering
I know it's old, but it needs a correction:
Q2: yes, you can route between spokes via the hub VPC if you have a routing VM in the hub and a proper custom route via that VM. The hub needs to export custom routes and the spokes need to import them. This is a pretty standard design for threat inspection between VPCs; a rough sketch follows below.
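As a rough sketch of that custom route with the google-cloud-compute Python client, assuming a routing/firewall VM is already running in the hub VPC; the project, network, CIDR, and instance names are placeholders, and the export/import of custom routes is configured separately on each peering:

```python
from google.cloud import compute_v1

routes = compute_v1.RoutesClient()

# Custom route in the hub VPC: send traffic for a spoke's CIDR to the routing VM.
route = compute_v1.Route(
    name="spokes-via-hub-firewall",
    network="projects/hub-project/global/networks/hub-vpc",   # placeholder
    dest_range="10.20.0.0/16",                                 # spoke CIDR, placeholder
    next_hop_instance="projects/hub-project/zones/us-central1-a/instances/hub-fw-vm",  # placeholder
    priority=800,
)
routes.insert(project="hub-project", route_resource=route)

# The hub's peerings must export custom routes and the spoke peerings must import
# them so that the spokes learn this route.
```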
I have one VPC where I configured a NAT gateway. The other VPCs do not have any "public subnet" or IGW. I would like to share the single NAT gateway among many VPCs.
I tried to configure the route table, but it does not allow me to specify a NAT gateway from a different VPC.
As a possible solution, I installed an HTTP/S proxy in the VPC with the IGW and configured proxy settings on every instance in the other VPCs. It worked, but I would prefer to use the NAT gateway for easier management.
Is it possible to make this kind of configuration in AWS?
There are a few VPCs and I do not want to add a NAT gateway to each VPC.
Zdenko
You can't share a NAT Gateway among multiple VPCs.
To access a resource in another VPC without crossing over the Internet and back requires VPC peering or another type of VPC-to-VPC VPN, and these arrangements do not allow transit traffic, for very good reasons. Hence:
You cannot route traffic to a NAT gateway through a VPC peering connection, a VPN connection, or AWS Direct Connect. A NAT gateway cannot be used by resources on the other side of these connections.
http://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/vpc-nat-gateway.html#nat-gateway-other-services
The instances in the originating VPC are, by definition, "on the other side of" one of the listed interconnection arrangements.
AWS Transit Gateway now provides an option to do what you wish, although you will want to consider the costs involved -- there are hourly and data charges. There is a reference architecture published in which multiple VPCs share a NAT gateway without allowing traffic between the VPCs:
https://aws.amazon.com/blogs/networking-and-content-delivery/creating-a-single-internet-exit-point-from-multiple-vpcs-using-aws-transit-gateway/
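A rough sketch of the routing side of that pattern with boto3, assuming the Transit Gateway and the VPC attachments already exist (see the earlier Transit Gateway sketch); all IDs and CIDRs are placeholders:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")   # assumed region
tgw_id = "tgw-0abc"                                   # placeholder

# In each spoke VPC: send internet-bound traffic to the Transit Gateway.
for spoke_rtb in ["rtb-spoke-a", "rtb-spoke-b"]:      # placeholders
    ec2.create_route(RouteTableId=spoke_rtb,
                     DestinationCidrBlock="0.0.0.0/0",
                     TransitGatewayId=tgw_id)

# In the egress VPC: the private (attachment) subnets route out via the NAT
# gateway, and the public subnets route the spoke CIDRs back to the gateway.
ec2.create_route(RouteTableId="rtb-egress-private",   # placeholder
                 DestinationCidrBlock="0.0.0.0/0",
                 NatGatewayId="nat-0abc")              # placeholder
ec2.create_route(RouteTableId="rtb-egress-public",    # placeholder
                 DestinationCidrBlock="10.0.0.0/8",    # summary of the spoke CIDRs, placeholder
                 TransitGatewayId=tgw_id)

# The Transit Gateway's own route table also needs a 0.0.0.0/0 route towards the
# egress VPC attachment (create_transit_gateway_route), and blackhole routes can
# be added to keep the spokes from reaching each other.
```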
You basically have three options:
Connect to a shared VPC (typically in a shared "network" account) that holds the NAT gateway, via VPC peering. There are no additional costs for the VPC peering, but it is cumbersome to set up if you have a lot of accounts.
The same, but using Transit Gateway. A Transit Gateway attachment costs almost the same as a single NAT gateway, so this only saves money if you would otherwise run multiple NAT gateways for higher bandwidth.
Set up a shared VPC (e.g. in an infrastructure account) that holds the NAT gateway, then share its private subnets via AWS Resource Access Manager (RAM) with the accounts that need outgoing access. This has the additional benefit that you have a single place where you allocate VPC IP ranges, and not every account needs to bother with setting up a full VPC; see the AWS VPC sharing best practices for more details. This setup avoids both the Transit Gateway costs and the burden of setting up VPC peering, but it needs more careful planning to keep things isolated (and likely not everything in the same VPC). A rough sketch of the sharing step follows below.
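For the third option, a rough sketch of the subnet sharing step with boto3, assuming the shared VPC and its private subnets already exist in the infrastructure account; account IDs and ARNs are placeholders:

```python
import boto3

ram = boto3.client("ram", region_name="us-east-1")    # assumed region

# Share the shared VPC's private subnets with the consumer accounts.
ram.create_resource_share(
    name="shared-private-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0aaa",   # placeholder
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0bbb",   # placeholder
    ],
    principals=["222222222222", "333333333333"],   # consumer account IDs, placeholders
    allowExternalPrincipals=False,                 # keep sharing inside the AWS Organization
)
# The consumer accounts can then launch resources directly into the shared subnets
# and use the owner account's NAT gateway through the shared VPC's route tables.
```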