PrivateLink vs VPC peering

I need to set up a connection between VPCs. My plan was VPC peering, but the customer is asking for PrivateLink because they heard it is the more secure option. I am mostly concerned about the performance overhead of PrivateLink. As I understand it (maybe wrongly), the PrivateLink architecture adds an extra NLB. Doesn't this introduce latency because of the extra network hop?

VPC peering and PrivateLink serve two different purposes.
VPC peering enables you to connect two VPCs in the same way you would connect two local networks together, or remote networks using a VPN. VPC peering allows network traffic to flow from one VPC to the other. For example, you can SSH from an instance in VPC A into an instance in VPC B.
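To make the peering option concrete, here is a minimal boto3 sketch (all VPC IDs, route table IDs and CIDRs below are placeholders): you create the peering connection, accept it, and add a route on each side pointing at the other VPC's CIDR.

```python
import boto3

ec2 = boto3.client("ec2")

# Request and accept the peering (same account and region assumed here).
pcx = ec2.create_vpc_peering_connection(VpcId="vpc-0aaaaaaaaaaaaaaaa",
                                        PeerVpcId="vpc-0bbbbbbbbbbbbbbbb")
pcx_id = pcx["VpcPeeringConnection"]["VpcPeeringConnectionId"]
ec2.accept_vpc_peering_connection(VpcPeeringConnectionId=pcx_id)

# Route each VPC's CIDR over the peering connection from the other side.
ec2.create_route(RouteTableId="rtb-0aaaaaaaaaaaaaaaa",
                 DestinationCidrBlock="10.1.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
ec2.create_route(RouteTableId="rtb-0bbbbbbbbbbbbbbbb",
                 DestinationCidrBlock="10.0.0.0/16",
                 VpcPeeringConnectionId=pcx_id)
```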
PrivateLink is used to expose individual services of yours in VPC A to VPC B, but it does not allow free flow of network traffic from VPC A to VPC B. For example, let's say you've developed a very cool image-segmentation application. The application, its databases and all the other resources it requires are in VPC A. Now a friend comes along who would like to use your application, but the friend is in VPC B. Since your application is private and not exposed to the internet, a way for your friend to use the app is to expose it through PrivateLink. For this you create an NLB in front of your application, and your friend gets a network interface (an interface endpoint) in VPC B through which they can access your private application in VPC A.
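By contrast, a minimal sketch of the PrivateLink setup just described might look like this, assuming the NLB in VPC A already exists (the ARN and all IDs are placeholders): the provider publishes an endpoint service backed by the NLB, and the consumer creates an interface endpoint in VPC B that points at it.

```python
import boto3

ec2 = boto3.client("ec2")

# Provider side (VPC A): publish the service that sits behind the existing NLB.
svc = ec2.create_vpc_endpoint_service_configuration(
    NetworkLoadBalancerArns=["arn:aws:elasticloadbalancing:eu-west-1:111111111111:"
                             "loadbalancer/net/my-app/0123456789abcdef"],  # placeholder ARN
    AcceptanceRequired=True,
)
service_name = svc["ServiceConfiguration"]["ServiceName"]

# Consumer side (VPC B): create an interface endpoint (an ENI) pointing at the service.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0bbbbbbbbbbbbbbbb",            # placeholder consumer VPC
    ServiceName=service_name,
    SubnetIds=["subnet-0ccccccccccccccccc"],  # placeholder subnet in VPC B
    SecurityGroupIds=["sg-0ddddddddddddddddd"],
)
```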
Based on this and your question, there is no clear answer, as the two options serve different purposes. I would suggest clarifying exactly what your or your customer's requirements are.
But generally, both will be equally fast. The AWS docs say the following about VPC peering:
AWS uses the existing infrastructure of a VPC to create a VPC peering connection; it is neither a gateway nor a VPN connection, and does not rely on a separate piece of physical hardware. There is no single point of failure for communication or a bandwidth bottleneck.
Another example from the AWS docs is here:
Example: Services Using AWS PrivateLink and VPC Peering

Related

AWS: Public subnet + VPN gateway

Question
Can we make a route table which has both igw-id (Internet gateway ID) and vgw-id (VPN gateway ID)? If we can't/shouldn't do it, why?
Example
10.0.0.0/16 --> Local
172.16.0.0/12 --> vgw-id
0.0.0.0/0 --> igw-id
Reference
https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Scenario3.html
(I mainly refer to the "Overview" and "Alternate routing" sections.)
Explanation
The page above shows a scenario where one VPC has one public subnet, one private subnet and one VPN gateway. In this example, the VPN gateway is only accessed from instances in the private subnet (meaning that subnet's route table doesn't have an "igw-id" entry). I wonder why one route table doesn't have both "igw-id" and "vgw-id".
Yes, we can have both an igw and a vgw. In fact, the example above would be perfect for a public subnet which can connect to your corporate network through Direct Connect or a Site-to-Site VPN, and also have internet access and be accessible from the internet.
Now, whether you would want this or not is an architectural decision. In the example scenario given by AWS, they try to segregate subnets by having a public subnet (with the igw) which can contain services accessible from the internet, and a private subnet for other backend services (for example, databases). These backend services can be accessed from the corporate network using a Site-to-Site VPN, which is why that subnet has the vgw.
Yes, you can have a route table with the three routes you specified. However, bear in mind that with a 0.0.0.0/0 --> igw-id route, hosts on the internet can initiate connections to the instances in that subnet. Typically, you would want to secure the instances in a subnet that has a route to your on-premises network and not expose them to the internet. If those instances need to connect to the internet, AWS recommends NAT devices for your VPC.
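As a concrete sketch, the route table from the question could be built with boto3 roughly like this (the route table and gateway IDs are placeholders, and the 10.0.0.0/16 local route exists automatically):

```python
import boto3

ec2 = boto3.client("ec2")
rtb = "rtb-0123456789abcdef0"  # placeholder route table of the public subnet

# 0.0.0.0/0 -> internet gateway
ec2.create_route(RouteTableId=rtb, DestinationCidrBlock="0.0.0.0/0",
                 GatewayId="igw-0123456789abcdef0")

# 172.16.0.0/12 -> virtual private gateway, as a static route...
ec2.create_route(RouteTableId=rtb, DestinationCidrBlock="172.16.0.0/12",
                 GatewayId="vgw-0123456789abcdef0")

# ...or, instead of the static route, let the VGW propagate its routes into the table.
ec2.enable_vgw_route_propagation(RouteTableId=rtb, GatewayId="vgw-0123456789abcdef0")
```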
While it's technically possible, the main reason not to do this is a concept called network segmentation. It follows a "defense in depth" approach in which network layers are separated into a "public" and a "private" part (more zones are also often used, such as a third tier for data storage and a fourth for the infrastructure used to manage the other three tiers).
Since your public subnet is directly exposed to the internet, it's the most likely to be breached when you misconfigure something. If you had routes in your public subnet to your VPN gateway, a single breach would let an attacker attack your on-prem environment as well.
This is why it's best practice to add one or two network tiers to your architecture. Even when the public tier is compromised, an attacker still needs to get into an instance in the private tier before they can attack your on-prem environment.
It's recommended that your public subnet only contains an Elastic Load Balancer or similar where possible. It's a pretty good way to reduce the attack surface of your application servers (less chance that you expose an unwanted port, etc.).
A nice guide on this can be found here. It doesn't include VPNs, but you can consider them part of the app layer.

Shared VPC and VPC Peering mix

On Google Cloud, I have set up three new projects: dev, research and prod. I then created a Shared VPC host and the three service projects listed above. I also intend to have separate VPCs for each of these service projects (to add another layer of security), and hence to use VPC Peering as well. But I'm confused: can we configure both Shared VPC and VPC Peering on the same set of projects? If so, I can't find any documentation on this, and is it the right thing to do?
Peering and Shared VPC have their own uses. With peering, you are limited to 25 peerings per VPC network by default, and transitivity isn't possible.
For example, with peering, if you set up a peering between dev and research and another between research and prod, dev can't reach prod (transitivity is forbidden); you have to set up a peering between dev and prod for that. Peering can be interesting when you want to share a VPN or Interconnect connection: you peer the project that holds that connection with the projects that want to reuse it.
With Shared VPC, you don't have the transitivity limitation: all the VMs can be in the same VPC, even if they are in different projects.
However, with this configuration you break the strong project isolation: your dev project can access prod without limitation!
Therefore I recommend setting up your VMs with at least two network "legs": one in the Shared VPC, the other in a VPC dedicated to the project. Then set up the correct firewall rules on the Shared VPC network to limit interactions there, while keeping traffic unrestricted at the project level through the leg in the project's own VPC.
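To make the peering part concrete, here is a sketch using the Compute Engine API through the Python discovery client (project and network names are placeholders, and Application Default Credentials are assumed). Each peering must be created from both sides, and dev still cannot reach prod without its own pair:

```python
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

def peer(project, network, peer_project, peer_network, name):
    """Create one side of a VPC peering; the peer project must do the same in reverse."""
    body = {"networkPeering": {
        "name": name,
        "network": f"projects/{peer_project}/global/networks/{peer_network}",
        "exchangeSubnetRoutes": True,
    }}
    return compute.networks().addPeering(project=project, network=network, body=body).execute()

# dev <-> research
peer("dev-project", "dev-vpc", "research-project", "research-vpc", "dev-to-research")
peer("research-project", "research-vpc", "dev-project", "dev-vpc", "research-to-dev")
# research <-> prod
peer("research-project", "research-vpc", "prod-project", "prod-vpc", "research-to-prod")
peer("prod-project", "prod-vpc", "research-project", "research-vpc", "prod-to-research")
# dev and prod still cannot reach each other: peering is not transitive.
```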
Peering:
Peering allows internal IP address connectivity across two Virtual Private Cloud (VPC) networks regardless of whether they belong to the same project or the same organization. You are limited to 25 peerings per VPC network by default, and transitivity isn't possible.
VPC sharing:
Shared VPC allows an organization to connect resources from multiple projects to a common Virtual Private Cloud (VPC) network, so that they can communicate with each other securely and efficiently using internal IPs from that network. When you use Shared VPC, you designate a project as a host project and attach one or more other service projects to it. The VPC networks in the host project are called Shared VPC networks. Eligible resources from service projects can use subnets in the Shared VPC network.
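For reference, designating a host project and attaching a service project can be scripted with the Compute Engine API as well; a rough sketch with placeholder project IDs (this requires Shared VPC Admin permissions):

```python
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# Make the project a Shared VPC host project.
compute.projects().enableXpnHost(project="host-project-id").execute()

# Attach a service project so it can use subnets from the host project's networks.
compute.projects().enableXpnResource(
    project="host-project-id",
    body={"xpnResource": {"id": "dev-service-project-id", "type": "PROJECT"}},
).execute()
```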

Why can't we implement multiple network interfaces in a single VPC network on GCP, whereas it is possible in AWS and Azure?

Why can't we implement multiple network interfaces on a single VPC (which has multiple subnets) in GCP, whereas it is possible in AWS and Azure?
I came across a problem where I had to implement multiple network interfaces in GCP, but in my case all the subnets were in a single VPC network. I read the GCP documentation and learned that in GCP it is not possible to have multiple network interfaces in a single VPC network; to implement multiple network interfaces, the subnets must be in different VPC networks, whereas it's the complete opposite in AWS and Azure.
In AWS, all network interfaces must be in the same VPC; you cannot attach a network interface from another VPC.
In an Azure VNet, all network interfaces must be in the same VNet; you cannot attach a network interface from another VNet.
Of course, a VPC in Google Cloud is a little different from AWS: for example, Azure VNets and AWS VPCs are regional in nature, whereas in GCP a VPC is global. There are several other differences as well.
I was just curious about this limitation I ran into in GCP.
Your assumption is wrong. You cannot attach more than one network interface to the same subnet, but you can attach interfaces to different subnets in the same VPC.

Access a GCP instance in another project

How to access instance in a different project without using external IP
I have two projects, say A and B, and I want to SSH from an instance in project A to an instance in project B.
What I found is that I was able to ping the instance in B from the instance in A using its external IP, but not its internal IP. After I added my public key to the instance in B, I was able to SSH to it from the instance in A using its external IP (I have my private key there).
I wonder if I can access the project B instance from the project A instance without going through the external IP, as that traffic leaves GCP and comes back. Is there a way to do this internally?
Both projects A and B are under the same GCP account.
Google VPCs use RFC 1918 IP addresses, which are not routable across the internet, and the same address range can be used in more than one VPC.
If your VPCs are not using overlapping IP addresses, you can enable VPC Network Peering to connect the two VPCs together. You can then use private IP addresses to access resources in each VPC subnet.
Google VPC Network Peering
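As a rough sketch of that option (placeholder project and network names, Application Default Credentials assumed): the peering is created from each project, referencing the other project's network by its URL, after which the instances can reach each other on their internal IPs.

```python
from googleapiclient import discovery

compute = discovery.build("compute", "v1")

# In project-a, peer its VPC with project-b's VPC (repeat from project-b in reverse).
compute.networks().addPeering(
    project="project-a",
    network="vpc-a",
    body={"networkPeering": {
        "name": "a-to-b",
        "network": "projects/project-b/global/networks/vpc-b",
        "exchangeSubnetRoutes": True,
    }},
).execute()
```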
As I understand it, you could create a Shared VPC and have the Compute Engine instances in your distinct projects attach network interfaces to this Shared VPC. They would then be able to access each other directly. For full details on Shared VPC, see GCP Shared VPC.
Another solution would be to use GCP VPC Peering which allows two distinct but NOT overlapping networks to connect to each other using the GCP VPC Peering capabilities.
There is so much to say about these concepts it doesn't seem to make sense repeating that here. I encourage you to read the docs in the links above and post new specific questions as needed.

How to set up a VPC-to-VPC connection without a VPN?

I am looking for a way to communicate between two VPCs in AWS without using the VPN connections to and from a certain company (outside AWS), so that the traffic does not pass through the company's gateway. Simply put: access an EC2 instance in one VPC from another VPC (both in AWS) without leaving the Amazon network (not going out onto the internet, not even encrypted).
Basically, what I want is to have one VPC acting as a "proxy" (let's call it PROX) and one acting as a "target" (TARG). I want to connect the company to PROX and, inside PROX, route the requests to TARG. Is this achievable? I would go for a traditional public/private single VPC, but I was asked to look into the architecture described above.
Use two Linux machines as VPN gateways, one in each VPC.
Configure an IPsec VPN between them.
That's all you need.
This is not possible; you have to use a VPN connection between the two VPCs. You can, however, connect them directly and relatively easily using a pair of IPsec gateways. This is the recommended method of cross-connecting VPCs across regions.