What is the ideal place to create a GCP Private Service Connect endpoint, published service, etc. in a "Shared VPC" setup?

I understand that GCP Private Service Connect (PSC) is an effective solution for enabling service-centric private network connectivity to GCP APIs and other hosted services, within and across VPCs, projects, organizations, and on-prem setups, based on a producer/consumer project model.
The GCP documentation on Private Service Connect explains the purpose and configuration of PSC endpoints and published services; however, I can't find relevant details on the ideal location (or best practice) for creating and configuring a PSC endpoint and published service when you have a Shared VPC based network setup.
IMO, PSC endpoints and published services are network resources, so the ideal place to create them is the 'host project' of the Shared VPC network, since a host project is meant for centralized network management.
Also, having the PSC endpoint and published service in the host project will allow a single PSC endpoint to be shared by all the service project resources (which would otherwise require one PSC endpoint per service project). However, I would like to hear from you if you have come across and/or implemented such a scenario.
Update: I tried a Shared VPC setup in which PSC endpoint creation succeeded in a service project, which means GCP doesn't restrict creating PSC endpoints in service projects.
(Screenshots of the host project, service project, and service project PSC setup omitted.)
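For concreteness, here is a minimal sketch of both sides (Python wrapping gcloud; every project, subnet, and attachment name below is a hypothetical placeholder), with the consumer endpoint created in the host project's Shared VPC:

```python
import subprocess

def run(cmd):
    """Run a gcloud command and fail loudly on error."""
    subprocess.run(cmd, check=True)

# --- Consumer side: PSC endpoint in the Shared VPC host project ---
# Reserve an internal IP in a host-project subnet, then point a forwarding
# rule at the producer's service attachment.
run([
    "gcloud", "compute", "addresses", "create", "psc-endpoint-ip",
    "--project=host-project", "--region=us-central1",
    "--subnet=shared-subnet", "--addresses=10.10.0.5",
])
run([
    "gcloud", "compute", "forwarding-rules", "create", "psc-endpoint",
    "--project=host-project", "--region=us-central1",
    "--network=shared-vpc", "--address=psc-endpoint-ip",
    "--target-service-attachment=projects/producer-project/regions/"
    "us-central1/serviceAttachments/my-attachment",
])

# --- Producer side: publish a service behind an internal load balancer ---
run([
    "gcloud", "compute", "service-attachments", "create", "my-attachment",
    "--project=producer-project", "--region=us-central1",
    "--producer-forwarding-rule=internal-lb-rule",
    "--connection-preference=ACCEPT_AUTOMATIC",
    "--nat-subnets=psc-nat-subnet",
])
```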

Related

What's the best way to load balance between Cloud Run services across projects?

Consider a scenario with two identical Cloud Run services ("Real Time App") deployed in two different regions, let's say EU and US. These services use Firestore for real time communication, and minimizing latency is important. Since Firestore only allows specifying one region per project, each Cloud Run service is deployed in its own project and uses its regional Firestore instance. It is not needed for a service in the US to access Firestore data in the EU and vice versa.
Is there a way to deploy a global HTTPS load balancer to route requests to Cloud Run service closest to the user when services are defined in different projects?
I attempted a setup with a shared VPC between a host "global" project (in the US) and 2 service projects (EU and US). I created a Cloud Run service, a network endpoint group (NEG), and a backend service in each regional project. I then attempted to create a global forwarding rule, target HTTPS proxy, and URL map in the host project. However, the URL map cannot reference a backend service in another project, complaining that:
Cross-project references for this resource are not allowed.
Indeed, per the Shared VPC Architecture and Cross-project service referencing section of the documentation it seems that:
Cross-project service referencing is not supported for the global external HTTP(S) load balancer
and that, if I understood correctly, the following rules apply:
The NEG must be defined in the same project as the Cloud Run Service
The Backend Service must be in the same project as the NEG
The Target HTTP(S) Proxy and associated URL Map must be in the same project as the Backend Service
The Forwarding Rule must be in the same project as the Backend Service
essentially requiring the entire chain to be defined in one project.
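For reference, the single-project chain these rules force looks roughly like the following sketch (all project, service, and certificate names are hypothetical placeholders):

```python
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

PROJECT = "--project=regional-project"  # hypothetical service project

# Serverless NEG pointing at the Cloud Run service in the same project.
run(["gcloud", "compute", "network-endpoint-groups", "create", "run-neg",
     PROJECT, "--region=europe-west1",
     "--network-endpoint-type=serverless", "--cloud-run-service=real-time-app"])

# Backend service in the same project as the NEG.
run(["gcloud", "compute", "backend-services", "create", "app-backend",
     PROJECT, "--global", "--load-balancing-scheme=EXTERNAL"])
run(["gcloud", "compute", "backend-services", "add-backend", "app-backend",
     PROJECT, "--global", "--network-endpoint-group=run-neg",
     "--network-endpoint-group-region=europe-west1"])

# URL map, HTTPS proxy, and forwarding rule, all still in the same project.
run(["gcloud", "compute", "url-maps", "create", "app-url-map",
     PROJECT, "--default-service=app-backend"])
run(["gcloud", "compute", "target-https-proxies", "create", "app-proxy",
     PROJECT, "--url-map=app-url-map", "--ssl-certificates=app-cert"])
run(["gcloud", "compute", "forwarding-rules", "create", "app-fr",
     PROJECT, "--global", "--target-https-proxy=app-proxy", "--ports=443"])
```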
Are there recommended workarounds for this scenario?
One solution I can think of is to create a "Router" Cloud Run Service in the Host global project behind a load balancer, with multi region deployment. Its sole purpose is to respond to the client with the regional URL endpoint of the closest "Real Time App" Cloud Run service.
I am wondering whether there is a more elegant solution, though.
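For illustration, the "Router" service could be as small as this sketch (the regional URLs and the region hint are hypothetical; a real deployment would infer the region server-side):

```python
# Hypothetical "Router" service sketch (Flask). Its only job is to hand the
# client the URL of the closest regional "Real Time App" Cloud Run service.
from flask import Flask, redirect, request

app = Flask(__name__)

# Hypothetical regional Cloud Run URLs; replace with your real service URLs.
REGIONAL_URLS = {
    "eu": "https://real-time-app-xxxxx-ew.a.run.app",
    "us": "https://real-time-app-xxxxx-uc.a.run.app",
}

@app.route("/endpoint")
def closest_endpoint():
    # In a real multi-region deployment the region would be inferred from the
    # serving instance itself; a client-supplied hint keeps the sketch short.
    region = request.args.get("region", "us")
    return redirect(REGIONAL_URLS.get(region, REGIONAL_URLS["us"]), code=302)

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```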

GCP - issues with connecting Vertex AI to a shared VPC

We are trying to create a training job in Vertex AI, and we need to connect to resources in our shared VPC. The project in which we are creating this job is a service project. We already have a VPC with private services access configured (as described in https://cloud.google.com/vertex-ai/docs/general/vpc-peering).
When we try to create a job using this host network, we get a very generic error message:
Unable to start training due to the following error: Internal error encountered.
Everything seems alright, and the peering connection with private services (servicenetworking) is in an active state.
Does anyone have an idea where we can look for more information about this problem, or maybe some guides or pointers that could help us?
A few points should be verified in this particular setup:
The Compute Engine and Service Networking APIs should be enabled for both the host and service projects, and the Vertex AI API should be enabled for the service project.
The VPC peering connection within your VPC and Google Services should be created in the host project.
You must specify the name of the network that you want Vertex AI to have access to (the shared VPC), as stated in the documentation.
Verify that the service/user account used has the proper role (Compute Network User, roles/compute.networkUser) on the host project; see the sketch below.
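A sketch of these checks (Python wrapping gcloud; the project IDs and service account are hypothetical placeholders):

```python
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# Enable the required APIs on each project.
run(["gcloud", "services", "enable", "compute.googleapis.com",
     "servicenetworking.googleapis.com", "--project=host-project"])
run(["gcloud", "services", "enable", "compute.googleapis.com",
     "servicenetworking.googleapis.com", "aiplatform.googleapis.com",
     "--project=service-project"])

# The peering with Google services must live in the host project; list it.
run(["gcloud", "compute", "networks", "peerings", "list",
     "--network=shared-vpc", "--project=host-project"])

# Grant Compute Network User on the host project to the account running the job.
run(["gcloud", "projects", "add-iam-policy-binding", "host-project",
     "--member=serviceAccount:job-runner@service-project.iam.gserviceaccount.com",
     "--role=roles/compute.networkUser"])
```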

Private service to service communication for Google Cloud Run

I'd like to have my Google Cloud Run services privately communicate with one another over non-HTTP and/or without having to add bearer authentication in my code.
I'm aware of this documentation from Google which describes how you can do authenticated access between services, although it's obviously only for HTTP.
I think I have a general idea of what's necessary:
Create a custom VPC for my project
Enable the Serverless VPC Connector
What I'm not totally clear on is:
Is any of this necessary? Can Cloud Run services within the same project already see each other?
How do services address one another after this?
Do I gain the ability to use simpler, by-convention DNS names? For example, could I have each Cloud Run service appear on my VPC under a single first-level DNS name like apione and apitwo, rather than a longer DNS name that I'd then have to inject into my deployments?
If not, is there any kind of mechanism for services to discover names?
If I put my managed Cloud SQL postgres database on this network, can I control its DNS name?
Finally, are there any other gotchas I might want to be aware of? You can assume my use case is very simple, two or more long lived services on Cloud Run, doing non-HTTP TCP/UDP communications.
I also found a potentially related Google Cloud Run feature request that is worth upvoting if this isn't currently possible.
Cloud Run services are only reachable through HTTP requests; you can't use other network protocols (SSH to log into instances, for example, or raw TCP/UDP communication).
However, Cloud Run can initiate these kinds of connections to external services (for instance, Compute Engine instances deployed in your VPC), thanks to the Serverless VPC Connector.
The Serverless VPC Connector creates a bridge between the Google-managed environment (where the Cloud Run, Cloud Functions, and App Engine instances live) and your project's VPC, where you have your own resources (Compute Engine instances, GKE node pools, ...).
Thus you can have a Cloud Run service reach a Kubernetes pod on GKE over a TCP connection, if that's your requirement.
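As a sketch (the connector name, network, and IP range are hypothetical), creating a connector and attaching it to a Cloud Run service looks roughly like:

```python
import subprocess

def run(cmd):
    subprocess.run(cmd, check=True)

# Create the connector in the VPC.
run(["gcloud", "compute", "networks", "vpc-access", "connectors", "create",
     "my-connector", "--region=us-central1", "--network=my-vpc",
     "--range=10.8.0.0/28"])

# Deploy a Cloud Run service whose outbound traffic uses the connector,
# so it can reach private IPs (GCE instances, GKE pods) in the VPC.
run(["gcloud", "run", "deploy", "my-service",
     "--image=gcr.io/my-project/my-image", "--region=us-central1",
     "--vpc-connector=my-connector"])
```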
As for service discovery, it doesn't exist yet, but Google is actively working on it, and Ahmet (a Google Cloud Developer Advocate for Cloud Run) recently released a tool for it. But nothing is really built in.

Run multiple servers with interconnection on Amazon AWS

We are developing applications and devices that communicate with our servers. We have one "main" Java Spring server which handles almost all the HTTP requests, including user authentication, storing relevant user data, and giving that data to the applications. Furthermore, we have a few smaller HTTP servers (written in golang) which are both used by the "main" server to perform certain tasks and also expose some public APIs that apps and devices use directly.
In our current non-production setup we run all the servers locally on one machine with an apache2 in front which directs the requests. So the servers can be accessed via the apache2 by a user by their respective subdomains but they also perform some communication between each other. When doing so, currently we simply send the request to localhost:{PORT} since they all run on the same machine. They furthermore all utilize the same mysql-server running on that same machine.
We are now looking to get it more production-ready and are looking to deploy it to AWS. They are currently not containerized so a solution that requires containerization (ECS? K8s?) would most likely require more work. What would be the most straightforward way to do the following:
Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Setup the routing of the requests. Currently run our own configured apache2 but I assume we can add a managed API Gateway in AWS and configure it for our servers.
Q. Deploy a number of servers on AWS where they are exposed publicly with their respective domains but can also communicate internally with one another (or would they just communicate with one another using their public domains?)
On AWS you create a VPC (a first, default VPC is created when you sign up).
You can deploy a number of EC2 instances (virtual servers) with only private IP addresses and no public access, and put them behind an ELB (Elastic Load Balancer). The ELB takes all the traffic and distributes the load across the servers based on the endpoint.
The EC2 instances won't have public IPs, but the VPC (Virtual Private Cloud) allows your services to communicate with each other via private IPs (something like 172.31.xx.xx). You can also assign domain/subdomain names to these private IP addresses using AWS's Route 53 service.
For example, you launch 2 servers:
Your Java application on 172.31.1.1 (you name it xyz.myjavaapp.something.com on Route 53)
Your Angular application on 172.31.1.2
The Angular application can reach your Java application at 172.31.1.1:8080 or xyz.myjavaapp.something.com:8080.
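As a boto3 sketch (the hosted zone ID is hypothetical, and this assumes a private hosted zone associated with the VPC so the name resolves to the private IP):

```python
import boto3

route53 = boto3.client("route53")

# Upsert an A record pointing the internal name at the Java server's
# private IP from the example above.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",  # hypothetical private hosted zone
    ChangeBatch={
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "xyz.myjavaapp.something.com",
                "Type": "A",
                "TTL": 300,
                "ResourceRecords": [{"Value": "172.31.1.1"}],
            },
        }]
    },
)
```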
Q. Deploy a managed SQL database (Amazon RDS?) which is accessible for all the servers.
Yes, you can deploy a SQL database on RDS and it will be available to the EC2 instances. Just make sure you create proper security groups so that only your servers can access it, and don't leave it open to the public internet.
An example VPC-only security group entry is 172.31.0.0/16. This allows only the servers in your VPC to connect to the RDS DB, given that your VPC subnet has the range 172.31.x.x.
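A boto3 sketch of such a rule (the security group ID is hypothetical; port 3306 assumes MySQL):

```python
import boto3

ec2 = boto3.client("ec2")

# Allow inbound MySQL traffic only from the VPC range used in the example
# above; nothing outside 172.31.0.0/16 can reach the RDS instance.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical RDS security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 3306,
        "ToPort": 3306,
        "IpRanges": [{"CidrIp": "172.31.0.0/16",
                      "Description": "VPC-only access to RDS"}],
    }],
)
```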
Q. Setup the routing of the requests. Currently run our own configured apache2 but I assume we can add a managed API Gateway in AWS and configure it for our servers.
You can set up public/private APIs and manage different endpoints using API Gateway.
Another way is to put your application servers behind an Application Load Balancer (ALB). The ALB can take care of load balancing as well as endpoint management.
For example, if you decide to deploy 2 servers for /getData and 1 server for /doSomethingElse, that can be easily managed by the ALB.
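A boto3 sketch of such a path-based rule (the listener and target group ARNs are hypothetical placeholders):

```python
import boto3

elbv2 = boto3.client("elbv2")

# Route /getData* to its own target group (the 2 servers in the example);
# the listener's default action forwards everything else.
elbv2.create_rule(
    ListenerArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:"
                "listener/app/my-alb/0123456789abcdef/0123456789abcdef",
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/getData*"]}],
    Actions=[{
        "Type": "forward",
        "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:"
                          "111122223333:targetgroup/getdata/0123456789abcdef",
    }],
)
```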
I would suggest you use at least two servers for critical services and load-balance them behind an ALB for the production environment.
On another note, containerizing and deploying to Kubernetes is not that difficult or time-consuming. Yes, it has a learning curve, but the benefits outweigh it.
Feel free to ask questions.

How can I set up a private web app on Azure using an App Service Environment?

I have a web app and a web service (which will be uploaded to Azure as a web app). How can I make my web service private (not accessible to the public, only accessible by the web app)? Apparently you can do it with an App Service Environment, but there isn't much documentation on it.
Is it possible?
You can follow this article to set it up: https://azure.microsoft.com/en-us/documentation/articles/app-service-web-how-to-create-an-app-service-environment/
The main difference between App Service and App Service Environment (ASE) is that App Services run on a pre-built, shared-tenant, hyper-scaled web farm, whereas ASEs are purpose-built (on-demand) web farms provisioned directly in your subscription that must be attached to a VNet. Because you can attach your ASE to a VNet, you can then apply Network Security Groups (NSGs) to the VNet to prevent/allow traffic to flow to the ASE.
Here is the page describing how to add the layered security to your ASE once you've built it:
Layered Security Architecture with App Service Environments
So with ASE you get the deployment/monitoring/management features of App Services, but with the network layer control of a VM.
How can I make my web service private (not accessible to the public, only accessible by the web app)?
Network Security Groups can be used to control network traffic rules at the networking level: apply a Network Security Group to the subnet so that it acts as a firewall in the cloud. Russell Young has shared a good article about setting up Network Security Groups, which you could read. You could also check this blog, which explains securing network access using Network Security Groups.
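As a sketch (the resource names and source prefix are hypothetical), such NSG rules could be created with the Azure CLI:

```python
import subprocess

# Allow only the web app's subnet to reach the ASE on 443, and deny all
# other inbound traffic at a lower priority.
subprocess.run([
    "az", "network", "nsg", "rule", "create",
    "--resource-group", "my-rg", "--nsg-name", "ase-nsg",
    "--name", "AllowWebAppSubnet", "--priority", "100",
    "--direction", "Inbound", "--access", "Allow", "--protocol", "Tcp",
    "--source-address-prefixes", "10.0.1.0/24",
    "--destination-port-ranges", "443",
], check=True)

subprocess.run([
    "az", "network", "nsg", "rule", "create",
    "--resource-group", "my-rg", "--nsg-name", "ase-nsg",
    "--name", "DenyAllInbound", "--priority", "4000",
    "--direction", "Inbound", "--access", "Deny", "--protocol", "*",
    "--source-address-prefixes", "*", "--destination-port-ranges", "*",
], check=True)
```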
Besides, it is easy to implement custom authentication at the application layer to prevent unauthenticated clients from accessing your web service. For example, you could use SOAP headers for authentication: the client's credentials are passed within the SOAP header of the message when the client wants to access the web service, and the web service validates the SOAP header; if it contains valid authentication credentials, the client is authorized to access the web service.
You could check Implement Custom Authentication Using SOAP Headers.
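A minimal sketch of the client side (the service URL, header schema, and operation name are all hypothetical; the service-side validation of the header is what actually enforces access):

```python
import requests

# Hypothetical SOAP envelope carrying credentials in an <AuthHeader> element.
SOAP_ENVELOPE = """<?xml version="1.0" encoding="utf-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Header>
    <AuthHeader xmlns="http://example.com/">
      <Username>webapp-client</Username>
      <Password>secret</Password>
    </AuthHeader>
  </soap:Header>
  <soap:Body>
    <GetData xmlns="http://example.com/" />
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    "https://example.com/MyService.asmx",  # hypothetical endpoint
    data=SOAP_ENVELOPE.encode("utf-8"),
    headers={
        "Content-Type": "text/xml; charset=utf-8",
        "SOAPAction": "http://example.com/GetData",  # hypothetical action
    },
)
print(response.status_code)
```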