The Google APIs and services supported by VPC Service Controls, per the Supported products and limitations page available here, include Pub/Sub, Cloud Monitoring, and Cloud Logging.
However, the related documentation available here on configuring Private Google Access for on-premises hosts lists Pub/Sub, Monitoring, and Logging under "Reached using Private Google Access but not secured by VPC Service Controls".
I am confused reading this. Can Pub/Sub access (as well as Monitoring and Logging) be secured by VPC Service Controls or not?
Edit
Uploaded an image of the new VPC Service Controls perimeter creation screen, which allows Pub/Sub to be selected as one of the services to be restricted.
After reviewing both documents, I can see that, as you noted, Pub/Sub is a supported VPC Service Controls product. However, the combination of these three products, Private Google Access + VPC Service Controls + Pub/Sub, will not work together. You can therefore secure these products (Pub/Sub, Monitoring, and Logging) with VPC Service Controls as long as you are not using Private Google Access (the feature that lets on-premises hosts reach the Google APIs without using public IPs).
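For illustration, here is a minimal sketch of creating such a perimeter programmatically with the Access Context Manager v1 REST API through the Google API Python client; the access policy ID, project number, and perimeter name are placeholders, and the same thing can be done from the console screen mentioned above or with gcloud.

```python
# Sketch: create a VPC Service Controls perimeter that restricts Pub/Sub,
# Monitoring, and Logging. Requires google-api-python-client and credentials
# with Access Context Manager permissions. All IDs below are placeholders.
from googleapiclient import discovery

acm = discovery.build("accesscontextmanager", "v1")

policy = "accessPolicies/123456789"  # your organization's access policy ID
perimeter_body = {
    "name": f"{policy}/servicePerimeters/restrict_pubsub_logging_monitoring",
    "title": "restrict_pubsub_logging_monitoring",
    "perimeterType": "PERIMETER_TYPE_REGULAR",
    "status": {
        "resources": ["projects/111111111111"],  # project *number*, not ID
        "restrictedServices": [
            "pubsub.googleapis.com",
            "monitoring.googleapis.com",
            "logging.googleapis.com",
        ],
    },
}

operation = (
    acm.accessPolicies()
    .servicePerimeters()
    .create(parent=policy, body=perimeter_body)
    .execute()
)
print("Started long-running operation:", operation["name"])
```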
Related
It is feasible to pull messages from a GCP Pub/Sub subscription over the public Internet by reaching the public Pub/Sub API endpoint.
However, is it feasible to pull messages over GCP Dedicated Interconnect for a more stable network connection? I would like to reduce the load on our proxy for reaching the public Internet by going through the private Dedicated Interconnect channel instead.
GCP provides private access options for private routing. One example that suits this use case is a Private Service Connect endpoint.
A private endpoint is deployed in the GCP project. On-premises hosts can then reach the Google APIs through this endpoint over the VPC or Interconnect, instead of the public API endpoint, so the traffic is never exposed to the public Internet (a client-side sketch follows the link below).
https://cloud.google.com/vpc/docs/private-access-options
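As an illustration, here is a minimal sketch of pulling through such an endpoint with the Python client, assuming a Private Service Connect endpoint (or its private DNS name) for the Google APIs has already been created and is reachable over the Interconnect; the endpoint, project, and subscription names are placeholders.

```python
# Sketch: pull Pub/Sub messages via a Private Service Connect endpoint instead
# of the public pubsub.googleapis.com endpoint. Requires google-cloud-pubsub.
# The endpoint name, project, and subscription below are placeholders.
from google.cloud import pubsub_v1

# Point the client at the PSC endpoint (private DNS name or IP) that is
# reachable over the VPC / Dedicated Interconnect.
subscriber = pubsub_v1.SubscriberClient(
    client_options={"api_endpoint": "pubsub-myendpoint.p.googleapis.com:443"}
)

subscription = subscriber.subscription_path("my-project", "my-subscription")
response = subscriber.pull(request={"subscription": subscription, "max_messages": 10})

ack_ids = []
for msg in response.received_messages:
    print(msg.message.data)
    ack_ids.append(msg.ack_id)

if ack_ids:
    subscriber.acknowledge(request={"subscription": subscription, "ack_ids": ack_ids})
```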
Please see the docs on service APIs overview, which explain that:
Requests to the global endpoint made over Interconnect are routed similarly to requests originating in the region associated with the interconnection. In case Pub/Sub becomes unavailable in a region, requests originating within the same Google Cloud region are not load balanced to a different region.
My application is deployed on our private server, and I want to use some GCP services such as Cloud Storage buckets and Secret Manager.
Suppose my application is deployed on an internal server and it needs to use GCP services. Is that possible, or should we deploy our app to GCP as well? My application is written in JSP.
How can I do this, and what is the best practice?
You have more than one option. You can use Cloud VPN, which securely connects your peer network to your Virtual Private Cloud (VPC) network through an IPsec VPN connection. Follow GCP's official documentation to set it up.
Another option is Google Cloud hybrid connectivity, specifically Cloud Interconnect, which allows you to connect your infrastructure to Google Cloud. Visit the following link for best practices and the setup guide.
Finally, see the following thread for more detail on your connectivity requirement.
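Whichever connectivity option you choose, the application-side access pattern is the same: authenticate with a service account and call the client libraries. Since your app is JSP (Java), treat the following Python snippet only as a sketch of that flow; the bucket, secret, and key-file names are placeholders, and the equivalent Java client libraries work the same way.

```python
# Sketch: application-side access to Cloud Storage and Secret Manager from an
# on-premises host, authenticating with a service account key file. Bucket,
# secret, and key-file names are placeholders. Requires google-cloud-storage
# and google-cloud-secret-manager.
from google.cloud import secretmanager, storage

KEY_FILE = "service-account.json"  # key for a service account with the needed IAM roles

# Read an object from a Cloud Storage bucket.
storage_client = storage.Client.from_service_account_json(KEY_FILE)
blob = storage_client.bucket("my-bucket").blob("config/app.properties")
print(blob.download_as_bytes().decode("utf-8"))

# Read the latest version of a secret from Secret Manager.
sm_client = secretmanager.SecretManagerServiceClient.from_service_account_file(KEY_FILE)
secret_name = "projects/my-project/secrets/db-password/versions/latest"
payload = sm_client.access_secret_version(request={"name": secret_name}).payload.data
print(payload.decode("utf-8"))
```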
Is it possible with Workspace and GCP to restrict the geographical locations from which a user can access projects and resources?
For example, all users in the Workspace should only be able to access GCP resources from Australia. User A decides to go on holiday to the USA but will do some remote work. Their access to selected Workspace and GCP resources should be blocked unless overruled (i.e. User A is granted access from the USA).
This is something I've seen done in Azure AD; does GCP/Workspace have similar functionality?
Use Context-Aware Access to create granular access control policies for Google Workspace. Not all Google Workspace editions include this feature. This does not affect access to Google Cloud Platform.
If you are using Identity-Aware Proxy to control access to your resources in Google Cloud, you can extend Identity-Aware Proxy with Context-Aware Access. However, this does not limit access to the Google Cloud console or other Google-owned resources, only the resources for which you configure IAP authorization.
Setting up context-aware access with Identity-Aware Proxy
Context-Aware Access can also be integrated with VPC Service Controls perimeter ingress rules to allow access based on network origin (IP and VPC).
Context-aware access with ingress rules
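For illustration, here is a minimal sketch of creating an access level that only matches requests originating in Australia, using the Access Context Manager v1 API through the Google API Python client; the policy ID and level name are placeholders. Such a level can then be referenced from IAP conditions or VPC Service Controls ingress rules.

```python
# Sketch: create a Context-Aware Access level that matches only requests
# originating from Australia (region code "AU") via the Access Context
# Manager v1 API. Requires google-api-python-client; IDs are placeholders.
from googleapiclient import discovery

acm = discovery.build("accesscontextmanager", "v1")

policy = "accessPolicies/123456789"  # your organization's access policy ID
level_body = {
    "name": f"{policy}/accessLevels/australia_only",
    "title": "australia_only",
    "basic": {
        "conditions": [
            {"regions": ["AU"]}  # requests must originate in Australia
        ]
    },
}

operation = (
    acm.accessPolicies()
    .accessLevels()
    .create(parent=policy, body=level_body)
    .execute()
)
print("Started long-running operation:", operation["name"])
```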
Summary:
Integrate Context-Aware Access with resources you create that support Identity-Aware Proxy.
Use VPC Service Controls to control access to the Google Cloud services that it supports (Cloud Storage, BigQuery, etc.).
If your goal is to limit access to the Google Cloud console GUI itself, I am not aware of a way to restrict that by location. Use 2-Step Verification to control user access from new locations.
I was wondering if anyone else has come across this problem, and if so, what solution was chosen.
We currently use GCP uptime checks for our Internet-facing endpoints; however, we also want to use them to check non-Internet-facing endpoints (e.g. endpoints locked down by a security group).
Already considered:
Whitelist the GCP uptime-check IPs - dismissed, as we do not want to maintain a large number of IPs in our security groups, and those IPs may change over time.
Use GCP's new private uptime checks - these only support checking endpoints residing in GCP (we also need to check private endpoints in AWS).
Set up a reverse NGINX proxy VM, which we can lock down to the GCP uptime-check IPs (in one place, as opposed to many) - our services can then just whitelist the proxy VM.
Try to achieve the same logic using the NGINX Ingress Controller in GKE.
Has anyone else faced this problem?
Thanks in advance!
You could create private uptime checks for your GCP endpoints and use an AWS connector project to import the metrics from your AWS endpoints (a minimal API sketch follows the steps below).
Follow these steps provided in the Quickstart for Amazon EC2 guide to monitor your AWS endpoints:
Create a Google Cloud project
Connect an AWS account
Authorize AWS applications
Use Monitoring services with AWS
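If you prefer to create the checks programmatically rather than in the console, here is a minimal sketch using the Cloud Monitoring API. This example targets a URL; a private uptime check additionally requires the VPC_CHECKERS checker type and a Service Directory target as described in the private uptime checks docs, which is not shown here. The host, path, and project are placeholders.

```python
# Sketch: create an uptime check via the Cloud Monitoring API
# (google-cloud-monitoring). A private uptime check would additionally set
# checker_type to VPC_CHECKERS and point at a Service Directory service.
# Host, path, and project below are placeholders.
from google.cloud import monitoring_v3

client = monitoring_v3.UptimeCheckServiceClient()

config = monitoring_v3.UptimeCheckConfig()
config.display_name = "internal-service-health"
config.monitored_resource = {
    "type": "uptime_url",
    "labels": {"host": "service.example.com"},
}
config.http_check = {"path": "/healthz", "port": 443, "use_ssl": True}
config.period = {"seconds": 60}
config.timeout = {"seconds": 10}

created = client.create_uptime_check_config(
    request={"parent": "projects/my-project", "uptime_check_config": config}
)
print("Created uptime check:", created.name)
```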
I want to execute AWS CLI commands for RDS not via the Internet but via a VPC network, mainly for creating manual RDS snapshots.
However, VPC endpoints only support the RDS Data API, according to the following document:
VPC endpoints - Amazon Virtual Private Cloud
Why? I need to execute the command within a closed network because of our security rules.
Just to reiterate: you can still connect to your RDS database over the normal private network, using whichever library you choose, to run any DDL, DML, DCL, and TCL commands. In your case, though, you want to create a snapshot, which goes through the service endpoint.
VPC endpoints connect you to the service APIs that power AWS (think of the interactions you perform in the console, SDK, or CLI). At the moment this means that, for RDS, creating, modifying, or deleting resources requires calling the API over the public Internet (using HTTPS for encrypted traffic).
VPC endpoints are added over time; just because a specific API is not supported now does not mean it never will be. The team that owns each AWS service has to carry out an integration to make VPC endpoints work.
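For reference, the snapshot itself is a single management-API call; here is a minimal boto3 sketch equivalent to `aws rds create-db-snapshot`, with placeholder identifiers and region - today this call is served by the public RDS endpoint over HTTPS.

```python
# Sketch: create a manual RDS snapshot via the RDS management API (boto3),
# equivalent to `aws rds create-db-snapshot`. Identifiers and region are
# placeholders; this call currently goes to the public RDS endpoint over HTTPS.
import boto3

rds = boto3.client("rds", region_name="ap-northeast-1")

response = rds.create_db_snapshot(
    DBInstanceIdentifier="my-database",
    DBSnapshotIdentifier="my-database-manual-snapshot",
)
print(response["DBSnapshot"]["Status"])

# Optionally wait until the snapshot is available.
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="my-database-manual-snapshot"
)
```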