Question about how GCP resources communicate - google-cloud-platform

Several people I know operate under the assumption that all GCP resources communicate over Google's internal network, even when something is configured to talk to an instance's public IP address (VM, Cloud SQL, etc.). This seems possible through some complicated NAT management, but I'm not sure it's true.
For example, a web server is set up in one project to use a MySQL database that lives in another project. Without setting up VPC peering, we set the website to use the public IP of the MySQL database, since the private IP isn't available without peering. At this point, is the web server communicating with the MySQL database over the internet? Does this traffic leave Google's network at any point?
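For concreteness, the setup in question boils down to something like this minimal sketch (using pymysql as an example client; the IP, credentials, and CA path are placeholders, not real values):

```python
# Minimal sketch: web server in project A connecting to the MySQL database in
# project B over its public IP. Host, credentials, and CA path are placeholders.
import pymysql

conn = pymysql.connect(
    host="203.0.113.10",        # public IP of the Cloud SQL / MySQL instance (placeholder)
    user="webapp",
    password="example-password",
    database="appdb",
    ssl={"ca": "/path/to/server-ca.pem"},  # enforce TLS, since the endpoint is a public IP
)
with conn.cursor() as cur:
    cur.execute("SELECT 1")
    print(cur.fetchone())
conn.close()
```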
I've been unable to find docs that answer this question, but I could be searching with the wrong terms. If someone could point me to docs, that would be very helpful.
Please let me know if I should clarify further.
Thanks!

Related

Why doesn't GCP's "Memorystore for Redis" allow the option to add a Public IP?

Currently, when trying to create a "Memorystore for Redis" instance in GCP, there is no option to add a Public IP.
This poses a problem, as I am unable to connect to it from a Compute Engine instance on an external network, since the Redis instance is in another network.
Why is this missing?
Redis is designed to be accessed by trusted clients inside trusted
environments. This means that usually it is not a good idea to expose
the Redis instance directly to the internet or, in general, to an
environment where untrusted clients can directly access the Redis TCP
port or UNIX socket.
Redis Security
I think it is a design decision, but in general this is not something we can know for sure since we are not part of the Product team, so I don't think this question can be easily answered on SO.
According to this Issue Tracker, there are no plans to support this in the near future.
That said, you may want to take a look at this doc, which shows some workarounds for connecting from a network outside the VPC.
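For example, one of those workarounds is to port-forward through a small Compute Engine VM in the same VPC (e.g. ssh -L 6379:REDIS_PRIVATE_IP:6379 to the VM) and connect to the forwarded local port. A minimal sketch, assuming such a tunnel is already running on localhost:6379:

```python
# Minimal sketch: connect to a Memorystore for Redis instance through an SSH
# tunnel that forwards localhost:6379 to the instance's private IP.
# The tunnel (via a bastion VM in the same VPC) is assumed to already exist.
import redis

r = redis.Redis(host="localhost", port=6379, db=0)
r.set("healthcheck", "ok")
print(r.get("healthcheck"))  # b'ok'
```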

AWS EC2 for QuickBooks

AWS and network noob. I've been asked to migrate QuickBooks Desktop Enterprise to AWS. This seems easy in principle but I'm finding a lot of conflicting and confusing information on how best to do it. The requirements are:
Set up a Windows Server using AWS EC2
QuickBooks will be installed on the server, including a file share that users will map to.
Configure VPN connectivity so that the EC2 instance appears and behaves as if it were on prem.
Allow additional off site VPN connectivity as needed for ad hoc remote access
Cost is a major consideration, which is why I am doing this instead of getting someone who knows this stuff.
The on-prem network is very small - one Win2008R2 server (I know...) that hosts QB now and acts as a file server, 10-15 PCs/printers and a Netgear Nighthawk router with a static IP.
My approach was to first create a new VPC with a private subnet that will contain the EC2 instance, and set up a site-to-site VPN connection with the Nighthawk for the on-prem users. I'm unclear on whether I also need to create security group rules that only allow inbound traffic (UDP/TCP file-sharing ports) from the static IP, or if the VPN negates that need.
I'm trying to test this one step at a time and have an instance set up now. I am remote and am using my current IP address in the security group rules for the test (no VPN yet). I set up the file share but I am unable to access it from my computer. I can RDP and ping the instance and have turned on the firewall rules to allow NetBIOS and SMB, but still nothing. I just read another thread that says I need to set up a storage gateway, but before I do that I wanted to see if that is really required or if there's another/better approach. I have to believe this is a common requirement, but I seem to be missing something.
This is a bad approach for QuickBooks. Intuit explicitly recommends against using QuickBooks with a file share via VPN:
Networks that are NOT recommended
Virtual Private Network (VPN) Connects computers over long distances via the Internet using an encrypted tunnel.
From here: https://quickbooks.intuit.com/learn-support/en-us/configure-for-multiple-users/recommended-networks-for-quickbooks/00/203276
The correct approach here is to host QuickBooks on the EC2 instance, and let people RDP (remote desktop) into the EC2 Windows server to use QuickBooks. Do not let them install QuickBooks on their client machines and access the QuickBooks data file over the VPN link. Make them RDP directly to the QuickBooks server and access it from there.
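If you take the RDP-only route, the security group question from the post also gets simpler: you only need to allow TCP 3389 from the office's static IP. A minimal sketch with boto3 (the group ID, region, and CIDR are placeholders):

```python
# Minimal sketch: allow RDP (TCP 3389) to the EC2 instance only from the
# on-prem static IP. GroupId, region, and CidrIp are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",
    IpPermissions=[
        {
            "IpProtocol": "tcp",
            "FromPort": 3389,
            "ToPort": 3389,
            "IpRanges": [{"CidrIp": "198.51.100.25/32", "Description": "office static IP"}],
        }
    ],
)
```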

External requests in Cloud Run project

Currently, external requests made from my Cloud Run projects go out with a random IP from Google's IP pool.
I am developing a new micro-service that needs to call a critical external micro-service that restricts access by IP.
Does Google Cloud Platform have any solution for sending outbound traffic from a specific IP? Some kind of proxy for this kind of need?
Thanks
As clarified in this other case here, there is no way to directly set up a static or specific IP for outbound requests from Cloud Run. As clarified in this answer from a Google developer, unless Cloud Run starts supporting Cloud NAT or Serverless VPC Access, you won't be able to achieve such a configuration.
There are some workarounds.
One of them would be to create a SOCKS proxy by running an ssh client that routes the traffic through a GCE VM instance that has a static external IP address. More details here.
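Assuming such a proxy is listening locally (for example after ssh -D 1080 to the VM with the static IP), outbound requests can be routed through it like this; a sketch using requests with the PySocks extra, where the proxy port and target URL are placeholders:

```python
# Minimal sketch: route outbound HTTP(S) requests through a SOCKS5 proxy
# (e.g. an `ssh -D 1080` tunnel to a VM with a static external IP).
# Requires: pip install "requests[socks]"
import requests

proxies = {
    "http": "socks5h://localhost:1080",
    "https": "socks5h://localhost:1080",
}
resp = requests.get("https://api.example.com/ip-restricted-endpoint", proxies=proxies, timeout=10)
print(resp.status_code)
```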
Another solution is to send your outbound requests through a proxy that has a static IP. You can get details here.
Both of these were provided by developers from Google, so they should be good to go.

Static IP to access GCP Machine Learning APIs via gRPC stream over HTTP/2

We're behind a corporate proxy/firewall that can only consume static IP rules, not FQDNs.
For our project, we need to access the Google Speech-to-Text API: https://speech.googleapis.com. Outside of the corporate network, we use a gRPC stream over HTTP/2 to do that.
The ideal scenario looks like:
Corporate network -> static IP in GCP -> forwarded gRPC stream to speech.googleapis.com
What we have tried is creating a global static external IP, but we failed when configuring the Load Balancer, as it can only point at VMs and not at APIs.
Alternatively, we were thinking of using the output of nslookup speech.googleapis.com as the set of IP addresses and updating it daily, though that seems pretty 'dirty'.
I'm aware we can configure a Compute Engine VM and forward the traffic through it, but that doesn't seem like an elegant solution either. Preferably, we would achieve this with existing GCP networking components.
Many thanks for any pointers!
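For reference, the kind of streaming call described above looks roughly like this (a minimal sketch with the google-cloud-speech Python client, which speaks gRPC over HTTP/2 under the hood; the audio bytes and parameters are placeholders, and the exact call shape can vary between client versions):

```python
# Minimal sketch of a streaming request to speech.googleapis.com using the
# google-cloud-speech client. `audio_chunks` is a placeholder iterable of
# raw LINEAR16 audio bytes.
from google.cloud import speech

client = speech.SpeechClient()
config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)
streaming_config = speech.StreamingRecognitionConfig(config=config)

audio_chunks = [b"\x00\x00" * 1600]  # placeholder audio
requests = (speech.StreamingRecognizeRequest(audio_content=chunk) for chunk in audio_chunks)

for response in client.streaming_recognize(config=streaming_config, requests=requests):
    for result in response.results:
        print(result.alternatives[0].transcript)
```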
Google does not publish a CIDR block for you to use; you will have daily grief trying to whitelist IP addresses. Most of Google's API services are fronted by the Google Front End (GFE), which routes traffic by HTTP Host header and not by IP address, so pinning traffic to specific IPs will eventually cause routing to fail.
Trying to look up the IP addresses is also an issue. DNS does not have to return all IP addresses for name resolution in every call, which means a lookup might return one set of addresses now and a different set an hour from now. This is just one example of the grief you will cause yourself by whitelisting IP addresses.
Solution: Talk to your firewall vendor.
Found a solution thanks to clever networking engineers from Google, posting here for future reference:
You can use a CNAME in your internal DNS to point *.googleapis.com to private.googleapis.com. In public DNS, that name resolves to a small range of public IP addresses (199.36.153.8/30) that are not reachable from the public internet, only through a VPN tunnel or Cloud Interconnect.
So if setting up a VPN tunnel to a project in GCP is possible (and it should be quite easy, see https://cloud.google.com/vpn/docs/how-to/creating-static-vpns), then this should solve the problem.
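A quick way to sanity-check the DNS part of this from a machine inside the corporate network (a minimal sketch; it only verifies that the API hostnames resolve into the 199.36.153.8/30 range used by private.googleapis.com):

```python
# Minimal sketch: verify that *.googleapis.com resolves into the
# private.googleapis.com range (199.36.153.8/30) once the internal
# CNAME override is in place.
import ipaddress
import socket

PRIVATE_RANGE = ipaddress.ip_network("199.36.153.8/30")

for name in ("speech.googleapis.com", "private.googleapis.com"):
    addrs = {info[4][0] for info in socket.getaddrinfo(name, 443, socket.AF_INET)}
    for addr in sorted(addrs):
        in_range = ipaddress.ip_address(addr) in PRIVATE_RANGE
        print(f"{name} -> {addr} (in private.googleapis.com range: {in_range})")
```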

Open app from AWS instance

First of all, let me say that I'm new to AWS and don't know much about servers, but I'm trying to learn!
I have been given access to an AWS instance. I can access the server using ssh. It's an Ubuntu server.
There is an application deployed under /var/www/. I also have the public IP of the server, but when I try to access that public IP nothing loads, and I can't ping it either.
Am I doing something wrong? I will note that I don't have much experience with servers.
You will need to check the security group rules associated with your EC2 instance; HTTP (and any other required protocols) must be allowed inbound there. Note that ping needs a separate ICMP rule, which is not open by default. Also confirm that the VPC's internet gateway and routing are configured correctly. Good luck. It's a steep but fast learning curve with AWS.
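As a quick way to verify the first point, you can list the inbound rules of the security groups attached to the instance, for example with boto3 (a sketch; the instance ID and region are placeholders):

```python
# Minimal sketch: print the inbound rules of the security groups attached to
# an EC2 instance, to confirm HTTP (TCP 80/443) is actually allowed.
# The instance ID and region are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
reservations = ec2.describe_instances(InstanceIds=["i-0123456789abcdef0"])["Reservations"]
instance = reservations[0]["Instances"][0]
group_ids = [g["GroupId"] for g in instance["SecurityGroups"]]

for sg in ec2.describe_security_groups(GroupIds=group_ids)["SecurityGroups"]:
    print(sg["GroupId"], sg.get("GroupName", ""))
    for perm in sg["IpPermissions"]:
        print("  ", perm.get("IpProtocol"), perm.get("FromPort"), perm.get("ToPort"),
              [r["CidrIp"] for r in perm.get("IpRanges", [])])
```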