Setting up an SAP support channel with SAProuter - google-cloud-platform

According to the GCP public documentation:
If you need to allow an SAP support engineer to access your SAP HANA systems on Google Cloud, you can do so using SAProuter. Follow these steps:
1. Launch the Compute Engine VM instance that the SAProuter software will be installed on, and assign an external IP address so the instance has internet access.
2. Create a new, static external IP address and then assign this IP address to the instance.
3. Create and configure a specific SAProuter firewall rule in your network. In this rule, allow only the required inbound and outbound access to the SAP support network for the SAProuter instance.
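For reference, the documented steps map roughly onto gcloud commands like the following. The region, network, tag, and the SAP source range shown here are placeholders rather than values from the documentation (194.39.131.34/32 is used only as an example SAProuter peer; use the range SAP gives you):

# Reserve a static external IP address that can then be assigned to the SAProuter VM
gcloud compute addresses create saprouter-ip --region=us-central1
# Allow inbound SAProuter traffic (tcp/3299) only from the SAP support network
gcloud compute firewall-rules create allow-saprouter-in \
    --network=my-vpc --direction=INGRESS --action=ALLOW \
    --rules=tcp:3299 --source-ranges=194.39.131.34/32 --target-tags=saprouter
# Allow the SAProuter VM to reach the SAP support network outbound on the same port
gcloud compute firewall-rules create allow-saprouter-out \
    --network=my-vpc --direction=EGRESS --action=ALLOW \
    --rules=tcp:3299 --destination-ranges=194.39.131.34/32 --target-tags=saprouter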
Question
Use of external IP addresses is restricted in my environment, so I would like to know whether I can use a public load balancer to achieve this.
Context
I have a public load balancer in front of a firewall. How can I use this public load balancer IP to set up my SAProuter in GCP? Is this even possible?

You may want to use load balancing forwarding rules so that traffic arriving at an external IP address is forwarded into your environment.
Internal forwarding rules
Internal forwarding rules forward traffic that originates inside a Google Cloud network. The clients can be in the same Virtual Private Cloud (VPC) network as the backends, or the clients can be in a connected network.
Internal forwarding rules are used by two types of Google Cloud load balancing products:
Internal TCP/UDP Load Balancing
Internal HTTP(S) Load Balancing
External forwarding rules
External forwarding rules accept traffic from client systems that have internet access, including:
A client outside of Google Cloud
A Google Cloud VM with an external IP address
A Google Cloud VM without an external IP address using Cloud NAT or an instance-based NAT system
Adding a forwarding rule
Create the load balancer's forwarding rule
Go to the Load balancing page in the Google Cloud Console.
Click Create load balancer.
Select a load balancer type, including the traffic type and whether the load balancer faces the Internet or is internal only.
Click Continue.
Click Frontend configuration. In the New Frontend IP and port section, make the following changes:
a. Name: FORWARDING_RULE_NAME
b. Subnetwork: SUBNET_OF_YOUR_RESERVED_IP_ADDRESS
c. From the Internal IP or IP address menu (depending on the load balancer type), select your pre-reserved IP address.
Optionally, you can reserve an IP address now in this UI, or you can use an ephemeral IP address.
d. Select the protocol, port numbers, and IP version.
Only some load balancer types support IPv6.
e. Verify that there is a blue check mark next to Frontend configuration before continuing. If not, review this step.
Click Review and finalize. Double-check your settings.
Click Create.
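The console steps above correspond roughly to the following gcloud commands for an external TCP (network) load balancer in front of a SAProuter backend; all names, the region, and the port are assumptions rather than values from the documentation:

# Reserve the load balancer's external address
gcloud compute addresses create saprouter-lb-ip --region=us-central1
# Create a target pool and add the SAProuter VM to it
gcloud compute target-pools create saprouter-pool --region=us-central1
gcloud compute target-pools add-instances saprouter-pool \
    --instances=saprouter-vm --instances-zone=us-central1-a
# Forwarding rule: TCP 3299 arriving at the reserved external IP is forwarded to the pool
gcloud compute forwarding-rules create saprouter-fwd-rule --region=us-central1 \
    --ip-protocol=TCP --ports=3299 --address=saprouter-lb-ip \
    --target-pool=saprouter-pool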
Using IAP for TCP forwarding
IAP's TCP forwarding feature lets you control who can access administrative services like SSH and RDP on your backends from the public internet. The TCP forwarding feature prevents these services from being openly exposed to the internet. Instead, requests to your services must pass authentication and authorization checks before they get to their target resource.
IAP forwarding step-by-step setup
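As a small illustration of IAP TCP forwarding (the instance name, zone, and ports below are assumptions, and an ingress firewall rule allowing 35.235.240.0/20 on the target port is required):

# Open an IAP tunnel from your workstation to port 22 on the backend VM
gcloud compute start-iap-tunnel saprouter-vm 22 \
    --local-host-port=localhost:2222 --zone=us-central1-a
# Or SSH directly through IAP, with no external IP on the VM
gcloud compute ssh saprouter-vm --zone=us-central1-a --tunnel-through-iap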
You can also check the links below for reference.
Forwarding rules overview
Using IAP for TCP forwarding

Related

GCP Kubernetes block IP address?

What is the standard way to block an external IP from accessing my GCP cluster? Happy for the answer to include another Google service.
Because your cluster is deployed on Compute Engine instances, you can simply set a firewall rule to discard connections from a specific IP.
If you use an HTTP load balancer, you can add a Cloud Armor policy to block specific IPs.
In both cases, keep in mind that IP filtering isn't very effective: a VPN or proxy can easily and freely be used on the internet to change the requester's source IP.
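As a sketch of both options (the source range, names, and priorities here are assumptions):

# Option 1: VPC firewall rule that drops ingress from a specific source range
gcloud compute firewall-rules create deny-bad-client \
    --network=my-vpc --direction=INGRESS --action=DENY \
    --rules=all --source-ranges=203.0.113.7/32 --priority=100
# Option 2: Cloud Armor policy attached to the HTTP(S) load balancer's backend service
gcloud compute security-policies create block-clients
gcloud compute security-policies rules create 1000 --security-policy=block-clients \
    --src-ip-ranges=203.0.113.7/32 --action=deny-403
gcloud compute backend-services update my-backend-service \
    --security-policy=block-clients --global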

How can I use AWS load balancer to check IP changes?

I have an instance running on premises and its IP address changes regularly. My other services run on AWS and use the IP address to connect to the on-premises services. I have to update the IP address saved in my AWS services whenever it changes on the on-premises network. I have thought about using DNS, but that still requires updating the A record.
I am looking for a way to auto-detect the change instead of updating manually. I wonder whether I can use a load balancer to do the check. I know the on-premises network uses a known range of IP addresses. Can a load balancer run health checks against the IPs within that range, so my AWS services can send requests to the load balancer instead? Are there any side effects to this approach?
You need to use a hostname instead of an IP address since, as you mentioned, the IP addresses keep changing. Your AWS VPC can use a DNS forwarder like Unbound, which forwards requests to your on-premises DNS server when VPC resolution cannot resolve the hostname. This approach is quite effective because only the DNS queries that the AWS VPC DNS misses are sent to the on-premises DNS.
Unbound allows resolution of requests originating from AWS by forwarding them to your on-premises environment—and vice versa. For the purposes of this post, I will focus on a basic installation of Amazon Linux with the configuration necessary to direct traffic to on-premises environments or to the Amazon VPC–provided DNS, as appropriate. Review the Unbound documentation for details and other configuration options.
Further reading: How to set up DNS resolution from AWS to on-premises servers
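A minimal Unbound forward-zone sketch of this pattern could look like the following; the internal domain, the on-premises resolver address, and the VPC resolver address (the VPC CIDR base plus two) are assumptions:

# /etc/unbound/unbound.conf (sketch)
server:
    interface: 0.0.0.0
    access-control: 10.0.0.0/8 allow       # allow queries from the VPC range (assumed)
forward-zone:
    name: "corp.example.com."              # on-premises domain (assumed)
    forward-addr: 192.168.10.53            # on-premises DNS server (assumed)
forward-zone:
    name: "."                              # everything else goes to the VPC-provided DNS
    forward-addr: 10.0.0.2                 # VPC resolver = VPC CIDR base + 2 (assumed)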

Is it possible to use Google Cloud NAT for VMs behind a TCP/Proxy LB so that all servers can utilize egress from a single IP?

I am trying to use Cloud NAT with a set of Compute Engine VMs that sit in their own subnet, so that all of the servers make requests to customer websites from a single static IP address. Unfortunately, when I add these VMs to a TCP/SSL proxy LB they don't appear to be using the NAT, which I believe is configured correctly.
I have tried configuring both a TCP proxy LB and an HTTP(S) LB together with Cloud NAT, and when I try to make an egress HTTP request it times out. Ingress via the LB works properly. The VM instances do not have external IPs, which is a requirement for Cloud NAT.
I expect the HTTP requests to hit the server, and the web server to make outbound HTTP requests via Cloud NAT, so that other servers need only whitelist a single IP address (a static IP assigned to the Cloud NAT gateway).
I'm trying to understand why you would need Cloud NAT in this scenario, since a TCP/SSL proxy load balancer connects to the backends over a private connection and the backends won't be exposed to the internet. Configuring just a TCP/SSL proxy would be enough for your scenario, in my opinion.
The following official documentation explains my point:
Backend VMs for HTTP(S), SSL Proxy, and TCP Proxy load balancers do not need external IP addresses themselves, nor do they need Cloud NAT to send replies to the load balancer. HTTP(S), SSL Proxy, and TCP Proxy load balancers communicate with backend VMs using their primary internal IP addresses.
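That covers the load balancer path. For the outbound half of the question (egress from the backends to customer sites from a single whitelisted address), a Cloud NAT gateway with a reserved static IP is configured roughly as follows; the router, NAT, network, and region names are placeholders:

# Reserve the single egress IP that other parties would whitelist
gcloud compute addresses create nat-egress-ip --region=us-central1
# Cloud NAT attaches to a Cloud Router in the same VPC and region as the backend subnet
gcloud compute routers create nat-router --network=my-vpc --region=us-central1
gcloud compute routers nats create backend-nat --router=nat-router --region=us-central1 \
    --nat-external-ip-pool=nat-egress-ip --nat-all-subnet-ip-ranges
# Cloud NAT only applies to VMs without external IPs, which matches the setup described above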

Why is it required to provide external IPs to Cloud SQL services for authorization?

I am taking Google's GCP Fundamentals: Core Infrastructure course on Coursera. In the demonstration video of the Google Storage module, the presenter authorizes a Compute Engine instance to access a MySQL instance via its external IP address.
Aren't these two resources part of the same VPC if they are part of the same project? Why can't this authorization be done using the VM instance's internal IP address?
Aren't these two resources part of the same VPC if they are part of the same project?
A Cloud SQL instance isn't created in one of your project's VPC networks but in a Google-managed project, within its own network.
What happens when you enable private IP is that this network is peered with the network of your choice in your project, where your Compute Engine instance resides.
You can then connect to the Cloud SQL instance from your VM via the internal IP address. The VM is considered trusted if your network configuration allows it to reach the Cloud SQL instance.
When you set an external IP address on the Cloud SQL instance, it means that the instance is accessible to the internet and the connection needs to be authorized. One way to do it is to whitelist the IP address of the caller as you mentioned. This works well if the caller's IP doesn't change. Another (easier) option is to connect via the cloud_sql_proxy, which handles authorization and encryption for you. You then don't need to whitelist the IP.
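As an illustration of the proxy option, the cloud_sql_proxy binary is typically started like this; the instance connection name and local port are placeholders:

# Start a local listener that forwards to the Cloud SQL instance over an authorized, encrypted channel
./cloud_sql_proxy -instances=my-project:us-central1:my-sql-instance=tcp:3306
# Applications then connect to 127.0.0.1:3306 as if the database were local,
# e.g. mysql --host=127.0.0.1 --port=3306 -u root -p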

How to setup VPN from on-premises to Google Cloud VPC

We want to be able to connect to our on-premises database from our Google Kubernetes Engine cluster.
We are currently attempting to do so by using "Create a VPN connection" from within the Google Cloud Console.
In the IP address field, I am forced to create (or pick an existing) external IP address.
I am able to link a single VM instance to this external IP address. But I want my VPN connection/tunnel to be between my on-premises network and EVERYTHING within my Google Cloud network.
This IP should not just serve as an external IP address for a single instance; I need it to act as a gateway to the network as a whole. What am I missing?
Thanks in advance.
Another way to frame the question:
How do I find the IP address of the gateway to my Google Cloud network (VPC), and how do I supply that IP when creating the VPN connection?
Cloud VPN connects your on-premises network to the VPC, which means every instance, cluster, or other product that uses Google Compute Engine (GCE).
As mentioned in a previous answer from avinoam-meir, the VPN has at least two components, a gateway and a tunnel, but I will add a third: the type of routing.
a) Gateway: this is where you attach an existing static external IP address or reserve a new one (from Google's pool of external IP addresses).
b) Tunnel: where the encapsulated and encrypted traffic flows to reach the local IP ranges.
c) Type of routing: Cloud VPN has three possibilities:
Tunnel using dynamic routing (BGP)
Route-based VPN
Policy-based VPN
Depending on the type you choose, routing happens in a different way, but in general terms it propagates your subnetwork(s) to your on-premises network and receives routes from it.
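To illustrate the three pieces, a classic (policy-based) Cloud VPN can be created roughly like this with gcloud; every name, region, address, and CIDR range here is a placeholder, and the shared secret must match the on-premises device:

# a) Gateway: reserve the external IP and create the classic VPN gateway
gcloud compute addresses create vpn-gw-ip --region=us-central1
gcloud compute target-vpn-gateways create my-vpn-gw --network=my-vpc --region=us-central1
# Forwarding rules steer ESP and IKE (UDP 500/4500) traffic arriving at that IP to the gateway
gcloud compute forwarding-rules create fr-esp --region=us-central1 \
    --address=vpn-gw-ip --ip-protocol=ESP --target-vpn-gateway=my-vpn-gw
gcloud compute forwarding-rules create fr-udp500 --region=us-central1 \
    --address=vpn-gw-ip --ip-protocol=UDP --ports=500 --target-vpn-gateway=my-vpn-gw
gcloud compute forwarding-rules create fr-udp4500 --region=us-central1 \
    --address=vpn-gw-ip --ip-protocol=UDP --ports=4500 --target-vpn-gateway=my-vpn-gw
# b) Tunnel: policy-based, so the local/remote traffic selectors define what is reachable
gcloud compute vpn-tunnels create tunnel-to-onprem --region=us-central1 \
    --peer-address=203.0.113.10 --shared-secret=MY_SHARED_SECRET --ike-version=2 \
    --target-vpn-gateway=my-vpn-gw \
    --local-traffic-selector=10.128.0.0/20 --remote-traffic-selector=192.168.0.0/24
# c) Routing: send on-premises-bound traffic through the tunnel
gcloud compute routes create route-to-onprem --network=my-vpc \
    --destination-range=192.168.0.0/24 --next-hop-vpn-tunnel=tunnel-to-onprem \
    --next-hop-vpn-tunnel-region=us-central1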
Important: remember to open the firewall on your GCP VPC to receive traffic from your on-premises IP ranges, as the default implied ingress rule will block it (see the example rule after the quoted defaults below).
The implied allow egress rule: An egress rule whose action is allow, destination is 0.0.0.0/0, and priority is the lowest possible (65535) lets any instance send traffic to any destination.
The implied deny ingress rule: An ingress rule whose action is deny, source is 0.0.0.0/0, and priority is the lowest possible (65535) protects all instances by blocking incoming traffic to them.
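For example, an ingress allow rule for the on-premises range could look like this (the network name, protocols, and CIDR are assumptions):

# Allow traffic arriving from the on-premises range over the VPN tunnel
gcloud compute firewall-rules create allow-from-onprem \
    --network=my-vpc --direction=INGRESS --action=ALLOW \
    --rules=tcp,udp,icmp --source-ranges=192.168.0.0/24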
The answer was simpler than I thought.
My question was:
How do I find the IP address of the gateway to my Google Cloud network (VPC), and how do I supply that IP when creating the VPN connection?
The answer is simply to fill out the "Create a VPN connection" page. It automatically sets up whatever IP you get/choose in the "IP Address" field as the gateway. I did NOT need to configure this IP address to work as a gateway. Simply getting it assigned in this step is enough. Google does the rest behind the scenes.
You need to distinguish between the gateway IP address and the local IP range of the VPN tunnel.
The gateway IP address is the IP of the gateway where all the packets from your on-premises network arrive, encapsulated and encrypted.
The local IP range of the VPN tunnel is the range of IPs that can be reached through the tunnel. By default this is all the private IP addresses of your GCP network.
Create a NAT gateway [1] with Kubernetes Engine and Compute Engine Network Routes to route outbound traffic from an existing GKE cluster through the NAT Gateway instance.
Use that NAT gateway IP address to create a VPN connection to the remote peer gateway.
[1] https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
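As a rough sketch of that instance-based approach (the route, tag, and instance names are placeholders; the NAT instance itself must have IP forwarding enabled and an iptables MASQUERADE rule, as described in [1]):

# Route tagged GKE nodes' outbound traffic through the NAT gateway instance
gcloud compute routes create gke-egress-via-nat --network=my-vpc \
    --destination-range=0.0.0.0/0 --next-hop-instance=nat-gateway-vm \
    --next-hop-instance-zone=us-central1-a --tags=gke-node --priority=800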