GCP open firewall only to cloud shell - google-cloud-platform

Is there a way in GCP to explicitly allow a firewall rule only from Cloud Shell? All the GCP demos and videos add a rule allowing port 22 from 0.0.0.0/0 in order to SSH to the instance from Cloud Shell.
However, is there a way to restrict that access to Cloud Shell only - either using Cloud Shell's IP range or its service account?

Google does not publish the public IP address range for Cloud Shell.
VPC firewall rules allow specifying the service account of the source and target. However, Cloud Shell does not use a service account. Cloud Shell uses the identity of the person logged into the Google Cloud Console, which means OAuth 2 user credentials, and user credentials are not supported for VPC firewall rules.
My recommendation is to use TCP forwarding and tunnel SSH through IAP (Identity-Aware Proxy). Google makes this easy in the Cloud SDK CLI.
Open a Cloud Shell in the Google Cloud Console. Then run this command:
gcloud compute ssh NAME_OF_VM_INSTANCE --tunnel-through-iap
This also works for VM instances that do not have public IP addresses.
The Identity-Aware Proxy CIDR netblock is 35.235.240.0/20. Create a VPC firewall rule that allows SSH traffic from this block. With no broader allow rule in place, this blocks public SSH traffic and only allows authorized traffic through Identity-Aware Proxy.
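For example, such a rule could be created from Cloud Shell like this (the rule name and the "default" network are assumptions; adjust to your setup):

```shell
# Allow SSH only from the IAP TCP-forwarding netblock.
# "allow-ssh-from-iap" and the "default" network are placeholders.
gcloud compute firewall-rules create allow-ssh-from-iap \
    --network default \
    --direction INGRESS \
    --action ALLOW \
    --rules tcp:22 \
    --source-ranges 35.235.240.0/20
```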

Google has published the detailed info in this article - Configuring secure remote access for Compute Engine VMs
From the admin console, click Security then select Identity-Aware Proxy.
If you haven’t used Cloud IAP before, you’ll need to configure the OAuth screen:
Configure the consent screen to only allow internal users in your domain, and click Save.
Next, you need to define users who are allowed to use Cloud IAP to connect remotely. Add a user to the “IAP-secured Tunnel User” role on the resource you’d like to connect to.
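Granting that role can also be done with gcloud; here is a minimal sketch at project level (the project ID and user address are placeholders):

```shell
# Grant the IAP-secured Tunnel User role so this user can tunnel through IAP.
# PROJECT_ID and the member address are placeholders.
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member=user:alice@example.com \
    --role=roles/iap.tunnelResourceAccessor
```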
Then, connect to the machine via the ssh button in the web UI or gcloud.
When using the web UI, notice the URL parameter useAdminProxy=true.
Tip: If you don’t have gcloud installed locally, you can also use Cloud Shell:
gcloud beta compute ssh {VM-NAME} --tunnel-through-iap
You should now be connected! You can verify that you don’t have internet connectivity by attempting to ping out. 8.8.8.8 (Google Public DNS) is a good address to try this with.

Related

Connecting Google Cloud Run Service to Google Cloud SQL database

I have 2 google cloud services:
Google Cloud Run Service (Node Js / Strapi)
Google Cloud SQL Service (Mysql)
I have added the Cloud SQL connection to the Cloud Run service from the UI, and the Cloud SQL instance has a public IP. On top of that, I have added the Run service IP to the Authorised networks of the SQL instance.
If I try to connect from another server (external to Google Cloud), I can easily connect to the Cloud SQL instance and execute queries.
But if I try to connect from inside the Cloud Run service with exactly the same settings (IP, database_name, etc.), my connection hangs and I get a timeout error in the logs...
How do I properly allow Cloud SQL to accept connections from Cloud Run?
I looked for other answers here, but they all look very old (around 2015).
There are 3 ways to access your database:
Use the built-in feature. In this case, you don't need to specify the IP address; a Unix socket is opened to communicate with the database, as described in the documentation.
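A sketch of deploying with the built-in connection (the service name, image, and instance connection name are placeholders):

```shell
# Attach the Cloud SQL instance to the Cloud Run service; the database is then
# reachable over the Unix socket /cloudsql/PROJECT_ID:REGION:INSTANCE_NAME.
gcloud run deploy my-service \
    --image gcr.io/PROJECT_ID/my-image \
    --add-cloudsql-instances PROJECT_ID:REGION:INSTANCE_NAME
```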
Use the Cloud SQL private IP. This time, there is no need to configure a connection in the Cloud Run service; you won't use it because you will use the IP, not the Unix socket. This solution requires 2 things:
Firstly, attach your database to your VPC and give it a private IP.
Then, route the private IP traffic of Cloud Run through your VPC. For this you have to create a serverless VPC connector and attach it to the Cloud Run service.
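These two routing steps can be sketched as follows (the connector name, network, region, and IP range are placeholders):

```shell
# Create a serverless VPC connector in the same region as the service.
gcloud compute networks vpc-access connectors create my-connector \
    --network default \
    --region europe-west1 \
    --range 10.8.0.0/28

# Attach it to the Cloud Run service; private IP traffic is then routed
# through the VPC, so the service can reach the Cloud SQL private IP.
gcloud run deploy my-service \
    --image gcr.io/PROJECT_ID/my-image \
    --vpc-connector my-connector
```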
Use the Cloud SQL public IP. This time again, there is no need to configure a connection in the Cloud Run service; you will use the IP, not the Unix socket. This requires more steps (and is less secure):
Route all the egress traffic of Cloud Run through your VPC. For this you have to create a serverless VPC connector and attach it to the Cloud Run service.
Deploy your Cloud Run service with the serverless VPC connector and the egress connectivity parameter set to "all".
Then create a Cloud NAT to route all of the VPC connector's IP range traffic to a single IP (or set of IPs). (The link is to the Cloud Functions documentation, but it works in exactly the same way.)
Finally, authorize the Cloud NAT IP(s) in the Cloud SQL authorized networks.
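A sketch of those steps (all names and the region are placeholders, and a serverless VPC connector named my-connector is assumed to exist):

```shell
# Route all egress from the service through the VPC connector.
gcloud run deploy my-service \
    --image gcr.io/PROJECT_ID/my-image \
    --vpc-connector my-connector \
    --vpc-egress all-traffic

# Give the VPC a Cloud NAT so egress leaves through a known set of IPs,
# which you can then add to the Cloud SQL authorized networks.
gcloud compute routers create my-router --network default --region europe-west1
gcloud compute routers nats create my-nat \
    --router my-router \
    --region europe-west1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges
```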
In your case, you have whitelisted the Cloud Run IP, but it's a shared IP (other services can use the same one, so be careful!) and it's not always the same; there is a pool of IP addresses used by Google Cloud.

connecting to VM instance having no external IP

I am trying to connect to a Google Cloud VM instance that has no external IP address via Cloud Shell and the Cloud SDK.
The Google documentation says that we can connect to it using IAP:
Connecting through IAP: refer using IAP
a) Grant the roles/iap.tunnelResourceAccessor role to the user that wants to connect to the VM.
b) Connect to the VM using the command below:
gcloud compute ssh instance-name --zone zone
OR
Using IAP for TCP forwarding: refer using TCP forwarding
We can also connect by setting an ingress firewall rule for the IP range 35.235.240.0/20 with port tcp:22
and selecting the IAM role Cloud IAP > IAP-Secured Tunnel User.
What's the difference between these two approaches, and what's the difference between these two separate IAM roles?
roles/iap.tunnelResourceAccessor
IAP-secured Tunnel User
I am new to cloud so please bear with my basic knowledge.
It's exactly the same thing. Look at this page
IAP-Secured Tunnel User (roles/iap.tunnelResourceAccessor)
You have the display name of the role, IAP-Secured Tunnel User, which you see in the GUI, and the technical name of the role, roles/iap.tunnelResourceAccessor, which you have to use in scripts and the CLI.
The link mentioned in the question ("refer using IAP") actually points to the
Connecting to instances that do not have external IP addresses > Connecting through a bastion host.
Connection through a bastion host is another method apart from access via IAP.
As described in the document Connecting to instances that do not have external IP addresses > Connecting through IAP,
IAP's TCP forwarding feature wraps an SSH connection inside HTTPS and then sends it to the remote instance.
Therefore both parts of the question (before the OR and after it) belong to the same access method: connecting using Identity-Aware Proxy for TCP forwarding. Hence the answer to the first question is "no difference", because all of that describes how IAP TCP forwarding works, and those are the steps to set it up and use it:
1. Create a firewall rule that:
applies to all VM instances that you want to be accessible by using IAP;
allows ingress traffic from the IP range 35.235.240.0/20 (this range contains all IP addresses that IAP uses for TCP forwarding);
allows connections to all ports that you want to be accessible by using IAP TCP forwarding, for example, port 22 for SSH.
2. Grant permissions to use IAP:
Use GCP Console or gcloud to add a role IAP-Secured Tunnel User (roles/iap.tunnelResourceAccessor) to users.
Note: Users with Owner access to a project always have permission to use IAP for TCP forwarding.
3. Connect to the target VM with one of the following tools:
GCP Console: use the SSH button in the Cloud Console;
gcloud compute ssh INSTANCE_NAME
There's an important explanation of how IAP TCP forwarding is invoked for accessing a VM instance without Public IP. See Identity-Aware Proxy > Doc > Using IAP for TCP forwarding:
NOTE: if the instance doesn't have a public IP address, the connection automatically uses IAP TCP tunneling. If the instance does have a public IP address, the connection uses the public IP address instead of IAP TCP tunneling.
You can use the --tunnel-through-iap flag so that gcloud compute ssh always uses IAP TCP tunneling.
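For example (the instance name and zone are placeholders):

```shell
# Force the IAP tunnel even if the instance has a public IP.
gcloud compute ssh my-instance --zone us-central1-a --tunnel-through-iap
```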
As already noted by guillaume blaquiere, roles/iap.tunnelResourceAccessor and IAP-secured Tunnel User are not different IAM roles, but the role name and the role title of the same role. There is one more resource that presents this in a convenient form:
Cloud IAM > Doc > Understanding roles > Predefined roles > Cloud IAP roles

How do I SSH tunnel to a remote server whilst remaining on my machine?

I have a Kubernetes cluster to administer which is in its own private subnet on AWS. To allow us to administer it, we have a bastion server on our public subnet. Tunnelling directly through to our cluster is easy. However, we need our deployment machine to establish a tunnel and execute commands against the Kubernetes server, such as running Helm and kubectl. Does anyone know how to do this?
Many thanks,
John
In AWS
Scenario 1
By default, this API server endpoint is public to the internet, and access to the API server is secured using a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access Control (RBAC).
If that's the case, you can run the kubectl commands from your Concourse server (which has internet access) using the kubeconfig file provided; if you don't have the kubeconfig file, follow these steps.
Scenario 2
when you have the private cluster endpoint enabled (which seems to be your case):
When you enable endpoint private access for your cluster, Amazon EKS creates a Route 53 private hosted zone on your behalf and associates it with your cluster's VPC. This private hosted zone is managed by Amazon EKS, and it doesn't appear in your account's Route 53 resources. In order for the private hosted zone to properly route traffic to your API server, your VPC must have enableDnsHostnames and enableDnsSupport set to true, and the DHCP options set for your VPC must include AmazonProvidedDNS in its domain name servers list. For more information, see Updating DNS Support for Your VPC in the Amazon VPC User Guide.
Either you can modify your private endpoint Steps here OR Follow these Steps
Probably there are simpler ways to get it done, but the first solution that comes to my mind is setting up simple SSH port forwarding.
Assuming that you have SSH access to both machines, i.e. Concourse has SSH access to Bastion and Bastion has SSH access to the cluster, it can be done as follows:
First, set up so-called local SSH port forwarding on Bastion (described pretty well here):
ssh -L <kube-api-server-port>:localhost:<kube-api-server-port> ssh-user@<kubernetes-cluster-ip-address-or-hostname>
Now you can access your kubernetes api from Bastion by:
curl localhost:<kube-api-server-port>
however it still isn't what you need. Now you need to forward it to your Concourse machine. On Concourse, run:
ssh -L <kube-api-server-port>:localhost:<kube-api-server-port> ssh-user@<bastion-server-ip-address-or-hostname>
From now you have your kubernetes API available on localhost of your Concourse machine so you can e.g. access it with curl:
curl localhost:<kube-api-server-port>
or incorporate it in your .kube/config.
Let me know if it helps.
You can also make such a tunnel more persistent. More on that can be found here.
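As a side note, a reasonably recent OpenSSH client can collapse the two hops above into a single command with a jump host; the hostnames, user, and port below are placeholders:

```shell
# Forward the Kubernetes API port to localhost in one step,
# jumping through the bastion with -J (ProxyJump).
ssh -J ssh-user@bastion.example.com \
    -L 6443:localhost:6443 \
    ssh-user@cluster-node.internal
```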

Can you force SSH in browser to tunnel through IAP for instances with an external IP?

I have some Compute Engine instances with external IPs that have firewall rules blocking SSH. These instances also have internal IPs, with firewall rules whitelisting SSH for the IAP netblock (although the IAP help in the console incorrectly says I need to add a rule due to not having enough resource information, but I digress).
A related comment seems to indicate that SSH in browser will not use IAP if there's an external IP, but I wasn't sure if there was a workaround.
I can use the Google Cloud SDK to SSH into the instances with gcloud compute ssh <instance> --tunnel-through-iap, however is there a way to force the same via the browser so I can easily log in on the go?
The related comment is correct.
The document on ‘Using Cloud IAP for TCP forwarding’ describes that you can only use the SSH button in the GCP Console if the VM is configured to only have an internal IP.
There isn’t a workaround for the scenario you described but you can always check out advanced SSH methods should they work better for you.

I want to create an in-house system. About GCE Firewall

I want to create an in-house system with GCE. I want to allow HTTP and SSH connections only for people in the company, and block everyone else. What should I do with the firewall?
By default, a Google Cloud project you create in Google Cloud Platform comes with the default firewall rules:
default-allow-icmp – allows ICMP from any source to all instances in the network. The ICMP protocol is mostly used to ping a target.
default-allow-internal – allows connectivity between instances on any port.
default-allow-rdp – allows RDP sessions to connect to Windows servers from any source.
default-allow-ssh – allows SSH sessions to connect to UNIX servers from any source.
You can create firewall rules in combination with network tags so the VM instances with this associated tag will be the target of your firewall rule. Moreover, you can combine multiple ports in a single rule.
Here below there is an example to allow HTTP and SSH connections via gcloud command in the Cloud Shell (alternatively, you can use the GCP graphical interface):
gcloud compute firewall-rules create allow-ssh-and-http --network default --allow tcp:22,80 --direction ingress --priority 1000 --target-tags ssh-and-http --source-ranges [CIDR_RANGE]
Afterwards, you have to add the network tag to the specific GCE instance.
gcloud compute instances add-tags [INSTANCE-NAME] --zone [ZONE] --tags ssh-and-http
If you wish to have a more granular access control, you have to set the proper permissions for each user or service account via IAM & Admin.
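Note that the default-allow-ssh rule listed above permits SSH from any source, so to actually restrict access to your company's range you will likely also want to remove (or narrow) the default rules; a sketch, assuming the default network rules still exist:

```shell
# Delete the permissive defaults so only the tagged rule scoped to your
# company CIDR allows SSH/RDP into the network.
gcloud compute firewall-rules delete default-allow-ssh default-allow-rdp --quiet
```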