I have a VM with 2 NICs. For all intents and purposes, it's a VPN server that takes connection requests on one interface and then forwards traffic out to the other interface.
Periodically, I need to change the IP on the second interface, which is easily done via the web interface. I'd like to make this change using GCP scripting tools to make the process less manual.
I have managed to automate all steps except updating the access-config. This is because both interfaces have the same access-config name ("External NAT"). I've been unable to find a way to rename or recreate this access-config name, nor have I found any workaround.
Any input would be greatly appreciated.
- accessConfigs:
  - kind: compute#accessConfig
    name: External NAT
    natIP: ##.##.##.##
    networkTier: STANDARD
    type: ONE_TO_ONE_NAT
  fingerprint: ==========
  kind: compute#networkInterface
  name: nic0
  network: https://www.googleapis.com/compute/v1/projects/#######/global/networks/inbound
  networkIP: 10.#.#.#
  subnetwork: https://www.googleapis.com/compute/v1/projects/#######/regions/northamerica-northeast1/subnetworks/inbound
- accessConfigs:
  - kind: compute#accessConfig
    name: External NAT
    natIP: ##.##.##.##
    networkTier: STANDARD
    type: ONE_TO_ONE_NAT
  fingerprint: =========
  kind: compute#networkInterface
  name: nic1
  network: https://www.googleapis.com/compute/v1/projects/#######/global/networks/outbound
  networkIP: 10.0.2.3
  subnetwork: https://www.googleapis.com/compute/v1/projects/#######/regions/northamerica-northeast1/subnetworks/outbound
I believe (though I'm really not certain!) that you must delete and then re-create: you can't update an existing access config in place to change its IP using gcloud.
Someone else please confirm!
PLEASE try this on a sacrificial instance before you use it on the production instance
Thus:
PROJECT=[[YOUR-PROJECT]]
ZONE=[[YOUR-ZONE]]
INSTANCE=[[YOUR-INSTANCE]]
INTERFACE=[[YOUR-INTERFACE]] # Perhaps "nic1"
# Show what we have currently
gcloud compute instances describe ${INSTANCE} \
--zone=${ZONE} --project=${PROJECT} \
--format="yaml(networkInterfaces)"
# Delete the "External NAT" for ${INTERFACE}
gcloud compute instances delete-access-config ${INSTANCE} \
--zone=${ZONE} --project=${PROJECT} \
--network-interface=${INTERFACE} \
--access-config-name="External NAT"
# Show what we have currently **without** "External NAT" for ${INTERFACE}
gcloud compute instances describe ${INSTANCE} \
--zone=${ZONE} --project=${PROJECT} \
--format="yaml(networkInterfaces)"
# Create a new "External NAT" for ${INTERFACE}
# Include --address=ADDRESS if you have one
gcloud compute instances add-access-config ${INSTANCE} \
--zone=${ZONE} --project=${PROJECT} \
--network-interface=${INTERFACE} \
--access-config-name="External NAT"
# Show what we have currently with a **new** "External NAT" for ${INTERFACE}
gcloud compute instances describe ${INSTANCE} \
--zone=${ZONE} --project=${PROJECT} \
--format="yaml(networkInterfaces)"
Update
This was bugging me.
You can filter in the describe commands by the ${INTERFACE} value:
gcloud compute instances describe ${INSTANCE} \
--zone=${ZONE} --project=${PROJECT} \
--format="yaml(networkInterfaces[].filter(name:${INTERFACE})"
Because gcloud has its own filtering/formatting syntax, it's often better to format as JSON and then use jq. Using jq, we can filter by ${INTERFACE} and return only the 'External NAT' access config:
gcloud compute instances describe ${INSTANCE} \
--zone=${ZONE} --project=${PROJECT} \
--format="json" \
jq -r ".networkInterfaces[]|select(.name==\"${INTERFACE}\")|.accessConfigs[]|select(.name==\"External NAT\")"
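Building on that pipeline, a small wrapper can return just the natIP for one interface. The name `current_nat_ip` is mine, and it assumes both gcloud and jq are on the PATH:

```shell
# Hypothetical helper: print the natIP of the "External NAT" access
# config on a given interface. Assumes gcloud (authenticated) and jq.
# Usage: current_nat_ip INSTANCE ZONE PROJECT INTERFACE
current_nat_ip() {
  local instance=$1 zone=$2 project=$3 interface=$4
  gcloud compute instances describe "${instance}" \
    --zone="${zone}" --project="${project}" \
    --format=json |
  jq -r --arg nic "${interface}" \
    '.networkInterfaces[]
     | select(.name == $nic)
     | .accessConfigs[]
     | select(.name == "External NAT")
     | .natIP'
}
```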
Related
I'm required to decompose the following gcloud bash script into separate lines, with each command starting with gcloud. The script I have to decompose is:
gcloud compute instances create myinstance-1 --project=[PROJECT_ID] --zone=us-central1-c --machine-type=n1-standard-1 --network-interface=subnet=default,no-address --metadata=enable-oslogin=true --maintenance-policy=MIGRATE --provisioning-model=STANDARD --service-account=[SERVICE_ACCOUNT]-compute@developer.gserviceaccount.com --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append --create-disk=auto-delete=yes,boot=yes,device-name=myinstance-1,image=projects/debian-cloud/global/images/debian-10-buster-v20221102,mode=rw,size=10,type=projects/[PROJECT_ID]/zones/us-central1-c/diskTypes/pd-balanced --no-shielded-secure-boot --shielded-vtpm --shielded-integrity-monitoring --reservation-affinity=an
I did manage to do the first part as follows:
gcloud compute instances create myinstance-1 --project=[PROJECT_ID] --zone=us-central1-c --machine-type=n1-standard-1 --network-interface=subnet=default
This part works, but I'm clueless about the rest. Any help would be much appreciated.
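If the goal is simply readability rather than separate commands, the posted one-liner also decomposes mechanically: keep the command on the first line and end every other line with a backslash. Placeholders are kept exactly as posted; note that the service-account flag normally contains an @, which often gets mangled to # when pasted.

```shell
gcloud compute instances create myinstance-1 \
  --project=[PROJECT_ID] \
  --zone=us-central1-c \
  --machine-type=n1-standard-1 \
  --network-interface=subnet=default,no-address \
  --metadata=enable-oslogin=true \
  --maintenance-policy=MIGRATE \
  --provisioning-model=STANDARD \
  --service-account=[SERVICE_ACCOUNT]-compute@developer.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append \
  --create-disk=auto-delete=yes,boot=yes,device-name=myinstance-1,image=projects/debian-cloud/global/images/debian-10-buster-v20221102,mode=rw,size=10,type=projects/[PROJECT_ID]/zones/us-central1-c/diskTypes/pd-balanced \
  --no-shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring \
  --reservation-affinity=an
```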
I did figure it out, and it worked (thank you all):
#!/bin/bash
# Utility virtual machine
gcloud compute disks create disk-1 --size=10GB --zone=us-west1-a
gcloud compute instances create myinstance-1 \
  --zone=us-west1-a --machine-type=n1-standard-1
gcloud compute instances attach-disk myinstance-1 --disk=disk-1 \
  --zone=us-west1-a
gcloud compute networks create myinstance-1-network
gcloud compute firewall-rules create myinstance-1-firewall \
  --network myinstance-1-network --allow tcp,udp,icmp --source-ranges 0.0.0.0/0
# Windows virtual machine
gcloud compute disks create disk-2 --size=10GB --type=pd-ssd \
  --zone=us-west1-a
gcloud compute instances create windows-instance --zone=us-west1-a \
  --machine-type=n1-standard-2 --image=windows-server-2016-dc-core-v20221109 \
  --image-project=windows-cloud --boot-disk-size=100GB
gcloud compute instances attach-disk windows-instance \
  --disk=disk-2 --zone=us-west1-a
gcloud compute networks create windows-instance-network
gcloud compute firewall-rules create windows-instance-firewall \
  --network windows-instance-network --allow tcp,udp,icmp --source-ranges 0.0.0.0/0
# Custom virtual machine
gcloud compute disks create disk-3 --size=10GB --zone=us-west1-a
gcloud compute instances create myinstance-3 --zone=us-west1-a \
  --machine-type=e2-medium --image=debian-10-buster-v20220118 \
  --image-project=debian-cloud --boot-disk-size=10GB
gcloud compute instances attach-disk myinstance-3 --disk=disk-3 \
  --zone=us-west1-a
gcloud compute networks create myinstance-3-network
gcloud compute firewall-rules create myinstance-3-firewall --network myinstance-3-network \
  --allow tcp,udp,icmp --source-ranges 0.0.0.0/0
I am attempting to install the AWS Distro for OpenTelemetry (ADOT) into my EKS cluster.
https://docs.aws.amazon.com/eks/latest/userguide/adot-reqts.html
I am following this guide to create the service account for the IAM role (the IRSA technique in AWS):
https://docs.aws.amazon.com/eks/latest/userguide/adot-iam.html
When I run this eksctl command:
eksctl create iamserviceaccount \
--name adot-collector \
--namespace monitoring \
--cluster <MY CLUSTER> \
--attach-policy-arn arn:aws:iam::aws:policy/AmazonPrometheusRemoteWriteAccess \
--attach-policy-arn arn:aws:iam::aws:policy/AWSXrayWriteOnlyAccess \
--attach-policy-arn arn:aws:iam::aws:policy/CloudWatchAgentServerPolicy \
--approve \
--override-existing-serviceaccounts
I am getting this output:
2 existing iamserviceaccount(s) (hello-world/default,monitoring/adot-collector) will be excluded
iamserviceaccount (monitoring/adot-collector) was excluded (based on the include/exclude rules)
metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
no tasks
This Kubernetes service account does not exist in the target namespace or in any other:
k get sa adot-collector -n monitoring
k get serviceAccounts -A | grep adot
Expected output:
1 iamserviceaccount (monitoring/adot-collector) was included (based on the include/exclude rules)
metadata of serviceaccounts that exist in Kubernetes will be updated, as --override-existing-serviceaccounts was set
...
created serviceaccount "monitoring/adot-collector"
When I check the AWS Console under CloudFormation, I see that the stack completed, with the message "IAM role for serviceaccount "monitoring/adot-collector" [created and managed by eksctl]".
What can I do to troubleshoot this? Why is the Kubernetes service account not getting built?
This was resolved after discovering there was a ValidatingWebhookConfiguration blocking the creation of service accounts without a specific label. Temporarily disabling the webhook allowed the stack to run to completion, and the SA was created.
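If you hit something similar, a quick way to look for such a webhook is to list the cluster's admission webhook configurations. This is just a sketch (the helper name is mine; it assumes kubectl is pointed at the affected cluster):

```shell
# Hypothetical troubleshooting helper: list admission webhooks that
# can intercept object creation, to spot one blocking ServiceAccounts.
find_admission_webhooks() {
  kubectl get validatingwebhookconfigurations -o name
  kubectl get mutatingwebhookconfigurations -o name
}
```

Once you have a suspect, `kubectl get <name> -o yaml` shows its rules and namespace selectors.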
Can I use google cloud's identity aware proxy to connect to the gRPC endpoint on a TPU worker? By "TPU worker" I mean that I am creating a TPU with no associated compute instance (using gcloud compute tpus create) and I wish to connect to the gRPC endpoint found by running gcloud compute tpus describe my-tpu:
ipAddress: <XXX>
port: <YYY>
I can easily set up an SSH tunnel to connect to this endpoint from my local machine but I would like to use IAP to create that tunnel instead. I have tried the following:
gcloud compute start-iap-tunnel my-tpu 8470
but I get
- The resource 'projects/.../zones/.../instances/my-tpu' was not found
This makes sense because a TPU is a not a compute instance, and the command gcloud compute start-iap-tunnel expects an instance name.
Is there any way to use IAP to tunnel to an arbitrary internal IP address? Or more generally, is there any other way that I can use IAP to create a tunnel to my TPU worker?
Yes, it can be done using the internal IP address of the TPU worker. Here is an example:
gcloud alpha compute start-iap-tunnel \
10.164.0.2 8470 \
--local-host-port="localhost:$LOCAL_PORT" \
--region $REGION \
--network $SUBNET \
--project $PROJECT
Be aware that Private Google Access must be enabled in the TPU subnet, which can be easily done with the following command:
gcloud compute networks subnets update $SUBNET \
--region=$REGION \
--enable-private-ip-google-access
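To confirm the flag took effect, you can read the setting back. A minimal sketch, assuming gcloud is authenticated (the helper name `check_pga` is mine):

```shell
# Hypothetical check: prints True once Private Google Access is
# enabled on the subnet. Usage: check_pga SUBNET REGION
check_pga() {
  gcloud compute networks subnets describe "$1" \
    --region="$2" \
    --format="value(privateIpGoogleAccess)"
}
```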
Just as a reference, here is an example of how to create a TPU worker with no external IP address:
gcloud alpha compute tpus tpu-vm create \
--project $PROJECT \
--zone $ZONE \
--internal-ips \
--version tpu-vm-tf-2.6.0 \
--accelerator-type v2-8 \
--network $SUBNET \
$NAME
AUTHENTICATION
To successfully authenticate the endpoint source of the IAP tunnel, you need to add the SSH keys to the project's metadata by following these steps:
Check whether you already have SSH keys generated on your endpoint:
ls -1 ~/.ssh/*
#=>
/. . ./id_rsa
/. . ./id_rsa.pub
If you don't have any, you can generate them with the command: ssh-keygen -t rsa -f ~/.ssh/id_rsa -C id_rsa.
Add the SSH keys to your project's metadata:
gcloud compute project-info add-metadata \
--metadata ssh-keys="$(gcloud compute project-info describe \
--format="value(commonInstanceMetadata.items.filter(key:ssh-keys).firstof(value))")
$(whoami):$(cat ~/.ssh/id_rsa.pub)"
#=>
Updated [https://www.googleapis.com/compute/v1/projects/$GCP_PROJECT_NAME].
Assign the iap.tunnelResourceAccessor role to the user:
gcloud projects add-iam-policy-binding $GCP_PROJECT_NAME \
--member=user:$USER_ID \
--role=roles/iap.tunnelResourceAccessor
This is related to the following questions, which are outdated:
Possible to get static IP address for Google Cloud Functions?
Google Cloud - Egress IP / NAT / Proxy for google cloud functions
Currently GCP has Serverless VPC Access, which allows you to route all traffic through a VPC connector and set up Cloud NAT to get static IP addresses.
I have followed this guide https://cloud.google.com/functions/docs/networking/network-settings#associate-static-ip using the region us-east4, but external requests from my Cloud Function always timed out.
I'm not sure whether this is a bug or whether I have missed something.
Edit:
To make sure I have followed everything, I did all the steps using gcloud commands where possible. These commands are copied from GCP's guides.
Setting project id for future use
PROJECT_ID=my-test-gcf-vpc-nat
Go to Console and enable billing
Set up a VPC and a test VM to test Cloud NAT
gcloud services enable compute.googleapis.com \
--project $PROJECT_ID
gcloud compute networks create custom-network1 \
--subnet-mode custom \
--project $PROJECT_ID
gcloud compute networks subnets create subnet-us-east-192 \
--network custom-network1 \
--region us-east4 \
--range 192.168.1.0/24 \
--project $PROJECT_ID
gcloud compute instances create nat-test-1 \
--image-family debian-9 \
--image-project debian-cloud \
--network custom-network1 \
--subnet subnet-us-east-192 \
--zone us-east4-c \
--no-address \
--project $PROJECT_ID
gcloud compute firewall-rules create allow-ssh \
--network custom-network1 \
--source-ranges 35.235.240.0/20 \
--allow tcp:22 \
--project $PROJECT_ID
Created IAP SSH permissions using Console
Test the network config; the VM should not have internet access without Cloud NAT:
gcloud compute ssh nat-test-1 \
--zone us-east4-c \
--command "curl -s ifconfig.io" \
--tunnel-through-iap \
--project $PROJECT_ID
command responded with connection timed out
Set up Cloud NAT
gcloud compute routers create nat-router \
--network custom-network1 \
--region us-east4 \
--project $PROJECT_ID
gcloud compute routers nats create nat-config \
--router-region us-east4 \
--router nat-router \
--nat-all-subnet-ip-ranges \
--auto-allocate-nat-external-ips \
--project $PROJECT_ID
Test the network config again; the VM should have internet access with Cloud NAT:
gcloud compute ssh nat-test-1 \
--zone us-east4-c \
--command "curl -s ifconfig.io" \
--tunnel-through-iap \
--project $PROJECT_ID
command responded with IP address
Created VPC Access Connector
gcloud services enable vpcaccess.googleapis.com \
--project $PROJECT_ID
gcloud compute networks vpc-access connectors create custom-network1-us-east4 \
--network custom-network1 \
--region us-east4 \
--range 10.8.0.0/28 \
--project $PROJECT_ID
gcloud compute networks vpc-access connectors describe custom-network1-us-east4 \
--region us-east4 \
--project $PROJECT_ID
Added permissions for Google Cloud Functions Service Account
gcloud services enable cloudfunctions.googleapis.com \
--project $PROJECT_ID
PROJECT_NUMBER=$(gcloud projects describe $PROJECT_ID --format="value(projectNumber)")
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member=serviceAccount:service-$PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com \
--role=roles/viewer
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member=serviceAccount:service-$PROJECT_NUMBER@gcf-admin-robot.iam.gserviceaccount.com \
--role=roles/compute.networkUser
There are suggestions I should add additional firewall rules and service account permissions
# Additional Firewall Rules
gcloud compute firewall-rules create custom-network1-allow-http \
--network custom-network1 \
--source-ranges 0.0.0.0/0 \
--allow tcp:80 \
--project $PROJECT_ID
gcloud compute firewall-rules create custom-network1-allow-https \
--network custom-network1 \
--source-ranges 0.0.0.0/0 \
--allow tcp:443 \
--project $PROJECT_ID
# Additional permission; this service account actually has the Editor role already.
gcloud projects add-iam-policy-binding $PROJECT_ID \
--member=serviceAccount:$PROJECT_ID@appspot.gserviceaccount.com \
--role=roles/compute.networkUser
Deployed test Cloud Functions
index.js
const publicIp = require('public-ip')

exports.testVPC = async (req, res) => {
  const v4 = await publicIp.v4()
  const v6 = await publicIp.v6()
  console.log('ip', [v4, v6])
  return res.end(JSON.stringify([v4, v6]))
}

exports.testNoVPC = exports.testVPC
# Cloud Function with VPC Connector
gcloud functions deploy testVPC \
--runtime nodejs10 \
--trigger-http \
--vpc-connector custom-network1-us-east4 \
--egress-settings all \
--region us-east4 \
--allow-unauthenticated \
--project $PROJECT_ID
# Cloud Function without VPC Connector
gcloud functions deploy testNoVPC \
--runtime nodejs10 \
--trigger-http \
--region us-east4 \
--allow-unauthenticated \
--project $PROJECT_ID
The Cloud Function without VPC Connector responded with IP address
https://us-east4-my-test-gcf-vpc-nat.cloudfunctions.net/testNoVPC
The Cloud Function with VPC Connector timed out
https://us-east4-my-test-gcf-vpc-nat.cloudfunctions.net/testVPC
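One thing worth ruling out when only the connector-backed function times out is the connector itself: if its state is not READY, all egress routed through it goes nowhere. A minimal check (the helper name is mine; assumes gcloud is authenticated):

```shell
# Hypothetical diagnostic: print a Serverless VPC Access connector's
# state (READY is healthy). Usage: connector_state CONNECTOR REGION
connector_state() {
  gcloud compute networks vpc-access connectors describe "$1" \
    --region="$2" \
    --format="value(state)"
}
```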
1. Configure a sample Cloud NAT setup with Compute Engine. Use the Compute Engine VM to test whether your Cloud NAT settings were done successfully.
2. Configure Serverless VPC Access. Make sure you create the VPC connector on the custom-network1 made in step 1.
3. Create a Google Cloud Function.
a. Under Networking, choose the connector you created in step 2 and select "Route all traffic through the VPC connector".
import requests
import json
from flask import escape

def hello_http(request):
    response = requests.get('https://stackoverflow.com')
    print(response.headers)
    return 'Accessing stackoverflow from cloud function: {}!'.format(response.headers)
The region for the Cloud NAT, VPC connector, and Cloud Function is us-central1.
4. Test the function to see if you have access to the internet:
Accessing stackoverflow from cloud function: {'Cache-Control': 'private', 'Content-Type': 'text/html; charset=utf-8', 'Content-Encoding': 'gzip', 'X-Frame-Options': 'SAMEORIGIN', 'X-Request-Guid': 'edf3d1f8-7466-4161-8170-ae4d6e615d5c', 'Strict-Transport-Security': 'max-age=15552000', 'Feature-Policy': "microphone 'none'; speaker 'none'", 'Content-Security-Policy': "upgrade-insecure-requests; frame-ancestors 'self' https://stackexchange.com", 'Content-Length': '26391', 'Accept-Ranges': 'bytes', 'Date': 'Sat, 28 Mar 2020 19:03:17 GMT', 'Via': '1.1 varnish', 'Connection': 'keep-alive', 'X-Served-By': 'cache-mdw17354-MDW', 'X-Cache': 'MISS', 'X-Cache-Hits': '0', 'X-Timer': 'S1585422197.002185,VS0,VE37', 'Vary': 'Accept-Encoding,Fastly-SSL', 'X-DNS-Prefetch-Control': 'off', 'Set-Cookie': 'prov=78ecd1a5-54ea-ab1d-6d19-2cf5dc44a86b; domain=.stackoverflow.com; expires=Fri, 01-Jan-2055 00:00:00 GMT; path=/; HttpOnly'}!
Success! Now you can specify a static IP address for NAT.
Check whether the Cloud NAT routers were created in the same VPC used by the Serverless VPC Access connector.
Also check that the Cloud Function is deployed in the same region as the Cloud Routers used by Cloud NAT.
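Those two checks can be scripted. A sketch, assuming gcloud is authenticated (the helper name and the network/region arguments are mine):

```shell
# Hypothetical helper: print NAT routers on a network and connectors
# in a region side by side, so a VPC or region mismatch is obvious.
# Usage: show_nat_and_connector_regions NETWORK REGION
show_nat_and_connector_regions() {
  local network=$1 region=$2
  gcloud compute routers list \
    --filter="network:${network}" --format="value(name,region)"
  gcloud compute networks vpc-access connectors list \
    --region="${region}" --format="value(name,network)"
}
```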
Following https://course.fast.ai/start_gcp.html with this setup:
export IMAGE_FAMILY="pytorch-latest-gpu" # or "pytorch-latest-cpu" for non-GPU instances
export ZONE="us-west2-b" # budget: "us-west1-b"
export INSTANCE_NAME="my-fastai-instance"
export INSTANCE_TYPE="n1-highmem-8" # budget: "n1-highmem-4"
# budget: 'type=nvidia-tesla-k80,count=1'
gcloud compute instances create $INSTANCE_NAME \
--zone=$ZONE \
--image-family=$IMAGE_FAMILY \
--image-project=deeplearning-platform-release \
--maintenance-policy=TERMINATE \
--accelerator="type=nvidia-tesla-p100,count=1" \
--machine-type=$INSTANCE_TYPE \
--boot-disk-size=200GB \
--metadata="install-nvidia-driver=True" \
--preemptible
Got this error:
(gcloud.compute.instances.create) Could not fetch resource:
- The resource 'projects/xxxxxx/zones/us-west2-b/acceleratorTypes/nvidia-tesla-p100' was not found
Anyone?
I tried replicating the same steps you followed from the tutorial and got the same error.
According to Google's documentation, NVIDIA-TESLA-P100 is only available in these zones:
us-west1-a
us-west1-b
us-central1-c
us-central1-f
us-east1-b
us-east1-c
europe-west1-b
europe-west1-d
europe-west4-a
asia-east1-a
asia-east1-c
australia-southeast1-c
And you may have selected us-west2-b, which is not on that list.
Therefore, I would just change your zone to one of the previously mentioned ones.
To get this list in a more programmatic way, using Cloud SDK for example, you could issue:
gcloud compute accelerator-types list --filter "name=nvidia-tesla-p100" --format "table[box,title=Zones](zone:sort=1)" 2>/dev/null
The error you are reporting is caused because this GPU is not available in the zone "us-west2-b"; you can review which GPUs are available in each zone in this official documentation.
In this case, according to the region you are using, you can use:
us-west1-a
us-west1-b
Regards.