Trying to copy some data into a newly created GCP Filestore with the gcloud CLI.
gcloud compute scp --recurse /somedirectore/somefile somefilestore-1:/somemount
gcloud seems unable to find the instance:
ERROR: (gcloud.compute.scp) Could not fetch resource:
- The resource 'projects/k8-spark/zones/us-central1-a/instances/somefilestore-1' was not found
The Filestore instance does exist. Does compute scp actually work with Filestore instances? The documentation seems to think so:
https://cloud.google.com/filestore/docs/copying-data
Any help much appreciated!
The error indicates that "somefilestore-1" is not the name of a Compute Engine (GCE) instance; gcloud compute scp expects a GCE instance, not a Filestore instance. You can find the instance names in the Compute Engine console [1]. If your instances were created by Kubernetes Engine, their names will likely start with "gke-<your_K8_cluster_name>".
Some sections of the documentation refer to GCE instances as "VM instances"; note that the Cloud Filestore file share gets mounted on a Compute Engine VM instance, and the copy happens to or from that VM.
[1] https://console.cloud.google.com/compute/instances
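Per the copying-data documentation linked above, the workflow is to mount the Filestore share on a Compute Engine VM and then copy files to or from that VM. A rough sketch of what that could look like, assuming a GCE client VM named my-client-vm, a Filestore IP of 10.0.0.2, and a share named somemount (all of these are placeholders, not values from your project):
# On the client VM (Debian-based image assumed): install an NFS client and mount the share
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/somemount
sudo mount -t nfs 10.0.0.2:/somemount /mnt/somemount
# From your workstation: copy to the client VM, which writes into the mounted share
gcloud compute scp --recurse /somedirectore/somefile my-client-vm:/mnt/somemount
Note that gcloud compute scp logs in as your user on the VM, so that user needs write permission on the mounted directory.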
I have created an NFS server using Google Cloud's Filestore and have linked it to a Kubernetes Engine cluster. I would like to know how to access the data stored on Filestore.
Thanks, looking forward to suggestions.
You can only get operation information using the gcloud tool. To use the gcloud tool, you must either install the Cloud SDK or use the Cloud Shell that is built into the Cloud Console.
You can get a list of Filestore operations by running the operations list command:
gcloud filestore operations list \
    [--project=project-id] \
    [--zone=zone]
where:
project-id is the project ID of the Cloud project that contains the Filestore instance. You can skip this flag if the Filestore instance is in the gcloud default project. You can set the default project by running:
gcloud config set project project-id
zone is the zone where the Filestore instance resides. Run the gcloud filestore zones list command to get a list of supported zones. You can skip this flag if the Filestore instance is in the gcloud default zone. You can set the default zone by running:
gcloud config set filestore/zone zone
The command returns a response similar to the following example:
OPERATION_NAME LOCATION TYPE TARGET STATUS CREATE_TIME DURATION
operation-1505929956434-559a2a41c217c-231e6a94-a4b6a803 us-central1-c create nfs1 DONE 2017-09-20T17:52:36 <1S
operation-1505931180862-559a2ed176d0d-a0d70ae0-35ef2e71 europe-west1-b create nfs2 DONE 2017-09-20T18:13:00 <1S
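If you only need the operation names, for example to feed into the describe command below, the standard gcloud --format flag works here as well (a small sketch; the zone is just taken from the example output above):
gcloud filestore operations list --zone=us-central1-c --format="value(name)"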
You can get details of a specific Filestore operation by running the operations describe command:
gcloud filestore operations describe operation-name \
    [--project=project-id] \
    [--zone=zone]
where:
operation-name is the name of the Filestore operation. Use the operations list command to get a list of operation names.
project-id is the project ID of the Cloud project that contains the Filestore instance. You can skip this flag if the Filestore instance is in the gcloud default project. You can set the default project by running:
gcloud config set project project-id
zone is the zone where the Filestore instance resides. Run the gcloud filestore zones list command to get a list of supported zones. You can skip this flag if the Filestore instance is in the gcloud default zone. You can set the default zone by running:
gcloud config set filestore/zone zone
The command returns a response similar to the following:
done: true
metadata:
  '#type': type.googleapis.com/google.cloud.common.OperationMetadata
  apiVersion: v1beta1
  createTime: '2017-10-09T22:18:09.347400Z'
  endTime: '2017-10-09T22:20:04.392199183Z'
  target: projects/filestore-test/locations/us-central1-c/instances/filer3
  verb: delete
name: projects/filestore-test/locations/us-central1-c/operations/operation-1507587489330-55b2490c4f394-faece090-1c0e16db
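Since done is a top-level field in this output, you can also pull out just that field when scripting, for example to check whether an operation has finished (the operation name and zone below are the ones from the list example above):
gcloud filestore operations describe \
    operation-1505929956434-559a2a41c217c-231e6a94-a4b6a803 \
    --zone=us-central1-c --format="value(done)"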
Getting operation information documentation: https://cloud.google.com/filestore/docs/getting-operation-information
You can also monitor your Filestore instances and set up alerts for low disk space and low backups quota using Cloud Monitoring.
So, I created a GCP compute-optimized VM and gave it full access to all Cloud APIs as well as full HTTP and HTTPS traffic access. I now want to create a TPU from inside this VM, i.e. run the following command:
gcloud compute tpus create node-1 --zone us-central1-a --project $PROJECT_NAME --version 2.5.0 --accelerator-type v3-8 --no-async
and it constantly errors with:
ERROR: (gcloud.compute.tpus.create) PERMISSION_DENIED: Permission 'tpu.nodes.create' denied on 'projects/$PROJECT_NAME/locations/us-central1-a/nodes/node-1'
I only ever get this error in the VM; when I run this command on my local machine with my local install of gcloud, everything works fine. It is really weird because all other commands like gcloud list and gsutil work fine, but creating TPUs doesn't. I even tried adding a service account key to ~/.credentials and setting it in my bashrc:
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/.credentials/service-account.googleapis.com.json
but this doesn't solve the problem. I also tried the execution-groups command:
gcloud compute tpus execution-groups create --name=node-1 --zone=us-central1-a --tf-version=2.5.0 --accelerator-type=v3-8 --tpu-only --project $PROJECT_NAME
but this also fails.
Below are two possible reasons why you get the Permission denied error:
The VM's service account does not have "Allow full access to all Cloud APIs".
The account does not have the TPU Admin role.
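Before changing anything, you can verify both of these from outside the VM; a quick hedged check (the instance name and zone are placeholders):
gcloud compute instances describe my-vm --zone=us-central1-a \
    --format="yaml(serviceAccounts)"
If the scopes listed there do not include https://www.googleapis.com/auth/cloud-platform, the VM was not created with "Allow full access to all Cloud APIs".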
I tried to create a TPU using your command and got the same error before modifying the service account. Here is the output showing that the TPU was created:
$ gcloud compute tpus create node-1 --zone us-central1-a --project $PROJECT_NAME --version 2.5.0 --accelerator-type v3-8 --no-async
Create request issued for: [node-1]
Waiting for operation [projects/project-id/locations/us-central1-a/operations/operation-1634780772429-5ced30f39edf6-105ccd39-96d571fa] to complete...done.
Created tpu [node-1].
Try creating the TPU again after following these instructions:
a. Make sure to Enable TPU API
b. Go to the VM instances page and stop the VM before editing its service account.
c. Refresh the VM instance page and click Edit.
d. At the bottom of the instance details page, select the Compute Engine service account, choose "Allow full access to all Cloud APIs", and Save.
(As recommended by @John Hanley)
e. On your instance page, check and note your service account.
f. Go to the IAM page, find that service account, and click Edit.
g. Click Add Role, select TPU Admin, and Save.
h. Start your VM instance and SSH into it.
i. Run this command
gcloud compute tpus create node-1 --zone us-central1-a --project $PROJECT_NAME --version 2.5.0 --accelerator-type v3-8 --no-async
I encountered an error at first because a TPU already existed in the zone I entered. Make sure your TPU has not already been created in the same zone.
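If you prefer the CLI to the console for steps a, d, and g above, roughly equivalent gcloud commands would be (a sketch; the instance name, zone, and service account email are placeholders, and the role assumes the predefined TPU Admin role):
# Step a: enable the TPU API
gcloud services enable tpu.googleapis.com --project=$PROJECT_NAME
# Steps b and d: with the VM stopped, switch its scopes to full API access
gcloud compute instances set-service-account my-vm --zone=us-central1-a \
    --scopes=cloud-platform
# Step g: grant the TPU Admin role to the VM's service account
gcloud projects add-iam-policy-binding $PROJECT_NAME \
    --member="serviceAccount:123456789-compute@developer.gserviceaccount.com" \
    --role="roles/tpu.admin"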
I am new to Google Cloud. I have seen a similar question but I couldn't understand the answer. It would be great if someone could give easy instructions for tackling this problem.
I have two Linux VM instances under the same project on Google Cloud. I want to copy files from one VM to the other.
I tried the copy-files command. It threw the error "deprecated, use scp instead".
I tried "gcloud compute scp user#vm2_instance_name:vm2_instance_file_path"
Other answers say to use a "service account". I read about them, created one, and also created a key in .json format, but I'm not sure what to do after that. Appreciate any suggestions.
If you are on one of the instances, you don't need anything Google Cloud specific. Simply use scp to copy the file from one VM to the other.
If you don't have customized users on the VMs, you can omit the user part:
scp <my local file path> <vm name>:<destination path>
About service accounts: if your VMs are in Google Cloud, they have the Compute Engine default service account <projectNumber>-compute@developer.gserviceaccount.com.
You can customize this service account if you want. This service account is what identifies the VM when it performs API calls or gcloud commands.
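Regarding the .json key from the question: if you want gcloud on some machine to act as that service account rather than as your own user, one way is to activate the key (a sketch; the file path is a placeholder):
gcloud auth activate-service-account --key-file=/path/to/your-key.json
On a Compute Engine VM this is usually unnecessary, because gcloud already authenticates as the VM's attached service account described above.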
Google's documentation addresses this. Personally, I have always preferred using gcloud compute scp, as it provides a simple way of performing transfers without necessarily taking away the complexities and features that other transfer options provide.
In any case, in the documentation provided you will most likely find the methods that are more in line with what you want.
This is the solution that worked for me:
1. gcloud compute instances list
NAME        ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP    EXTERNAL_IP    STATUS
instance-1  us-central1-a  n2-standard-8               10.128.0.60    34.66.177.187  RUNNING
instance-2  us-central1-a  n1-standard-1               10.128.15.192  34.69.216.153  STAGING
2. gcloud compute ssh instance-1 --zone=us-central1-a
3. user@instance-1:~$ ls
myfile
4. user@instance-1:~$ gcloud compute scp myfile user@instance-2:myfile
5. gcloud compute ssh instance-2 --zone=us-central1-a
6. user@instance-2:~$ ls
myfile
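A small variation on step 4: if you run the copy from outside the VMs (or your gcloud configuration has no default zone), gcloud compute scp also accepts an explicit --zone, e.g. reusing the names above:
gcloud compute scp myfile user@instance-2:myfile --zone=us-central1-a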
I'm now learning Google Cloud Platform instance creation. As part of learning, I'm trying to launch an RHEL 6 instance on an f1-micro machine type in the us-east1-b zone.
Here is the gcloud command I've used:
gcloud compute --project=<project-id> instances create cldinit-vm --zone=us-east1-b --machine-type=f1-micro--subnet=default --network-tier=PREMIUM --metadata-from-file startup-script=initscript.sh --maintenance-policy=MIGRATE --service-account=<account-id>#developer.gserviceaccount.com --scopes=https://www.googleapis.com/auth/devstorage.read_only,https://www.googleapis.com/auth/logging.write,https://www.googleapis.com/auth/monitoring.write,https://www.googleapis.com/auth/servicecontrol,https://www.googleapis.com/auth/service.management.readonly,https://www.googleapis.com/auth/trace.append --min-cpu-platform="Intel Broadwell" --tags=http-server --image=rhel-6-v20181210 --image-project=rhel-cloud --boot-disk-size=10GB --boot-disk-type=pd-standard --boot-disk-device-name=cldinit-vm --labels=name=cloudinit-vm
When I run the command, it shows the error below:
ERROR: (gcloud.compute.instances.create) Could not fetch resource:
- Invalid value for field 'resource.machineType': 'https://www.googleapis.com/compute/v1/projects/<project-id>/zones/us-east1-b/machineTypes/f1-micro--subnet=default'.
Machine type with name 'f1-micro--subnet=default' does not exist in zone 'us-east1-b'.
I have two questions:
I could not change the subnet setting from "default", as it is the only option available under "Network" on the instance launch page. So could anyone help resolve the issue, please?
Since I'm learning GCP, I launched the CLI command in Cloud Shell directly from the link located at the bottom of the GCP Compute Engine instance launch page. Does a correction need to be made by Google so that it provides a working command?
As part of learning, I found that there was a missing space between the option value f1-micro and --subnet.
So here is the corrected command snippet:
gcloud compute --project=<project-id> instances create cldinit-vm --zone=us-east1-b --machine-type=f1-micro --subnet=default ....
I am trying to create a backup snapshot of my GCP instance. However, every time I create a snapshot and boot it up, the contents of the /home/ folder from my original instance seem to be missing.
Any idea why this is happening and how to fix it?
Could you give more details about the steps you follow to create the instance from the snapshot?
In my case I've used these commands and I have my home directory on the new instance:
gcloud compute --project=your-project-name disks snapshot disk_name_of_your_instance --zone=zone_of_your_instance --snapshot-names=name_of_your_snapshot
gcloud compute --project your-project-name disks create "your-new-instance" --size "10" --zone "us-central1-c" --source-snapshot "name_of_your_snapshot" --type "pd-standard"
gcloud beta compute --project=your-project-name instances create your-new-instance --zone=us-central1-c --machine-type=n1-standard-1 --subnet=your-subnet
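One thing to double-check with the third command: as written, it does not reference the disk created from the snapshot, so the new instance boots from a fresh image rather than from your snapshot data. A hedged variant that attaches the snapshot-based disk as the boot disk (names are the same placeholders used above):
gcloud beta compute --project=your-project-name instances create your-new-instance \
    --zone=us-central1-c --machine-type=n1-standard-1 --subnet=your-subnet \
    --disk=name=your-new-instance,boot=yes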