I have created an NFS server using Google Cloud's Filestore and have linked it to a Kubernetes Engine cluster. I would like to know how to access the data stored on Filestore.
Thanks, I look forward to your suggestions.
You can only get operation information using the gcloud tool. To use the gcloud tool, you must either install the Cloud SDK or use the Cloud Shell that is built into the Cloud Console.
You can get a list of Filestore operations by running the operations list command:
gcloud filestore operations list \
    [--project=project-id] \
    [--zone=zone]
where:
project-id is the project ID of the Cloud project that contains the Filestore instance. You can skip this flag if the Filestore instance is in the gcloud default project. You can set the default project by running:
gcloud config set project project-id
zone is the zone where the Filestore instance resides. Run the gcloud filestore zones list command to get a list of supported zones. You can skip this flag if the Filestore instance is in the gcloud default zone. You can set the default zone by running:
gcloud config set filestore/zone zone
The command returns a response similar to the following example:
OPERATION_NAME LOCATION TYPE TARGET STATUS CREATE_TIME DURATION
operation-1505929956434-559a2a41c217c-231e6a94-a4b6a803 us-central1-c create nfs1 DONE 2017-09-20T17:52:36 <1S
operation-1505931180862-559a2ed176d0d-a0d70ae0-35ef2e71 europe-west1-b create nfs2 DONE 2017-09-20T18:13:00 <1S
You can get details of a specific Filestore operation by running the operations describe command:
gcloud filestore operations describe operation-name \
    [--project=project-id] \
    [--zone=zone]
where:
operation-name is the name of the Filestore operation. Use the operations list command to get a list of operation names.
project-id is the project ID of the Cloud project that contains the Filestore instance. You can skip this flag if the Filestore instance is in the gcloud default project. You can set the default project by running:
gcloud config set project project-id
zone is the zone where the Filestore instance resides. Run the gcloud filestore zones list command to get a list of supported zones. You can skip this flag if the Filestore instance is in the gcloud default zone. You can set the default zone by running:
gcloud config set filestore/zone zone
The command returns a response similar to the following:
done: true
metadata:
  '@type': type.googleapis.com/google.cloud.common.OperationMetadata
  apiVersion: v1beta1
  createTime: '2017-10-09T22:18:09.347400Z'
  endTime: '2017-10-09T22:20:04.392199183Z'
  target: projects/filestore-test/locations/us-central1-c/instances/filer3
  verb: delete
name: projects/filestore-test/locations/us-central1-c/operations/operation-1507587489330-55b2490c4f394-faece090-1c0e16db
Getting operation information documentation: https://cloud.google.com/filestore/docs/getting-operation-information
You can also monitor your Filestore instances with Cloud Monitoring and set up alerts for low disk space and low backups quota.
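For a quick CLI view of an instance's configured capacity, tier, and state (outside of the Cloud Monitoring dashboards), a minimal sketch, assuming a hypothetical instance name and zone:
# list Filestore instances in the project, then inspect one of them
gcloud filestore instances list
gcloud filestore instances describe my-filestore --zone=us-central1-c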
So, I created a GCP compute-optimized VM and gave it full access to all Cloud APIs as well as full HTTP and HTTPS traffic access. I now want to create a TPU from inside this VM, i.e. run the following command:
gcloud compute tpus create node-1 --zone us-central1-a --project $PROJECT_NAME --version 2.5.0 --accelerator-type v3-8 --no-async
and it constantly errors with:
ERROR: (gcloud.compute.tpus.create) PERMISSION_DENIED: Permission 'tpu.nodes.create' denied on 'projects/$PROJECT_NAME/locations/us-central1-a/nodes/node-1'
I only ever get this error in the VM, but when I run this command on my local machine with my local install of gcloud, everything works fine. It is really weird because all other commands like gcloud list and gsutil all work fine, but creating TPUs doesn't work. I even tried adding a service account into ~/.credentials and setting that in my bashrc:
export GOOGLE_APPLICATION_CREDENTIALS=$HOME/.credentials/service-account.googleapis.com.json
but this doesn't solve the problem. I even tried with the execution groups as well:
gcloud compute tpus execution-groups create --name=node-1 --zone=us-central1-a --tf-version=2.5.0 --accelerator-type=v3-8 --tpu-only --project $PROJECT_NAME
but this also fails.
Below are two possible reasons why you get the Permission denied error:
The service account does not have "Allow full access to all Cloud APIs".
The account doesn't have the TPU Admin role.
I tried to create a TPU using your command and got the same error before modifying the service account. After modifying it, the TPU was created; here is the output:
$ gcloud compute tpus create node-1 --zone us-central1-a --project $PROJECT_NAME --version 2.5.0 --accelerator-type v3-8 --no-async
Create request issued for: [node-1]
Waiting for operation [projects/project-id/locations/us-central1-a/operations/operation-1634780772429-5ced30f39edf6-105ccd39-96d571fa] to complete...done.
Created tpu [node-1].
Try creating the TPU again after following these instructions:
a. Make sure the TPU API is enabled.
b. Go to the VM instances page and stop the VM before editing its service account.
c. Refresh the VM instance page and click Edit.
d. At the bottom of the instance details page, select the Compute Engine service account and "Allow full access to all Cloud APIs", then Save
(as recommended by @John Hanley).
e. On your instance page, check and note your service account.
f. Go to the IAM page, find that service account, and click Edit.
g. Click Add Role, select TPU Admin, and Save (a gcloud equivalent is sketched at the end of this answer).
h. Start your VM instance and SSH into it.
i. Run this command:
gcloud compute tpus create node-1 --zone us-central1-a --project $PROJECT_NAME --version 2.5.0 --accelerator-type v3-8 --no-async
I encountered an error at first because a TPU with the same name already existed in the zone I entered. Make sure a TPU with the same name has not already been created in that zone.
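If you prefer the gcloud CLI over the Console for step g, here is a minimal sketch for granting the TPU Admin role to the default Compute Engine service account (PROJECT_NUMBER is a placeholder for your project's numeric ID):
gcloud projects add-iam-policy-binding $PROJECT_NAME \
    --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
    --role="roles/tpu.admin"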
I have multiple configurations created via the gcloud init command. It does not give me the option to set a Default Compute Region or a Default Compute Zone.
When I run, gcloud config configurations list, the default zone and region are empty.
gcloud config configurations ... only provides the following commands (no update command):
activate
create
delete
describe
list
Can't I set a default region and zone when initializing a configuration? If not, how can I update certain fields of a gcloud configuration, e.g. COMPUTE_DEFAULT_ZONE or COMPUTE_DEFAULT_REGION?
There is an associated gcloud command, gcloud config set, that is used to set or update properties in the currently active configuration. This means you can create a configuration, activate it, and then run gcloud config set commands to change its settings. Looking at the docs, both compute/region and compute/zone are documented properties for setting the default region and default zone respectively.
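For example, a minimal sketch assuming a hypothetical configuration named my-config:
gcloud config configurations create my-config
gcloud config configurations activate my-config
gcloud config set compute/region us-central1
gcloud config set compute/zone us-central1-a
# COMPUTE_DEFAULT_ZONE and COMPUTE_DEFAULT_REGION are now populated
gcloud config configurations list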
As Kolban mentioned, if you look at the settable configs you will find:
gcloud config set compute/zone [YOUR ZONE NAME HERE]
EXAMPLE: (gcloud config set compute/zone asia-east1-b)
Additionally, I like to set the region at the same time:
gcloud config set compute/region [YOUR REGION NAME HERE]
EXAMPLE: (gcloud config set compute/region asia-east1)
I am new to Google Cloud. I have seen a similar question but I couldn't understand the answer. It would be great if someone could give easy instructions for tackling this problem.
I have two linux VM instances under same project on google cloud. I want to copy files from one VM to other VM.
I tried the copy-files command. It threw the error "deprecated, use scp instead".
I tried "gcloud compute scp user@vm2_instance_name:vm2_instance_file_path"
Other answers say to use a "service account". I read about them, created one, and created a key in .json format as well, but I am not sure what to do after that. I appreciate any suggestions.
If you are on one of the instances, you don't need anything Google Cloud specific. Simply use scp to copy the file from one VM to the other.
If you don't have customized users on the VM, you can omit the user part:
scp <my local file path> <vm name>:<destination path>
About service accounts: if your VMs are in Google Cloud, they have the Compute Engine default service account, <projectNumber>-compute@developer.gserviceaccount.com.
You can customize this service account if you want. This service account identifies the VM when it performs API calls or runs gcloud commands.
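To check which service account a given VM runs as, a hedged example with a hypothetical VM name and zone:
gcloud compute instances describe my-vm --zone=us-central1-a \
    --format="value(serviceAccounts[].email)"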
Google's documentation addresses this. Personally, I have always preferred gcloud compute scp, as it provides a simple way of performing transfers without necessarily giving up the features that other transfer options provide.
In any case, in the documentation provided you will most likely find the method that is most in line with what you want.
This is the solution that worked for me:
1. gcloud compute instances list
NAME        ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP    EXTERNAL_IP    STATUS
instance-1  us-central1-a  n2-standard-8               10.128.0.60    34.66.177.187  RUNNING
instance-2  us-central1-a  n1-standard-1               10.128.15.192  34.69.216.153  STAGING
2. gcloud compute ssh instance-1 --zone=us-central1-a
3. user@instance-1:~$ ls
myfile
4. user@instance-1:~$ gcloud compute scp myfile user@instance-2:myfile
5. gcloud compute ssh instance-2 --zone=us-central1-a
6. user@instance-2:~$ ls
myfile
Trying to copy some data into a newly created GCP Filestore with the gcloud CLI.
gcloud compute scp --recurse /somedirectore/somefile somefilestore-1:/somemount
gcloud seems unable to find the instance:
ERROR: (gcloud.compute.scp) Could not fetch resource:
- The resource 'projects/k8-spark/zones/us-central1-a/instances/somefilestore-1' was not found
The filestore instance does exist. Wondering if compute scp actually works with filestores? The documentation seems to think so:
https://cloud.google.com/filestore/docs/copying-data
Any help much appreciated!
The error indicates that "somefilestore-1" is not the name of a Compute Engine (GCE) instance; gcloud compute scp expects a GCE instance, not a Filestore instance. You can find your instance names in Compute Engine [1]. If your instances were created by Kubernetes Engine, they will likely start with "gke-< your_K8_cluster_name >".
Some sections of the documentation refer to GCE instances as "VM instances"; note that in the guide you linked, the Cloud Filestore file share is mounted on a Compute Engine VM instance, and that VM is what you copy to.
[1] https://console.cloud.google.com/compute/instances
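If the goal is to copy data into the Filestore share, a rough sketch of that workflow, assuming a hypothetical VM named client-vm in us-central1-a, a Filestore IP of 10.0.0.2, and a share named somemount:
# first, SSH into a Compute Engine VM that can reach the Filestore instance
gcloud compute ssh client-vm --zone=us-central1-a
# on that VM, install an NFS client and mount the file share
sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/filestore
sudo mount 10.0.0.2:/somemount /mnt/filestore
sudo chmod go+rw /mnt/filestore
# then, back on your workstation, copy into the VM's mount point
gcloud compute scp --recurse /path/to/somedir client-vm:/mnt/filestore --zone=us-central1-a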
I am very new to Google Cloud. I was able to set up a WordPress site and am working on it now. However, it appears that my VM instance is using asia-east1-a for its zone. I was able to change the region and zone using gcloud commands, with the following output:
$ gcloud config list compute/region
Your active configuration is: [default]
[compute]
region = us-east4
$ gcloud config list compute/zone
Your active configuration is: [default]
[compute]
zone = us-east4-b
How does one change the active default to the newly set zone? I would like my instance to run on the North East Coast of the USA.
Thanks,
T
Use the commands below in Cloud Shell.
To list the available regions:
$ gcloud compute regions list
To change the compute region (here I select us-east4):
$ gcloud config set compute/region us-east4
Updated property [compute/region].
$ gcloud config list compute/region
[compute]
region = us-east4
In a similar way, you can change compute/zone.
As described here, project-info metadata can be added per project to specify the default regions and zones. This is used only at the time of initializing gcloud (using gcloud init).
In addition, gcloud supports locally setting the default region and zone using the compute/region and compute/zone configurations (which is what you seem to have added to your local gcloud config). When these properties are set, they will override any configuration set in the project-info.
Since you have set these properties according to your requirements, I think your defaults are set as long as you're using that gcloud configuration.
Do remember that you can always override the zone and region using the --zone and --region arguments to any of the gcloud commands.
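For example, with a hypothetical VM name, the flag overrides the configured default for just that one command:
gcloud compute instances create test-vm --zone=us-east4-b --machine-type=e2-small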
Moving an instance from one zone to another
Changing the default zone/region does not move any of the existing VMs to a new zone. If you wish to move a VM from one zone to another, you can take a snapshot of the persistent disks, launch a new instance in the desired zone using the snapshot and cleanup the resources used by the original VM.
You can do this using either gcloud or follow a set of steps manually to achieve the same result.
gcloud compute instances move INSTANCE_NAME --zone SOURCE_ZONE --destination-zone DESTINATION_ZONE
In detail, Compute Engine will:
Take snapshots of persistent disks attached to the source instance.
Create copies of the persistent disks in the destination zone.
For instances moving within the same region, temporarily promote any ephemeral external IP addresses assigned to the instance to a static external IP address.
Create a new instance in the destination zone.
Attach the newly created persistent disks to your new instance.
Assign an external IP address to the new instance. If necessary, demote the address back to an ephemeral external IP address.
Delete the snapshots, original disks, and original instance.
If you want to manually move your instance, you can also perform these steps by hand.
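A minimal sketch of the manual route, with hypothetical names and assuming the boot disk is named after the VM:
# snapshot the source disk, recreate it in the target zone, then rebuild the VM
gcloud compute disks snapshot my-vm --zone=us-central1-a --snapshot-names=my-vm-snap
gcloud compute disks create my-vm-disk --source-snapshot=my-vm-snap --zone=us-east4-b
gcloud compute instances create my-vm-new --zone=us-east4-b --disk=name=my-vm-disk,boot=yes
# clean up the original instance once the new one is verified
gcloud compute instances delete my-vm --zone=us-central1-a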
If you don't remember the specific commands, another option is to change the region and zone in the gcloud configuration file, which is located at:
~/.config/gcloud/configurations/config_default
It contains a structure like the one below:
[core]
account = my-account#my-domain
project = my-project
[compute]
zone = asia-south1-a
region = asia-south1
After changing the region to us-central1 you'll get the following output:
gcloud config list compute/region
[compute]
region = us-central1
gcloud config configurations list
NAME IS_ACTIVE ACCOUNT PROJECT COMPUTE_DEFAULT_ZONE COMPUTE_DEFAULT_REGION
default True my-account#my-domain my-project us-central1-a us-central1
Reference to all GCP regions and zones.
Create an image of the existing instance, then create a new instance in whichever zone you like using the image you created.
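A rough sketch of that, with hypothetical names (stop the source VM first, or pass --force to image a disk that is still attached to a running instance):
gcloud compute images create my-vm-image --source-disk=my-vm --source-disk-zone=asia-east1-a
gcloud compute instances create my-vm-us --image=my-vm-image --zone=us-east4-b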
gcloud compute instances move INSTANCE_NAME --destination-zone=DESTINATION_ZONE [--async] [--zone=ZONE] [GCLOUD_WIDE_FLAG …]
gcloud compute instances move facilitates moving a Compute Engine virtual machine from one zone to another.
EXAMPLE:
gcloud compute instances move compute-instance-1 --zone us-central1-b --destination-zone us-central1-f