Opening the command terminal from Google Cloud Platform, you're greeted with a project-level shell:
account_name@cloudshell:/ (project_name)$
After starting up a new VM, it's possible to send files from the project-level file system to the VM like so:
account_name@cloudshell:/ (project_name)$ gcloud compute scp --recurse \
> ~/project-file vm-name:~
After SSHing into the new VM, how do you perform the same file transfer from inside the VM? Everything I've tried ends up looking like this:
account_name@vm-name:~$ gcloud compute scp --recurse \
> cloudshell:~/project-file ~
ERROR: (gcloud.compute.scp) Could not fetch resource:
- The resource 'projects/project_name/zones/my_zone/instances/cloudshell' was not found
The gcloud compute scp command does not support Google Cloud Shell as a source or target.
Google has recently added new commands to the "alpha" version of gcloud which support Cloud Shell.
gcloud alpha cloud-shell scp cloudshell:~/REMOTE-DIR localhost:~/LOCAL-DIR
The problem with using this command inside a VM instance is that VMs use service account credentials, while Cloud Shell is created and assigned on a per-user-credential basis: there is a separate Cloud Shell instance for each user ID, created on the fly. With a service account, you cannot tell Cloud Shell which instance you want to interact with. This means a new Cloud Shell instance is created that is not mapped to your user identity.
Note: it looks like the API might support this in the future, but the current implementation does not provide a way to specify either the user name or OAuth credentials.
You need to use User Credentials (OAuth 2.0) to communicate with Cloud Shell. Unless you have a GUI desktop to run a web browser inside your VM instance, you cannot create User Credentials suitable for Google Cloud Shell authentication.
You can either limit yourself to copying files to/from the VM instance using commands run in the Cloud Shell instance, or look at a program I just released that implements a CLI for Cloud Shell. If you choose the second method, authenticate to Cloud Shell from your desktop and then copy the user_credentials.json file to your VM instance with my program. You then have a fairly powerful command-line tool for Cloud Shell interaction.
Google Cloud Shell CLI
Related
I need a Compute Engine instance to import the exact configuration (IP, services, files, etc.) of the original machine, without impacting the frontend if it concerns a web server, for example. While running this clone, I would be able to shut down the original machine to increase its RAM or vCPUs before starting it again and deleting the cloned instance.
The problem is that I want to automate this process, which is why I need the gcloud command. So is there a way to clone an entire GCP instance using the gcloud command or another tool?
This is not possible with gcloud. It is possible with the Cloud Console, but as you can see in this documentation:
Restrictions
You can only use this feature in the Cloud Console; this feature is not supported in the gcloud tool or the API.
What you could do is create similar (though not completely identical) instances from a custom image. Using one, all you have to do is run the following command:
gcloud compute instances create INSTANCE_NAME --image=IMAGE
More details on that command can be found here
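A rough sketch of the full custom-image workflow follows; the instance, zone, and image names are placeholders to replace with your own:

```shell
# Stop the source instance so its boot disk is in a consistent state.
gcloud compute instances stop source-vm --zone=us-central1-a

# Create a custom image from the source instance's boot disk.
gcloud compute images create source-vm-image \
    --source-disk=source-vm --source-disk-zone=us-central1-a

# Create a new, similar instance from that image.
gcloud compute instances create cloned-vm \
    --image=source-vm-image --zone=us-central1-a
```

Note that this copies the disk contents, not runtime configuration such as the external IP, which you would have to reattach separately.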
Does GCP assign a global location (gs://...something...) for the home directory in Cloud Shell? Like buckets are gs://$BUCKET_NAME/, where $BUCKET_NAME is globally unique.
At the shell prompt, all I can see is:
$USERNAME@cloudshell:~ ($PROJECT_ID)
Cloud Shell is globally distributed across multiple Google Cloud Platform regions. When you first connect to Cloud Shell, you will be automatically assigned to the closest available geographical region. You cannot pick your own region and in the event that Cloud Shell does not pick the most optimal region, it will try to migrate your Cloud Shell VM to a closer region when it is not in use.
To view your current region, run the following command from a Cloud Shell session:
$ curl -H "Metadata-Flavor: Google" metadata/computeMetadata/v1/instance/zone
Please note that Cloud Shell provisions 5 GB of free persistent disk storage mounted as your $HOME directory on the virtual machine instance. This storage is on a per-user basis and is available across projects. Unlike the instance itself, this storage does not time out on inactivity. All files you store in your home directory, including installed software, scripts and user configuration files like .bashrc and .vimrc, persist between sessions. Your $HOME directory is private to you and cannot be accessed by other users.
No, the Cloud Shell VM storage is not available as a storage bucket.
But if your intention was to copy data in and out of the Cloud Shell VM storage, you can easily do that.
From the docs:
gcloud cloud-shell scp cloudshell:~/data.txt localhost:~/data.txt
I used Deployment Manager to create a LAMP Stack for phpMyAdmin. Is it possible to access files on the VM from the Google Cloud Shell? If so, how would I navigate to the files in Google Cloud Shell?
When you start Cloud Shell, it provisions a Google Compute Engine virtual machine running a Debian-based Linux operating system. Cloud Shell instances are provisioned on a per-user, per-session basis. The instance persists while your Cloud Shell session is active; after an hour of inactivity, your session terminates and its VM is discarded. For more on usage quotas, refer to the limitations guide.
Yes, you can access your LAMP VM instance from Cloud Shell using the command shown below:
gcloud beta compute ssh --zone "us-central1-a" "vm-name" --project "project-id"
Note: please replace the zone, vm-name and project-id as per your naming conventions.
Please follow the link to the Cloud Shell how-to guides for more information.
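If you want to pull the VM's files into your Cloud Shell home directory rather than just SSH in, gcloud compute scp works from Cloud Shell as well. A sketch, assuming the typical Debian LAMP document root (adjust the path, zone, and names to your deployment):

```shell
# Copy the web root from the LAMP VM into Cloud Shell.
gcloud compute scp --recurse \
    --zone "us-central1-a" --project "project-id" \
    vm-name:/var/www/html ~/lamp-files
```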
I'm trying to execute a Script on a Google VM through Terraform.
First I tried it via Google startup scripts. But startup scripts count as metadata, and metadata is visible in the Google Console, which means anybody with read access can see the script. That is not acceptable.
So I tried to get the script from a storage bucket. But for that I need to attach a service account to the VM so the VM has the rights to access the bucket. Now people who have access to the VM also have access to my script as long as the service account is attached to the VM. In order to "detach" the service account I would have to stop the VM. Also, if I don't want to keep the service account permanently attached, I would have to attach it via a script, which requires another stop and start of the VM. This is probably not possible and also really ugly.
I don't understand how the remote-exec resource works on GCP VMs, because I have to specify a user and password to connect to the VM and then execute the script. But the Windows password needs to be set manually via the Google Console, so I can't specify those things at this point in time.
So does anybody know how I can execute a Script where not anybody has access to my script via Terraform?
Greetings :) and Thanks in advance
I ended up just running a gcloud script that removes the metadata from the VM after the Terraform apply finished. In my GitLab pipeline I call the script in the "after_script" section. Unfortunately the credentials are visible for approximately 3 minutes.
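A minimal sketch of such a cleanup step, assuming the script was passed under the standard startup-script metadata key and using placeholder instance and zone names:

```shell
# Remove the startup script from the VM's metadata once provisioning is done,
# so it is no longer visible in the console.
gcloud compute instances remove-metadata my-vm \
    --zone=us-central1-a --keys=startup-script
```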
I've got a batch job that I want to run on Google Compute Engine on a NetBSD instance. I expected that I could just run shutdown -hp now at the end of the job and the instance would be powered off. But when I do that, it still remains in the running state according to the Google Cloud console and CLI. How do I make a NetBSD virtual machine in Google Cloud shut itself off when it's no longer needed?
Note: the Google Cloud SDK is not available on NetBSD
Normally the command-line option -p will power off the virtual machine. That it does not here indicates an issue/bug with the underlying ACPI code that invokes the ACPI power-off function.
As a workaround use the Google Cloud SDK gcloud command. This command has the added benefit that Google Cloud will force a power off if the instance does not shutdown normally.
Add this command to your script. You may need to install the CLI first.
gcloud compute instances stop INSTANCE_NAME
Another option is to write a program that calls the Google Cloud API to stop the instance. There are examples in most languages, including Go and Python. You do not even need the SDK, since you can call the REST API endpoint with an access token.
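A rough sketch of that REST approach in Python, using only the standard library. The project, zone, and instance names are placeholders, and the access token is assumed to come from the VM's own metadata server:

```python
import json
import urllib.request

def stop_instance_url(project: str, zone: str, instance: str) -> str:
    """Build the Compute Engine REST endpoint that stops an instance."""
    return (f"https://compute.googleapis.com/compute/v1/projects/{project}"
            f"/zones/{zone}/instances/{instance}/stop")

def metadata_token() -> str:
    """Fetch an access token for the VM's service account from the metadata server."""
    req = urllib.request.Request(
        "http://metadata.google.internal/computeMetadata/v1/"
        "instance/service-accounts/default/token",
        headers={"Metadata-Flavor": "Google"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["access_token"]

def stop_instance(project: str, zone: str, instance: str) -> None:
    """POST to the stop endpoint, authorized with the metadata-server token."""
    req = urllib.request.Request(
        stop_instance_url(project, zone, instance),
        method="POST",
        data=b"",
        headers={"Authorization": f"Bearer {metadata_token()}"},
    )
    urllib.request.urlopen(req)

# Example with placeholder names, run from inside the instance at job end:
# stop_instance("my-project", "us-central1-a", "netbsd-batch-vm")
```

The service account attached to the VM needs permission to stop instances (e.g. the Compute Instance Admin role) for the call to succeed.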