I would like to trigger a Python script on a Compute Engine VM from a Cloud Function. I am able to start the VM from the Cloud Function using the client library; however, I am not able to run the 'gcloud ssh' command from the Cloud Function.
Is there a way to achieve this? One option is to use the Python script as the VM startup script, but I am trying to avoid that.
For instance, I want to run the following gcloud CLI command:
gcloud run services delete [SERVICE]
But from a triggered Google Cloud Function.
I've looked in a few places and have found a few similar things:
https://www.googlecloudcommunity.com/gc/Infrastructure-Compute-Storage/Automatic-Resource-Deletion/m-p/172865
https://github.com/PolideaInternal/cats-love-money
Create a Google function from a Google cloud function
But I find them a bit tricky to follow and/or replicate.
The Google Cloud CLI is a Python program. That means a lot of dependencies and a requirement for a shell and OS environment. Cloud Functions does not provide either.
A better option for running the CLI is Cloud Run, which has the additional benefit that you can test the Cloud Run container locally. You will need to wrap the CLI with an HTTP server that responds to requests by executing the CLI.
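A minimal sketch of such a wrapper, assuming Flask and the gcloud CLI are both present in the Cloud Run container image; the /delete-service route, the region, and the request shape are illustrative placeholders, not an official API:

import subprocess

from flask import Flask, request

app = Flask(__name__)

@app.route("/delete-service", methods=["POST"])
def delete_service():
    # Name of the Cloud Run service to delete, taken from the request body.
    service = request.get_json()["service"]
    result = subprocess.run(
        ["gcloud", "run", "services", "delete", service,
         "--region", "us-central1", "--quiet"],  # placeholder region
        capture_output=True, text=True,
    )
    status = 200 if result.returncode == 0 else 500
    return result.stdout + result.stderr, status

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)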
However, most CLI commands can be easily duplicated with the Python SDKs and/or direct REST API calls which are supported by Cloud Functions. This will require a solid understanding of the services, APIs, and language.
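For example, the gcloud run services delete command above can be reproduced with the google-cloud-run client library. A sketch, with the project, region, and service names as placeholders:

from google.cloud import run_v2  # pip install google-cloud-run

def delete_run_service(project_id: str, region: str, service: str) -> None:
    client = run_v2.ServicesClient()
    name = f"projects/{project_id}/locations/{region}/services/{service}"
    # delete_service returns a long-running operation; wait for it to finish.
    operation = client.delete_service(request=run_v2.DeleteServiceRequest(name=name))
    operation.result()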
Can we run the jar in the GCP VM from the Cloud Function?
We thought of deploying the jar in the Cloud Function directly and running it from there, but we encountered execution timeout issues. So is there any way that we can just trigger the jar file in the GCP VM from the Cloud Function?
Thanks in advance
If the only purpose of the VM is to run this process, I would:
Cloud Scheduler calls a Cloud Function.
The Cloud Function starts a VM with a startup script that runs the jar process and then shuts the VM down.
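A rough sketch of the Cloud Function side, assuming the google-cloud-compute client library; the project, zone, and instance names are placeholders:

from google.cloud import compute_v1

def start_batch_vm(request):
    client = compute_v1.InstancesClient()
    # Start the pre-created VM whose startup script runs the jar.
    operation = client.start(
        project="my-project",      # placeholder project ID
        zone="us-central1-a",      # placeholder zone
        instance="batch-runner",   # placeholder instance name
    )
    operation.result()  # wait for the start operation to complete
    return "VM started"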
If you need to pass arguments to the script, you can do it using instance metadata when you create the instance (or while it is already running).
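On the VM side, the startup script can read those arguments from the metadata server before running the jar and powering the machine off. A rough Python sketch; the job-args attribute name and the jar path are placeholders:

import subprocess
import urllib.request

# Custom metadata attributes are exposed by the metadata server under this path.
METADATA_URL = ("http://metadata.google.internal/computeMetadata/v1/"
                "instance/attributes/job-args")  # placeholder attribute name

def read_metadata(url: str) -> str:
    req = urllib.request.Request(url, headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

job_args = read_metadata(METADATA_URL)

# Run the jar with the argument passed through instance metadata.
subprocess.run(["java", "-jar", "/opt/app/job.jar", job_args], check=True)

# Startup scripts run as root, so the VM can power itself off when the job is done.
subprocess.run(["shutdown", "-h", "now"], check=True)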
You can also have a look at this article (which is not supported by Google), where you will find how to start and stop instances using Cloud Functions.
I've got a batch job that I want to run on Google Compute Engine on a NetBSD instance. I expected that I could just run shutdown -hp now at the end of the job and the instance would be powered off, but when I do that it still remains in the running state according to the Google Cloud console and CLI. How do I make a NetBSD virtual machine in Google Cloud shut itself off when it's no longer needed?
Note: the Google Cloud SDK is not available on NetBSD.
Normally the command-line option -p powers off the virtual machine. The fact that it does not here indicates an issue/bug with the underlying ACPI code that invokes the power-off function.
As a workaround, use the Google Cloud SDK gcloud command. This command has the added benefit that Google Cloud will force a power off if the instance does not shut down normally.
Add this command to your script. You may need to install the CLI first.
gcloud compute instances stop INSTANCE_NAME
Another option is to write a program that implements the Google Cloud API to stop the instance. There are examples in most languages including Go and Python. You do not even need the SDK as you can call the REST API endpoint with an Access Token.
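A sketch of the REST approach in Python (available on NetBSD via pkgsrc), using only the standard library; the project, zone, and instance names are placeholders, and the VM's service account needs permission to stop instances:

import json
import urllib.request

METADATA = "http://metadata.google.internal/computeMetadata/v1"

def metadata(path: str) -> str:
    req = urllib.request.Request(METADATA + path,
                                 headers={"Metadata-Flavor": "Google"})
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# Fetch an access token for the VM's service account from the metadata server.
token = json.loads(metadata("/instance/service-accounts/default/token"))["access_token"]

project = "my-project"     # placeholder
zone = "us-central1-a"     # placeholder
instance = "netbsd-batch"  # placeholder

stop_url = (f"https://compute.googleapis.com/compute/v1/projects/{project}"
            f"/zones/{zone}/instances/{instance}/stop")
req = urllib.request.Request(stop_url, data=b"", method="POST",
                             headers={"Authorization": f"Bearer {token}"})
with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200 means the stop request was accepted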
Opening the command terminal from Google Cloud Platform, you're greeted with a project-level shell:
account_name@cloudshell:/ (project_name)$
After starting up a new VM, it's possible to send files from the project-level file system to the VM like so:
account_name@cloudshell:/ (project_name)$ gcloud compute scp --recurse \
> ~/project-file vm-name:~
After SSH-ing into the new VM, how do you perform the same file transfer from inside the VM?
Everything I've tried ends up looking like this:
account_name@vm-name:~$ gcloud compute scp --recurse \
> cloudshell:~/project-file ~
ERROR: (gcloud.compute.scp) Could not fetch resource:
- The resource 'projects/project_name/zones/my_zone/instances/cloudshell' was not found
The gcloud compute scp command does not support Google Cloud Shell as source/target.
Google has recently added new commands to the "alpha" version of gcloud which support Cloud Shell.
gcloud alpha cloud-shell scp cloudshell:~/REMOTE-DIR localhost:~/LOCAL-DIR
The problem with using this command inside a VM instance is that VMs use service account credentials, while Cloud Shell is assigned/created on a per-user credential basis: there is a different Cloud Shell instance for each user ID, created on the fly. With a service account, you cannot tell Cloud Shell which instance you want to interact with, so a new Cloud Shell instance is created that is not mapped to your user identity.
Note: It looks like the API might support this in the future, but the current implementation does not have a method to specify either the user name or OAuth credentials.
You need to use User Credentials (OAuth 2.0) to communicate with Cloud Shell. Unless you have a GUI desktop to run a web browser inside your VM instance, you cannot create User Credentials suitable for Google Cloud Shell authentication.
You can either limit yourself to copying files to/from the VM instance using commands in the Cloud Shell instance, or look at a program that I just released that implements a CLI for Cloud Shell. If you choose the second method, authenticate to Cloud Shell from your desktop and then copy the user_credentials.json file with my program to your VM instance. Then you have a fairly powerful command-line tool for Cloud Shell interaction.
Google Cloud Shell CLI
I am new to Google Cloud Composer. I have some code in Google Cloud Compute Engine,
for example: test.py
Currently I am using Jenkins as my scheduler, and I'm running the code like below:
echo "cd /home/user/src/digital_platform &&/home/user/venvs/bdp/bin/python -m test.test.test" | ssh user#instance-dp
I want to run the same code from google cloud composer.
How can I do that?
Basically I need to ssh to an instance in google cloud and run the code in an automated way using google cloud composer.
It seems that SSHOperator might work for you. This operator is an Airflow feature, not a Cloud Composer feature per se.
The other operator that you might want to take a look at before making your final decision is BashOperator.
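A minimal DAG sketch using SSHOperator, assuming the Airflow SSH provider is installed in the Composer environment and an SSH connection named ssh_dp_instance has been configured (the connection ID, schedule, and dates are placeholders; on older Airflow 1.x the import path is airflow.contrib.operators.ssh_operator instead):

from datetime import datetime

from airflow import DAG
from airflow.providers.ssh.operators.ssh import SSHOperator

with DAG(
    dag_id="run_test_on_vm",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_test = SSHOperator(
        task_id="run_test_py",
        ssh_conn_id="ssh_dp_instance",  # assumed Airflow connection ID
        command=(
            "cd /home/user/src/digital_platform && "
            "/home/user/venvs/bdp/bin/python -m test.test.test"
        ),
    )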
You need to create a DAG (workflow); Cloud Composer schedules only the DAGs that are in the DAGs folder in the environment's Cloud Storage bucket. Each Cloud Composer environment has a web server that runs the Airflow web interface that you can use to manage DAGs.
BashOperator is useful for running command-line programs. I suggest you follow the Cloud Composer Quickstart, which shows you how to create a Cloud Composer environment in the Google Cloud Console and run a simple Apache Airflow DAG.
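For the BashOperator route, a sketch that shells out to gcloud compute ssh from the Composer worker; the zone and instance name are placeholders, and it assumes the environment's service account is allowed to SSH into the VM:

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="run_test_on_vm_bash",
    start_date=datetime(2023, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    run_test = BashOperator(
        task_id="run_test_py",
        bash_command=(
            "gcloud compute ssh instance-dp --zone=us-central1-a "  # placeholder zone
            "--command='cd /home/user/src/digital_platform && "
            "/home/user/venvs/bdp/bin/python -m test.test.test'"
        ),
    )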