I am having issues with 3 VM instances created via Dataflow.
I used a Cloud Function to launch a Dataflow template, which ran to completion,
but the VM instances created for the job are still running and I cannot delete them.
Could anyone help?
Thanks and regards
Because I kicked off the template via a Cloud Function, GCP didn't allow me to shut the instances down; the options were greyed out. However, it did say the instances were in use by a few instance groups, so once I deleted the groups I was able to delete the instances.
The underlying problem turned out to be my job: I had a wait_until_finish() call at the end of my pipeline, which was preventing the job from completing.
Once I removed wait_until_finish(), the job completed and the instances were shut down on their own. A sketch of the fixed pipeline is below.
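Roughly, the pipeline now looks like this (a minimal sketch only; the bucket paths and step names are placeholders, not my real job):

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def run(argv=None):
    options = PipelineOptions(argv)
    pipeline = beam.Pipeline(options=options)

    (pipeline
     | 'Read' >> beam.io.ReadFromText('gs://my-bucket/input/*.txt')    # placeholder path
     | 'Write' >> beam.io.WriteToText('gs://my-bucket/output/out'))    # placeholder path

    # Submit the job and return. The wait_until_finish() call that used to be
    # here blocked until the job finished, which kept the template job from
    # ever reporting completion and left the worker VMs running.
    pipeline.run()

if __name__ == '__main__':
    run()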
Thanks and regards
Marco
Related
When I run a Dataproc cluster (master VM instance plus worker nodes), the VM instances shut off automatically after about 30 minutes or so, even while I am actively using and running things on the cluster. I cannot find the setting responsible and would appreciate any help on how to disable it so the cluster stops shutting off, or on how to configure a new cluster in a way that will prevent this from happening.
Thank you
Default Dataproc clusters do not have any kind of automatic shutdown.
If you are using the older Datalab initialization action, you are probably seeing Datalab's own non-Dataproc-aware shutdown functionality, which you can disable in one of the ways suggested here: How to keep Google Dataproc master running?
Otherwise, if you're using some kind of template or copy/paste arguments for creating your Dataproc cluster, perhaps you're accidentally setting "scheduled deletion": https://cloud.google.com/dataproc/docs/concepts/configuring-clusters/scheduled-deletion
If neither of those settings explain your situation, you should visit your "activity logs" from the "Cloud Logging" interface, selecting Cloud Dataproc Cluster, and opening up the activity_log type of logs to see an audit log of who was deleting your cluster. Alternatively, if the cluster still existed in Dataproc, but the underlying VM was being shut down, visit the "Compute Engine VM" log category and also look at "activity logs" to see who was stopping your VMs. Sometimes, in a shared project, a project admin might be running some kind of script to automatically shut down VMs to save cost.
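If you want to pull those audit entries programmatically rather than through the console, a rough sketch with the google-cloud-logging Python client would look something like the following (the project ID is a placeholder, and you can swap the filter to resource.type="cloud_dataproc_cluster" to look at cluster deletions instead of VM stops):

from google.cloud import logging

client = logging.Client(project="my-project-id")  # placeholder project

# Admin Activity audit log entries for Compute Engine VM stop calls.
log_filter = (
    'resource.type="gce_instance" '
    'AND logName="projects/my-project-id/logs/cloudaudit.googleapis.com%2Factivity" '
    'AND protoPayload.methodName:"compute.instances.stop"'
)

for entry in client.list_entries(filter_=log_filter):
    payload = entry.payload or {}
    who = payload.get("authenticationInfo", {}).get("principalEmail", "unknown")
    print(entry.timestamp, who, payload.get("resourceName"))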
This might be a stupid question.
I'm just curious. I'm new to Redis and would like to experiment with it.
However, I would like to turn the instance on and off whenever I am experimenting as I want to save on costs rather than have the instance running all the time.
But I don't see a stop button like there is for other products such as Compute Engine.
Is there a reason for this?
Thank you
You can't manage a Cloud Memorystore for Redis instance as if it were a Compute Engine instance; they are different products with different billing models, and that is why a Cloud Memorystore for Redis instance cannot be stopped.
If you are only interested in learning more about Redis, you can always install Redis on a Compute Engine instance (see the following tutorial for a clear path to accomplish this, or this other tutorial for doing the same with Docker) and afterwards delete the Compute Engine instance so that charges stop accruing. You can then poke at the server with a few lines of client code, as in the sketch below.
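A minimal sanity check against the Redis server on that VM could look like this (assuming the redis-py client is installed and Redis is on its default port; the host IP is a placeholder):

import redis

# Connect to the Redis server running on the Compute Engine VM.
# Use 127.0.0.1 if this runs on the VM itself, or the VM's IP otherwise.
r = redis.Redis(host="10.128.0.2", port=6379, db=0)

r.set("greeting", "hello from GCE")
print(r.get("greeting"))   # b'hello from GCE'
print(r.ping())            # True if the server is reachable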
To avoid incurring charges to your Google Cloud account for the resources used in this quickstart:
Go to the Memorystore for Redis page in the Cloud Console.
Click the instance ID of the instance you want to delete.
Click the Delete button.
In the prompt that appears, enter the instance ID.
Click Delete.
https://cloud.google.com/memorystore/docs/redis/quickstart-console#clean_up
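If you would rather do that cleanup from code than from the Console, something along these lines should work with the google-cloud-redis Python client (the project, region, and instance ID are placeholders):

from google.cloud import redis_v1

client = redis_v1.CloudRedisClient()

# Fully qualified name: projects/<project>/locations/<region>/instances/<id>
name = client.instance_path("my-project-id", "us-central1", "my-redis-instance")

# delete_instance returns a long-running operation; result() blocks until done.
operation = client.delete_instance(name=name)
operation.result()
print("Instance deleted; charges for it stop accruing.")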
I would like to know how to create a script that automatically stops and starts a Google Compute Engine instance, and how I can configure it to run every day but only 5 days a week.
We are not using the server at night, so I could save 9 hours a day.
Can it be done?
Thank you.
You can use the gcloud command-line tool for that (from another machine, of course); it provides all the controls, including starting and stopping instances. Set up cron on your local machine to run the commands below (a day-of-week field of 1-5 in the crontab entries gives you the Monday-to-Friday schedule you asked about):
gcloud compute instances stop INSTANCE_NAMES
gcloud compute instances start INSTANCE_NAMES
See more:
https://cloud.google.com/sdk/gcloud/reference/compute/instances/stop
https://cloud.google.com/sdk/gcloud/reference/compute/instances/start
As far as I know, GCE doesn't provide scheduled VM stop/start as a managed feature; it has to be triggered from outside the VM. For example, you can use a GAE scheduled task (cron job) which uses gcloud or the GCE Python SDK to start and stop your VM.
You can use Google Cloud Scheduler in conjunction with Cloud Functions to run lightweight cronjobs which start/stop GCE VM instances based on a schedule that you control.
You can find a step-by-step tutorial in the official docs, but the general flow is:
Use Cloud Scheduler to publish start/stop messages to a Cloud Pub/Sub topic at the desired times (e.g., every weekday at 9am publish a start-VM event, and every weekday at 5pm publish a stop-VM event).
Create a Cloud Function that subscribes to the Pub/Sub topic and makes the appropriate Compute Engine API calls to start or stop the VM; a rough sketch follows.
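A rough Python sketch of the function side, assuming the Pub/Sub message body is JSON carrying the project, zone, instance name, and desired action (those field names are my own choice here, not something the tutorial mandates):

import base64
import json

import googleapiclient.discovery

compute = googleapiclient.discovery.build("compute", "v1")

def handle_instance_event(event, context):
    """Background Cloud Function triggered by a Pub/Sub message.

    Expected message data (base64-encoded JSON), e.g.:
    {"project": "my-project", "zone": "us-central1-a",
     "instance": "my-vm", "action": "stop"}
    """
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))

    instances = compute.instances()
    if payload["action"] == "start":
        request = instances.start(project=payload["project"],
                                  zone=payload["zone"],
                                  instance=payload["instance"])
    else:
        request = instances.stop(project=payload["project"],
                                 zone=payload["zone"],
                                 instance=payload["instance"])
    response = request.execute()
    print("Operation started:", response.get("name"))

Cloud Scheduler then only needs two jobs publishing the start and stop payloads on whatever weekday schedule you want.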
We are using AWS EC2 via CloudFormation to launch stacks of instances. We create stacks that are combinations of custom and marketplace images. This worked perfectly until Friday. Starting on Friday, 10/7, about 10% of all instances we launch simply stall upon launch. So far we have only seen this for the custom AMIs we created (both Win7 and Win10), but I'm not sure whether that is a coincidence, as the stack is mostly composed of instances launched from those two AMIs.
Note that we did not change the AMIs recently nor have we changed anything else about our process.
The issue eventually manifests as a failure when CloudFormation times out.
I detached one of the boot volumes and attached it to a working instance so that I could view the logs. There are simply no new entries from the attempted launch in the following logs (or in any accompanying error logs):
Ec2ConfigLog.txt
%WINDIR%\Panther\SetupAct.log
%WINDIR%\Panther\UnattendGC\SetupAct.log
The screenshot of the instance (via the AWS console) shows a static Windows icon with no text around it.
Grabbing the system log (via the AWS console) returns nothing (empty console).
Force stopping and then starting the instance does kick off the customization and launch of Windows, but building in a restart upon failure is a hack I really don't want to rely on, even if CloudFormation allows it (which I'm not sure about).
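For reference, that force stop/start workaround scripted with boto3 would look roughly like this (the instance ID is a placeholder; so far I have only done this by hand from the console):

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder

# Force-stop the stalled instance, wait until it is fully stopped, then start it again.
ec2.stop_instances(InstanceIds=[instance_id], Force=True)
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])
ec2.start_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])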
Does anyone have ideas for how we can troubleshoot further?
Thanks!
Jason
I have a number of Google Cloud Dataflow jobs marked as "Running" in the Dataflow console, but there are no GCE instances running. I manually terminated the instances to avoid being billed. The jobs seem to be permanently stuck in the "Running" state. If I try to cancel them from the console or the gcloud utility, I receive a warning that the job is already in a "finishing state", so the request was ignored.
I am now at the running quota of 10, so I am stuck. Is there any solution to this other than creating a new project?
There was an issue in the Dataflow service that caused cancel requests to become stuck. It has since been resolved.