I would like to run a shutdown script that waits for up to 5 minutes before really shutting down a Windows instance.
I know how to run a shutdown script, but not how to prevent GCP from killing the instance after a certain time. In this documentation it is mentioned that there is a (not guaranteed) limit of 90 seconds before the instance is completely shut down by GCP.
Is it possible to increase that limit?
Unfortunately, the 90-second limit is not something that can be changed. There are several feature requests for this, but it is unlikely they will be implemented soon.
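For completeness, here is a minimal sketch of how a shutdown script gets attached to a Windows instance through metadata; the instance name and script file are placeholders, and the roughly 90-second window still applies no matter how long the script tries to wait:
# Attach a PowerShell shutdown script to the instance via metadata (names are placeholders)
gcloud compute instances add-metadata my-windows-instance --metadata-from-file windows-shutdown-script-ps1=shutdown.ps1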
Related
I have a use case where I need to start EC2 instances on demand, so starting fast matters to our users. Currently our startup time is 2 minutes on average, varying with the time of day and the instance type.
We are launching them using the NodeJS SDK, straight from our custom AMI and without launch templates. We have noticed that smaller image sizes launch faster, but unfortunately we are unable to reduce ours.
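For reference, the CLI equivalent of what we do through the SDK would be roughly the following; the AMI ID and instance type shown are placeholders, not our real values:
# Launch one instance straight from the custom AMI, no launch template (placeholder IDs)
aws ec2 run-instances --image-id ami-0123456789abcdef0 --instance-type t3.medium --count 1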
When the instance starts, an @reboot cron job runs an application that notifies an API that the instance is ready. Everything is installed in the AMI, which was built for this purpose on Ubuntu 18, and no heavy work is done at startup. We measure the startup time as the difference between the time the instance was started and the time it notified that it was ready.
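The readiness hook itself is essentially a one-line crontab entry along these lines; the endpoint URL here is a placeholder for our real API:
# Notify the API once at boot that this instance is ready (placeholder URL)
@reboot curl -s -X POST https://api.example.com/instance-ready -d "id=$(hostname)"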
The startup time is higher when no instances with that AMI have been launched recently, which suggests that AWS has some kind of cold start in this case. We also noticed that increasing the disk size from 30 GB to 45 GB increased the startup time from 1 minute to the 2-minute average mentioned above.
What strategies may I try to reduce this startup time?
My team had a monolithic service for a small-scale project, but as part of a re-architecture and scaling effort we are planning to move to AWS, and for orchestration we are evaluating whether to run Luigi as a container task or to use AWS Step Functions instead. I don't have any experience with either of them, especially Luigi.
Can anyone point out issues they have seen with Luigi, or ways in which it can prove better than AWS Step Functions, if at all? Any other suggestions are welcome.
Thanks in advance.
I don't know how AWS does orchestration, but if you are planning to scale to at least thousands of jobs at any point, I would not recommend investing in Luigi. Luigi is extremely useful for small to medium(ish) projects. It provides a fantastic interface for defining jobs and ensuring job completion through atomic filesystem actions. However, the problem with Luigi is the framework for running jobs. Luigi requires constant communication with its workers for them to run, which in my own experience destroyed network bandwidth when I tried to scale.
For my research, I will generate a network of 10,000 tasks in a light-to-medium workflow, using my university's cluster computing grid, which runs SLURM. None of my tasks takes very long to complete, maybe 5 minutes each at most. I have tried the following three methods to use Luigi efficiently.
The first method was SciLuigi's SLURM task, used to submit jobs to SLURM from a central Luigi worker (not using the central scheduler). This method works well if your jobs are accepted and run quickly. However, it uses an unreasonable amount of resources on the scheduling node, as each worker is a new process, and it destroys any priority you would have in the system. A better approach is to first allocate many workers and then have them continually work on jobs.
The second method I attempted was just that. I started the Luigi central scheduler on my home server (because otherwise I could not monitor the state of work, just like in the above workflow) and started up workers on the SLURM cluster that all had the same configuration, so each of them could run any part of the experiment. The problem was, even with 500Mbps internet, past ~50 workers Luigi would stop functioning and so would my internet connection to my server. So, I began running jobs with only 50 workers, which drastically slowed my workflow. In addition, each worker had to register each job with the central scheduler (another huge pain point), which could take hours with only 50 workers.
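Roughly, that setup amounts to commands like the following; the module and task names are placeholders for my actual experiment code:
# On the home server: run the central scheduler
luigid --port 8082
# On the cluster: start identical workers that all point back at the remote scheduler (placeholder names)
luigi --module experiment RootTask --scheduler-host my-home-server --scheduler-port 8082 --workers 50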
To mitigate this startup time I decided to partition the root-task subtrees by their parameters and submit each to SLURM. So now the startup time is reasonably low, but I lost the ability for any worker to run any job, which is still pretty important. Also, I can still only work with ~50 workers. When I completed the subtrees, I ran one last job to finish the experiment.
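A sketch of that partitioned submission, with the task name and parameter made up purely for illustration:
# Submit each parameter partition of the root task as its own SLURM job (placeholder names)
for group in 1 2 3 4; do
  sbatch --wrap "luigi --module experiment SubTreeTask --group $group --scheduler-host my-home-server --workers 4"
done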
In conclusion, Luigi is great for small to medium-small workflows, but once you start hitting 1,000+ tasks and workers, the framework quickly fails to keep up. I hope that my experiences provide some insight into the framework.
I've written a simple batch file that starts Apache and sends a curl request to my server at startup. I am using Windows Server 2016 and an n-4 Compute Engine instance.
I've noticed that two identical machines require vastly different startup times. One sends its message in just 40 s, the other takes almost 80 s. In the console both seem to start at the same time, but the reality is different, since the slower one is inaccessible via RDP tools for 80 s.
The second machine is made from a disk image of the first one. What factors contribute to the startup time? Where should I trim the fat?
The delay could occur if the instances are in different regions and also if the second instance has some additional memory intensive applications or additional customizations done. The boot disk type for the instance also contributes to the booting time. Are you getting any information from the logs about this delay during the startup time? You could also compare traceroute results on both instances to see if there is a delay at some point in the network.
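One concrete way to compare where each machine spends its boot time is to pull the serial console output for both and compare the timestamps; for example (the instance names and zone are placeholders):
# Dump the boot-time serial console output for each instance and compare timings
gcloud compute instances get-serial-port-output fast-instance --zone us-central1-a
gcloud compute instances get-serial-port-output slow-instance --zone us-central1-a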
I have a persistent server that unpredictably receives new data from users, needing about 10 GPU instances to crank at the problem for about 5 minutes, and I send the answer back to the users. The server itself is a cheap always-persistent single CPU Google Cloud instance. When a user request comes in, my code launches my 10 created but stopped Google Cloud GPU instances with
gcloud compute instances start (instance list)
In the rare case if the stopped instances don't exist (sometimes they get wiped) that's detected and they're recreated with
gcloud beta compute instances create (...)
This system all works fine. My only complaint is that even with created-but-stopped instances, the launch time before my GPU code finally starts to run is about 5 minutes. Most of this is just the time for the instance itself to launch its Ubuntu host and call my code; the delay between Ubuntu running and the GPU code starting is only about 10 seconds.
How can I reduce this 5-minute delay? I imagine most of it comes from Google having to copy over the 4 GB of instance data to the target machine, but the startup time of (vanilla) Ubuntu probably adds 1 more minute. I'm not even sure I could quantify these two numbers independently; I can only measure the combined 3-7 minute delay from launch until my code starts responding.
I don't think the Ubuntu OS startup time is the major contributor to the latency, since I timed an actual machine with the same Ubuntu and the same GPU on my desk, and from power-on it began running my GPU code in 46 seconds.
My goal is to get results back to my users as soon as possible, and that 5 minute startup delay is a bottleneck.
Would making a smaller instance SIZE of say 2GB help? What else can I do to reduce the latency?
2GB is large. That's a heckuva big image. You should be able to cut that down to 100MB, perhaps using Alpine instead of Ubuntu.
Copying 4GB of data is also less than ideal. Given that, I suspect the solution will be more of an architecture change than a code change.
But if you want to take a whack at everything that is NOT about your 4 GB of data, there is a capability to prepare a custom image for your VMs. If you can build a slim custom image, that will help.
There are good resources for learning more; the two I would start with are below, and a rough sketch follows them:
- Improve GCE Boot Times with Custom Images
- Three steps to Compute Engine startup-time bliss: Google Cloud Performance Atlas
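As a minimal sketch of that route (the disk and image names are placeholders): boot a VM, install only what you need, then bake its disk into an image you launch from:
# Bake a slimmed-down, prepared boot disk into a reusable custom image (placeholder names)
gcloud compute images create my-slim-gpu-image --source-disk my-prepared-disk --source-disk-zone us-central1-a
# Then create instances from that image instead of the stock Ubuntu one
gcloud beta compute instances create my-gpu-instance --image my-slim-gpu-image (...)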
I have a mongo database running on a Google Cloud Computing instance. For the second time now (in a few months), the server unexpectedly shut down into mode "TERMINATED". How do I find the cause of the shutdown?
The serial console just says, "The resource 'projects/my-project/zones/europe-west1-b/instances/mongo-db' is not ready".
I looked into the database logs, seems it received an external signal to shut down ("got signal 15 (Terminated)").
Nothing suspicious in the syslogs or messages logs after spinning up a new instance on the same disk. Also, there was no planned maintenance as far as I'm aware.
Any idea where to look?
Since your mongo database actually received a terminate signal, your instance was probably shut down gracefully somehow. It sounds like something related to automatic migrations, but there are a couple of things to look at to help narrow this down.
In the Google Developers Console go to Compute -> Compute Engine -> VM instances -> mongo-db. There should be a section called "Availability policies." Check "On host maintenance" to make sure "Migrate VM instance" is selected. Otherwise, the VM will shut down instead of migrating during maintenance.
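The same setting can also be checked or changed from the CLI; for example (the zone here is taken from the resource path in your error message):
# Make the VM live-migrate during host maintenance instead of terminating
gcloud compute instances set-scheduling mongo-db --zone europe-west1-b --maintenance-policy MIGRATE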
You can also look at the operations for an instance at Compute -> Compute Engine -> Operations. This has all the operations that you and the system performed on your instances. You may see something around the time that the process terminated. You can also see this from the gcloud CLI with gcloud compute operations list.
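For example, one way to narrow the listing down to this instance (assuming the standard --filter syntax against the operations' targetLink field):
# List operations that targeted the mongo-db instance
gcloud compute operations list --filter="targetLink:instances/mongo-db"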