I found this answer https://stackoverflow.com/a/35456310/80353 and it recommends either the API or user_data, which is actually cloud-init underneath.
I can think of several ways to possibly get notified that a server is up:

1. Detect droplet status via the API. I notice that the status never changes during a reboot, so I guess this is out.

2. Use the DigitalOcean native monitoring agent. The monitoring agent seems to only cover resource utilisation; there is no alert when the server is being rebooted or finishes booting up.

3. Use cloud-init. The answer https://stackoverflow.com/a/35456310/80353 I mentioned earlier uses wget to send signals out. I could possibly run wget every time the droplet finishes booting up using bootcmd in cloud-init, but not for reboots. There's also the issue of how to ensure the wget request from the right DigitalOcean droplet can correctly identify itself to my server.
Any advice on how to accomplish getting notifications at my server whenever a droplet reboots or finishes booting up?
cloud-init
bootcmd actually runs on every boot. Check out the module frequency key in the docs.
Another module you might consider for this is phone_home.
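For illustration, here is a minimal user-data sketch along those lines, assuming the droplet is created with doctl and that notify.example.com stands in for your own endpoint (curl is used here, but wget works the same way as in the linked answer; image/size/region slugs are just examples):

    # write a #cloud-config user-data file and pass it at droplet creation
    cat > user-data.yml <<'EOF'
    #cloud-config
    bootcmd:
      # bootcmd runs on every boot, so this also fires after reboots
      - [ sh, -c, 'curl -fsS "https://notify.example.com/booted?host=$(hostname)" || true' ]
    phone_home:
      # phone_home runs once per instance by default (first boot)
      url: https://notify.example.com/phone-home
      post: [ instance_id, hostname ]
      tries: 10
    EOF

    doctl compute droplet create my-droplet \
      --image ubuntu-22-04-x64 --size s-1vcpu-1gb --region nyc1 \
      --user-data-file user-data.yml

Posting the hostname (and instance_id via phone_home) is also one way to let your server tell droplets apart, which addresses the identification concern in the question.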
Systemd
Since the OP is looking for notifications on shutdown/reboot as well, cloud-init is probably not the best single solution, since it primarily handles boot/init. Assuming systemd:
This post discusses setting up a service to run on shutdown.
This post discusses setting up a service to run on startup.
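Put together, those two posts boil down to something like the sketch below (notify.example.com is a placeholder endpoint; the unit name and URLs are assumptions to adjust for your setup):

    sudo tee /etc/systemd/system/boot-notify.service >/dev/null <<'EOF'
    [Unit]
    Description=Notify an external server on startup and shutdown
    After=network-online.target
    Wants=network-online.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    # runs once the machine has finished booting
    ExecStart=/usr/bin/curl -fsS https://notify.example.com/up?host=%H
    # runs when the unit is stopped, i.e. during shutdown or reboot
    ExecStop=/usr/bin/curl -fsS https://notify.example.com/down?host=%H

    [Install]
    WantedBy=multi-user.target
    EOF

    sudo systemctl daemon-reload
    sudo systemctl enable --now boot-notify.service

Ordering the unit After=network-online.target matters for the shutdown notification: units stop in reverse order, so ExecStop gets a chance to run before the network goes down.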
Related
I am working on a project where I can see that all of the DAGs are queued up and not moving (for approximately 24 hours or more).
It looks like the scheduler is broken, but I need to confirm that.
So here are my questions:
How do I see if the scheduler is broken?
How do I reset my Airflow (web server) scheduler?
I'm expecting some help regarding how to reset Airflow schedulers.
The answer will depend a lot on how you are running Airflow (standalone, in Docker, Astro CLI, managed solution...?).
If your scheduler is broken, the Airflow UI will usually show a warning with the time since the last scheduler heartbeat.
There is also an API endpoint for a scheduler health check at http://localhost:8080/health (if Airflow is running locally).
Check the scheduler logs. By default they are written under $AIRFLOW_HOME/logs/scheduler.
You might also want to look at how to do health checks in Airflow in general.
In terms of resetting, it is usually best to restart the scheduler, and again this will depend on how you started it in the first place. If you are using a standalone instance and have the processes in the foreground, simply press Ctrl+C or close the terminal to stop it. If you are running Airflow in Docker, restart the container; for the Astro CLI there is astro dev restart.
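As a rough sketch, assuming a local setup on the default port (container/service names and log paths may differ in your deployment):

    # check scheduler health via the API
    curl -s http://localhost:8080/health

    # tail the scheduler logs
    tail -f "$AIRFLOW_HOME"/logs/scheduler/latest/*.log

    # standalone: stop the scheduler with Ctrl+C, then start it again
    airflow scheduler

    # Docker (official compose file): restart just the scheduler service
    docker compose restart airflow-scheduler

    # Astro CLI
    astro dev restart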
I'm new at GCP and I'm trying to keep my process running in a Jupyter Notebook after shutting down my local PC. Does anyone know how I can do it? Nowadays I open a terminal on my VM, run jupyter notebook, and then, after starting the process in Jupyter, I'd like to turn my machine off.
For now I keep following the process on my cellphone and shut it down from there. Does anyone know how to turn this off automatically when it stops?
Sorry to ask two questions at once, but I think one is related to the other. If it is not, I can edit and make another one.
This is a technical limitation of Jupyter Notebooks, unfortunately. The browser window contains the code which updates the notebook itself, so if you close the browser window there is no process running to update the notebook.
However, there is one workaround which you may find useful.
There is a library called Fairing that you can use with GCP's new AI Platform Notebooks, which allows you to pack up your notebook and run it remotely; the library will save the results of that execution in a GCP Storage bucket. No active internet connection is required (once you kick off the notebook run).
You can learn how to use it by creating a new GCP AI Platform Notebook and looking at the tutorials folder inside it. You can also find additional tutorials for Fairing here.
Typically to keep your remote sessions up in the event of network connectivity loss (which also covers shutting down the local computer) you'd use a terminal multiplexer application. From Known issues:
Intermittent disconnects: At this time, we do not offer a specific SLA for connection lifetimes. Use terminal multiplexers like tmux or screen if you plan to keep the terminal window open for an extended period of time.
But these multiplexers are terminal/text-mode apps, so you'd have to launch the notebook with the --no-browser option and then connect your local browser to its port.
You can find a recipe based on tmux and a local browser connection to the notebook using an SSH tunnel at Using Jupyter notebooks securely on remote linux machines.
As for shutting down the session: you'd just have to instruct the multiplexer application to end the session (or terminate the multiplexer app itself), which you could do automatically via a wrapper script that first invokes your process and, immediately after the process ends, invokes the commands to shut down the session.
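A minimal sketch of that workflow, assuming tmux on the VM, Jupyter on port 8888, and placeholder names (your-user@your-vm-ip, mynotebook.ipynb):

    # on the GCP VM: start the notebook server inside tmux so it survives disconnects
    tmux new -s jupyter
    jupyter notebook --no-browser --port=8888
    # detach with Ctrl-b d; the server keeps running after you log out

    # on your local machine: tunnel the notebook port over SSH, then browse to http://localhost:8888
    ssh -N -L 8888:localhost:8888 your-user@your-vm-ip

    # wrapper idea for the auto-shutdown part: run the notebook headlessly in a detached
    # session and end the session as soon as the job finishes
    tmux new -d -s job 'jupyter nbconvert --to notebook --execute mynotebook.ipynb --output out.ipynb; tmux kill-session -t job'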
I am trying to get some scripts to run upon an AWS termination action. I have created /etc/init.d/Script.sh and linked it symbolically to /etc/rc0.d/K01Script.sh.
However, terminating through the AWS console did not produce the output I was looking for. (It is a script that does a quick API call to a server over HTTPS; it should take only a few seconds.)
Then I tried again, but specifically changed a kernel parameter:

    sudo sysctl -w kernel.poweroff_cmd=/etc/rc0.d/K01Script.sh
and again no output.
I get the message "The system is going down for power off NOW!" when terminating the server, so I'm pretty sure the Ubuntu server is going into runlevel 0. The script is owned by root.
I know I could create a lifecycle hook to do something like this, but my team prefers the quick and dirty way.
Any help is very much appreciated!
I'm unable to run a Meteor leaderboard demo after a failed-keepalive error on an AWS EC2 t1.micro instance. If I start from a freshly booted Amazon Machine Image (AMI) I'm able to run the leaderboard demo at localhost:3000 from Firefox when I'm connected with a VNC client (TightVNC Viewer). It runs very, very slowly, but it runs.
If I fail to interact with it soon enough, however, I get these messages:
I2051-00:03:03.173(0)?Failed to receive keepalive! Exiting.
=> Exited with code:1
=> Meteor server restarted
From that point forward everything on that instance runs at a glacial pace. Switching back to the Firefox window takes 3 minutes, and when I try to connect to localhost:3000 in Firefox I usually get a message about a script no longer running. Eventually the terminal window adds this to what I wrote above:
I2051-00:06:02.443(0)?Failed to receive keepalive! Exiting.
=> Exited with code:1
=> Meteor server restarted
I2051-00:08:17.227(0)?Failed to receive keepalive! Exiting.
=> Exited with code:1
=> Your application is crashing. Waiting for file change.
Can anyone translate for me what is happening?
I'm wondering whether the t1.micro instance I'm running is just too under-powered, or whether it's not shutting down Meteor properly, thereby leaving an instance of MongoDB running and trying to launch another.
I'm using Amazon Machine Image ubuntu-precise-12.04-amd64-server-20130411.1 (ami-70f96e40), which says this about its configuration:
Size: t1.micro
ECUs: up to 2
vCPUs: 1
Memory (GiB): 0.613
Instance Storage (GiB): EBS only
EBS-Optimized Available: -
Network Performance: Very Low
Micro instances
Micro instances are a low-cost instance option, providing a small amount of CPU resources. They are suited for lower throughput applications, and websites that require additional compute cycles periodically, but are not appropriate for applications that require sustained CPU performance. Popular uses for micro instances include low traffic websites or blogs, small administrative applications, bastion hosts, and free trials to explore EC2 functionality.
If my guess is right, can anyone suggest an AMI suitable for Meteor development?
Thanks
Check this answer.
Try removing autopublish: meteor remove autopublish
How are you running the app on EC2? I have been able to run apps on a micro instance, so I don't see why this should be an issue.
If you are running it by using 'meteor' as you would locally, that's probably the issue. You get way better performance when running it as a Node app; this typically isn't an issue when developing locally, but it may be too much for an EC2 micro.
What you want to do is run 'meteor bundle example.tgz', upload that to the server, and run it as a Node app.
Here is a guide that I remember using a while ago to get it done on ec2:
http://julien-c.fr/2012/10/meteor-amazon-ec2/
You shouldn't need to use VNC either, you can access it from your own computer in a browser using the public address your instance gets assigned.
If you get a node-fibers error message, which is pretty common, cd into bundle/program/server, run 'npm uninstall fibers', and then 'npm install fibers'.
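Roughly, the bundle-and-run flow looks like this sketch (example.tgz, mykey.pem, and your-ec2-public-dns are placeholders; exact bundle paths vary by Meteor version, and Node plus MongoDB are assumed to already be installed on the instance):

    # on your development machine
    meteor bundle example.tgz
    scp -i mykey.pem example.tgz ubuntu@your-ec2-public-dns:~

    # on the EC2 instance
    tar -xzf example.tgz
    # rebuild fibers if you hit the common node-fibers error
    (cd bundle/program/server && npm uninstall fibers && npm install fibers)

    # environment the bundled app expects, then start it as a plain Node app
    export MONGO_URL='mongodb://localhost:27017/example'
    export ROOT_URL='http://your-ec2-public-dns'
    export PORT=3000
    node bundle/main.js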
I'm still cheap.
I have a software development environment which is a bog-standard Ubuntu 11.04 plus a pile of updates from Canonical. I would like to set it up such that I can use an Amazon EC2 instance for the 2 hours per week when I need to do full system testing on a server "in the wild".
Is there a way to set up an Amazon EC2 server image (Ubuntu 11.04) so that whenever I fire it up, it starts, automatically downloads code updates (or, conversely, accepts git push updates), and then has me ready to fire up an instance of the application server? Is it also possible to tie that server to a URL (e.g. ec2.1.mydomain.com) so that I can hit my web app with a browser?
Furthermore, is there a way that I can run a command line utility to fire up my instance when I'm ready to test, and then to shut it down when I'm done? Using this model, I would be able to allocate one or more development servers to each developer and only pay for them when they are being used.
Yes, yes, and more yes. Here are some good things to google/hunt down on SO and SF (a rough sketch of the start/stop and DNS part follows below):

- EC2 command line tools
- making your own AMIs from running instances (to save tedious and time-consuming startup gumf)
- Route 53 APIs for doing DNS magic
- Ubuntu cloud-init for startup scripts
- 32-bit micro instances are your friend for dev work, as they fall in the free usage bracket
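As a rough sketch of the fire-up/shut-down part, using today's aws CLI in place of the old EC2 command line tools (the instance ID, hosted zone ID, and ec2.1.mydomain.com are placeholders):

    INSTANCE_ID=i-0123456789abcdef0
    ZONE_ID=ZEXAMPLE123

    # fire up the dev instance when you're ready to test
    aws ec2 start-instances --instance-ids "$INSTANCE_ID"
    aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"

    # point a Route 53 record at the instance's current public IP
    IP=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
      --query 'Reservations[0].Instances[0].PublicIpAddress' --output text)
    aws route53 change-resource-record-sets --hosted-zone-id "$ZONE_ID" --change-batch \
      "{\"Changes\":[{\"Action\":\"UPSERT\",\"ResourceRecordSet\":{\"Name\":\"ec2.1.mydomain.com\",\"Type\":\"A\",\"TTL\":60,\"ResourceRecords\":[{\"Value\":\"$IP\"}]}}]}"

    # shut it down when you're done so you stop paying for it
    aws ec2 stop-instances --instance-ids "$INSTANCE_ID"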
All of what James said is good. If you're looking for something requiring less technical know-how and research, I'd also consider:
juju (sudo apt-get install -y juju). This lets you start up a series of instances. Basic tutorial is here: https://juju.ubuntu.com/docs/user-tutorial.html