Shutdown scripts to run upon AWS termination - amazon-web-services

I am trying to get some scripts to run upon an AWS termination action. I have created /etc/init.d/Script.sh and symbolically linked it to /etc/rc0.d/K01Script.sh.
However, terminating through the AWS console did not produce the output I was looking for. (The script makes a quick API call to a server over HTTPS; it should take only a few seconds.)
Then I tried again, but this time specifically changed a kernel parameter:
sudo sysctl -w kernel.poweroff_cmd=/etc/rc0.d/K01Script.sh
and again got no output.
I get the message "The system is going down for power off NOW!" when terminating the server, so I'm pretty sure the Ubuntu server is going into runlevel 0. The script is owned by root.
I know I could create a lifecycle hook to do something like this, but my team prefers the quick and dirty way.
Any help very much appreciated!
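For reference, here is a minimal sketch of the kind of K-script described above (paths copied from the question; the API URL is a placeholder). The usual gotchas are a missing LSB header, a missing executable bit (chmod +x /etc/init.d/Script.sh), and the fact that rc0.d scripts are invoked with the argument "stop". Note also that by the time runlevel 0 scripts run, networking may already be down, which would silently kill an HTTPS call:

#!/bin/sh
### BEGIN INIT INFO
# Provides:          Script
# Required-Start:
# Required-Stop:     $network
# Default-Start:
# Default-Stop:      0 6
# Short-Description: Fire a quick API call at shutdown
### END INIT INFO

case "$1" in
  stop)
    # Placeholder endpoint; cap the call so shutdown is not delayed
    curl -fsS --max-time 10 https://example.com/api/terminating || true
    ;;
esac
exit 0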

Related

How to get notified when droplet reboots and when droplet finishes boot?

I found this answer https://stackoverflow.com/a/35456310/80353 and it recommends either the API or user_data, which is actually cloud-init underneath.
I can think of several ways to possibly get notified that a server is up:
1. Detect droplet status via the API.
I notice that the status never changes during a reboot, so I guess this is out.
2. Use the DigitalOcean native monitoring agent.
The monitoring agent seems to cover only resource utilisation; there is no alert when the server is rebooted or finishes booting up.
3. Use cloud-init.
The answer https://stackoverflow.com/a/35456310/80353 I mentioned earlier uses wget to send signals out. I could possibly use wget every time the droplet finishes booting up via bootcmd in cloud-init, but not for reboots.
There's also the issue of how to ensure the wget request from the right DigitalOcean droplet can correctly identify itself to my server.
Any advice on how to accomplish getting notifications at my server whenever a droplet reboots or finishes booting up?
cloud-init
bootcmd actually runs every time; check out the module frequency key in the docs.
Another module you might consider for this is phone_home.
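A minimal sketch of both modules in user data (the URLs are placeholders; bootcmd's default module frequency is "always", while phone_home runs once per instance by default):

#cloud-config
# Runs on every boot, including reboots
bootcmd:
  - [ sh, -c, 'curl -fsS "https://example.com/booted?host=$(hostname)" || true' ]

# POSTs instance details (hostname, instance id, ssh host keys) to the URL
phone_home:
  url: https://example.com/phone-home
  post: all
  tries: 10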
Systemd
Since the OP is looking for notifications on shutdown/reboot as well, cloud-init is probably not the best single solution, since it primarily handles boot/init. Assuming systemd:
This post discusses setting up a service to run on shutdown.
This post discusses setting up a service to run on startup.
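For the shutdown side, a common pattern is a oneshot unit whose ExecStop fires during shutdown/reboot; the same unit can cover startup via ExecStart. A sketch (the URL is a placeholder):

# /etc/systemd/system/notify-lifecycle.service
[Unit]
Description=Notify an external server on boot and shutdown
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
RemainAfterExit=yes
# Runs once the network is up at boot
ExecStart=/usr/bin/curl -fsS https://example.com/notify?event=boot
# Runs when the unit is stopped, i.e. during shutdown or reboot;
# ordering After=network-online.target means the network is still up here
ExecStop=/usr/bin/curl -fsS https://example.com/notify?event=shutdown

[Install]
WantedBy=multi-user.target

Enable it with systemctl enable --now notify-lifecycle.service.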

Hadoop single node cluster slows down AWS instance

Happy ugly Christmas sweater day :-)
I am running into some strange problems with my AWS Ubuntu 16.04 instance running Hadoop 2.9.2.
I have just successfully installed and configured Hadoop to run in pseudo-distributed mode. Everything seems fine: when I start HDFS and YARN I don't get any errors. But as soon as I try to do even something as simple as listing the contents of the root HDFS directory, or creating a new directory, the whole instance becomes extremely slow. I wait for about 10 minutes and it never produces a directory listing, so I hit Ctrl+C, and it takes another 5 minutes to kill the process. Then I try to stop both HDFS and YARN, which succeeds but also takes a very long time. Even after HDFS and YARN have been stopped, the instance is still barely responsive. At that point all I can do to make it function normally again is to go to the AWS console and restart it.
Does anyone have any idea what I might've screwed up? (I am pretty sure it's something I did. It usually is :-) )
Thank you.
Well, I think I figured out what was wrong, and the answer is trivial: my EC2 instance doesn't have enough RAM. It's a basic free-tier-eligible instance, and by default it comes with only 1GB of RAM. Hilarious. Totally useless.
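If you want to confirm that memory is the culprit before resizing, a quick check with standard tools:

free -h      # compare "available" against what the Hadoop JVMs are asking for
vmstat 5     # sustained nonzero si/so columns mean the instance is swapping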
But I learned something useful anyway. One other thing I had to do to make my Hadoop installation work (I was getting a "Connection refused" error, but I did make it work) was to change the line in the core-site.xml file that says
<value>hdfs://localhost:9000</value>
to
<value>hdfs://ec2-XXX-XXX-XXX-XXX.compute-1.amazonaws.com:9000</value>
(replace the XXXs in the above with your instance's public IP address, so the value matches your instance's public DNS name)
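For context, that value belongs to the fs.defaultFS property in core-site.xml; a minimal sketch of the full entry (hostname placeholder as above):

<configuration>
  <property>
    <!-- NameNode endpoint that clients and DataNodes connect to -->
    <name>fs.defaultFS</name>
    <value>hdfs://ec2-XXX-XXX-XXX-XXX.compute-1.amazonaws.com:9000</value>
  </property>
</configuration>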

Promise Technology VTrak configure webserver

I inherited management of some Promise VTrak disk array servers. They recently had to be transferred to a different location. We've got them set up, networking is all configured, and we even have a Linux server mounting them. Before they were transferred I was trained on the web GUI they come with. However, since the move we have not been able to connect to the web GUI.
I can SSH into the system and do almost everything from there, but I would love to figure out why the webserver is not coming up.
The VTrak system does not allow for much configuration, it seems. From the CLI I can start, stop, or restart the webserver, and the only thing I can configure is the amount of time someone can stay logged into the GUI. I don't see anywhere to configure HTTP or anything like that.
We're also pretty sure it's not a firewall issue.
I got the same issue this morning, and resolved it as follows:
It appears there are conditions that can leave the WebPam webserver with invalid HttpPort and HttpsPort configuration values. One condition that causes this is the sessiontimeout parameter set in the GUI exceeding 1439. The software then apparently writes an invalid configuration that locks you out of the GUI, because the HTTP ports have been changed to 25943 and 29538.
To fix this, log in through the CLI:
swmgt -a mod -n webserver -s "sessiontimeout=29"
swmgt -a mod -n webserver -s "HttpPort=80"
swmgt -a mod -n webserver -s "HttpsPort=443"
swmgt -a restart -n webserver
You should now be able to access the WebPam webserver interface again through your web browser.
After discussing with my IT department, we found it was a firewall issue after all.

How to determine that an AWS EC2 instance is still initialising from a script

Is there a way to determine, through a command-line interface or another trick, whether an AWS EC2 instance is ready to receive SSH connections?
The running state seems not to be enough: trying to connect in the first minutes of the running state, the machine's status checks still show "initialising" and SSH times out while trying to connect.
(I am using the awscli pip package.)
Running is similar to turning a computer on and finishing the BIOS check: as far as the hypervisor is concerned, your instance is on.
The best way to know when your instance is ready is to run a script at the end of startup (or once certain services are up) that reports its status to some other listener. Using that data, or event, you will know that your instance is ready to be connected to. This is purposely vague, since there are so many different ways it can be accomplished.
You could also time the expected startup and try to connect after that, retrying the connection if it fails. You still need a point at which you stop retrying, since instances can fail to launch in some cases.
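Since the question mentions awscli, one concrete option is to block on the built-in status-check waiter and then retry SSH (a sketch; the instance ID, user, and hostname are placeholders):

# Blocks until both EC2 status checks pass, i.e. "initialising" is over
aws ec2 wait instance-status-ok --instance-ids i-0123456789abcdef0

# Then retry with a short timeout until sshd actually answers
until ssh -o ConnectTimeout=5 ubuntu@ec2-XXX.compute-1.amazonaws.com true; do
  sleep 5
done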

How to read the list of running processes on a remote computer in C++

What can be done to discover and list all running processes on a remote computer?
One idea is to have a server on the remote machine listening for our requests; the other is to use SSH.
The problem is I don't know whether such a server will be running on the remote machine, and I cannot use SSH because it needs authentication.
Is there any other way out?
If you
- cannot install a server program on the remote machine, and
- cannot use anything that requires authentication,
then you should not be allowed to know the list of all running processes on that machine. That request would be a security nightmare!
You can do something much simpler without (as many) security problems: scan the publicly available ports for the programs that are running. Tools like nmap (https://nmap.org) will tell you a fair bit about the publicly reachable services on a machine.
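For example (a sketch; remote-host is a placeholder):

# Probe open ports and try to identify the services behind them
nmap -sV remote-host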
I have done something similar in the past using SNMP. I don't have the specifics in front of me, but something like snmpwalk -v2c -c public hostname prTable got me the process table. I recall later configuring SNMP to generate errors when the number of processes didn't meet our specified requirements, e.g. httpd must have at least 1 and fewer than 50 processes.
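If the remote host runs an SNMP agent and you know a readable community string ("public" below is an assumption), the full process list is also exposed via the host resources MIB:

# Lists the name of every running process on the remote host
snmpwalk -v2c -c public remote-host HOST-RESOURCES-MIB::hrSWRunName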
I suggest you look at the code for a remote login tool such as rlogin. You could log in remotely to an account that has the privileges you need; once logged in, you can fetch a list of processes.
This looks like a good application for a script rather than a C or C++ program.