shell script to check time drift - regex

I need a script to check time difference between local servers and internet time.
To obtain local time I'd run something like
[root@check ~]# ntpdate -q timeserver
server x.x.x.x, stratum 1, offset -0.034282, delay 0.04228
11 Dec 12:04:21 ntpdate[62714]: adjust time server x.x.x.x offset -0.034282 sec
From the above output we could grab:
11 Dec 12:04:21
To obtain internet time I'd like to run something like
[root@check ~]# curl http://www.timeapi.org/utc/now
2015-12-11T12:04:43+00:00
From the above output we could grab:
2015-12-11T12:04:43
Then echo "Time drift detected" if local time > internet time.
I'm a newbie to shell scripting and your support is appreciated.
Many thanks
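A minimal sketch of the comparison, using the sample outputs quoted in the question. The timeserver name, the pinned year, and the assumption that the ntpdate stamp is in UTC are all illustrative; in a live script the two sample variables would be captured from ntpdate -q and curl as shown above.

```shell
#!/bin/sh
# Sketch: compare the ntpdate timestamp with the internet timestamp.
# The two sample lines below come from the question; in a real script
# they would be captured with:
#   ntp_line=$(ntpdate -q timeserver | grep adjust)
#   net_iso=$(curl -s http://www.timeapi.org/utc/now)
ntp_line='11 Dec 12:04:21 ntpdate[62714]: adjust time server x.x.x.x offset -0.034282 sec'
net_iso='2015-12-11T12:04:43+00:00'

# ntpdate prints no year and no zone; assume UTC and pin the year so
# "11 Dec 12:04:21" becomes something GNU date can parse unambiguously.
export TZ=UTC
local_stamp=$(printf '%s\n' "$ntp_line" | awk '{print $1, $2, "2015", $3}')
local_epoch=$(date -d "$local_stamp" +%s)

# Strip the "+00:00" suffix so date parses the ISO stamp as UTC.
net_epoch=$(date -u -d "${net_iso%+*}" +%s)

if [ "$local_epoch" -gt "$net_epoch" ]; then
    echo "Time drift detected"
fi
```

In practice you would also want a small tolerance (the two commands don't run at the same instant), e.g. only treat differences above a few seconds as drift.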


HTCondor - Partitionable slot not working

I am following the "Center for High Throughput Computing" and "Introduction to Configuration" tutorials on the HTCondor website to set up a partitionable slot. Before any configuration I run
condor_status
and get the following output.
I update the file 00-minicondor in /etc/condor/config.d by adding the following lines at the end of the file.
NUM_SLOTS = 1
NUM_SLOTS_TYPE_1 = 1
SLOT_TYPE_1 = cpus=4
SLOT_TYPE_1_PARTITIONABLE = TRUE
and reconfigure
sudo condor_reconfig
Now with
condor_status
I get this output as expected. Now, I run the following command to check everything is fine
condor_status -af Name Slotype Cpus
and find slot1@ip-172-31-54-214.ec2.internal undefined 1 instead of slot1@ip-172-31-54-214.ec2.internal Partitionable 4 61295, which is what I would expect. Moreover, when I try to submit a job that asks for more than 1 CPU, it does not allocate space for it as it should (it stays waiting forever).
I don't know if I made some mistake during the installation process or what else could be happening. I would really appreciate any help!
EXTRA INFO: If it can be of any help, I have installed HTCondor with the command
curl -fsSL https://get.htcondor.org | sudo /bin/bash -s -- --no-dry-run
on Ubuntu 18.04 running on an old p2.xlarge instance (it has 4 cores).
UPDATE: After rebooting the whole thing it seems to be working. I can now submit jobs with different CPU requests and they start properly.
The only issue I would say persists is that memory allocation is not showing properly, but in reality it is allocating enough memory for the job (in this case around 12 GB).
If I run again
condor_status -af Name Slotype Cpus
I still get output I am not supposed to, but at least it is showing the correct number of CPUs (even though it still says undefined).
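One thing worth checking, purely as a hypothesis: the machine ClassAd attribute is spelled SlotType (capital T), and condor_status prints undefined for any attribute the ClassAd does not define, which would explain the undefined column in the queries above.

```shell
# SlotType is the attribute a partitionable slot sets to "Partitionable";
# the queries above ask for "Slotype", which no ClassAd defines.
# Guarded so this sketch is a no-op on machines without HTCondor installed.
if command -v condor_status >/dev/null 2>&1; then
    condor_status -af Name SlotType Cpus
fi
```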
What is the output of condor_q -better when the job is idle?

How to execute command on multiple servers for executing a command

I have a set of 150 servers for logging and a command (to get disk space). How can I execute this command on each server?
If the script takes 1 minute to get the report for a single server, how can I send a report for all the servers every 10 minutes?
use strict;
use warnings;
use Net::SSH::Perl;
use Filesys::DiskSpace;

# i have almost more than 100 servers..
my %hosts = (
    'localhost' => {
        user     => "z",
        password => "qumquat",
    },
    '129.221.63.205' => {
        user     => "z",
        password => "aardvark",
    },
);

# file system /home or /dev/sda5
my $dir = "/home";
my $cmd = "df $dir";

foreach my $host (keys %hosts) {
    # protocol takes a single version-preference string such as '2,1'
    my $ssh = Net::SSH::Perl->new($host, port => 22, debug => 1, protocol => '2,1');
    $ssh->login($hosts{$host}{user}, $hosts{$host}{password});
    my ($out) = $ssh->cmd($cmd);
    print "$out\n";
}
It has to send output of disk space for each server
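One way to meet the 10-minute window with 150 hosts is to run the checks in parallel; here is a minimal sketch with xargs -P. The hostnames are placeholders, key-based ssh authentication is assumed, and the echo dry-run stands in for the real ssh call.

```shell
#!/bin/sh
# Parallel disk-space check: run "df -h /home" on each host, 20 at a time,
# so 150 hosts finish well inside a 10-minute window even at ~1 min/host.
# Hostnames below are placeholders; key-based ssh auth is assumed.
cat > servers.txt <<'EOF'
server01.example.com
server02.example.com
server03.example.com
EOF

# BatchMode=yes makes ssh fail fast instead of prompting for a password.
# Swap the "echo ssh" dry-run for the real invocation in production:
#   xargs -P 20 -I{} ssh -o BatchMode=yes {} 'df -h /home' < servers.txt
xargs -P 20 -I{} echo ssh -o BatchMode=yes {} "df -h /home" < servers.txt
```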
Is there a reason this needs to be done in Perl? There is an existing tool, dsh, which provides precisely this functionality: using ssh to run a shell command on multiple hosts and report the output from each. With the -c (concurrent) switch, it can also run the command on all hosts at the same time rather than waiting for each one to complete before going on to the next, which you would need if you want to monitor 150 machines every 10 minutes but it takes 1 minute to check each host.
To use dsh, first create a file in ~/.dsh/group/ containing a list of your servers. I'll put mine in ~/.dsh/group/test-group with the content:
galera-1
galera-2
galera-3
Then I can run the command
dsh -g test-group -c 'df -h /'
And get back the result:
galera-3: Filesystem Size Used Avail Use% Mounted on
galera-3: /dev/mapper/debian-system 140G 36G 99G 27% /
galera-1: Filesystem Size Used Avail Use% Mounted on
galera-1: /dev/mapper/debian-system 140G 29G 106G 22% /
galera-2: Filesystem Size Used Avail Use% Mounted on
galera-2: /dev/mapper/debian-system 140G 26G 109G 20% /
(They're out of order because I used -c, so the command was sent to all three servers at once and the results were printed in the order the responses were received. Without -c, they would appear in the same order the servers are listed in the group file, but then it would wait for each response before connecting to the next server.)
But, really, with the talk of repeating this check every 10 minutes, it sounds like what you really want is a proper monitoring system such as Icinga (a high-performance fork of the better-known Nagios), rather than just a way to run commands remotely on multiple machines (which is what dsh provides). Unfortunately, configuring an Icinga monitoring system is too involved for me to provide an example here, but I can tell you that monitoring disk space is one of the checks that are included and enabled by default when using it.
There is a ready-made tool called Ansible for exactly this purpose. It lets you define your list of servers, group them, and execute commands on all of them.
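For example, with a hypothetical inventory file (the hostnames are placeholders), an ad-hoc Ansible run could look like:

```
# inventory.ini -- hypothetical list of the logging servers
[loghosts]
server01.example.com
server02.example.com

# ad-hoc disk-space check across the whole group:
#   ansible loghosts -i inventory.ini -m command -a 'df -h /home'
```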

Performance Issue in Executing Shell Commands

In my application, I need to execute a large number of shell commands from C++ code. I found that the program takes more than 30 seconds to execute 6000 commands, which is unacceptable! Is there a better way to execute shell commands (from C/C++ code)?
//Below functions is used to set rules for
//Linux tool --TC, and in runtime there will
//be more than 6000 rules to be set from shell
//those TC commands are like the example below:
//tc qdisc del dev eth0 root
//tc qdisc add dev eth0 root handle 1:0 cbq bandwidth
// 10Mbit avpkt 1000 cell 8
//tc class add dev eth0 parent 1:0 classid 1:1 cbq bandwidth
// 100Mbit rate 8000kbit weight 800kbit prio 5 allot 1514
// cell 8 maxburst 20 avpkt 1000 bounded
//tc class add dev eth0 parent 1:0 classid 1:2 cbq bandwidth
// 100Mbit rate 800kbit weight 80kbit prio 5 allot 1514 cell
// 8 maxburst 20 avpkt 1000 bounded
//tc class add dev eth0 parent 1:0 classid 1:3 cbq bandwidth
// 100Mbit rate 800kbit weight 80kbit prio 5 allot 1514 cell
// 8 maxburst 20 avpkt 1000 bounded
//tc class add dev eth0 parent 1:1 classid 1:1001 cbq bandwidth
// 100Mbit rate 8000kbit weight 800kbit prio 8 allot 1514 cell
// 8 maxburst 20 avpkt 1000
//......
void CTCProxy::ApplyTCCommands() {
    FILE* OutputStream = NULL;
    // mTCCommands is a vector<string>;
    // every string in it is a TC rule
    int CmdCount = mTCCommands.size();
    for (int i = 0; i < CmdCount; i++) {
        OutputStream = popen(mTCCommands[i].c_str(), "r");
        if (OutputStream) {
            pclose(OutputStream);
        } else {
            printf("popen error!\n");
        }
    }
}
UPDATE
I tried to put all the shell commands into a shell script and let the test app call this script file using system("xxx.sh"). This time it takes 24 seconds to execute all 6000 shell commands, less than it took before. But this is still much larger than we expected! Is there any other way to decrease the execution time to less than 10 seconds?
So, most likely (based on my experience in a similar type of thing), the majority of the time is spent starting a new process running a shell, the execution of the actual command in the shell is very short. (And 6000 in 30 seconds doesn't sound too terrible, actually).
There are a variety of ways you could do this. I'd be tempted to try to combine it all into one shell script, rather than running individual lines. This would involve writing all the 'tc' strings to a file, and then passing that to popen().
Another thought is if you can actually combine several strings together into one execute, perhaps?
If the commands are complete and directly executable (that is, no shell is needed to execute the program), you could also do your own fork and exec. This would save creating a shell process, which then creates the actual process.
Also, you may consider running a small number of processes in parallel, which on any modern machine will likely speed things up by the number of processor cores you have.
You can start a shell (/bin/sh) and pipe all the commands into it, parsing the output. Or you can create a Makefile, as this would give you more control over how the commands would be executed, plus parallel execution and error handling.
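As a concrete variant of the batching ideas above: iproute2's tc can read a batch file via tc -batch, so thousands of rules can be applied by a single process instead of one popen() per rule. A sketch, with the device name and rates taken from the question's comments:

```shell
#!/bin/sh
# Write all rules to one file (without the leading "tc") and apply them
# in a single tc invocation instead of 6000 separate popen() calls.
cat > tc-rules.batch <<'EOF'
qdisc del dev eth0 root
qdisc add dev eth0 root handle 1:0 cbq bandwidth 10Mbit avpkt 1000 cell 8
class add dev eth0 parent 1:0 classid 1:1 cbq bandwidth 100Mbit rate 8000kbit weight 800kbit prio 5 allot 1514 cell 8 maxburst 20 avpkt 1000 bounded
EOF

# Requires root and the tc binary; guarded so the sketch is a no-op otherwise.
if [ "$(id -u)" -eq 0 ] && command -v tc >/dev/null 2>&1; then
    tc -batch tc-rules.batch
fi
```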

RRDTool: RRD file not updating

My RRD file is not updating, what is the reason?
The graph shows the legend with: -nanv
I created the RRD file using this syntax:
rrdtool create ups.rrd --step 300
DS:input:GAUGE:600:0:360
DS:output:GAUGE:600:0:360
DS:temp:GAUGE:600:0:100
DS:load:GAUGE:600:0:100
DS:bcharge:GAUGE:600:0:100
DS:battv:GAUGE:600:0:100
RRA:AVERAGE:0.5:12:24
RRA:AVERAGE:0.5:288:31
Then I updated the file with this syntax:
rrdtool update ups.rrd N:$inputv:$outputv:$temp:$load:$bcharge:$battv
And graphed it with this:
rrdtool graph ups-day.png
-t "ups "
-s -1day
-h 120 -w 616
-a PNG
-cBACK#F9F9F9
-cSHADEA#DDDDDD
-cSHADEB#DDDDDD
-cGRID#D0D0D0
-cMGRID#D0D0D0
-cARROW#0033CC
DEF:input=ups.rrd:input:AVERAGE
DEF:output=ups.rrd:output:AVERAGE
DEF:temp=ups.rrd:temp:AVERAGE
DEF:load=ups.rrd:load:AVERAGE
DEF:bcharge=ups.rrd:bcharge:AVERAGE
DEF:battv=ups.rrd:battv:AVERAGE
LINE:input#336600
AREA:input#32CD3260:"Input Voltage"
GPRINT:input:MAX:" Max %lgv"
GPRINT:input:AVERAGE:" Avg %lgv"
GPRINT:input:LAST:"Current %lgv\n"
LINE:output#4169E1:"Output Voltage"
GPRINT:output:MAX:"Max %lgv"
GPRINT:output:AVERAGE:" Avg %lgv"
GPRINT:output:LAST:"Current %lgv\n"
LINE:load#FD570E:"Load"
GPRINT:load:MAX:" Max %lg%%"
GPRINT:load:AVERAGE:" Avg %lg%%"
GPRINT:load:LAST:" Current %lg%%\n"
LINE:temp#000ACE:"Temperature"
GPRINT:temp:MAX:" Max %lgc"
GPRINT:temp:AVERAGE:" Avg %lgc"
GPRINT:temp:LAST:" Current %lgc"
You will need at least 13 updates, each 5 minutes apart (i.e., 12 PDPs (primary data points)) before you can get a single CDP (consolidated data point) written to your RRAs, enabling you to get a data point on the graph. This is because your smallest-resolution RRA has a count of 12, meaning you need 12 PDPs to make one CDP.
Until you have enough data to write a CDP, you have nothing to graph, and your graph will always show unknown data.
Alternatively, add a smaller-resolution RRA (maybe count 1) so that you do not need to collect data for so long before you have a full CDP.
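For example, the create command from the question with an extra count-1 RRA added (288 rows keeps one day of 5-minute points). This is a sketch of that suggestion, not tested against your setup:

```shell
# Same DS definitions as the question, plus a count-1 RRA so every
# 5-minute update immediately produces a graphable CDP (288 rows = 1 day).
# Guarded so the sketch is a no-op where rrdtool is not installed.
if command -v rrdtool >/dev/null 2>&1; then
    rrdtool create ups.rrd --step 300 \
        DS:input:GAUGE:600:0:360 \
        DS:output:GAUGE:600:0:360 \
        DS:temp:GAUGE:600:0:100 \
        DS:load:GAUGE:600:0:100 \
        DS:bcharge:GAUGE:600:0:100 \
        DS:battv:GAUGE:600:0:100 \
        RRA:AVERAGE:0.5:1:288 \
        RRA:AVERAGE:0.5:12:24 \
        RRA:AVERAGE:0.5:288:31
fi
```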
The update script needs to be run at exactly the same interval as defined in your database.
I see it has a step value of 300 so the database should be updated every 5 minutes.
Just place your update script in a cron job (you can do the same for your graph script).
For example,
sudo crontab -e
If running it for the first time, choose your favorite editor (I usually go with Vim) and add a line with the full path of your script, scheduled every 5 minutes. So add this (don't forget to adjust the paths):
*/5 * * * * /usr/local/update_script > /dev/null && /usr/local/graph_script > /dev/null
Save it, and wait a couple of minutes. I usually redirect the output to /dev/null because cron mails any output a job produces to the user; discarding it avoids those notifications.

Capture CPU and Memory usage dynamically

I am running a shell script to execute a C++ application which measures the performance of an API. I can capture the latency (time taken to return a value for a given set of parameters) of the API, but I also wish to capture the CPU and memory usage alongside it at intervals of, say, 5-10 seconds.
Is there a way to do this without affecting the performance of the system too much, and within the same script? I have found many examples of doing it outside (independently of) the script we are running, but none where we do it within the same script.
If you are looking to capture CPU and memory utilization dynamically for the entire Linux box, the following commands can help:
CPU
vmstat -n 15 10 | awk '{now=strftime("%Y-%m-%d %T "); print now $0}' > CPUDataDump.csv &
vmstat is used to collect the CPU counters
-n prints the header only once instead of before every iteration
15 is the delay: stats are collected every 15 seconds
10 is the number of iterations in this example
awk '{now=strftime("%Y-%m-%d %T "); print now $0}' prepends a timestamp to each line of the dump
the trailing & keeps the dump running in the background
Memory
free -m -s 10 -c 10 | awk '{now=strftime("%Y-%m-%d %T "); print now $0}' > DataDumpMemoryfile.csv &
free is for memory stats collection
-m reports in megabytes (you can use -b for bytes, -k for kilobytes, -g for gigabytes)
-s 10 repeats the report every 10 seconds
-c 10 sets the number of iterations in this example
awk '{now=strftime("%Y-%m-%d %T "); print now $0}' prepends a timestamp to each line of the dump
the trailing & keeps the dump running in the background
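Putting the two samplers above into the same script as the application: start them in the background, run the program, then kill them when it exits. Here sleep 6 is a placeholder for the real benchmark binary, and the interval is shortened to 5 seconds.

```shell
#!/bin/sh
# Sample system CPU (vmstat) and memory (free) every 5 s in the background
# while the application under test runs, then stop sampling when it exits.

vmstat -n 5 | awk '{now=strftime("%Y-%m-%d %T "); print now $0; fflush()}' > CPUDataDump.csv &
vmstat_pid=$!

free -m -s 5 | awk '{now=strftime("%Y-%m-%d %T "); print now $0; fflush()}' > DataDumpMemoryfile.csv &
free_pid=$!

# Placeholder for the real application under test, e.g. ./api_benchmark
sleep 6

# Stop the background samplers once the run is over.
kill "$vmstat_pid" "$free_pid" 2>/dev/null
```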
I'd suggest using the time command and also vmstat. The first will give the CPU usage of the executable's run, and the second a periodic (e.g., once per second) dump of the system's CPU/memory/IO.
Example:
time dd if=/dev/zero bs=1K of=/dev/null count=1024000
1024000+0 records in
1024000+0 records out
1048576000 bytes (1.0 GB) copied, 0.738194 seconds, 1.4 GB/s
0.218u 0.519s 0:00.73 98.6% 0+0k 0+0io 0pf+0w <== that's time result