Linux - Detecting idleness - c++

I need to detect when a computer is idle for a certain time period. My definition of idleness is:
No users logged in, either by remote methods or on the local machine
X server inactivity, with no movement of mouse or key presses
TTY keyboard inactivity (hopefully)
Since the majority of distros have now moved to logind, I should be able to use its DBus interface to find out whether users are logged in, and also to monitor logins/logouts. I have used xautolock to detect X idleness before, and I could continue using that, but xscreensaver is also available. Preferably, however, I want to move away from specific dependencies like the screensaver, since different desktop environments use different components.
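For example, I expect a check along these lines would do it (a sketch; loginctl ships with systemd, and the same information is available over DBus via busctl):
# Count active logind sessions (note that a display manager's greeter
# session may also show up here). The equivalent DBus call would be:
#   busctl call org.freedesktop.login1 /org/freedesktop/login1 \
#       org.freedesktop.login1.Manager ListSessions
if [ "$(loginctl list-sessions --no-legend | wc -l)" -eq 0 ]; then
    echo "no users logged in"
fi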
Ideally, I would also be able to base idleness on TTY keyboard inactivity; however, this isn't my biggest concern. According to this answer, I should be able to directly query the /dev/input/* interfaces, but I have no clue how to go about this.
My previous attempts at making such a monitor have used Bash, due to the ease of changing a plain-text script file; however, I am happy to use C++ in case more advanced methods are required to accomplish this.

From a purely shell standpoint (since you tagged this bash), you can get really close to what you want.
#!/bin/sh
users_are_logged_in() {
    who | grep -q .
    return $?
}
x_is_blanked() {
    local DISPLAY=:0
    if xscreensaver-command -time | grep -q 'screen blanked'; then
        return 0 # we found a blanked xscreensaver: return true
    fi
    # no blanked xscreensaver. Look for DPMS modes
    xset -q | awk '
        /DPMS is Enabled/ { dpms = 1 }    # DPMS is enabled
        /Monitor is On$/  { monitor = 1 } # The monitor is on
        END { if (dpms && !monitor) { exit 0 } else { exit 1 } }'
    return $? # true when DPMS is enabled and the monitor is not on
}
nobody_here() {
    ! users_are_logged_in && x_is_blanked
    return $?
}
if nobody_here; then
    sleep 2m
    if nobody_here; then
        # ...
    fi
fi
This assumes that a user can log in in two minutes and that otherwise, there is no TTY keyboard activity.
You should verify that the who |grep works on your system (i.e. that who prints no header). I had originally grepped for / but that doesn't work on FreeBSD. If who does print a header, maybe try [ $(who |grep -c .) -gt 1 ], which tells you that who outputs more than one line.
I share your worry about the screensaver part; xscreensaver likely isn't running in the login manager (any other form of X would involve a user being logged in, which who would detect). GDM, for example, uses gnome-screensaver, whose syntax is slightly different. The DPMS part may be good enough, giving a far larger buffer for graphical logins than the two minutes for console logins.
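If that case does need covering, the equivalent query might look roughly like this (a sketch; it assumes gnome-screensaver-command is installed and the greeter's display is :0):
gdm_is_blanked() {
    # gnome-screensaver-command prints e.g. "The screensaver is active"
    DISPLAY=:0 gnome-screensaver-command --query 2>/dev/null | grep -q 'is active'
}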
Using return $? in the last line of a function is redundant. I used it to clarify that we're actually using the return value from the previous line. nobody_here short circuits, so if users are logged in, there is no need to run the more expensive check for the status of X.
Side note: Be careful about using the term "idle" as it more typically refers to resource (hardware, that is) consumption (e.g. CPU load). See the uptime command for load averages for the most common way of determining system (resource) idleness. (This is why I named my function nobody_here instead of e.g. is_idle)
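As for the TTY keyboard part of the question: one way to cover it without X is to poll the /dev/input/event* devices directly. Here is a rough sketch (it assumes read access to those devices, which normally means running as root or being in the input group):
tty_input_within() {
    # Succeeds if any input device delivers an event within $1 seconds.
    # The devices are polled in parallel, so the total wait is about $1 seconds.
    local timeout="${1:-5}" dev pids="" pid status=1
    for dev in /dev/input/event*; do
        timeout "$timeout" head -c1 "$dev" >/dev/null 2>&1 &
        pids="$pids $!"
    done
    for pid in $pids; do
        wait "$pid" && status=0   # head exits 0 only if it actually read a byte
    done
    return $status
}
You could fold this into the script above, e.g. by replacing the plain sleep 2m with a check that tty_input_within 120 fails, so the two-minute wait doubles as a keyboard-activity test.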

Related

Informatica Looping

I am looking for information on looping in Informatica. Specifically, I need to check whether a source table has been loaded; if it has, move to the next step; if not, wait X minutes and check the status table again. I would prefer to be pointed to a place where I can learn this on my own, but I need to confirm this is even possible, as I have not found anything in my Google searches.
You can use a simple shell script to implement this wait-and-watch capability.
#!/bin/sh
# Call it as script_name.sh.
# It waits 10 minutes between checks; in total it waits about 2 hours. Change the values if you want to.
# The source is assumed to be Oracle; change the query as per your source.
interval=600
loop_count=12
counter=0
while true
do
    counter=$(expr $counter + 1)
    db_value=$(sqlplus -s user/pass@local_SID <<EOF
set heading off
set feedback off
SELECT count(*) FROM my_source_table;
exit
EOF
)
    if [ $db_value -gt 0 ]; then
        echo "Data Found."
        exit 0
    else
        if [ $counter -eq $loop_count ]
        then
            echo "No data found in source after 2 hours"
            exit 1
        else
            sleep $interval
        fi
    fi
done
And add this shell script (in a Command task) to the beginning of the workflow.
Then use an Informatica link condition: if the status is 0, proceed; otherwise send an email saying that the wait time is over.
This will send a mail if the wait time is over and there is still no data in the source.
In general, looping is not supported in Informatica PowerCenter.
One way is to use scripts, as discussed by Koushik.
Another way to do that is to have a Continuously Running Workflow with a timer. This is configurable on the Scheduler tab of your workflow.
Such configuration makes the workflow start again right after it succeeds. Over and over again.
The workflow would look like this:
Start -> s_check_source -> Decision -> timer
                                    |-> s_do_other_stuff -> timer
This way it will check the source. Then, if it has not been loaded, it will trigger the timer, succeed, and get triggered again.
If the source turns out to be loaded, it will trigger the other session and complete; you'd probably need another timer here to wait until the next day, or basically until whenever you'd like the workflow to be triggered again.

Why does "perf stat -B -e branches,branch-misses ./a.out" always report zero branch-misses for a process on CentOS 7? [duplicate]

I want to evaluate the performance of a process using the "branch-misses" hardware event. But when I use perf stat to collect "branch-misses" data, it always returns 0, apparently because my OS is running inside KVM.
It is difficult for me to get a real machine to do the test, so I want to know whether there is any way to get "branch-misses" from "perf stat" while running inside KVM.
I really need your help. Thanks a lot.

How to perform a subroutine every 5 seconds?

I have a subroutine to check whether a disk is mounted.
I would like to know how to make this subroutine run every 5 seconds.
Thanks in advance!
on checkMyDiskIsMounted()
    tell application "Finder"
        activate
        if exists disk "myDisk" then
            --do anything
        else
            --do anything
        end if
    end tell
end checkMyDiskIsMounted
Using things like AppleScript's delay command, a shell utility such as sleep, or even a tight repeat loop should all be avoided, as those tend to block the user interface while they are running.
A repeating timer could be used to periodically poll, but instead of wasting time continually checking for something that may or may not happen, NSWorkspace can be used, as it provides notifications for exactly this kind of thing (amongst others). The way this works is your application registers for the particular notifications that it is interested in, and the specified handler is called when (if) the event occurs.
Note that the following script includes statements so that it can be run from the Script Editor as an example - observers are added to the application instance, and will stick around until they are removed or the application is quit:
use AppleScript version "2.4" -- Yosemite (10.10) or later
use framework "AppKit"
use scripting additions
on run -- or whatever initialization handler
    # set up notifications
    tell current application's NSWorkspace's sharedWorkspace's notificationCenter
        its addObserver:me selector:"volumeMounted:" |name|:(current application's NSWorkspaceDidMountNotification) object:(missing value)
        its addObserver:me selector:"volumeUnmounted:" |name|:(current application's NSWorkspaceDidUnmountNotification) object:(missing value)
    end tell
end run
on volumeMounted:aNotification -- do something on mount
    set volumeName to (NSWorkspaceVolumeLocalizedNameKey of aNotification's userInfo) as text
    display notification "The volume " & quoted form of volumeName & " was mounted." with title "Volume mounted" sound name "Hero" -- or whatever
end volumeMounted:
on volumeUnmounted:aNotification -- do something on unmount
    set volumeName to (NSWorkspaceVolumeLocalizedNameKey of aNotification's userInfo) as text
    display notification "The volume " & quoted form of volumeName & " was unmounted." with title "Volume unmounted" sound name "Funk" -- or whatever
end volumeUnmounted:
Four options:
1. A repeat loop with delay, as suggested by matt:
repeat
    -- code
    delay 5
end repeat
2. A (stay-open) applet with an idle handler:
on run
    -- do initializations
end run
on idle
    -- code
    return 5
end idle
3. An AppleScriptObjC notification, as suggested by red_menace.
4. A launchd agent observing the /Volumes folder (a sketch follows below).
Options 3 and 4, which are notified of a change inexpensively, are preferable; the first two options, which poll periodically, are discouraged.
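For option 4, a minimal launchd agent using WatchPaths might look something like the sketch below (the label and script path are placeholders; save it as a .plist in ~/Library/LaunchAgents and load it with launchctl):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.volumewatcher</string>
    <key>ProgramArguments</key>
    <array>
        <string>/usr/bin/osascript</string>
        <string>/Users/yourname/Scripts/checkMyDiskIsMounted.scpt</string>
    </array>
    <key>WatchPaths</key>
    <array>
        <string>/Volumes</string>
    </array>
</dict>
</plist>
launchd then runs the script whenever something under /Volumes changes, i.e. on every mount or unmount, instead of the script polling every 5 seconds.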

How to execute a command on multiple servers

I have a set of servers (150) to log into and a command to run (to get the disk space). How can I execute this command on each server?
Supposing the script takes 1 minute to get the command's report from a single server, how can I send the report for all the servers every 10 minutes?
use strict;
use warnings;
use Net::SSH::Perl;
use Filesys::DiskSpace;
# I have more than 100 servers..
my %hosts = (
    'localhost' => {
        user     => "z",
        password => "qumquat",
    },
    '129.221.63.205' => {
        user     => "z",
        password => "aardvark",
    },
);
# file system /home or /dev/sda5
my $dir = "/home";
my $cmd = "df $dir";
foreach my $host (keys %hosts) {
    my $ssh = Net::SSH::Perl->new($host, port => 22, debug => 1, protocol => "2,1");
    $ssh->login($hosts{$host}{user}, $hosts{$host}{password});
    my ($out) = $ssh->cmd($cmd);
    print "$out\n";
}
It has to send the disk space output for each server.
Is there a reason this needs to be done in Perl? There is an existing tool, dsh, which provides precisely this functionality of using ssh to run a shell command on multiple hosts and report the output from each. It also has the ability, with the -c (concurrent) switch, to run the command at the same time on all hosts rather than waiting for each one to complete before going on to the next, which you would need if you want to monitor 150 machines every 10 minutes but it takes 1 minute to check each host.
To use dsh, first create a file in ~/.dsh/group/ containing a list of your servers. I'll put mine in ~/.dsh/group/test-group with the content:
galera-1
galera-2
galera-3
Then I can run the command
dsh -g test-group -c 'df -h /'
And get back the result:
galera-3: Filesystem Size Used Avail Use% Mounted on
galera-3: /dev/mapper/debian-system 140G 36G 99G 27% /
galera-1: Filesystem Size Used Avail Use% Mounted on
galera-1: /dev/mapper/debian-system 140G 29G 106G 22% /
galera-2: Filesystem Size Used Avail Use% Mounted on
galera-2: /dev/mapper/debian-system 140G 26G 109G 20% /
(They're out of order because I used -c, so the command was sent to all three servers at once and the results were printed in the order the responses were received. Without -c, they would appear in the same order the servers are listed in the group file, but then it would wait for each response before connecting to the next server.)
But, really, with the talk of repeating this check every 10 minutes, it sounds like what you really want is a proper monitoring system such as Icinga (a high-performance fork of the better-known Nagios), rather than just a way to run commands remotely on multiple machines (which is what dsh provides). Unfortunately, configuring an Icinga monitoring system is too involved for me to provide an example here, but I can tell you that monitoring disk space is one of the checks that are included and enabled by default when using it.
There is a ready-made tool called Ansible for exactly this purpose. It lets you define your list of servers, group them, and execute commands on all of them.
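For example, an ad-hoc run could look roughly like this (a sketch; the inventory file and group name are made up):
# inventory.ini -- a hypothetical inventory listing the 150 hosts
# [diskcheck]
# server001
# server002
ansible diskcheck -i inventory.ini -m shell -a 'df -h /home'
Ansible contacts the hosts in parallel (the forks setting controls how many at once), so checking 150 servers every 10 minutes from a cron job is feasible.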

lockfile purpose in init.d daemon scripts (linux)

When looking at various daemon scripts in /etc/init.d/, I can't seem to understand the purpose of the 'lockfile' variable. It seems like the 'lockfile' variable is not being checked before starting the daemon.
For example, some code from /etc/init.d/ntpd:
prog=ntpd
lockfile=/var/lock/subsys/$prog
start() {
    [ "$EUID" != "0" ] && exit 4
    [ "$NETWORKING" = "no" ] && exit 1
    [ -x /usr/sbin/ntpd ] || exit 5
    [ -f /etc/sysconfig/ntpd ] || exit 6
    . /etc/sysconfig/ntpd
    # Start daemons.
    echo -n $"Starting $prog: "
    daemon $prog $OPTIONS
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && touch $lockfile
    return $RETVAL
}
What is the 'lockfile' variable doing?
Also, when writing my own daemon in C++ (such as following the example at the bottom of http://www.itp.uzh.ch/~dpotter/howto/daemonize), do I put the compiled binary directly in /etc/init.d/, or do I put a script there that calls the binary (i.e. replacing the 'daemon $prog' in the code above with a call to my binary)?
The whole thing is a very fragile and misguided attempt to keep track of whether a given daemon is running in order to know whether/how to shut it down later. Using pids does not help, because a pid is meaningless to any process except the direct parent of a process; any other use has unsolvable and dangerous race conditions (i.e. you could end up killing another unrelated process). Unfortunately, this kind of ill-designed (or rather undesigned) hackery is standard practice on most unix systems...
There are a couple of approaches to solving the problem correctly. One is the systemd approach, but systemd is disliked in some circles for being "bloated" and for making it difficult to use a remote /usr mounted after initial boot. In any case, solutions will involve either:
Use of a master process that spawns all daemons as direct children (i.e. inhibiting "daemonizing" within the individual daemons) and which thereby can use their pids to watch for them exiting, keep track of their status, and kill them as desired.
Arranging for every daemon to inherit an otherwise-useless file descriptor, which it will keep open and atomically close only as part of process termination. Pipes (anonymous or named FIFOs), sockets, or even ordinary files are all possibilities, but file types which give EOF as soon as the "other end" is closed are the most suitable, since it's possible to block waiting for this status. With ordinary files, the link count (from stat) could be used, but there's no way to wait on it without repeated polling (see the sketch after this list).
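As a toy illustration of the second approach in shell (a sketch only, not how any real init system does it; mydaemon is a placeholder, and it assumes the daemon does not close inherited descriptors when it puts itself in the background):
mkfifo /run/mydaemon.watch
# The daemon (and anything it forks) inherits fd 9, the write end of the FIFO.
( exec 9> /run/mydaemon.watch; exec mydaemon ) &
# Reading the FIFO blocks until every process holding the write end has exited,
# then returns EOF -- no pidfile needed.
cat /run/mydaemon.watch > /dev/null
echo "mydaemon and everything it spawned have terminated"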
In any case, the lockfile/pidfile approach is ugly, error-prone, and hardly any better than lazy approaches like simply killall foobard (which of course are also wrong).
What is the 'lockfile' variable doing?
It could be nothing, or it could be, e.g., injected into $OPTIONS by this line:
. /etc/sysconfig/ntpd
The daemon takes the option -p pidfile, which is where $lockfile could go. The daemon writes its PID to this file.
do I put the compiled binary directly in /etc/init.d/ or do I put a script there that calls the binary
The latter. There should be no binaries in /etc, and it's customary to edit /etc/init.d scripts for configuration changes. Binaries should go in /(s)bin or /usr/(s)bin.
The rc scripts keep track of whether or not it is running and don't bother stopping what is not running.
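For context, the matching stop() function in this style of script is where the lock file pays off: the runlevel-change logic only bothers stopping services whose /var/lock/subsys entry exists, and stop() removes that entry again. A sketch following the Red Hat convention (killproc comes from /etc/init.d/functions):
stop() {
    echo -n $"Shutting down $prog: "
    killproc $prog
    RETVAL=$?
    echo
    [ $RETVAL -eq 0 ] && rm -f $lockfile  # cleared, so rc knows the service is down
    return $RETVAL
}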