How to configure what procmail logs

Right now, in the non-verbose log, procmail does not record who the recipient is. It only logs the sender, the subject, the date, and the delivery:
From info@essegisistemi.it Tue Apr 15 20:33:19 2014
Subject: ***** SPAM 7.3 ***** Foto
Folder: /usr/lib/dovecot/deliver -d christian -m Junk 132417
Where can I configure procmail to include the To and Cc headers in the logfile?

You simply assign the strings you want to log to the LOG pseudo-variable.
Usually you want to add a newline explicitly. Something like this:
NL="
"
TO=`formail -zxTo:`
CC=`formail -zxCc:`
LOG=" To: $TO$NL Cc: $CC$NL"
These will usually end up before the "log abstract" (what you have in your question). If you need full control over what gets logged, maybe set LOGABSTRACT=no and implement your own log abstract instead. (This is fairly tricky, though.)
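For example, a minimal custom abstract might look like the sketch below; the exact fields logged here are an assumption, so adjust them to taste:

```
# Suppress procmail's built-in log abstract and write our own line per message
LOGABSTRACT=no
NL="
"
FROM=`formail -zxFrom:`
TO=`formail -zxTo:`
LOG="From: $FROM To: $TO$NL"
```

With LOGABSTRACT=no, procmail still logs errors, but the per-message From/Subject/Folder lines become yours to produce.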
Note also that logging could be asynchronous. The log abstract gets written all at once, but if you have many messages arriving at roughly the same time, you might want to add disambiguating information to the log entries, or (in desperation) force locking during logging so that no two messages can perform logging at the same time.
LOCKFILE=procmail.lock
# Critical section -- only one Procmail instance at a time can execute these recipes
LOG="fnord$NL"
:0w
| /usr/lib/dovecot/deliver -d "$USER" -m "$FOLDER"
# delivery succeeded, lockfile will be released
The disambiguating information could be just the process ID. To include it in the log abstract as well, you need to smuggle it in somehow. I'm not sure how to do that with Dovecot; is there an option you can pass which will simply be ignored by deliver but logged by Procmail?
TO=`formail -zxTo:`
LOG="[$$] To: $TO$NL"
CC=`formail -zxCc:`
LOG="[$$] Cc: $CC$NL"
# ... more recipes here ...
# deliver
:0w
| deliver -d "$USER" -m "$FOLDER" -o ignore=$$
... should end up logging something like Folder: deliver -d you -m INBOX -o ignore=1742 where 1742 would be the process-ID so that you can find the same PID in the previous log entries.

Related

Informatica Looping

I am looking for information on looping in Informatica. Specifically, I need to check if a source table has been loaded, if it has, move to next step, if not wait X minutes and check the status table again. I would prefer direction to a place I can learn this on my own, but I need to confirm this is even possible as I have not found anything on my google searches.
You can use a simple shell script to implement this wait-and-watch capability.
#!/bin/sh
# call it as script_name.sh
# It checks for data every 10 minutes and gives up after 2 hours in total.
# Change the values below if you want to.
# The source is assumed to be Oracle; adjust the query for your source.
interval=600
loop_count=12
counter=0
while true
do
    counter=`expr $counter + 1`
    db_value=`sqlplus -s user/pass@local_SID <<EOF
set heading off
set feedback off
SELECT count(*) FROM my_source_table;
exit
EOF`
    if [ "$db_value" -gt 0 ]; then
        echo "Data found."
        exit 0
    elif [ "$counter" -eq "$loop_count" ]; then
        echo "No data found in source after 2 hours."
        exit 1
    else
        sleep "$interval"
    fi
done
Then add this shell script (in a Command task) at the beginning of the workflow.
Use an Informatica link condition: if the status is 0, proceed; otherwise send an email saying the wait time is over.
You can refer to the pic below. This will send a mail if the wait time is over and the data is still not in the source.
In general, looping is not supported in Informatica PowerCenter.
One way is to use scripts, as discussed by Koushik.
Another way to do it is to have a Continuously Running Workflow with a timer. This is configurable on the Scheduler tab of your workflow:
Such configuration makes the workflow start again right after it succeeds. Over and over again.
Workflow would look like:
Start -> s_check_source -> Decision -> timer
|-> s_do_other_stuff -> timer
This way it will check the source. If the source has not been loaded, it triggers the timer; then the workflow succeeds and gets triggered again.
If the source turns out to be loaded, it triggers the other session and completes. You'd probably need another timer there as well, to wait until the next day, or until whenever you'd like the workflow to be triggered again.

Procmail not forwarding using ![my email address]

I have an account on a Linux server, and I'd like to have a copy of each non-spam email that's sent to this account forwarded to my Gmail account.
I added these lines to my .procmailrc file:
:0c:
* .
!sigils.email.address@gmail.com
Here they are in the context of the whole file (sorry for the wall of text, but I don't know procmail well enough to isolate the relevant fragment):
LINEBUF=4096
MAILDIR=/mail/$LOGNAME/Maildir
DEFAULT=/mail/$LOGNAME/Maildir/
#LOGFILE=$HOME/.pmlog
VERBOSE=no
:0
* ^From:.somebody@hotmail.com
.somebody/
:0
* ^Subject:.*test
.IN-testing/
:0
* ^From:.*Network
/dev/null
:0
* ^From:.*Microsoft
/dev/null
:0
* ^From:.*Corporation
/dev/null
# Spam filtering
:0
SCORE=|/usr/bin/spamprobe receive
:0 wf
|/usr/bin/formail -I "X-SpamProbe: $SCORE"
:0 a
*^X-SpamProbe: SPAM
.spam/
:0
./
:0c:
* .
!sigils.email.address@gmail.com
But nothing is being forwarded to my Gmail account. Emails are successfully reaching my account on the Linux server. I checked my Gmail spam folder, but they aren't there either. How do I actually set up copy forwarding?
The earlier delivering recipe takes care of the message, so your forwarding recipe never executes.
:0
./
Switch the order of the last two recipes, or move the c flag from the last recipe to this one.
Incidentally, you can omit the condition line entirely to act unconditionally, as you already do in this recipe; the "* ." condition in the recipe you added is unnecessary.
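For example, a corrected ending (a sketch based on the recipes quoted in the question) would forward the copy before the delivering recipe:

```
# forward a copy to Gmail; the c flag keeps the message in the pipeline
:0c
!sigils.email.address@gmail.com

# then deliver whatever reaches this point
:0
./
```

With this ordering, every message that survives the earlier recipes is both forwarded and delivered locally.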
Also, for basic troubleshooting, set VERBOSE=yes and examine the log - this would readily have allowed you to diagnose this yourself.
For more debugging tips, see e.g. http://www.iki.fi/era/mail/procmail-debug.html

How can I persuade syslog output on console to include timestamps?

Consider the following program:
#include <syslog.h>
int main()
{
openlog("test-app", LOG_CONS | LOG_NDELAY | LOG_PERROR | LOG_PID, LOG_USER);
setlogmask(LOG_UPTO(LOG_DEBUG));
syslog(LOG_DEBUG, "Testing!");
}
Note the use of LOG_PERROR, which emits messages to stderr as well as whatever file syslogd is configured to fill (in my case, /var/log/messages).
The console output is:
test-app[28072]: Testing!
The syslog file contains:
Apr 17 13:20:14 cat test-app[28072]: Testing!
Can I configure syslog so that the console output contains timestamps also? If so, how?
This is not doable using the syslog call on its own. The timestamp that you see in the log file is added after the log message has been sent to the daemon; it is the syslog daemon that applies the timestamp.
When you use the LOG_CONS and LOG_PERROR options, the copies of the message sent to the console and to stderr never pass through the daemon, so they never get timestamped.
Essentially, for those copies, LOG_CONS and LOG_PERROR bypass the daemon entirely; you would need to modify the implementation of the syslog call itself to add your own timestamp. Thankfully, that's pretty easy: the implementation of the syslog library call is in the glibc source, and the source isn't terribly complicated.

Linux - Detecting idleness

I need to detect when a computer is idle for a certain time period. My definition of idleness is:
No users logged in, either by remote methods or on the local machine
X server inactivity, with no movement of mouse or key presses
TTY keyboard inactivity (hopefully)
Since the majority of distros have now moved to logind, I should be able to use its DBUS interface to find out if users are logged in, and also to monitor logins/logouts. I have used xautolock to detect X idleness before, and I could continue using that, but xscreensaver is also available. Preferably however I want to move away from any specific dependencies like the screensaver due to different desktop environments using different components.
Ideally, I would also be able to base idleness on TTY keyboard inactivity, however this isn't my biggest concern. According to this answer, I should be able to directly query the /dev/input/* interfaces, however I have no clue how to go about this.
My previous attempts at making such a monitor have used Bash, due to the ease of changing a plain-text script file; however, I am happy to use C++ in case more advanced methods are required to accomplish this.
From a purely shell standpoint (since you tagged this bash), you can get really close to what you want.
#!/bin/sh
users_are_logged_in() {
who |grep -q .
return $?
}
x_is_blanked() {
local DISPLAY=:0
if xscreensaver-command -time |grep -q 'screen blanked'; then
return 0 # we found a blanked xscreensaver: return true
fi
# no blanked xscreensaver. Look for DPMS modes
xset -q |awk '
/DPMS is Enabled/ { dpms = 1 } # DPMS is enabled
/Monitor is On$/ { monitor = 1 } # The monitor is on
END { if(dpms && !monitor) { exit 0 } else { exit 1 } }'
return $? # true when DPMS is enabled and the monitor is not on
}
nobody_here() {
! users_are_logged_in && x_is_blanked
return $?
}
if nobody_here; then
sleep 2m
if nobody_here; then
# ...
fi
fi
This assumes that a user can log in in two minutes and that otherwise, there is no TTY keyboard activity.
You should verify that the who |grep works on your system (i.e. that who prints no headers). I had originally grepped for / but that won't work on FreeBSD. If who does print a header, try something like [ $(who |grep -c .) -gt 1 ], which tests whether who outputs more than one line.
I share your worry about the screensaver part; xscreensaver likely isn't running in the login manager (any other form of X would involve a user logged in, which who would detect), e.g. GDM uses gnome-screensaver, whose syntax would be slightly different. The DPMS part may be good enough, giving a far larger buffer for graphical logins than the two minutes for console login.
Using return $? in the last line of a function is redundant. I used it to clarify that we're actually using the return value from the previous line. nobody_here short circuits, so if no users are logged in, there is no need to run the more expensive check for the status of X.
Side note: Be careful about using the term "idle" as it more typically refers to resource (hardware, that is) consumption (e.g. CPU load). See the uptime command for load averages for the most common way of determining system (resource) idleness. (This is why I named my function nobody_here instead of e.g. is_idle)
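Since you mention logind: as a hedged sketch, users_are_logged_in could also ask systemd-logind directly via loginctl (assumed to be available; it falls back to who otherwise):

```shell
#!/bin/sh
# Variant of users_are_logged_in that queries systemd-logind when possible.
users_are_logged_in() {
    if command -v loginctl >/dev/null 2>&1; then
        # one line per session; --no-legend suppresses the header row
        [ "$(loginctl list-sessions --no-legend 2>/dev/null | wc -l)" -gt 0 ]
    else
        who | grep -q .
    fi
}

if users_are_logged_in; then
    echo "someone is logged in"
else
    echo "nobody is logged in"
fi
```

This also sees remote (SSH) sessions, which logind tracks alongside local ones.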

HornetQ, consumer can't find queue

I'm trying to use ActiveMQ-CPP with HornetQ. I'm using the ActiveMQ-CPP bundled example, but I'm having a hard time with it.
The producer works like a charm, but the consumer gives me the following message:
* BEGIN SERVER-SIDE STACK TRACE
Message: Queue /queue/exampleQueue does not exist
Exception Class
END SERVER-SIDE STACK TRACE *
FILE: activemq/core/ActiveMQConnection.cpp, LINE: 768
FILE: activemq/core/ActiveMQConnection.cpp, LINE: 774
FILE: activemq/core/ActiveMQSession.cpp, LINE: 350
FILE: activemq/core/ActiveMQSession.cpp, LINE: 281
Time to completion = 0.161 seconds.
The problem is that the queue exists. The code works all right with ActiveMQ+Openwire, but I'm not having the same luck with HornetQ+STOMP.
Any ideas?
Try setting the destination to the same queue address you defined in HornetQ.
Your queue is probably defined in HornetQ like this:
<queue name="exampleQueue">
<address>jms.queue.exampleQueue</address>
</queue>
So, try to connect to this address via STOMP.
See the following frames according to the protocol:
Subscribing to the queue
SUBSCRIBE
destination:jms.queue.exampleQueue
^#
Sending a message
SEND
destination:jms.queue.exampleQueue
it works
^#
As soon as the message is sent, you'll receive it on the session in which you subscribed to the queue:
MESSAGE
timestamp:1311355464983
redelivered:false
expires:0
subscription:subscription/jms.queue.exampleQueue
priority:0
message-id:523
destination:jms.queue.exampleQueue
it works
-- EDIT
There's one point left I would like to add...
HornetQ doesn't conform to STOMP's naming standards (see http://community.jboss.org/message/594176 ), so there's a possibility that activemq-cpp follows the behavior of activemq-nms, which "normalizes" queue names to the STOMP standard "/queue/YourQueue" (and causes naming issues).
If that's the case, even if you change your destination name to 'jms.queue.exampleQueue', activemq-cpp could normalize it to '/queue/jms.queue.exampleQueue'.
In NMS+HornetQ, there's no out-of-the-box way of avoiding this. The only choice is to edit NMS's source code and remove the part that normalizes queue names. Maybe the same fix applies to activemq-cpp.
HornetQ doesn't like the "/queue/" prefix for a SUBSCRIBE. I took that out of the ToStomp method in StompHelper and everything worked.