Procmail not forwarding using ![my email address] - procmail

I have an account on a Linux server, and I'd like to have a copy of each non-spam email that's sent to this account forwarded to my Gmail account.
I added these lines to my .procmailrc file:
:0c:
* .
!sigils.email.address#gmail.com
Here they are in the context of the whole file (sorry for the wall of text, but I don't know procmail well enough to isolate the relevant fragment):
LINEBUF=4096
MAILDIR=/mail/$LOGNAME/Maildir
DEFAULT=/mail/$LOGNAME/Maildir/
#LOGFILE=$HOME/.pmlog
VERBOSE=no
:0
* ^From:.somebody#hotmail.com
.somebody/
:0
* ^Subject:.*test
.IN-testing/
:0
* ^From:.*Network
/dev/null
:0
* ^From:.*Microsoft
/dev/null
:0
* ^From:.*Corporation
/dev/null
# Spam filtering
:0
SCORE=|/usr/bin/spamprobe receive
:0 wf
|/usr/bin/formail -I "X-SpamProbe: $SCORE"
:0 a
*^X-SpamProbe: SPAM
.spam/
:0
./
:0c:
* .
!sigils.email.address#gmail.com
But nothing is being forwarded to my Gmail account. Emails are successfully reaching my account on the Linux server. I checked my Gmail spam folder, but they aren't there either. How do I actually set up copy forwarding?

The earlier delivering recipe takes care of the message, so your forwarding recipe never executes.
:0
./
Switch the order of the last two recipes, or move the c flag from the last recipe to this one.
Incidentally, you can omit the condition entirely when a recipe should apply unconditionally, as you already do in this recipe but not in the new one you added (its * . condition is unnecessary).
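For example, here is a sketch of both options, reusing the recipes from your file (address kept exactly as you wrote it):
# Option 1: forward the copy first, then deliver as before
:0c
! sigils.email.address#gmail.com
:0
./
# Option 2: keep the order, but move the c flag to the delivering recipe
:0c
./
:0
! sigils.email.address#gmail.com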
Also, for basic troubleshooting, set VERBOSE=yes and examine the log - this would readily have allowed you to diagnose this yourself.
For more debugging tips, see e.g. http://www.iki.fi/era/mail/procmail-debug.html
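For instance, near the top of your .procmailrc (a sketch; any writable path will do, and you already have a LOGFILE line there, just commented out):
VERBOSE=yes
LOGFILE=$HOME/.pmlog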

Related

Why won't node zero execute some of the write statements to a log file

I have a production job where I use two nodes (0 = master and 1 = slave) via OpenMPI, and all the threads on each node via OpenMP.
I submit the job on the master.
The job opens a disk file on the master to log some info. (I see the same file is opened on the slave as well during the run.)
I have statements like
write(lu,*) 'pid=',pid,' some text'
and
write(6, *) 'pid=',pid,' some text'
one after the other. (unit 6 is the stdout -screen- in gfortran).
I see on screen that both statements are printed one after the other ( pid=0 and pid=1 ).
Strangely enough, most (not all) of the master's prints (pid=0) are absent from the log file.
This is puzzling. I would like to learn the rule. I thought both master and slave share the logfile.
I have a host file with two hosts, each requesting 32 threads (via the slots and max-slots settings), and I am running the following command in a script:
mpirun --hostfile hostfile --map-by node:pe=32 myexecutable
I would appreciate it if some expert could shed light on the issue.

Spamassassin's custom rule (for subject line filtering) doesn't work

I'm setting up SpamAssassin to use along with isbg to filter mail in my IMAP mail account. My ISP already has a pretty good spam filter that adds "[SPAM]" in front of the subject line of each message it detects; thus, I'm setting up a custom rule in SpamAssassin so that it adds a high score to any mail whose Subject line starts with "[SPAM]". My user_prefs file is:
required_score 9
score HTML_COMMENT_8BITS 0
score UPPERCASE_25_50 0
score UPPERCASE_50_75 0
score UPPERCASE_75_100 0
score OBSCURED_EMAIL 0
score SUBJ_ILLEGAL_CHARS 0
header SPAM_FILTRADO Subject =~ /^\s*\[SPAM\]/
score SPAM_FILTRADO 20
And yet, when I feed it a spam message to test it, it doesn't seem to trigger my rule. I feed it an email with this subject line, for example:
Subject: [SPAM] See Drone X Pro in action
And I analyse it in this way:
[paulo#myserver mails]$ spamc -R < spam7.txt
9.3/9.0
Spam detection software, running on the system "myserver", has
identified this incoming email as possible spam. The original message
has been attached to this so you can view it (if it isn't spam) or label
similar future email. If you have any questions, see
##CONTACT_ADDRESS## for details.
Content preview: Big Drone Companies Are Terrified Of This New Drone That Hit
The Market <http://www.fairfood.icu/uisghougw/pjarx44255sweouci/I31AAdtTTKmLsu_A6Dq7ZK_a47Ko45fCRXk7Fr9fqm4/BbYMgcZjieuj_YxMOSmnXetiI6e4Z37yS9H2zVIeHEilOpatuk8V8Mt0EtJDfLLE1llzj6MiwlLzR99DGODekcqeM7kn63lcFcp8fJutAsw>
[...]
Content analysis details: (9.3 points, 9.0 required)
pts rule name description
---- ---------------------- --------------------------------------------------
2.4 DNS_FROM_AHBL_RHSBL RBL: Envelope sender listed in dnsbl.ahbl.org
2.7 RCVD_IN_PSBL RBL: Received via a relay in PSBL
[193.17.4.113 listed in psbl.surriel.com]
-0.0 SPF_PASS SPF: sender matches SPF record
1.3 HTML_IMAGE_ONLY_24 BODY: HTML: images with 2000-2400 bytes of words
0.0 HTML_MESSAGE BODY: HTML included in message
1.6 RCVD_IN_BRBL_LASTEXT RBL: RCVD_IN_BRBL_LASTEXT
[193.17.4.113 listed in bb.barracudacentral.org]
1.3 RDNS_NONE Delivered to internal network by a host with no rDNS
There isn't anything about my rule.
I know that my user_prefs is being loaded because, after the section I pasted above, I have some email addresses set up in a whitelist, and when analysing emails coming from those addresses, Spamassassin correctly detects them.
What's wrong with my rule?
Support for custom rules in the user_prefs file is turned off by default.
You can enable allow_user_rules in the site-wide configuration to change that.
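For example (a sketch; the location of the site-wide configuration varies by distribution, /etc/mail/spamassassin/local.cf being a common one):
# /etc/mail/spamassassin/local.cf
allow_user_rules 1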

Simple libtorrent Python client

I tried creating a simple libtorrent Python client (for magnet URIs), and I failed: the program never gets past "downloading metadata".
If you could help me write a simple client, it would be amazing.
P.S. When I choose a save path, is the save path the folder which I want my data to be saved in, or the path for the data itself?
(I used code someone posted here.)
import libtorrent as lt
import time
ses = lt.session()
ses.listen_on(6881, 6891)
params = {
    'save_path': '/home/downloads/',
    'storage_mode': lt.storage_mode_t(2),
    'paused': False,
    'auto_managed': True,
    'duplicate_is_error': True}
link = "magnet:?xt=urn:btih:4MR6HU7SIHXAXQQFXFJTNLTYSREDR5EI&tr=http://tracker.vodo.net:6970/announce"
handle = lt.add_magnet_uri(ses, link, params)
ses.start_dht()
print 'downloading metadata...'
while (not handle.has_metadata()):
    time.sleep(1)
print 'got metadata, starting torrent download...'
while (handle.status().state != lt.torrent_status.seeding):
    s = handle.status()
    state_str = ['queued', 'checking', 'downloading metadata', \
                 'downloading', 'finished', 'seeding', 'allocating']
    print '%.2f%% complete (down: %.1f kb/s up: %.1f kB/s peers: %d) %s %.3f' % \
          (s.progress * 100, s.download_rate / 1000, s.upload_rate / 1000, \
           s.num_peers, state_str[s.state], s.total_download/1000000)
    time.sleep(5)
What happens is that the first while loop becomes infinite because the state never changes.
You have to call s = handle.status() inside the loop; once the metadata has arrived, the status changes and the loop stops. Alternatively, put the first while loop inside the other one so that the same thing happens.
Yes, the save path you specify is the one that the torrents will be downloaded to.
As for the metadata downloading part, I would add the following extensions first:
ses.add_extension(lt.create_metadata_plugin)
ses.add_extension(lt.create_ut_metadata_plugin)
Second, I would add a DHT bootstrap node:
ses.add_dht_router("router.bittorrent.com", 6881)
Finally, I would begin debugging the application by seeing if my network interface is binding or if any other errors come up (my experience with BitTorrent download problems, in general, is that they are network related). To get an idea of what's happening I would use libtorrent-rasterbar's alert system:
ses.set_alert_mask(lt.alert.category_t.all_categories)
And make a thread (with the following code) to collect the alerts and display them:
while True:
    ses.wait_for_alert(500)
    alert = ses.pop_alert()
    if not alert:
        continue
    print "[%s] %s" % (type(alert), alert.__str__())
Even with all of this working correctly, make sure that the torrent you are trying to download actually has peers. Even if there are a few peers, none of them may be configured correctly or support metadata exchange (exchanging metadata is not a standard BitTorrent feature). Try to load a torrent file (which doesn't require downloading metadata) and see if you can download successfully, to rule out some network issues.

How to set up the way procmail is logging

Right now, in the non-verbose log, procmail is not logging who the recipient is.
It just logs the sender, the subject, the date and the delivery.
From info#essegisistemi.it Tue Apr 15 20:33:19 2014
Subject: ***** SPAM 7.3 ***** Foto
Folder: /usr/lib/dovecot/deliver -d christian -m Junk 132417
Where can I configure it to include the To and Cc headers in the logfile?
You simply assign the strings you want to log to the LOG pseudo-variable.
Usually you want to add a newline explicitly. Something like this:
NL="
"
TO=`formail -zxTo:`
CC=`formail -zxCc:`
LOG=" To: $TO$NL Cc: $CC$NL"
These will usually end up before the "log abstract" (what you have in your question). If you need full control over what gets logged, maybe set LOGABSTRACT=no and implement your own log abstract instead. (This is fairly tricky, though.)
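A minimal sketch of such a hand-rolled abstract (just an illustration; you would still have to log the destination folder yourself from each delivering recipe):
LOGABSTRACT=no
NL="
"
FROM=`formail -zxFrom:`
SUBJECT=`formail -zxSubject:`
TO=`formail -zxTo:`
LOG="From: $FROM${NL}Subject: $SUBJECT${NL}To: $TO$NL"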
Note also that logging could be asynchronous. The log abstract gets written all at once, but if you have many messages arriving at roughly the same time, you might want to add disambiguating information to the log entries, or (in desperation) force locking during logging so that no two messages can perform logging at the same time.
LOCKFILE=procmail.lock
# Critical section -- only one Procmail instance at a time can execute these recipes
LOG="fnord$NL"
:0w
| /usr/lib/dovecot/deliver -d "$USER" -m "$FOLDER"
# delivery succeeded; release the global lockfile
LOCKFILE=
The disambiguating information could be just the process ID. In order to include it in the log abstract as well, you need to smuggle it in somehow. I'm not sure how to do that with Dovecot; is there an option you can pass that will simply be ignored but logged by procmail?
TO=`formail -zxTo:`
LOG="[$$] To: $TO$NL"
CC=`formail -zxCc:`
LOG="[$$] Cc: $CC$NL"
:
:
# deliver
:0w
| deliver -d "$USER" -m "$FOLDER" -o ignore=$$
... should end up logging something like Folder: deliver -d you -m INBOX -o ignore=1742 where 1742 would be the process-ID so that you can find the same PID in the previous log entries.

Linux - Detecting idleness

I need to detect when a computer is idle for a certain time period. My definition of idleness is:
No users logged in, either by remote methods or on the local machine
X server inactivity, with no movement of mouse or key presses
TTY keyboard inactivity (hopefully)
Since the majority of distros have now moved to logind, I should be able to use its DBUS interface to find out if users are logged in, and also to monitor logins/logouts. I have used xautolock to detect X idleness before, and I could continue using that, but xscreensaver is also available. Preferably however I want to move away from any specific dependencies like the screensaver due to different desktop environments using different components.
Ideally, I would also be able to base idleness on TTY keyboard inactivity, however this isn't my biggest concern. According to this answer, I should be able to directly query the /dev/input/* interfaces, however I have no clue how to go about this.
My previous attempts at making such a monitor have used Bash, due to the ease of changing a plain-text script file; however, I am happy to use C++ in case more advanced methods are required to accomplish this.
From a purely shell standpoint (since you tagged this bash), you can get really close to what you want.
#!/bin/sh

users_are_logged_in() {
    who |grep -q .
    return $?
}

x_is_blanked() {
    local DISPLAY=:0
    if xscreensaver-command -time |grep -q 'screen blanked'; then
        return 0 # we found a blanked xscreensaver: return true
    fi
    # no blanked xscreensaver. Look for DPMS modes
    xset -q |awk '
        /DPMS is Enabled/ { dpms = 1 }    # DPMS is enabled
        /Monitor is On$/  { monitor = 1 } # The monitor is on
        END { if(dpms && !monitor) { exit 0 } else { exit 1 } }'
    return $? # true when DPMS is enabled and the monitor is not on
}

nobody_here() {
    ! users_are_logged_in && x_is_blanked
    return $?
}

if nobody_here; then
    sleep 2m
    if nobody_here; then
        : # ... do whatever should happen when the machine is idle
    fi
fi
This assumes that a user can log in within two minutes and that otherwise there is no TTY keyboard activity.
You should verify that the who |grep works on your system (i.e. no headers). I had originally grepped for / but that won't work on FreeBSD. If who prints headers, maybe try [ $(who |grep -c .) -gt 1 ], which will tell you that who outputs more than one line.
I share your worry about the screensaver part; xscreensaver likely isn't running in the login manager (any other form of X would involve a user logged in, which who would detect), e.g. GDM uses gnome-screensaver, whose syntax would be slightly different. The DPMS part may be good enough, giving a far larger buffer for graphical logins than the two minutes for console login.
Using return $? in the last line of a function is redundant. I used it to clarify that we're actually using the return value from the previous line. nobody_here short circuits, so if no users are logged in, there is no need to run the more expensive check for the status of X.
Side note: Be careful about using the term "idle" as it more typically refers to resource (hardware, that is) consumption (e.g. CPU load). See the uptime command for load averages for the most common way of determining system (resource) idleness. (This is why I named my function nobody_here instead of e.g. is_idle)