Mutt and Maildir format + procmail sorting

1.) I'm a little confused about my Mutt configuration. I'm successfully using fetchmail and procmail to download and then sort all my messages, in Maildir format, into $HOME/Maildir/, which looks as follows:
$HOME/Maildir/
-work (cur, new, tmp)
-personal (cur, new, tmp)
-misc (cur, new, tmp)
-whatever (cur, new, tmp)
So - each of those Maildir-formatted folders receives new mail, all of which is handled by procmail. Now, what I'm confused about is the Mutt configuration. If (in .muttrc) I set folder to $HOME/Maildir, then Mutt will tell me (and it does) that $HOME/Maildir is NOT a mailbox, because it doesn't contain the cur, new and tmp subdirectories that the format requires. The thing is that my mail is already sorted by procmail into those subfolders. On the other hand, if I set folder=$HOME/Maildir/work, then I have access to that one directory but not the others, because I can't (I assume?) define more than one folder. I tried to set folder=$HOME/Maildir and then mailboxes =work =personal =misc =whatever, but again - $HOME/Maildir is not a mailbox. I could create 3 empty dirs in $HOME/Maildir (cur, new, tmp) so that Mutt recognises it as a mailbox, but mixing mail folders with plain directories is otherwise not recommended. How do I handle that?
What I need is a single folder, $HOME/Maildir, that both receives and stores messages (set move=no, since they reside in the same place all the time, just in different subfolders). I would very much appreciate any suggestions.
2.) A little general question - is it for some reason not recommended to use $HOME/something as a mail spool rather than /var/spool/mail/something? I found in a few places that the latter is the only "kosher" way to do mail on *nix systems. However, I like to have all my mail in one place without having to move read messages from the spool to storage folders. I often re-read them, answer some of the old messages, and moving between mailboxes in order to do so seems a little annoying. So - is there some special reason to use /var/spool/mail/ for new mail, other than it being the standard *nix mail directory?

There should be no harm in creating empty new, cur, and tmp directories in the root folder and not using them for anything. And even if you do use them, by mistake or actual intent, there is no harm; who says you cannot mix folders and directories? What if it's folders and subfolders?
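If you do create them, a minimal sketch of the setup could look like this (the folder names are the ones from the question; which mailbox spoolfile points at is just a choice):
mkdir -p ~/Maildir/cur ~/Maildir/new ~/Maildir/tmp
# in ~/.muttrc
set mbox_type=Maildir
set folder=$HOME/Maildir
set spoolfile="=work"       # or whichever folder you want to open on startup
set move=no                 # messages stay where procmail delivered them
mailboxes =work =personal =misc =whatever
With that, "c" followed by Tab cycles through the mailboxes procmail delivers into.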

There is no harm in using your home directory for delivering mail.
If you have a quota, deliveries into /var/spool/mail will not eat out of your home directory quota; on the other hand, it is possible that /var/spool/mail also has a quota, and of course, if your mail client wants to import delivered mail into your home directory when you open it, you need enough disk space there anyway.
In some sense, files in /var/spool/mail are still the administrator's responsibility, so if you are the victim of e.g. a mailbomb attack, there may be a lower threshold for the sysadmin to step in and, say, delete a few megs of tripe from there if your incoming messages have not yet been delivered to your home directory.

Related

How do I get the last modified date of a directory in Amazon S3?

So I'm aware that Amazon S3 doesn't really have directories. My question is: does this make it impossible to reliably get the last-modified timestamp of a "directory" in S3?
I know you can get the last-modified date of a file, as in this question.
I say "reliably" because it would be possible to define the latest last-modified timestamp of a file inside a directory as the last-modified timestamp of the directory. But that's not really accurate, since if a file inside a directory gets deleted, it wouldn't register as a change to that directory (indeed the deletion might cause the last-modified date to go backwards in time).
We're using boto to scrape S3.
If it's really important for you to know this, you could develop a solution using S3 event notifications. Each time a file is put into or deleted from a folder, you can have an SNS or Lambda event fired, and you could use that information to update a table or log somewhere, keeping the information available for when you need it.
Probably not a ton of work to do, but if it's critical to know, it is an avenue worth exploring.
http://docs.aws.amazon.com/AmazonS3/latest/dev/NotificationHowTo.html
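As a sketch of what the consuming side could look like, assuming a Lambda function subscribed to the bucket's notifications and a DynamoDB table keyed by prefix (the table name and schema here are made up for illustration):
import boto3

# hypothetical DynamoDB table keyed by prefix; the name is illustrative only
table = boto3.resource('dynamodb').Table('s3-prefix-mtimes')

def handler(event, context):
    # an S3 notification delivers one or more records per invocation
    for record in event['Records']:
        key = record['s3']['object']['key']
        prefix = key.rsplit('/', 1)[0] + '/' if '/' in key else ''
        # store the event time as the prefix's last-modified marker;
        # deletes update it too, unlike the "max over keys" approach
        table.put_item(Item={'prefix': prefix,
                             'last_modified': record['eventTime']})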
Since what we label as a directory is just part of the object name, there is no creation time, modification time, etc. - the "directory" does not really exist as an entity on its own. The object has a key, and when you add '/' to the name, client presentation applications treat that as a separator, split the name, and make it look like a path. As you suggested, there is no directory, and this is where the concept really differs from a traditional file system and from how end users interact with it.
I suggest asking yourself what you are trying to do and why the timestamp of the directory is important. E.J. Brennan suggests what you may be trying to do, and it is not a bad idea for the case he mentions. There is likely a different way to skin your cat.
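For the simpler "max over the keys" definition the question already mentions (with its deletion caveat), a boto3 sketch would be roughly:
import boto3

def latest_mtime(bucket, prefix):
    # newest LastModified among objects under the prefix;
    # deletions are invisible to this, as noted in the question
    s3 = boto3.client('s3')
    newest = None
    for page in s3.get_paginator('list_objects_v2').paginate(Bucket=bucket,
                                                             Prefix=prefix):
        for obj in page.get('Contents', []):
            if newest is None or obj['LastModified'] > newest:
                newest = obj['LastModified']
    return newest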

Mutt, how to stop mutt from changing email names

I recently switched to Mutt. Being able to back up emails sounds cool. I use rsync to do so, but I have a big headache. I use the Maildir format. Each time Mutt opens an email, it changes the file name of the message, e.g. it likes to append ",S" to the end. Then weeks later when I back up my mail, rsync is driven crazy. I guess Mutt does this because of some concurrency issues, but as a personal user I shouldn't have to worry about that. I want to tell Mutt to keep the names of email files permanently unchanged. Question: how?
The ,S is the maildir flag for seen ("mark as read"). That's the way maildirs work; I can't tell whether you can run mutt in a read-only mode as far as that is concerned.
Usually, when working with maildirs, it's a better idea to use a maildir-aware synchronization tool like mbsync. It knows about those flags and synchronizes more efficiently, because it can rely on all involved utilities being aware of the ways maildirs are handled (that is, don't change files in place, how to set flags, etc.).
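For illustration, a message's file name typically evolves like this as flags change (the part before the colon varies by delivery agent; the ":2," info section and flag letters are part of the maildir format):
new/1204680122.V801Ie8c.example.org          just delivered, unread
cur/1204680122.V801Ie8c.example.org:2,S      after mutt marks it seen
cur/1204680122.V801Ie8c.example.org:2,RS     after you reply to it
A plain rsync sees each rename as a deleted file plus a new file, which is exactly the headache described above.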

What's wrong with this user ignore file for Mercurial?

A little retrospective now that I've settled into Mercurial. Forget forget-files combined with hg remove. It's crazy and bass-ackwards. You can use hg remove once you've established that something in a forget file isn't being forgotten because the item in question was tracked before the original repo was created. Note that hg remove effectively clears tracked status, but it also schedules the file for deletion in anything that gets changes from your repo. If the file is ignored, however, the tracking deactivation still happens, but that delete-me changeset won't ever reach another repo, and for some reason will never delete in yours either, which IMO is counter-intuitive. It is a very sure sign that somebody - and I don't know these guys - is UNWILLING TO COMPROMISE ON DUH DESIGN PROBLEMS. The important thing to understand is that you don't determine what's important, Mercurial does. Except when you're merging on a pull, of course. It's entirely reasonable then. But I digress...
Ignore-file/remove is a good combo for already-tracked but very specific files you want forgotten, but if you're dealing with a larger quantity of built files determined with broader patterns, it's not worth the risk. Just go with a double-repo: pull -u from the remote repo to your syncing repo, then pull -u commits from your working repo, and merge in a repo whose sole purpose is to merge changes and pass them on - a place where your not-quite-tracked or untracked files (behavior is different when pulling rather than pushing, of course, because hey, why be consistent?) won't cause frustration. Trust me. The idea that you should have to have two repos just to get 'er done offends for good reason, AND THAT SO MANY OF US ARE DOING IT should suggest a serious !##$ing design problem, but it's much less painful than all the other awful things that will make you regret seeking a sensible alternative.
And use hg help. It's actually Mercurial's best feature and often better than the internet (which I don't fault for confusion on the matter of all things hg) for getting answers to everything that is confusing and counter-intuitive in this VCS.
/retrospective
# switch to regexp syntax.
syntax: regexp
#Config Files
#.Net
^somecompany\.Net[\\/]MasterSolution[\\/]SomeSolution[\\/]SomeApp[\\/]app\.config
^somecompany\.Net[\\/]MasterSolution[\\/]SomeSolution[\\/]SomeApp_test[\\/]App\.config
#and more of the same following
And in my mercurial.ini at the root of my user directory:
[ui]
username = ereppen
merge = bcomp
ignore = C:\<path to user ignore file>\.hgignore-config
Context:
I wrote an auto-config utility in node. I just want changes to the files it changes to get ignored. We have two teams and both aren't on the same page with making this a universal thing so it needs to be user-specific for now.
The config file is in place and pointed at by my ini file. I clone. I run the config utility, which changes the files, and hg stat reveals a list of every single file with an M next to it. I thought it was the utf-8 thing and explicitly set the file to utf-16 little endian. I don't think I'm doing anything with the regex that any modern flavor of regex worth actually calling regex wouldn't support.
The .hgignore file has no effect on files that are tracked. Its function is to stop you from seeing files you want ignored listed as "untracked". If you're seeing "M" then they're already added (you got them with the clone), so .hgignore does nothing.
The usual way config files that differ from machine to machine are handled is to put an app.config.sample in source control, have app.config in .hgignore, and have people do a copy when they're making their config edits.
Alternatively, if your config files allow for includes and overrides, you end them with include app-local.config and override any settings in an app-local.config, which you don't add to the repository but do include in .hgignore.
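Concretely, the two conventions might look like this (the file names are just the ones used in this answer):
# .hgignore - variant 1: track app.config.sample, ignore the real file
syntax: glob
app.config
# each developer then runs, once per clone:
#   cp app.config.sample app.config

# variant 2: app.config stays tracked but ends with an include of a
# local overrides file, and only that local file is ignored:
app-local.config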

Process queue as folder with files. Possible problems?

I have an executable that needs to process records in the database when the command arrives to do so. Right now, I am issuing commands via a TCP exchange, but I don't really like that because
a) the queue is not persistent between sessions
b) the TCP port might get locked
The idea I have is to create a folder and place files in it whose names match the commands I want to issue.
Like:
1.23045.-1.1
2.999.-1.1
Then, after the command has been processed, the file will be deleted or moved to an Errors folder.
Is this viable or are there some unavoidable problems with this approach?
P.S. The process will be used on a Linux system, so antivirus problems are out of the question.
Yes, a few.
First, there are all the problems associated with using a filesystem. Antivirus programs are one (though I cannot see why it doesn't apply to Linux - no delete locks?). Disk space, file and directory count maximums are others. Then, open file limits and permissions...
Second, race conditions. If there are multiple consumers, more than one of them might see and start processing the command before the first one has [re]moved it.
There are also the issues of converting commands to filenames and vice versa, and coming up with different names for a single command that needs to be issued multiple times. (Though these are programming issues, not design ones; they'll merely annoy.)
None of these may apply or be of great concern to you, in which case I say: Go ahead and do it. See what we've missed that Real Life will come up with.
I probably would use an MQ server for anything approaching "serious", though.
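If you do go the folder route, the usual defence against the race condition in the second point is that rename() is atomic within one filesystem, so a consumer "claims" a file by renaming it before doing any work. A sketch (the directory names are made up):
import os

QUEUE = '/var/spool/cmdqueue'            # hypothetical incoming directory
CLAIMED = '/var/spool/cmdqueue/work'     # claimed-but-unfinished commands

def claim_next():
    for name in sorted(os.listdir(QUEUE)):
        src = os.path.join(QUEUE, name)
        if not os.path.isfile(src):
            continue                     # skip the work/ and Errors/ subdirs
        try:
            dst = os.path.join(CLAIMED, name)
            os.rename(src, dst)          # atomic: exactly one consumer wins
            return dst
        except OSError:
            continue                     # another consumer claimed it first
    return None                          # queue is empty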

Logging Etiquette

I have a server program that I am writing. In this program, I log a lot. Is it customary in logging (for a server) to overwrite the log of previous runs, to append to the file with some sort of new-run header, or to create a new log file (it won't be restarted too often)?
Which of these approaches is the usual way of doing things under Linux/Unix/macOS?
Also, can anyone suggest a logging library for C++/C? I need one, regardless of the answer to the above question.
Take a look in /var/log/ - you'll see that files are structured like
serverlog
serverlog.1
serverlog.2
This is done by logrotate, which is called from a cronjob. But everything is simply in chronological order within the files, so you should just append to the same log file each time and let logrotate split it up if needed.
You can also add a configuration file to /etc/logrotate.d/ to control how a particular log is rotated. Depending on how big your logfiles are, it might be a good idea to add information about your logging there. You can take a look at other files in this directory to see the syntax.
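For example, a file such as /etc/logrotate.d/myserver (the name and the thresholds below are placeholders, not anything your distribution mandates) might contain:
/var/log/myserver/serverlog {
    weekly
    rotate 4
    compress
    delaycompress
    missingok
    notifempty
}
That keeps four weeks of compressed history around without your program having to know anything about rotation.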
This is a rather complex issue. I don't think that there is a silver bullet that will kill all your concerns in one go.
The first step in deciding what policy to follow would be to set your requirements. Why is each entry logged? What is its purpose? In most cases this will result in some rather concrete facts, such as:
You need to be able to compare the current log with past logs. Even when an error message is self-evident, the process that led to it can be determined much faster by playing spot-the-difference, rather than puzzling through the server execution flow diagram - or, worse, its source code. This means that you need at least one log from a past run - overwriting blindly is a definite No.
You need to be able to find and parse the logs without going out of your way. That means using whatever facilities and policies are already established. On Linux it would mean using the syslog facility for important messages, to allow them to appear in the usual places.
There is also some good advice to heed:
Time is important. Not only because there's never enough of it, but also because log files without proper timestamps for each entry are practically useless. Make sure that each entry has a timestamp - most system-wide logging facilities will do that for you. Also make sure that the clocks on all your computers are as accurate as possible - using NTP is a good way to do that.
Log entries should be as self-contained as possible, with minimal cruft. You don't need to have a special header with colors, bells and whistles to announce that your server is starting - a simple MyServer (PID=XXX) starting at port YYYYY would be enough for grep (or the search function of any decent log viewer) to find.
You need to determine the granularity of each logging channel. Sending several GB of debugging log data to the system logging daemon is not a good idea. A good approach might be to use separate log files for each logging level and facility, so that e.g. user activity is not mixed up with low-level data that is only useful when debugging the code.
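As a sketch of that separation (Python's standard logging module here purely to keep the example short - the question asks about C/C++, where a library from the link below would play the same role; the file names are invented):
import logging

log = logging.getLogger('myserver')
log.setLevel(logging.DEBUG)

# full detail, including debug noise, goes to its own file...
debug = logging.FileHandler('/var/log/myserver/debug.log')
debug.setLevel(logging.DEBUG)

# ...while day-to-day activity stays readable in a separate one
activity = logging.FileHandler('/var/log/myserver/activity.log')
activity.setLevel(logging.INFO)

fmt = logging.Formatter('%(asctime)s %(name)s[%(process)d] %(levelname)s %(message)s')
for handler in (debug, activity):
    handler.setFormatter(fmt)
    log.addHandler(handler)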
Make sure your log files are in one place, preferably separated from other applications. A directory with the name of your application is a good start.
Stay within the norm. Sure you may have devised a new nifty logfile naming scheme, but if it breaks the conventions in your system it could easily confuse even the most experienced operators. Most people will have to look through your more detailed logs in a critical situation - don't make it harder for them.
Use the system log handling facilities. E.g. on Linux that would mean appending to the same file and letting an external daemon like logrotate handle the log files. Not only would it be less work for you, it would also automatically keep you in line with any general logging policies.
Finally: always copy important log data to the system log as well. Operators watch the system logs. Please, please, please don't make them have to look at other places, just to find out that your application is about to launch the ICBMs...
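Continuing the sketch above, mirroring only the important messages to the system log is one more handler (again Python purely for brevity; a C/C++ program would call openlog()/syslog() directly):
import logging
from logging.handlers import SysLogHandler

log = logging.getLogger('myserver')
# only warnings and above reach the operator's system log
syslog = SysLogHandler(address='/dev/log')   # the usual socket on Linux
syslog.setLevel(logging.WARNING)
log.addHandler(syslog)
log.warning('MyServer (PID=4242) cannot bind port 8080')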
https://stackoverflow.com/questions/696321/best-logging-framework-for-native-c
For the logging, I would suggest creating a new log file and cleaning it out at a certain frequency to avoid it growing too fat. Overwriting the logs of previous runs is usually a bad idea.