Is there a way to have Mutt's index show if a message has an attachment?
This would be similar to how Gmail shows a paperclip next to messages with an attachment.
I can limit the messages with attachments using ~X 1-15, but it's nice to see whether messages have an attachment in the index.
You can add the number of attachments (%X) to the index format setting.
For example, this shows the number of attachments after the status flags:
set index_format="%4C %Z %X %{%y%m%d} %-12.12L %?M?(#%03M)&(%4c)? %?y?(%.20Y) ?%s"
Or, if you only want an indicator, you can use conditionals. If instead of %X you put %?X?A&-?, you'll get A for messages with attachments, and - for messages without.
Search for the chapter Format strings in the supplied documentation.
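For example, combining the conditional with the first format string above gives something like this (a sketch; the rest of the string is unchanged):

```
set index_format="%4C %Z %?X?A&-? %{%y%m%d} %-12.12L %?M?(#%03M)&(%4c)? %?y?(%.20Y) ?%s"
```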
To put a paper clip for an attachment, the following works for me (and also puts blanks correctly).
set index_format="%Z %?X?📎& ? %{!%a %b%d'%y %I:%M:%S%p%Z} %-12.12L %?M?(#%03M)&(%4c)? %?y?(%.20Y) ?%s"
I made it a bit more concise to save on screen space...
I am new to Boost.Log and am using it to develop a logger library as a wrapper on top of Boost.Log. My problem is finding a convenient way to emit a counter of consecutive repeated log messages instead of printing the same message multiple times.
For example (the ADD_LOG() macro is in my library and calls BOOST_LOG_SEV(...)):
ADD_LOG("Hello");
ADD_LOG("Hello");
ADD_LOG("Hello");
ADD_LOG("Some Different Hello");
I want the output log file to look like this (sample_0.log):
........................................................
[TimeStamp] [Filter] Hello
[TimeStamp] [Filter] Skipped 2 duplicate messages!
[TimeStamp] [Filter] Some Different Hello
.......................................................
I am using this example with the text file backend, and the TimeStamp and Filter parts are OK. My problem is skipping the duplicates, perhaps by setting filters or something else.
I think syslog on Linux has this feature with some configuration.
Boost.Log does not implement such log record accumulation; you will have to implement it yourself. You can do this by writing a sink backend that buffers the last log record message and compares it with the next one. Note that you should not compare the whole record or the formatted string, because those will likely differ due to timestamps, record counters, and other frequently changing attribute values you might use.
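A minimal sketch of that buffering logic, independent of the Boost.Log sink machinery (the class name and method names are my own; in a real backend this would be driven from your consume() override, fed the message text with the timestamp stripped):

```cpp
#include <cassert>
#include <string>
#include <vector>

// Buffers the last message; when a different message arrives after one or
// more repeats, emits a "Skipped N duplicate messages!" line first.
class DedupBuffer {
public:
    // Feed one log message; returns the lines that should actually be written.
    std::vector<std::string> consume(const std::string& msg) {
        std::vector<std::string> out;
        if (msg == last_) {
            ++repeats_;          // swallow the duplicate
            return out;
        }
        flush(out);              // report any swallowed duplicates first
        out.push_back(msg);
        last_ = msg;
        return out;
    }

    // Call on sink flush/shutdown so trailing duplicates are still reported.
    void flush(std::vector<std::string>& out) {
        if (repeats_ > 0)
            out.push_back("Skipped " + std::to_string(repeats_) +
                          " duplicate messages!");
        repeats_ = 0;
    }

private:
    std::string last_;
    std::size_t repeats_ = 0;
};
```

Feeding it "Hello" three times and then "Some Different Hello" produces exactly the three lines shown in the desired sample_0.log above.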
I'm trying to obtain the send date of an .msg email message file. After endless searching, I've concluded that the send date is not kept in its own stream within the file (but please correct me if I'm wrong). Instead, it appears that the date must be obtained from the stream containing the standard email headers (a stream named __substg1.0_007D001F).
So I've managed to obtain the email header stream and store it in a buffer. At this point, I need to find and parse the Date field from the headers. I'm finding this difficult, because I don't believe I can use a standard email-parsing C++ library. After all, I only have a header stream--not an entire, standard email file.
I'm currently trying a regex, perhaps something like this:
// Note: without std::regex_constants::multiline (C++17), '^' anchors only at
// the very start of the input, so the search would miss a Date: header that
// is not the first header line.
std::wregex regexDate(L"^Date:(.*)\r\n", std::regex_constants::multiline);
std::wsmatch match;
if (std::regex_search(strHeader, match, regexDate)) {
    //...
}
But I'm reluctant to use regex (I'm concerned that it'll be error-prone), and I'm wondering if there's a more robust, accepted approach to parsing headers. Perhaps splitting the header string on new lines and finding the one that begins with Date:? Any guidance would be greatly appreciated.
One other consideration: I'm not sure it's possible to read in the header stream line by line, because IStream doesn't have a get line method.
(Side note: I've also tried obtaining message data using C++ Outlook automation, but that seems to involve some security and compatibility issues, so it won't work out.)
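For what it's worth, the split-on-newlines approach can be made fairly robust if you also unfold continuation lines (RFC 2822 allows a header value to wrap onto following lines that begin with whitespace). A sketch, using a narrow std::string for brevity (adapt to std::wstring as needed):

```cpp
#include <cassert>
#include <string>

// Returns the value of the given header field (e.g. "Date"), with folded
// continuation lines unfolded, or an empty string if the field is absent.
std::string findHeader(const std::string& headers, const std::string& name) {
    const std::string prefix = name + ":";
    std::size_t pos = 0;
    while (pos < headers.size()) {
        std::size_t eol = headers.find("\r\n", pos);
        if (eol == std::string::npos) eol = headers.size();
        if (headers.compare(pos, prefix.size(), prefix) == 0) {
            std::string value = headers.substr(pos + prefix.size(),
                                               eol - pos - prefix.size());
            // Unfold: append continuation lines that start with SP or TAB.
            std::size_t next = eol + 2;
            while (next < headers.size() &&
                   (headers[next] == ' ' || headers[next] == '\t')) {
                std::size_t cont = headers.find("\r\n", next);
                if (cont == std::string::npos) cont = headers.size();
                value += headers.substr(next, cont - next);
                next = cont + 2;
            }
            // Trim leading whitespace from the value.
            std::size_t first = value.find_first_not_of(" \t");
            return first == std::string::npos ? "" : value.substr(first);
        }
        pos = eol + 2;
    }
    return "";
}
```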
The send date is stored in a .msg file, but as you note, it is not in its own stream. As a short, fixed-width value, it can be found in the __properties_version1.0 stream object under the root entry (or under an attachment object for embedded messages), with the property tag 0x00390040 (property ID 0x0039, type 0x0040): the PidTagClientSubmitTime property, which is described in the MS-OXOMSG documentation as
Contains the current time, in UTC, when the email message is submitted.
MS-OXCMAIL Section 2.2.3.2.2: Sent time elaborates on this:
To set the value of the PidTagClientSubmitTime property ([MS-OXOMSG] section 2.2.3.11), clients MUST set the Date header value, as specified in [RFC2822].
This has the property type 0x0040, PtypTime, which, per the list of Property Data Types, is:
8 bytes; a 64-bit integer representing the number of 100-nanosecond intervals since January 1, 1601
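So once you have read those 8 bytes out of the properties stream, converting them to a Unix timestamp is plain arithmetic; a sketch (11644473600 is the number of seconds between 1601-01-01 and the Unix epoch):

```cpp
#include <cassert>
#include <cstdint>
#include <ctime>

// Converts a PtypTime value (100-nanosecond intervals since 1601-01-01 UTC,
// as read from the __properties_version1.0 stream) to a Unix time_t.
std::time_t filetimeToUnix(std::uint64_t ft) {
    const std::uint64_t EPOCH_DIFF = 11644473600ULL;  // seconds, 1601 -> 1970
    return static_cast<std::time_t>(ft / 10000000ULL - EPOCH_DIFF);
}
```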
I have a device that spits out data with a checksum of some kind (8-bit) that I would like to reverse engineer (a Balboa spa WiFi module, for those interested).
The messages are:
7E1DFFAF130000640B2B00000100000400000000000000000064000000A57E
Where "7E" is the header/footer of the message, 1D is the length, and A5, in this case, is the checksum byte.
I've tried feeding these into reveng, but it just spits out "no models found" no matter how I set the parameters. What am I doing wrong?
Some example data with the header/footer stripped off, checksums at the end:
1DFFAF130000640B2800000100000400000000000000000064000000D1
1DFFAF130000640B2900000100000400000000000000000064000000FD
1DFFAF130000640B2A0000010000040000000000000000006400000089
1DFFAF130000640B2B00000100000400000000000000000064000000A5
Thanks
reveng -w 8 -s followed by those four strings gives me a result. However, you don't have enough data to resolve the parameters; you need more such messages with CRCs, including messages of differing lengths.
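If reveng keeps saying "no models found", another option is to brute-force the (small) non-reflected CRC-8 parameter space yourself and check the candidates against every captured message. A sketch of the building block (this is a generic CRC-8, not a claim about Balboa's actual parameters):

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Plain non-reflected (MSB-first) CRC-8 over a byte buffer.
std::uint8_t crc8(const std::vector<std::uint8_t>& data,
                  std::uint8_t poly, std::uint8_t init, std::uint8_t xorout) {
    std::uint8_t crc = init;
    for (std::uint8_t b : data) {
        crc ^= b;
        for (int i = 0; i < 8; ++i)
            crc = (crc & 0x80) ? static_cast<std::uint8_t>((crc << 1) ^ poly)
                               : static_cast<std::uint8_t>(crc << 1);
    }
    return crc ^ xorout;
}
```

Loop poly, init, and xorout over 0..255, keep only the combinations where crc8(payload, poly, init, xorout) equals the trailing checksum byte of every capture, and try reflected variants if nothing turns up; a handful of captures usually narrows it to one or two candidates.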
I have two otherwise identical posts on a Facebook page that I administer. One post we'll call "full" returns the full range of insight values (31) I'd expect even when the values are zero, while the other which we'll call "subset" returns only a very limited subset of values (7). See below for the actual values returned.
Note that I've confirmed this is the case by using both the GUI-driven export to Excel and the Facebook Graph API Explorer (https://developers.facebook.com/tools/explorer).
My first thought was that the API suppresses certain values, such as post_negative_feedback, if they are zero (i.e., nobody has clicked hide or report as spam/abusive), but this is not the case. The "full" post has no such reports (or at the very least, the return values for all the post_negative_* fields are zero).
I've even tried intentionally reporting the post with no negative return values as spam, and then repulling what I thought was a real-time field (i.e., post_negative_feedback), but data still comes back empty:
{
"data": [
],
(paging data)
}
What gives?
Here is the more limited subset returned for the problematic post:
post_engaged_users
post_impressions
post_impressions_fan
post_impressions_fan_unique
post_impressions_organic
post_impressions_organic_unique
post_impressions_unique
And here is the full set returned for most other posts (with asterisks added to show the subset returned above):
post_consumptions
post_consumptions_by_type
post_consumptions_by_type_unique
post_consumptions_unique
*post_engaged_users
*post_impressions
post_impressions_by_story_type
post_impressions_by_story_type_unique
*post_impressions_fan
post_impressions_fan_paid
post_impressions_fan_paid_unique
*post_impressions_fan_unique
*post_impressions_organic
*post_impressions_organic_unique
post_impressions_paid
post_impressions_paid_unique
*post_impressions_unique
post_impressions_viral
post_impressions_viral_unique
post_negative_feedback
post_negative_feedback_by_type
post_negative_feedback_by_type_unique
post_negative_feedback_unique
post_stories
post_stories_by_action_type
post_story_adds
post_story_adds_by_action_type
post_story_adds_by_action_type_unique
post_story_adds_unique
post_storytellers
post_storytellers_by_action_type
The issue (besides "why does this happen?") is that I've tried giving negative feedback to the post that fails to report any count whatsoever for this, and I still receive no data (I would expect "1" or something around there). I started out waiting the obligatory 15 minutes (it's a real-time field), and when that didn't work, I gave it a full 24 hours. What gives?
I have tabular data to be sent from server to client, and I am analyzing whether I should go for a CSV-style format or XML.
The data I send can be in the MBs; the server will stream it, and the client will read it line by line to start parsing the output as it arrives (the client can't wait for all the data to come).
My present thinking is that CSV would be good: it will reduce the data size and can be parsed faster.
XML is a standard; my concerns are parsing the data as it arrives (live parsing) and the data size.
What would be the best solution?
Thanks for all the valuable suggestions.
If it is tabular data and the table is relatively fixed and regular, I would go for a CSV format, especially if it is one server and one client.
XML has some advantages if you have multiple clients and want to validate the file format before using the data. On the other hand, XML has cornered the market on code bloat, so the amount transferred will be much larger.
I would use CSV, with a header that indicates the name of each field:
id, surname, givenname, phone-number
0, Doe, John, 555-937-911
1, Doe, Jane, 555-937-911
As long as you do not forget the header, you should be fine if the data format ever changes. Of course, the client needs to be updated before the server starts sending new streams.
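Looking fields up by header name rather than by column position is what makes that robustness work; a minimal sketch (assumes fields don't contain quoted commas):

```cpp
#include <cassert>
#include <map>
#include <sstream>
#include <string>
#include <vector>

// Splits one CSV line on commas, trimming surrounding spaces.
std::vector<std::string> splitCsv(const std::string& line) {
    std::vector<std::string> fields;
    std::stringstream ss(line);
    std::string f;
    while (std::getline(ss, f, ',')) {
        std::size_t a = f.find_first_not_of(' ');
        std::size_t b = f.find_last_not_of(' ');
        fields.push_back(a == std::string::npos ? "" : f.substr(a, b - a + 1));
    }
    return fields;
}

// Maps header names to column indices, so rows are addressed by name.
std::map<std::string, std::size_t> headerIndex(const std::string& headerLine) {
    std::map<std::string, std::size_t> idx;
    std::vector<std::string> names = splitCsv(headerLine);
    for (std::size_t i = 0; i < names.size(); ++i) idx[names[i]] = i;
    return idx;
}
```

If the server later reorders columns or appends new ones, row[idx["surname"]] still points at the right field without any change to the parsing logic.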
If not all clients can be updated easily, then you need a more lenient messaging system.
Google Protocol Buffer has been designed for this kind of backward/forward compatibility issues, and combines this with excellent (fast & compact) binary encoding abilities to reduce the message sizes.
If you go with this, then the idea is simple: each message represents a line. If you want to stream them, you need a simple "message size | message blob" structure.
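That framing can be sketched without protobuf itself; the example below uses a 4-byte little-endian length prefix (with protobuf, the blob would be each message's serialized bytes):

```cpp
#include <cassert>
#include <cstdint>
#include <string>
#include <vector>

// Appends one length-prefixed frame: 4-byte little-endian size, then blob.
void writeFrame(std::string& stream, const std::string& msg) {
    std::uint32_t n = static_cast<std::uint32_t>(msg.size());
    for (int i = 0; i < 4; ++i)
        stream.push_back(static_cast<char>((n >> (8 * i)) & 0xFF));
    stream += msg;
}

// Extracts every complete frame; a streaming client would do this
// incrementally, keeping the trailing partial frame buffered between reads.
std::vector<std::string> readFrames(const std::string& stream) {
    std::vector<std::string> msgs;
    std::size_t pos = 0;
    while (pos + 4 <= stream.size()) {
        std::uint32_t n = 0;
        for (int i = 0; i < 4; ++i)
            n |= static_cast<std::uint32_t>(
                     static_cast<unsigned char>(stream[pos + i])) << (8 * i);
        if (pos + 4 + n > stream.size()) break;  // incomplete frame, wait
        msgs.push_back(stream.substr(pos + 4, n));
        pos += 4 + n;
    }
    return msgs;
}
```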
Personally, I have always considered XML bloated by design. If you ever go with Human Readable formats, then at least select JSON, you'll cut down the tag overhead by half.
I would suggest you go for XML.
There are plenty of libraries available for parsing.
Moreover, if the data format changes later, the parsing logic in the case of XML won't change; only the business logic may need a change.
But in the case of CSV, the parsing logic itself might need a change.
The CSV format will be smaller, since you only declare the headers on the first row and then put the rows of data below, with only commas in between, so it hardly adds any extra characters to the stream size.