Procmail to automatically make new folders to store emails from new senders

I am learning how to use procmail, but at this point I am not even sure it's the right tool for what I am trying to do.
So far, I have managed to get fetchmail to retrieve emails from a Google IMAP account and procmail to filter those emails into local folders I had previously created.
I am wondering, though, whether there is a way to get procmail to automatically create a new folder locally when an email from a new sender is retrieved, and to store that email in that folder.
So far, I have only found a website describing how procmail can automatically create folders for mailing lists, but the recipe is something crazy, using characters whose meaning I don't know; furthermore, the official procmail website seems unreachable.
Please can you help? Thank you.

It's not clear what you expect the folder to be called, and what mailbox format you're using; but assuming maildir folders named by the sender's email terminus, try
# Build a reply with formail and extract its To: header, which holds the
# original sender's bare address (-r reply, -t trust the sender's headers,
# -z trim whitespace, -xTo: extract the To: field).
Who=`formail -rtzxTo:`
:0
* ? mkdir -p "$Who"
$Who/
For an mbox folder, you don't need the directory check at all, because the folder is just a single text file, and you'd drop the final slash from the folder name. Mbox needs locking, so add a second colon after the zero.
Who=`formail -rtzxTo:`
:0:
$Who
Getting formail to create a reply and then extracting the To: header of the generated reply is a standard but slightly unobvious way to obtain just the email terminus for the sender of the input message.
The shell snippet mkdir -p dir creates dir if it doesn't already exist, and is a harmless no-op otherwise.
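To see what that pipeline produces, you can run it at a shell prompt against a made-up message (the addresses here are placeholders; this assumes formail from the procmail package is installed):
formail -rtzxTo: <<'EOF'
From: Jane Doe <jane@example.com>
To: me@mydomain.example
Subject: hello

Hello there.
EOF
This should print just the bare address, jane@example.com, which is what ends up in $Who and hence in the folder name.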

Related

pac cli paportal download

Every time I download a "power portal" with the pac cli command:
pac paportal download -id <guid> --path ./ --overwrite true
Many of the files seem to be regenerated with new short GUIDs on the end, although they haven't changed, and the sitesettings.yml file gets re-ordered so it shows a bunch of changes.
In one case I made a single change to a site setting and ended up with 134 changes.
Can this be avoided? It makes it frustrating to track actual changes in source control.
If you have multiple records with the same name, the short GUID is appended, since files/folders cannot share a name. If you avoid creating records with exactly the same name (whether active or inactive), you should not face this issue.

How to share Newman htmlextra report?

This may be a basic question but I cannot figure out the answer. I have a simple Postman collection that is run through newman:
newman run testPostman.json -r htmlextra
That generates a nice dynamic HTML report of the test run.
How can I then share that with someone else, e.g. via email? The HTML report is just served from a local URL, and I can't figure out how to save it so that it keeps its dynamic state. Right-clicking and using Save As .html saves the file, but you lose the ability to click around in it.
I realize that I can change the export path so it saves to some shared drive somewhere, but aside from that is there any other way?
It's already been saved to newman/ in the current working directory, so there is no need to 'Save As' again. You can zip it and send it via email.
If you want to change the location of the generated report, the htmlextra reporter has an export option for that.
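For example, a minimal sketch using the export flag from the newman-reporter-htmlextra documentation (the output path is a made-up placeholder):
newman run testPostman.json -r htmlextra --reporter-htmlextra-export ./reports/testPostman-report.html
The report is generated as a single HTML file, which is why zipping and emailing it works.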

Issue with uploading GeoLite2-City.mmdb.missing file in mautic

I have a Mautic marketing automation instance installed on my server (I am a beginner).
However, I ran into this issue when configuring the GeoLite2-City IP lookup:
Automatically fetching the IP lookup data failed. Download http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz, extract if necessary, and upload to /home/ol*****/public_html/mautic4/app/cache/prod/../ip_data/GeoLite2-City.mmdb.
What I attempted:
I FTPed into the /home/ol****/public_html/mautic4/app/cache/prod/../ip_data/ directory
and uploaded the file (the original GeoLite2-City.mmdb is 0 bytes, while the newly added file is about 6000 KB).
However, once I go back into Mautic to run the lookup, the newly added file reverts to 0 bytes and I still can't get the IP lookup configured.
I have also changed the file permissions to 0744, but the issue persists.
Did you disable the cron job which looks for the file? If not, or if you clicked the button again in the dashboard, it will overwrite the file you manually placed there.
As a side note, the 2.16 release addresses this issue; please take a look at https://www.mautic.org/blog/community/announcing-mautic-2-16/.
Please ensure you take a full backup (files and database) and, where possible, run the update at the command line to avoid browser timeouts :)
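For reference, a rough sketch of the manual fetch-and-extract step the error message asks for, run from a shell (the upload destination stands in for the redacted path above; note that MaxMind has since changed its download scheme, so this exact URL may no longer work):
wget http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.mmdb.gz
gunzip GeoLite2-City.mmdb.gz
# then upload the extracted GeoLite2-City.mmdb to <mautic-root>/app/cache/prod/../ip_data/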

Using regEx to download the entire directory using wget

I want to download multiple pdfs from urls such as this - https://dummy.site.com/aabbcc/xyz/2017/09/15/2194812/O7ca217a71ac444eda516d8f78c29091a.pdf
If I do wget on the complete URL, it downloads the file:
wget https://dummy.site.com/aabbcc/xyz/2017/09/15/2194812/O7ca217a71ac444eda516d8f78c29091a.pdf
But if I try to recursively download the entire folder, it returns 403 (Forbidden):
wget -r https://dummy.site.com/aabbcc/xyz/
I have tried setting the user agent, ignoring robots.txt, and a bunch of other solutions from the internet, but I keep coming back to the same point.
So I want to build a list of all possible URLs, treating the given URL as a common pattern, and I have no idea how to do that.
I just know that I can pass such a file as input to wget, which will download the files. So I am seeking help with forming the URL list using regex here.
Thank You!
You can't use wildcards to download files you can't see. If the host does not support directory listing, you have no idea what the filenames/paths are. And since you do not know the algorithm used to generate the filenames, you cannot generate them yourself and fetch them either.
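If you do manage to assemble a URL list by some other means, feeding it to wget is the easy part; a minimal sketch, assuming a file url-list.txt with one URL per line (the file and directory names are placeholders):
wget -i url-list.txt -P ./pdfs
-i reads the URLs from the given file, and -P sets the directory the downloads are saved into.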

cctiddly change workspace URL

I'm using ccTiddly (built on TiddlyWiki) and I would like to change the URL of my workspaces. I am not sure how to proceed; I tried once and everything was corrupted.
The old URL is www.sub.mysite.com/wiki and the new one is www.mysite.com/wiki
I was thinking of moving all the files over FTP and then editing the database to remove the "sub" from all the URLs.
Will that work fine?
I only have 2 workspaces with few tiddlers each.
Thanks!
It will depend on which version of ccTiddly you are using.
Take a look at the fields in the database. I think you should be able to make the change just by updating the workspace field on the tiddler table, but check that there are no other fields storing the workspace.
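For instance, a hedged sketch of that update as a one-off MySQL statement; the tiddler table and workspace column follow the guess above, the database name is a placeholder, and you should verify the schema and take a full backup before running anything like this:
mysql -u user -p cctiddly_db -e "UPDATE tiddler SET workspace = REPLACE(workspace, 'www.sub.mysite.com/wiki', 'www.mysite.com/wiki');"
REPLACE only rewrites rows where the old URL actually appears, so rows that don't match are left untouched.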
When you said it corrupted everything last time, what exactly do you mean?