Access: read memo field - regex

I have an Access database with a table containing a memo field. Entries have been inserted in this format:
Apr 02 - some text
Feb 20 - some text
I would like to reverse the order of the inserts, so the above would become:
Feb 20 - some text
Apr 02 - some text
I am thinking of reading it line by line using regular expressions. Does anyone have a better way to achieve this?

Your memo field contains 2 lines of text and you want to reverse their order. You can do that with a simple VBA procedure, which doesn't need a regular expression.
Here is a sample Immediate window session which demonstrates techniques you can use in a VBA procedure.
MyText = "Apr 02 - some text" & vbCrLf & "Feb 20 - some text"
? MyText
Apr 02 - some text
Feb 20 - some text
? Split(MyText, vbCrLf)(1)
Feb 20 - some text
? Split(MyText, vbCrLf)(0)
Apr 02 - some text
If the memo field can include more than two lines of text, you can load an array with the results from Split() and then loop through the array in reverse order.
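If you ever need the same transformation outside Access, here is a minimal sketch of the split/reverse/join idea in Python (an illustration only; the answer above stays pure VBA):
import sys

# Reverse the order of the lines in a memo-style value.
memo = "Apr 02 - some text\r\nFeb 20 - some text"

lines = memo.split("\r\n")            # same role as VBA's Split(MyText, vbCrLf)
sys.stdout.write("\r\n".join(reversed(lines)))   # Feb 20 ... then Apr 02 ...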

Can PowerQuery append rows based on columns?

Hello StackOverflow Community,
I have a dataset where the monthly volume values per Model are shown in multiple columns instead of one, i.e.:
Model Jan Feb Mar Apr May
A 5 12 26 56 23
Is it possible, through Power BI's Power Query, to create a single "Month" column into which those values are appended? Example:
Model Month Volume
A Jan 5
A Feb 12
A Mar 26
A Apr 56
A May 23
Thanks in advance.
You need to use 'Unpivot Columns', in the Transform options of the Query Editor.
Select the columns you want, then Unpivot.
That should give you the shape shown in your example.
It will name the new columns Attribute and Value; you can then rename them to whatever you want. FYI, when the next month (Jun) is added to the source, it should unpivot that column automatically.
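For comparison, the same wide-to-long reshape can be sketched outside Power Query with Python's pandas (purely illustrative; melt() plays the role of Unpivot Columns here):
import pandas as pd

# The wide table from the question.
df = pd.DataFrame({"Model": ["A"], "Jan": [5], "Feb": [12],
                   "Mar": [26], "Apr": [56], "May": [23]})

# id_vars stay fixed; every other column becomes a (Month, Volume) pair.
long_df = df.melt(id_vars="Model", var_name="Month", value_name="Volume")
print(long_df)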

regex to find the data

EDIT - I added the last 50 texts; I see they were sent by various people, so unfortunately it's not an automated email...
The list of all the texts is HERE.
I'm struggling to find a pattern that will match the needed items (date, start time, time zone) in this text:
1 April 20 16:00-16:30 Israel Time
Tomorrow, Wed Feb 12, 08:00-9:00 AM IST(IL)
Tomorrow, Wed Jan 22, 09:30-10:00 PM PST
11-May-20 19:00-20:30 Israel Time
The start time is the easy part: (\d+:\d+)- but I'm not sure what to do with the other words and digits.
Based on the data you provided, something like this would do it, with 3 captures as requested:
(\d+[-\s]\w+[-\s]\d+|\w+ \d+),?\s(\d+\:\d+)\-\d+\:\d+\s(?:AM\s|PM\s)?(.*)
Online reference
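A quick way to sanity-check the pattern against the sample lines (a minimal sketch with Python's re module; the three groups are date, start time, and time zone):
import re

pattern = re.compile(
    r"(\d+[-\s]\w+[-\s]\d+|\w+ \d+),?\s(\d+\:\d+)\-\d+\:\d+\s(?:AM\s|PM\s)?(.*)")

samples = [
    "1 April 20 16:00-16:30 Israel Time",
    "Tomorrow, Wed Feb 12, 08:00-9:00 AM IST(IL)",
    "Tomorrow, Wed Jan 22, 09:30-10:00 PM PST",
    "11-May-20 19:00-20:30 Israel Time",
]

for line in samples:
    m = pattern.search(line)
    if m:
        date, start, zone = m.groups()
        print(date, "|", start, "|", zone)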

All CSV values in column 0 are strings

For some reason, a CSV file I wrote with Python (on Windows 7) has all the values as a single string in column 0, and I cannot perform any operation on them.
It has no header labels.
The format is (I would like to keep the last value - date - as a date format):
"Rob,Avanti,Ave,12.83,Max,4.0,Min,-21.9,analist disp:,-1.0,"" date: Feb 04, 2016 """
EDIT - When I read it with the csv module it prints it out like:
['Rob,Avanti,Ave,12.83,Max,4.0,Min,-21.9,analist disp:,-1.0," date: Feb 04, 2016\t\t\t"']
What is the best way to convert the strings into comma-separated values like this?
Rob,Avanti,Ave,12.83,Max,4.0,Min,-21.9,analist disp:,-1.0, date:, Feb 04, 2016
Thanks a lot.
s="Rob,Avanti,Ave,12.83,Max,4.0,Min,-21.9,analist disp:,-1.0,"" date: Feb 04, 2016 """
print(s)
Rob,Avanti,Ave,12.83,Max,4.0,Min,-21.9,analist disp:,-1.0, date: Feb 04, 2016
to add a comma after "date:" you need to add some logic (like replace ":" with ":,"; or after first word etc.
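For example (a hypothetical one-liner on top of the string above; the exact spacing in the replacement is an assumption about your data):
# Insert a comma after the "date:" label.
print(s.replace(" date: ", " date:, "))
# Rob,Avanti,Ave,12.83,Max,4.0,Min,-21.9,analist disp:,-1.0, date:, Feb 04, 2016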
First, your date field is quoted, which is OK (and needed) because there is a comma inside:
" date: Feb 04, 2016 "
But then the whole line also gets quoted (and thus seen as a single field). And because there are already quotes around the date field, those get escaped with another quote:
"Rob,Avanti,Ave,12.83,Max,4.0,Min,-21.9,analist disp:,-1.0,"" date: Feb 04, 2016 """
So, if you remove that last quoting, everything should be fine (but you might want to trim the date field):
Rob,Avanti,Ave,12.83,Max,4.0,Min,-21.9,analist disp:,-1.0," date: Feb 04, 2016 "
If you want it exactly like that, you need another comma after "date:":
Rob,Avanti,Ave,12.83,Max,4.0,Min,-21.9,analist disp:,-1.0, date:,"Feb 04, 2016"
On the other hand, it would be better to use a header instead:
Name,Name2,Ave,Max,Min,analist disp,date
Rob,Avanti,12.83,4.0,-21.9,-1.0,"Feb 04, 2016"
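If you have to read the file as-is, the double quoting can be undone in Python by parsing twice (a minimal sketch, assuming every line is wrapped in one outer quoted field as shown above):
import csv
from io import StringIO

raw = '"Rob,Avanti,Ave,12.83,Max,4.0,Min,-21.9,analist disp:,-1.0,"" date: Feb 04, 2016 """'

# First pass: the whole line parses as a single field; unwrap it.
inner = next(csv.reader(StringIO(raw)))[0]

# Second pass: parse the unwrapped text as a normal CSV record, trimming stray spaces.
fields = [f.strip() for f in next(csv.reader(StringIO(inner)))]
print(fields)   # ['Rob', 'Avanti', ..., '-1.0', 'date: Feb 04, 2016']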

How to join two txt files, record by record, with fuzzy key? (approximate matching)

I have multiple text files containing records with multiple tab-delimited fields; all files have a "key" that is fuzzy (as a name can be, with spelling issues and typos).
File 1: Format: Title \t field1 \t ... fieldn \n
Title Original title Year
21 21 2008
21 Jump Street 21 Jump Street 2012
22 Jump Street 22 Jump Street 2014
27 volte in bianco 27 Dresses 2008
Clerks - Commessi Clerks 2006
...
File 2: Format: Title \t field1 \t ... fieldn \n
Title Director
21 Rob
21 Jump Street Lord&Miller
22 Jump Street Lord,Miller
27 volte in bianco Fletcher
Clerks: Commessi Smith
...
File 3: Format: Title \t field1 \t ... fieldn \n
Title Filename
21 "21.mkv"
21 Jump Street "21 Jump St.avi"
27 volte in bianco "27 Dresses.avi"
Clerks - Commessi "Clerks.avi"
File n: Format: Title \t field1 \t ... fieldn \n
Title Descripted in
21 "21.mht"
21 Jump Street "21JS.mht"
22 Jump Street "22.mht"
27 volte in bianco "27dres.mht"
Clerks - Commessi "Clerks.mht"
I would like to create an output that joins all records in order (including incomplete and unmatched ones) from all files, using Title as the key, but allowing small differences between keys (see how Clerks uses : instead of - in file 2), ideally giving a warning when the match is not exact (char by char):
Output: Format field1 \t field2 \t ... fieldn \n
Warning Title Original title Year Director Filename Description
No 21 21 2008 Rob "21.mkv" "21.mht"
No 21 Jump Street 21 Jump Street 2012 Lord&Miller "21 JS.avi" "21JS.mht"
No 22 Jump Street 22 Jump Street 2014 Lord,Miller "22.mht"
No 27 volte in bianco 27 Dresses 2008 Fletcher "27 Dress.avi" "27dress.mht"
Yes Clerks - Commessi Clerks 2006 Smith "clerks.avi" "Clerks.mht"
How can a fuzzy match be done and marked with Warning=Yes (as with the similar-but-not-equal key for Clerks in file 2)? And how should missing records be handled? (Note that the 3rd record, 22 Jump Street, has no record in file 3, so all missing fields must be replaced by tabs only (\t) in the output file to maintain the correct column order in all records.)
These two parts are the most difficult, at least for me.
Any suggestions? Is there a tool suited to the job?
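One possible starting point (a sketch in Python using the standard library's difflib; the file contents are stubbed in as dictionaries, and cutoff=0.8 is a guess you would tune against your data):
import difflib

# Hypothetical data parsed from two of the tab-delimited files (Title -> other fields).
file1 = {"21 Jump Street": ["21 Jump Street", "2012"],
         "Clerks - Commessi": ["Clerks", "2006"]}
file2 = {"21 Jump Street": ["Lord&Miller"],
         "Clerks: Commessi": ["Smith"]}

for title, fields in file1.items():
    # Closest key in file2; an exact hit needs no warning.
    match = difflib.get_close_matches(title, list(file2), n=1, cutoff=0.8)
    if match:
        warning = "No" if match[0] == title else "Yes"
        row = [warning, title] + fields + file2[match[0]]
    else:
        # Unmatched: pad with an empty field to keep the column order.
        row = ["No", title] + fields + [""]
    print("\t".join(row))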

Parsing a multiline variable-length log file

I want to be able to utilize a 'grep' or 'pcregrep -M' like solution that parses a log file that fits the following parameters:
Each log entry can be multiple lines in length
First line of log entry has the key that I want to search for
Each key appears on more than one line
So in the example below I would want to return every line that has KEY1 on it and all the supporting lines below it until the next log message.
Log file:
01 Feb 2010 - 10:39:01.755, DEBUG - KEY1:randomtext
blah
blah2 T
blah3 T
blah4 F
blah5 F
blah6
blah7
01 Feb 2010 - 10:39:01.757, DEBUG - KEY1:somethngelse
01 Feb 2010 - 10:39:01.758, DEBUG - KEY2:randomtest
this is a test
01 Feb 2010 - 10:39:01.760, DEBUG - KEY1:more logs here
01 Feb 2010 - 10:39:01.762, DEBUG - KEY1:eve more here
this is another multiline log entry
keeps on going
but not as long as before
01 Feb 2010 - 10:39:01.763, DEBUG - KEY2:testing
test test test
end of key2
01 Feb 2010 - 10:39:01.762, DEBUG - KEY1:but key 1 is still going
and going
and going
and going
and going
and going
and going
and going
and going
and going
and going
and going
and going
okay enough
01 Feb 2010 - 10:39:01.762, DEBUG - KEY3:and so on
and on
Desired output of searching for KEY1:
01 Feb 2010 - 10:39:01.755, DEBUG - KEY1:randomtext
blah
blah2 T
blah3 T
blah4 F
blah5 F
blah6
blah7
01 Feb 2010 - 10:39:01.757, DEBUG - KEY1:somethngelse
01 Feb 2010 - 10:39:01.760, DEBUG - KEY1:more logs here
01 Feb 2010 - 10:39:01.762, DEBUG - KEY1:eve more here
this is another multiline log entry
keeps on going
but not as long as before
01 Feb 2010 - 10:39:01.762, DEBUG - KEY1:but key 1 is still going
and going
and going
and going
and going
and going
and going
and going
and going
and going
and going
and going
and going
okay enough
I was trying to do something like:
pcregrep -M 'KEY1(.*\n)+' logfile
but that definitely doesn't work right: the greedy (.*\n)+ keeps consuming lines past the start of the next log entry.
If you are on *nix, you can use the shell:
#!/bin/bash
read -p "Enter key: " key
awk -vkey="$key" '
# A new log entry (a DEBUG line) that does not contain the key turns printing off.
$0~/DEBUG/ && $0 !~key{f=0}
# Any line containing the key turns printing on.
$0~key{ f=1 }
# While the flag is set, print every line (entry header and continuations).
f{print} ' file
output
$ cat file
01 Feb 2010 - 10:39:01.755, DEBUG - KEY1:randomtext
blah
blah2 T
blah3 T
blah4 F
blah5 F
blah6
blah7
01 Feb 2010 - 10:39:01.757, DEBUG - KEY1:somethngelse
01 Feb 2010 - 10:39:01.758, DEBUG - KEY2:randomtest
this is a test
01 Feb 2010 - 10:39:01.760, DEBUG - KEY1:more logs here
01 Feb 2010 - 10:39:01.762, DEBUG - KEY1:eve more here
this is another multiline log entry
keeps on going
but not as long as before
01 Feb 2010 - 10:39:01.763, DEBUG - KEY2:testing
test test test
end of key2
01 Feb 2010 - 10:39:01.762, DEBUG - KEY1:but key 1 is still going
and going
and going
and going
and going
and going
and going
and going
and going
and going
and going
and going
and going
okay enough
01 Feb 2010 - 10:39:01.762, DEBUG - KEY3:and so on
and on
$ ./shell.sh
Enter key: KEY1
01 Feb 2010 - 10:39:01.755, DEBUG - KEY1:randomtext
blah
blah2 T
blah3 T
blah4 F
blah5 F
blah6
blah7
01 Feb 2010 - 10:39:01.757, DEBUG - KEY1:somethngelse
01 Feb 2010 - 10:39:01.760, DEBUG - KEY1:more logs here
01 Feb 2010 - 10:39:01.762, DEBUG - KEY1:eve more here
this is another multiline log entry
keeps on going
but not as long as before
01 Feb 2010 - 10:39:01.762, DEBUG - KEY1:but key 1 is still going
and going
and going
and going
and going
and going
and going
and going
and going
and going
and going
and going
and going
okay enough
I had a similar requirement and decided to code a little tool (in .NET) that parses log files for me and writes the result to standard output.
Maybe you'll find it useful. It works on Windows and Linux (Mono).
See here: https://github.com/iohn2000/ParLog
It's a tool for filtering log files for log entries that contain a specific (regex) pattern, and it also works with multiline log entries, e.g. showing only the log entries from a certain workflow instance.
It writes the result to standard output; use '>' to redirect it into a file.
The default startPattern is:
^[0-9]{2} [\w]{3} [0-9]{4} [0-9]{2}:[0-9]{2}:[0-9]{2},[0-9]{3}
which corresponds to a date like 04 Feb 2017 15:02:50,778.
Parameters are:
f:wildcard a file name or wildcard for multiple files
p:pattern the regex pattern to filter the file(s)
s:startPattern regex pattern to define when a new log entry starts
Example :
ParLog.exe -f=*.log -p=findMe
Adding on to ghostdog74's answer (thank you very much, by the way; it works great): the script now takes command-line input in the form "./parse file key" and handles log levels of ERROR as well as DEBUG.
#!/bin/bash
# Usage: ./parse <file> <key>
awk -vkey="$2" '
# A new DEBUG or ERROR entry that does not contain the key turns printing off.
$0~/DEBUG|ERROR/ && $0 !~key{f=0}
$0~key{ f=1 }
f{print} ' "$1"
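For completeness, the same flag-based filter is easy to sketch in Python (an illustration only; it assumes, like the awk version, that every entry header contains DEBUG or ERROR):
import re
import sys

def filter_log(path, key):
    header = re.compile(r"DEBUG|ERROR")   # marks the first line of each entry
    printing = False
    with open(path) as fh:
        for line in fh:
            if header.search(line):
                # New entry: keep printing only if its header contains the key.
                printing = key in line
            if printing:
                sys.stdout.write(line)

if __name__ == "__main__":
    filter_log(sys.argv[1], sys.argv[2])   # e.g.: python filter.py logfile KEY1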