Vim: Delete the text matching a pattern IF submatch(1) is empty - regex

This command line parses a contact list document in which each contact may or may not have a phone, an email, or a web site listed. If it has all three, everything works great, appending the return value of FormatContact() at the end of the line for data uploading:
silent!/^\d/+1|ki|/\n^\d\|\%$/-1|kj|'i,'jd|let @a = substitute(@",'\s*Phone: \([^,]*\)\_.*','\1',"")|let @b = substitute(@",'^\_.*E-mail:\s\[\d*\]\([-_@.0-9a-zA-Z]*\)\_.*','\1',"")|let @c = substitute(@",'^\_.*Web site:\s*\[\d*\]\([-_.:/0-9a-zA-Z]*\)\_.*','\1',"")|?^\d\+?s/$/\=','.FormatContact(@a,@b,@c)
or, broken down:
silent!/^\d/+1|ki|/\n^\d\|\%$/-1|kj|'i,'jd
let @a = substitute(@",'\s*Phone: \([^,]*\)\_.*','\1',"")
let @b = substitute(@",'^\_.*E-mail:\s\[\d*\]\([-_@.0-9a-zA-Z]*\)\_.*','\1',"")
let @c = substitute(@",'^\_.*Web site:\s*\[\d*\]\([-_.:/0-9a-zA-Z]*\)\_.*','\1',"")
?^\d\+?s/$/\=','.FormatContact(@a,@b,@c)
I created three separate searches so that no single search would fail just because one atom failed to match, since, again, the contact info may or may not exist for each contact.
The problem that solution created is that when the pattern does not match, I get the whole of @" into @a. Instead, I need the register to be empty when the match does not occur. I need each variable (phone, email, web) represented, whether it is empty or not.
I see no flags that can be set in the substitute() function that will do this.
Is there a way to return "" if \1 is empty?
Is there a way to create an optional atom so the search queries could still account for an empty match and properly record it as empty?

Instead of using substitutions that replace the whole captured text
with its part of interest, one can match only that target part. Unlike
substitution routines, matching ones either locate the text conforming
to the given pattern, or report that there is no such text. Thus,
using the matchstr() function in preference to substitute(), the
parsing code listed in the question can be changed as follows:
let @a = matchstr(@", '\<Phone:\s*\zs[^,]*')
let @b = matchstr(@", '\<E-mail:\s*\[\d*\]\zs[-_@.0-9a-zA-Z]*')
let @c = matchstr(@", '\<Web site:\s*\[\d*\]\zs[-_.:/0-9a-zA-Z]*')
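If it helps to see why matching beats substituting here, the same distinction exists outside Vim. A rough Python illustration (the contact string is made up): a substitution whose pattern fails hands back the whole input unchanged, while a failed match can be turned into an empty string explicitly:

import re

record = "Name: Jane Doe, Phone: 555-0100"   # hypothetical record with no e-mail

# substitute-style: when the pattern does not match, the input comes back whole
email_sub = re.sub(r'^.*E-mail:\s*(\S+).*$', r'\1', record)
print(repr(email_sub))                 # the entire record, not ''

# match-style: a non-match is reported as None, which we map to ''
m = re.search(r'E-mail:\s*(\S+)', record)
print(repr(m.group(1) if m else ''))   # ''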

Just in case you want linewise processing, consider using it in combination with :global, e.g.
let #a=""
g/text to match/let #A=substitute(getline("."), '^.*\(' . #/ . '\).*$', '\1\r', '')
This appends the matched text from every line that contains it to register a, separated by newlines, which you can then print with:
echo @a
The beautiful thing here is that you can make it work with the last-used search pattern very easily:
g//let @A=substitute(getline("."), '^.*\(' . @/ . '\).*$', '\1\r', '')
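For comparison only (this is not Vim), the same collect-the-match-per-line idea looks like this in Python; the pattern and the input lines are just placeholders:

import re

lines = ["1001 Jane Doe", "no number on this line", "1002 John Roe"]
pattern = re.compile(r'\d+')    # stand-in for the last-used search pattern

hits = []
for line in lines:
    m = pattern.search(line)
    if m:
        hits.append(m.group(0))
print("\n".join(hits))          # 1001, then 1002, each on its own line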

Related

Backreference without matching it in the find result

Consider the text structure
(Title)[#1Title-link]
(Chapter1)[#Chapter1-link]
(Chapter2)[#Chapter2-link]
(Chapter3)[#Chapter3-link]
How can I backreference [#Title-link] without matching it in the find result? I'm trying to change
(Chapter1)[#Chapter1-link] => (Chapter1)[#1Title-link-Chapter1-link]
(Chapter2)[#Chapter2-link] => (Chapter2)[#1Title-link-Chapter2-link]
(Chapter3)[#Chapter3-link] => (Chapter3)[#1Title-link-Chapter3-link]
I tried searching with
(\(Title\)\[(.*?)])([\s\S]*?\[)#(\D.*?\])
then replace it with
$1$3$2-$4
but the problem here is that it only highlights once per find, and I have lots of chapters, so it is too inefficient to replace them one by one.
Hard-coding the title is no good either, because I have multiple files with that same structure.
Is this possible with regex? Any solution or alternative is welcome.
You can first do a search to get the correct substitution string and then do a subsequent replace operation with that substitution string. You did not specify what language you were using, so here is the code in Python (where the backreference to group 1 is \1 rather than the more usual $1):
import re

text = """(Title)[#1Title-link]
(Chapter1)[#Chapter1-link]
(Chapter2)[#Chapter2-link]
(Chapter3)[#Chapter3-link]"""

# grab the title link so it can be reused in the replacement below
m = re.search(r'(?:\(Title\)\[#([^\]]*)\])', text)
assert m  # that we have a match
substitution = m.group(1)

# prepend the title link to every chapter link
text = re.sub(r'\[#Chapter([^\]]*)\]', r'[#' + substitution + r'-Chapter\1' + ']', text)
print(text)
Prints:
(Title)[#1Title-link]
(Chapter1)[#1Title-link-Chapter1-link]
(Chapter2)[#1Title-link-Chapter2-link]
(Chapter3)[#1Title-link-Chapter3-link]
See Regex Demo 1 for getting the substitution string
See Regex Demo 2 for making the substitutions
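A small variation on the same Python approach, in case the captured title ever contains characters (such as backslashes) that would be misread in a replacement string: pass a function to re.sub instead of splicing the substitution into the replacement text. Same hypothetical input as above:

import re

text = """(Title)[#1Title-link]
(Chapter1)[#Chapter1-link]
(Chapter2)[#Chapter2-link]
(Chapter3)[#Chapter3-link]"""

title_link = re.search(r'\(Title\)\[#([^\]]*)\]', text).group(1)

# a replacement function sidesteps any escaping issues in the inserted text
text = re.sub(r'\[#Chapter([^\]]*)\]',
              lambda m: '[#' + title_link + '-Chapter' + m.group(1) + ']',
              text)
print(text)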

Including regex on variable before matching string

I'm trying to find and extract the occurrences, in a text file, of words read from another text file. So far I can only find a word when it is written correctly and not munged ('a' changed to '@' or 'i' changed to '1'). Is it possible to add a regex to my strings for matching, or something similar? This is my code so far:
sub getOccurrenceOfStringInFileCaseInsensitive
{
    my $fileName      = $_[0];
    my $stringToCount = $_[1];
    my $numberOfOccurrences = 0;
    my @wordArray = wordsInFileToArray ($fileName);

    foreach (@wordArray)
    {
        my $numberOfNewOccurrences = () = (m/$stringToCount/gi);
        $numberOfOccurrences += $numberOfNewOccurrences;
    }

    return $numberOfOccurrences;
}
The routine receives the name of a file and the string to search. The routine wordsInFileToArray () just gets every word from the file and returns an array with them.
Ideally I would like to perform this search directly reading from the file in one go instead of moving everything to an array and iterating through it. But the main question is how to hard code something into the function that allows me to capture munged words.
Example: I would like to extract both lines from the file.
example.txt:
russ1@anh@ck3r
russianhacker
# this variable also will be read from a blacklist file
$searchString = "russianhacker";
getOccurrenceOfStringInFileCaseInsensitive ("example.txt", $searchString);
Thanks in advance for any responses.
Edit:
The possible substitutions will be defined by a user, and the regex must be set to fit. A user could say that a common substitution is to change the letter "a" to "@" or even "1". The possible change is completely arbitrary.
When searching for a specific word ("russian" for example) this could be done with something like:
(m/russian/i); # would just match the word as it is
(m/russi[a@1]n/i); # would match the munged word
But I'm not sure how to do that if I have the string to match stored in a variable, such as:
$stringToSearch = "russian";
This is sort of a full-text search problem, so one method is to normalize the document strings before matching against them.
use strict;
use warnings;
use Data::Munge 'list2re';
...
my %norms = (
    '@' => 'a',
    '1' => 'i',
    ...
);
my $re = list2re keys %norms;

s/($re)/$norms{$1}/ge for @wordArray;
This approach only works if there is a single possible "normalized form" for any given word. It may also be less efficient than simply trying every possible variation of the search string, if your document is large and you recompute the normalization every time you search it.
As a note, your regex m/$stringToCount/gi should be m/\Q$stringToCount/gi, as you don't want any regex metacharacters in $stringToCount to be interpreted that way. See the docs for quotemeta.
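To make the "try every possible variation" alternative mentioned above concrete, here is a rough Python sketch (the substitution map is an assumption, not something given in the question): each letter of the blacklist word is widened into a character class of its known look-alikes. Note that this only covers one-for-one character swaps, not insertions or deletions:

import re

# hypothetical user-supplied substitutions: letter -> characters it may be munged into
munge_map = {'a': '@4', 'i': '1', 'e': '3'}

def munged_pattern(word):
    parts = []
    for ch in word:
        if ch.lower() in munge_map:
            parts.append('[' + re.escape(ch + munge_map[ch.lower()]) + ']')
        else:
            parts.append(re.escape(ch))
    return re.compile(''.join(parts), re.IGNORECASE)

pat = munged_pattern('russianhacker')
print(bool(pat.search('russ1@nh@ck3r')))    # True (pure one-for-one swaps)
print(bool(pat.search('russianhacker')))    # True (the clean word still matches)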
There are parts of the problem which aren't specified precisely enough (yet).
Some of the roll-your-own approaches, which depend on the details, are:
If user-defined substitutions are global (replace every occurrence of a character in every string), the user can submit a mapping, as a hash say, and you can fix them all. The process will identify all candidates for the words (along with the actual, unmangled words, if found). There may be false positives, so also plan on some post-processing.
If the user can supply a list of substitutions along with the words they apply to (the mangled or the corresponding unmangled ones), then we can have a more targeted run.
Before this is clarified, here is another way: use a module for approximate ("fuzzy") matching.
The String::Approx module seems to fit quite a few of your requirements.
The match of the target with a given string relies on the notion of the Levenshtein edit distance: how many insertions, deletions, and replacements ("edits") it takes to make the given string into the sought target. The maximum accepted number of edits can be set.
A simple-minded example:
use warnings;
use strict;
use feature 'say';
use String::Approx qw(amatch);

my $target = qq(russianhacker);
my @text = qw(that h@cker was a russ1@anh@ck3r);

my @matches = amatch($target, ["25%"], @text);

say for @matches;  #==> russ1@anh@ck3r
See the documentation for what the module avails us, but at least two comments are in order.
First, note that the second argument in amatch specifies the percentage deviation from the target string that is acceptable. For this particular example we need to allow every fourth character to be "edited." So much room for tweaking can result in accidental matches which then need to be filtered out, so there will be some post-processing to do.
Second -- we didn't catch the easier one, h@cker. The module takes a fixed "pattern" (target), not a regex, and can search for only one at a time. So, in principle, you need a pass for each target string. This can be improved a lot, but there'll be more work to do.
Please study the documentation; the module offers a whole lot more than this simple example.
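If the edit-distance idea itself is new, here is a tiny self-contained sketch in Python of the Levenshtein distance, purely for intuition (String::Approx does its own, much faster matching internally):

def levenshtein(a, b):
    # minimum number of single-character insertions, deletions and
    # replacements needed to turn string a into string b
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # replacement (free if equal)
        prev = curr
    return prev[-1]

print(levenshtein("russianhacker", "russ1@anh@ck3r"))    # prints 4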
I ended up solving the problem by including the regex directly in the variable that I'll use to match against the lines of my file. It looks something like this:
sub getOccurrenceOfMungedStringInFile
{
    my $fileName          = $_[0];
    my $mungedWordToCount = $_[1];
    my $numberOfOccurrences = 0;

    open (my $inputFile, "<", $fileName) or die "Can't open file: $!";

    $mungedWordToCount =~ s/a/\[a\@4\]/gi;

    while (my $currentLine = <$inputFile>)
    {
        chomp ($currentLine);
        $numberOfOccurrences += () = ($currentLine =~ m/$mungedWordToCount/gi);
    }

    close ($inputFile) or die "Can't close file: $!";

    return $numberOfOccurrences;
}
Where the line:
$mungedWordToCount =~ s/a/\[a\@4\]/gi;
is just one of the substitutions that are needed; others can be added similarly.
I didn't know that Perl would just interpret the regex inside the variable, since I had tried that before and could only get the wanted results by defining the variables inside the function using single quotes. I must have done something wrong the first time.
Thanks for the suggestions, people.

Perl regex to handle and preserve single and multiple words in a variable

I am writing a perl script to read the full name of a member and save it to variables firstname and lastname like below:
my ($firstname, $lastname) = $member =~ m/^(\w+.*?) +(\w+)$/;
my $member_name = $firstname.' '.$lastname;
The value for $member comes from an upstream service and would be, for example, "Jane Doe".
Now the code above cannot handle the case where the service sends a $member value like "Jane": the regex fails to handle a single word. I need it to handle both multiple and single words. I cannot implement new code functionality, so I am looking to add to the existing regex so that there is minimal change and it can handle both conditions.
So far this is what I am testing with on the command line, but no luck:
perl -e 'my ($firstname, $lastname) = "Jane Doe" =~ m/^(\w+.*?) +(\w+)$/|m/^(\w+)$/; print "$firstname\n$lastname";'
When I substitute "Jane Doe" with "Jane", nothing prints. I want the code to stay in this format, though: if the value is multiple words it should print them both, otherwise just the single word.
Your help will be greatly appreciated.
There is a problem in your Perl code: you terminated the pattern too early.
m/^(\w+.*?) +(\w+)$/|m/^(\w+)$/
#                    ^ the first pattern ends just before this |, so the | is not part of it
This will lead to the | being interpreted as a bit-wise or. Since there's another m// behind it, the | will take the return values of both m// operations and do its magic. The second m// will just match against the topic $_.
What you actually want is to merge both patterns.
my ($firstname, $lastname) = "Jane Doe" =~ m/^(?:(\w+.*?) +)?(\w+)$/;
You need to make the first name optional with a non-capturing group (?:...), followed by a ? zero-or-one quantifier.
You cannot have three capture groups, like you probably intended, because the third one would go to $3, and not $1.
However, the above solution captures a single name as the last name, even though you would presumably want it in the $firstname variable. Your full-name pattern also allows first names with other characters in them, like Jean-Luc Picard, but if you pass in just Jean-Luc, the match will fail, because a lone name has to satisfy the last-name pattern \w+. So if you want a single name to be treated as the first name, you should adjust the pattern to make it consistent.
A simple way of doing that is to make the last name optional instead.
my ($firstname, $lastname) = "Jane" =~ m/^(\w+.*?)(?: +(\w+))?$/;
Remember that this will set $lastname to undef, which doesn't matter so much in your command line example, but in a proper program with strict and warnings (which you of course have turned on, right?) it will complain if $lastname is used as a string while it's undef.
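To see how the merged pattern behaves, here is the same regex tried against both kinds of input, sketched with Python's re module purely for illustration (the answer itself is Perl, but the pattern is the same; None plays the role of undef):

import re

pattern = re.compile(r'^(\w+.*?)(?: +(\w+))?$')

for member in ("Jane Doe", "Jane"):
    print(pattern.match(member).groups())
# ('Jane', 'Doe')
# ('Jane', None)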
I suggest you read this article about names.

How to extract expression matching an email address in a text file using R or Command Line?

I have a text file that contains email addresses and some other information.
I would like to know how I can extract those email addresses using R or the terminal.
I've read that I can use a regular expression that matches an email address, such as
"^[_a-z0-9-]+(\\.[_a-z0-9-]+)*@[a-z0-9-]+(\\.[a-z0-9-]+)*(\\.[a-z]{2,4})$"
But what command or function should I use to extract those emails?
There is no fixed structure in the text file. The command or function should just search the document and extract the email addresses.
Let's take an unstructured example file:
this is a test
fred is fred@foo.com and joe is joe@example.com - but
@this is a twitter handle for twit@here.com
Then if you do:
myText <- readLines("testmail.txt")
emails = unlist(regmatches(myText, gregexpr("([_a-z0-9-]+(\\.[_a-z0-9-]+)*@[a-z0-9-]+(\\.[a-z0-9-]+)*(\\.[a-z]{2,4}))", myText)))
> emails
[1] "fred@foo.com" "joe@example.com" "twit@here.com"
it extracts a vector of all the emails, including when there's more than one on a line. I don't think it will find email addresses broken over line breaks, but if you paste the read lines together it might do that too:
> myText = paste(readLines("testmail.txt"),collapse=" ")
> emails = regmatches(myText, gregexpr("([_a-z0-9-]+(\\.[_a-z0-9-]+)*@[a-z0-9-]+(\\.[a-z0-9-]+)*(\\.[a-z]{2,4}))", myText))
> emails
[[1]]
[1] "fred#foo.com" "joe#example.com" "twit#here.com"
In this case there's only one line in myText because we pasted all the lines together, so there's only one element in the returned list emails object.
Note that this regex string isn't a strict definition of a valid email address. For example, it limits itself to addresses that have between 2 and 4 characters after the last dot, so it doesn't match fred@foo.fnord. There are top-level domains that are longer than four characters, so you may need to modify the regex.
Also, it only matches alphanumeric and dot in the name part - so valid addresses such as foo+bar@google.com won't match.
A regex that fixes these two issues might be:
"([_+a-z0-9-]+(\\.[_+a-z0-9-]+)*#[a-z0-9-]+(\\.[a-z0-9-]+)*(\\.[a-z]{2,14}))"
but it probably has other issues, and you'd be better off searching for a better email address regex online. I say better, because a perfect one doesn't exist...
This can also work:
library(stringr)  # needed for str_extract_all

aa <- paste(readLines("C:\\MY_FOLDER\\NOI\\file1sample.txt"), collapse = " ")
temp <- sapply(str_extract_all(aa, "[a-z_+0-9]+\\@\\w+\\.[a-z]{2,4}"), function(x){ paste(x, collapse = " ") })
Read your file into R and use grep.
myText <- readLines("your.file")
Emails <- grep("^[_a-z0-9-]+(\\.[_a-z0-9-]+)*@[a-z0-9-]+(\\.[a-z0-9-]+)*(\\.[a-z]{2,4})$", myText, value=T)
This will return the whole line that the email appears on; if there is other information on that line, you will need to split it up first using something like strsplit.

Parsing log files

I'm trying to write a script to simplify the process of searching through a particular application's log files for specific information. I thought maybe there's a way to convert them into an XML tree, and I'm off to a decent start... but the problem is, the application log files are an absolute mess, if you ask me.
Some entries are simple
2014/04/09 11:27:03 INFO Some.code.function - Doing stuff
Ideally I'd like to turn the above into something like this
<Message>
<Date>2014/04/09</Date>
<Time>11:48:38</Time>
<Type>INFO</Type>
<Source>Some.code.function</Source>
<Sub>Doing stuff</Sub>
</Message>
Other entries are something like this where there's additional information and line breaks
2014/04/09 11:27:04 INFO Some.code.function - Something happens
changes:
this stuff happened
I'd like to turn this last chunk into something like the above, but add the additional info into a section
<Message>
<Date>2014/04/09</Date>
<Time>11:48:38</Time>
<Type>INFO</Type>
<Source>Some.code.function</Source>
<Sub>Doing stuff</Sub>
<details>changes:
this stuff happened</details>
</Message>
and then other messages, errors will be in the form of
2014/04/09 11:27:03 ERROR Some.code.function - Something didn't work right
Log Entry: LONGARSEDGUID
Error Code: E3145
Application: Name
Details:
message information etc etc and more line breaks, this part of the message may add up to an unknown number of lines before the next entry
This last chunk I'd like to convert like the two examples above, but adding XML nodes for log entry, error code, application, and, again, details, like so
<Message>
<Date>2014/04/09</Date>
<Time>11:48:38</Time>
<Type>ERROR </Type>
<Source>Some.code.function</Source>
<Sub>Something didn't work right</Sub>
<Entry>LONGARSEDGUID</Entry>
<Code>E3145</Code>
<Application>Name</Application>
<details>message information etc etc and more line breaks, this part of the message may add up to an unknown number of lines before the next entry</details>
</Message>
Now I know that Select-String has a context option which would let me select a number of lines after the line I've filtered; the problem is, this isn't a constant number.
I'm thinking a regular expression would also allow me to select the paragraph chunk before the date string, but regular expressions are not a strong point of mine, and I thought there might be a better way, because the one constant is that new entries start with a date string.
The idea, though, is to break these up into XML or tables of sorts, and from there I'm hoping it will make the task of filtering out non-relevant or recurring messages a little easier.
I have a sample I just tossed on pastebin after removing/replacing a few bits of information for privacy reasons
http://pastebin.com/raw.php?i=M9iShyT2
Sorry this is kind of late, I got tied up with work for a bit there (darn work expecting me to be productive while on their dime). I ended up with something similar to Ansgar Wiechers' solution, but formatted things into objects and collected those into an array. It doesn't produce the XML that you added later, but it gives you a nice array of objects to work with for the records. I'll explain the main RegEx line here, and I'll comment in-line where it's practical.
'(^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) \[\d+?] (\w+?) {1,2}(.+?) - (.+)$' is the RegEx that detects the start of a new record. I started to explain it, but there are probably better resources for you to learn RegEx than me explaining it to you. See this RegEx101.com link for a full breakdown and examples.
$Records=@() #Create empty array that we will populate with custom objects later
$Event = $Null #make sure nothing in $Event to give script a clean start
Get-Content 'C:\temp\test1.txt' | #Load file, and start looping through it line-by-line.
?{![string]::IsNullOrEmpty($_)}|% { #Filter out blank lines, and then perform the following on each line
if ($_ -match '(^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}) \[\d+?] (\w+?) {1,2}(.+?) - (.+)$') { #New Record Detector line! If it finds this RegEx match, it means we're starting a new record.
if ($Event) { #If there's already a record in progress, add it to the Array
$Records+=$Event
}
$Event = New-Object PSObject -Property @{ #Create a custom PSObject with these properties that we just got from that RegEx match
DateStamp = [datetime](get-date $Matches[1]) #We convert the date/time stamp into an actual DateTime object. That way sorting works better, and you can compare it to real dates if needed.
Type = $Matches[2]
Source = $Matches[3]
Message = $Matches[4]}
Ok, little pause for the cause here. $Matches isn't defined by me, so why am I referencing it? When PowerShell gets matches from a RegEx expression it automagically stores the resulting matches in $Matches. So all the groups that we just matched in parentheses become $Matches[1], $Matches[2], and so on. Yes, there is a $Matches[0], but that is the entire text that was matched, not just the groups that matched. We now return you to your regularly scheduled script...
} else { #End of the 'New Record' section. If it's not a new record it does the following
if($_ -match "^((?:[^ ^\[])(?:\w| |\.)+?):(.*)$"){
RegEx match again. It starts off by stating that this has to be the beginning of the string, with the caret character (^). Then it says (in a non-capturing group noted by the (?:<stuff>) format, which really for my purposes just means it won't show up in $Matches) [^ \[]; that means that the next character cannot be a space or opening bracket (escaped with a backslash), just to speed things up and skip those lines for this check. If you have things in brackets [] and the first character is a caret, it means 'don't match anything in these brackets'.
I actually just changed this next part to include periods, and used \w instead of [a-zA-Z0-9] because it's essentially the same thing but shorter. \w is a "word character" in RegEx, and includes letters, numbers, and the underscore. I'm not sure why the underscore is considered part of a word, but I don't make the rules, I just play the game. I was using [a-zA-Z0-9], which matches anything between 'a' and 'z' (lowercase), anything between 'A' and 'Z' (uppercase), and anything between '0' and '9'. At the risk of including the underscore character, \w is a lot shorter and simpler.
Then the actual capturing part of this RegEx. It has two groups: the first is letters, numbers, underscores, spaces, and periods (the period escaped with a \ because '.' on its own matches any character). Then a colon. Then a second group that is everything else until the end of the line.
$Field = $Matches[1] #Everything before the colon is the name of the field
$Value = $Matches[2].trim() #everything after the colon is the data in that field
$Event | Add-Member $Field $Value #Add the Field to $Event as a NoteProperty, with a value of $Value. Those two are actually positional parameters for Add-Member, so we don't have to go and specify what kind of member, specify what the name is, and what the value is. Just Add-Member <[string]name> <value can be a string, array, yeti, whatever... it's not picky>
} #End of New Field for current record
else{ #If it didn't find the regex to determine if it is a new field, then this is just more data from the last field, so don't change the field, just append this line to it:
$Event.$Field += if(![string]::isNullOrEmpty($Event.$Field)){"`r`n$_"}else{$_}
This is a long explanation for a fairly short bit of code. Really, all it does is add data to the current field! It has an inverted (prefixed with !) check to see whether the current field already has data. If it does, it appends a new line and then the current line of text; if the field is still empty, it just appends the line as-is.
}
}
}
$Records+=$Event #Adds the last event to the array of records.
Sorry, I'm not very good with XML. But at least this gets you clean records.
Edit: Ok, code is notated now, hopefully everything is explained well enough. If something is still confusing perhaps I can refer you to a site that explains better than I can. I ran the above against your sample input in PasteBin.
One possible way to deal with such files is to process them line by line. Each log entry starts with a timestamp and ends when the next line starting with a timestamp appears, so you could do something like this:
Get-Content 'C:\path\to\your.log' | % {
    if ($_ -match '^\d{4}/\d{2}/\d{2} \d{2}:\d{2}:\d{2}') {
        if ($logRecord) {
            # If a current log record exists, it is complete now, so it can be added
            # to your XML or whatever, e.g.:
            $logRecord -match '^(\d{4}/\d{2}/\d{2}) (\d{2}:\d{2}:\d{2}) (\S+) ...'
            $message = $xml.CreateElement('Message')
            $date = $xml.CreateElement('Date')
            $date.InnerText = $matches[1]
            $message.AppendChild($date)
            $time = $xml.CreateElement('Time')
            $time.InnerText = $matches[2]
            $message.AppendChild($time)
            $type = $xml.CreateElement('Type')
            $type.InnerText = $matches[3]
            $message.AppendChild($type)
            ...
            $xml.SelectSingleNode('...').AppendChild($message)
        }
        $logRecord = $_ # start new record
    } else {
        $logRecord += "`r`n$_" # append to current record
    }
}
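For readers more at home outside PowerShell, the same accumulate-until-the-next-timestamp idea can be sketched in Python. The field names mirror the XML in the question, only a single free-form details field is kept, and the exact header layout (optional bracketed number, the ' - ' separator) is an assumption based on the samples above:

import re

HEADER = re.compile(r'^(\d{4}/\d{2}/\d{2}) (\d{2}:\d{2}:\d{2}) (?:\[\d+\] )?(\w+) +(\S+) - (.*)$')

def parse_log(lines):
    records, current = [], None
    for line in lines:
        m = HEADER.match(line)
        if m:
            if current:
                records.append(current)
            date, time, level, source, message = m.groups()
            current = {'Date': date, 'Time': time, 'Type': level,
                       'Source': source, 'Sub': message, 'details': ''}
        elif current is not None:
            # continuation line: tack it onto the free-form details of the current record
            current['details'] += ('\n' if current['details'] else '') + line
    if current:
        records.append(current)
    return records

sample = """2014/04/09 11:27:03 INFO Some.code.function - Doing stuff
2014/04/09 11:27:04 INFO Some.code.function - Something happens
changes:
this stuff happened""".splitlines()

for record in parse_log(sample):
    print(record)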