Need help in designing a regular expression - regex

"LIM-1-2::PROVPEC=NTK552DA,CTYPE=\"LIM C-Band\":OOS-AU,UEQ"
"2XOSC-1-4::PROVPEC=NTK554BA,CTYPE=\"OSC w/WSC 2 Port SFP 2 Port 10/100 BT\":OOS-AU,UEQ"
"P155M-1-4-1::PROVPEC=NTK592NP,CTYPE=\"OC-3 0-15dB CWDM 1511 nm\":OOS-AU,UEQ"
I have this data in a file. I need to extract -1-2 for the first equipment and likewise -1-4-1 for the last one. I will be using this data later. I was able to figure out how to get -1-1, but my expression isn't versatile enough to also get -1-1-4.
Equipment can also have a subslot. This list is tentative.
The format is EQP-shelf-slot-subslot. I need an expression that checks whether a subslot exists and gives me the output in the form -shelf-slot-subslot or -shelf-slot.

How about:
my ($wanted) = $str =~ /^\w+([^:]+)/;
or, if quotes are part of the string:
my ($wanted) = $str =~ /^"\w+([^:]+)/;

Related

Including regex on variable before matching string

I'm trying to find and count the occurrences of words, read from one text file, in another text file. So far I can only find a word when it is written correctly, not when it is munged ("a" changed to "#" or "i" changed to "1"). Is it possible to add a regex to my strings for matching, or something similar? This is my code so far:
sub getOccurrenceOfStringInFileCaseInsensitive
{
    my $fileName = $_[0];
    my $stringToCount = $_[1];
    my $numberOfOccurrences = 0;
    my @wordArray = wordsInFileToArray($fileName);

    foreach (@wordArray)
    {
        my $numberOfNewOccurrences = () = (m/$stringToCount/gi);
        $numberOfOccurrences += $numberOfNewOccurrences;
    }

    return $numberOfOccurrences;
}
The routine receives the name of a file and the string to search. The routine wordsInFileToArray () just gets every word from the file and returns an array with them.
Ideally I would like to perform this search directly reading from the file in one go instead of moving everything to an array and iterating through it. But the main question is how to hard code something into the function that allows me to capture munged words.
Example: I would like to extract both lines from the file.
example.txt:
russ1#anh#ck3r
russianhacker
# this variable also will be read from a blacklist file
$searchString = "russianhacker";
getOccurrenceOfStringInFileCaseInsensitive ("example.txt", $searchString);
Thanks in advance for any responses.
Edit:
The possible substitutions will be defined by a user, and the regex must be set to fit. A user could say that a common substitution is to change the letter "a" to "#" or even "1". The possible changes are completely arbitrary.
When searching for a specific word ("russian" for example) this could be done with something like:
(m/russian/i); # would just match the word as it is
(m/russi[a#1]n/i); # would match the munged word
But I'm not sure how to do that if I have the string to match stored in a variable, such as:
$stringToSearch = "russian";
This is sort of a full-text search problem, so one method is to normalize the document strings before matching against them.
use strict;
use warnings;
use Data::Munge 'list2re';
...
my %norms = (
    '#' => 'a',
    '1' => 'i',
    ...
);
my $re = list2re keys %norms;
s/($re)/$norms{$1}/ge for @wordArray;
This approach only works if there is a single possible "normalized form" for any given word, and it may still be less efficient than simply trying every possible variation of the search string, if the document is large and you recompute the normalization every time you search it.
As a note, your regex m/$randomString/gi should be m/\Q$randomString/gi, as you don't want any regex metacharacters in $randomString to be interpreted that way. See the docs for quotemeta.
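A quick illustration of why \Q matters here (the blacklist entry below is hypothetical):

my $stringToCount = 'h4ck3r (v2)';    # hypothetical blacklist entry containing metacharacters
my $text          = 'seen: h4ck3r (v2) in the wild';

print "without \\Q: ", ($text =~ m/$stringToCount/i   ? "match" : "no match"), "\n";  # no match: ( ) act as a group
print "with \\Q:    ", ($text =~ m/\Q$stringToCount/i ? "match" : "no match"), "\n";  # match: treated literally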
There are parts of the problem which aren't specified precisely enough (yet).
Some of the roll-your-own approaches, which depend on those details, are:
If user-defined substitutions are global (replace every occurrence of a character in every string), the user can submit a mapping, as a hash say, and you can fix them all. The process will identify all candidates for the words (along with the actual, unmangled words, if found). There may be false positives, so also plan on some post-processing.
If the user can supply a list of substitutions along with the words they apply to (the mangled ones, or the corresponding unmangled ones), then we can make a more targeted run.
Before this is clarified, here is another way: use a module for approximate ("fuzzy") matching.
The String::Approx module seems to fit quite a few of your requirements.
The match of the target with a given string relies on the notion of the Levenshtein edit distance: how many insertions, deletions, and replacements ("edits") it takes to make the given string into the sought target. The maximum accepted number of edits can be set.
A simple-minded example:
use warnings;
use strict;
use feature 'say';
use String::Approx qw(amatch);

my $target = qq(russianhacker);
my @text = qw(that h#cker was a russ1#anh#ck3r);

my @matches = amatch($target, ["25%"], @text);

say for @matches;   #==> russ1#anh#ck3r
See the documentation for everything the module offers, but at least two comments are in order.
First, note that the second argument to amatch specifies the percentage deviation from the target string that is acceptable. For this particular example we need to allow every fourth character to be "edited." That much room for tweaking can result in accidental matches which then need to be filtered out, so there will be some post-processing to do.
Second, we didn't catch the easier one, h#cker. The module takes a fixed "pattern" (target), not a regex, and can search for only one at a time. So, in principle, you need a pass for each target string. This can be improved a lot, but there'll be more work to do.
Please study the documentation; the module offers a whole lot more than this simple example.
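For example, here is a sketch of the per-target pass mentioned above (the word list is made up, and the same 25% allowance is reused for every target); it prints each target followed by the elements it approximately matched:

use warnings;
use strict;
use String::Approx qw(amatch);

# Hypothetical blacklist; amatch handles one target per pass
my @targets = qw(russianhacker hacker);
my @text    = qw(that h#cker was a russ1#anh#ck3r);

for my $target (@targets) {
    my @hits = amatch($target, ["25%"], @text);
    print "$target: @hits\n" if @hits;
}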
I ended up solving the problem by including the regex directly in the variable that I'll use to match against the lines of my file. It looks something like this:
sub getOccurrenceOfMungedStringInFile
{
    my $fileName = $_[0];
    my $mungedWordToCount = $_[1];
    my $numberOfOccurrences = 0;

    open (my $inputFile, "<", $fileName) or die "Can't open file: $!";

    $mungedWordToCount =~ s/a/[a#4]/gi;

    while (my $currentLine = <$inputFile>)
    {
        chomp ($currentLine);
        $numberOfOccurrences += () = ($currentLine =~ m/$mungedWordToCount/gi);
    }

    close ($inputFile) or die "Can't close file: $!";

    return $numberOfOccurrences;
}
Where the line:
$mungedWordToCount =~ s/a/[a#4]/gi;
is just one of the substitutions that are needed; others can be added similarly.
I didn't know that Perl would just interpret the regex inside the variable, since I had tried that before and could only get the wanted results by defining the variables inside the function using single quotes. I must've done something wrong the first time.
Thanks for the suggestions, people.
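If more than a couple of substitutions are needed, one way to keep the "others can be added similarly" part manageable is to build the pattern in a single pass from a substitution table; a minimal sketch (the %subs map below is hypothetical):

use strict;
use warnings;

# Hypothetical map: a letter and the other characters it may be munged into
my %subs = (
    a => ['#', '4'],
    i => ['1'],
    e => ['3'],
);

my $word = 'russianhacker';

# Build the pattern in one pass so earlier replacements never get re-munged
my $mungedWordToCount = join '', map {
    exists $subs{ lc $_ }
        ? '[' . join('', $_, @{ $subs{ lc $_ } }) . ']'
        : quotemeta($_)
} split //, $word;

print "$mungedWordToCount\n";                               # russ[i1][a#4]nh[a#4]ck[e3]r
print "hit\n" if 'russ1#nh4cker' =~ /$mungedWordToCount/i;  # matches the munged spelling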

Perl MongoDB API - regular expression in filters

I am working in Perl with MongoDB. I have a collection of documents with a big text field, and I need to find all rows where that field contains multiple given strings.
So for instance, if this is a database of movie quotes one row would have value:
We must totally destroy all spice production on Arrakis. The Guild and
the entire Universe depends on spice. He who can destroy a thing,
controls a thing.
I want to be able to match that row with terms "spice", "Arrakis", and "Guild" where ALL of those terms have to be in the text.
My current approach can only achieve matches if the terms provided happen to be in the correct order, i.e.:
$db->get_collection( 'quotes' )->find( { quote => qr/spice.*Arrakis.*Guild/i } );
That's a match, but
$db->get_collection( 'quotes' )->find( { quote => qr/Guild.*spice.*Arrakis/i } );
is not a match.
If I were working with a SQL database I could do:
... WHERE quote LIKE '%spice%' and quote LIKE '%Arrakis%' and quote LIKE '%Guild%'
but with the MongoDB interface you only get one shot per field.
Is there a way to match multiple words where all are required in one regex, or is there another way to get more than one crack at a field in the MongoDB interface?
One way: a bunch of positive lookahead assertions:
#!/usr/bin/perl
use warnings;
use strict;
use feature qw/say/;

my @tests = ("The Guild demands that the spice must flow from Arrakis",
             "House Atreides will be transported to Arrakis by the Guild.");

for my $test (@tests) {
    if ($test =~ m/^(?=.*spice)
                    (?=.*Guild)
                    (?=.*Arrakis)/x) {
        say "$test: matches";
    } else {
        say "$test: fails";
    }
}
produces:
The Guild demands that the spice must flow from Arrakis: matches
House Atreides will be transported to Arrakis by the Guild.: fails
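To apply the same idea to the MongoDB filter, the lookaheads can be built from a list of terms and passed as a single qr// (a sketch, assuming $db is the same database handle used in the question; /s is added so . can cross the newlines in multi-line quotes):

my @terms = qw(spice Arrakis Guild);

# One lookahead per required term, so the order of terms no longer matters
my $lookaheads = join '', map { "(?=.*\Q$_\E)" } @terms;
my $filter_re  = qr/^$lookaheads/si;

# $db: a MongoDB database handle obtained elsewhere, as in the question
my $cursor = $db->get_collection('quotes')->find( { quote => $filter_re } );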

How do I implement IPv6 zero compression in Powershell?

For example, here is an IP address: fc06:0000:0002:0760:e000:7201:7003:06b7.
But I'd like to replace ":0" (or ":" followed by even more zeros) with ":", so it should look like this: fc06::2:760:e000:7201:7003:6b7
The code:
if (Test-Path $file){
    foreach ($line in Get-Content $file){
        $line = $line.Replace(":0+",":")
        Write-Output $line
    }
}
I know that the problem is with :0+. So: how can I say "one or more of the preceding character" (here: zero)? Simply using :0+ doesn't work.
Thanks!
Use the built-in [System.Net.IPAddress] class:
$ip = [System.Net.IPAddress]"fc06:0000:0002:0760:e000:7201:7003:06b7"
$ip.IPAddressToString
or
$ip = [System.Net.IPAddress]::Parse("fc06:0000:0002:0760:e000:7201:7003:06b7")
$ip.IPAddressToString
I recommend this over using string replacement because it will deal with the issue of multiple groups of zeroes in different parts of the address, preventing you from creating an invalid IPv6 address. It will not collapse the first consecutive zeros into :: though.
Edit
Actually, this method will use :: if there is more than one consecutive group of zeroes, for example:
$ip = [System.Net.IPAddress]::Parse("fc06:0000:0000:0760:e000:7201:0000:06b7")
$ip.IPAddressToString
Will give you fc06::760:e000:7201:0:6b7.
Edit 2
Here's a way to get exactly what you want:
$ip = [System.Net.IPAddress]::Parse("fc06:0000:0002:0760:e000:7201:7003:06b7")
$ip.IPAddressToString -replace '(?<!(?:^|:)0?:.*?)(?:^|:)0:','::'
This lets the class do the heavy lifting, and in case there's a single zero that could be collapsed, it takes care of it.
Output: fc06::2:760:e000:7201:7003:6b7
(also tested with other addresses that have different sizes of zero blocks in different positions).
Edit 3:
As @Ron Maupin points out, RFC 5952 section 4.2.2 states:
The symbol "::" MUST NOT be used to shorten just one 16-bit 0 field.
For example, the representation 2001:db8:0:1:1:1:1:1 is correct, but
2001:db8::1:1:1:1:1 is not correct.
Given this, I strongly recommend that you don't use the code in Edit 2, or otherwise attempt to collapse a single 16 bit field.
Instead, use the built-in class's zero compression which appears to already be RFC compliant.
The string .replace() method uses literal string arguments, not regular expressions.
Switch to the -replace operator:
$line = $line -replace ':0+',':'

Perl decimal to ASCII conversion

I am pulling SNMP information from an F5 LTM, and storing this information in a psql database. I need help converting the returned data in decimal format into ASCII characters.
Here is an example of the information returned from the SNMP request:
iso.3.6.1.4.1.3375.2.2.10.2.3.1.9.10.102.111.114.119.97.114.100.95.118.115 = Counter64: 0
In my script, I need to identify the different sections of this information:
my ($prefix, $num, $char_len, $vs) = ($oid =~ /($vsTable)\.(\d+)\.(\d+)\.(.+)/);
This gives me the following:
(prefix= .1.3.6.1.4.1.3375.2.2.10.2.3.1)
(num= 9 )
(char_len= 10 )
(vs= 102.111.114.119.97.114.100.95.118.115)
The variable $vs is the Object name in decimal format. I would like to convert this to ASCII characters (which should be "forward_vs").
Does anyone have a suggestion on how to do this?
Given that this is related to interpreting SNMP data, it seems logical to me to use one or more of the SNMP modules available from CPAN. You have to know quite a lot about SNMP to determine when the string you quote stops being the identifier (prefix) and starts to be the value. You have a better chance of getting a general solution with SNMP code than with hand-hacked code.
Jonathan Leffler has the right answer, but here are a couple of things to expand your Perl horizons:
use v5.10;
$_ = "102.111.114.119.97.114.100.95.118.115";
say "Version 1: " => eval;
say "Version 2: " => pack "W".(1+y/.//) => /\d+/g;
Executed, that prints:
Version 1: forward_vs
Version 2: forward_vs
Once both are clear to you, you may hit space to continue or q to quit. :)
EDIT: The last one can also be written
pack "WW".y/.//,/\d+/g
But please don't. :)
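For reference, Version 1 works because a run of dot-separated integers (with two or more dots) is Perl's v-string syntax, denoting a string built from those character codes; a small illustration:

use v5.10;

my $vstring = v102.111.114.119.97.114.100.95.118.115;   # v-string: one character per number
say $vstring;                                            # forward_vs
say sprintf "%vd", $vstring;                             # back to 102.111.114.119.97.114.100.95.118.115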
my $new_vs = join("", map { chr($_) } split(/\./,$vs));
Simple solution:
$ascii .= chr for split /\./, $vs;
pack 'C*', split /\./
For example,
>perl -E"say pack 'C*', split /\./, $ARGV[0]" 102.111.114.119.97.114.100.95.118.115
forward_vs

Trying to simplify a Regex

I'm spending my weekend analyzing Campaign Finance Contribution records. Fun!
One of the annoying things I've noticed is that entity names are entered differently:
For example, I see stuff like this: 'llc', 'llc.', 'l l c', 'l.l.c', 'l. l. c.', 'llc,', etc.
I'm trying to catch all these variants.
So it would be something like:
"l([,\.\ ]*)l([,\.\ ]*)c([,\.\ ]*)"
Which isn't so bad... except there are about 40 entity suffixes that I can think of.
The best thing I can think of is programmatically building up this pattern, based on my list of suffixes.
I'm wondering if there's a better way to handle this within a single regex that is human readable/writable.
You could just strip out excess crap. Using Perl:
my $suffix = "l. lc.."; # the worst case imaginable!
$suffix =~ s/[.\s]//g;
# no matter what variation $suffix was, it's now just "llc"
Obviously this may maul your input if you use it on the full company name, but getting too in-depth with how to do that would require knowing what language we're working with. A possible regex solution is to copy the company name and strip out a few common words and any words with more than (about) 4 characters:
my $suffix = $full_name;
$suffix =~ s/\w{4,}//g; # strip words of more than 4 characters
$suffix =~ s/(a|the|an|of)//ig; # strip a few common cases
# now we can mangle $suffix all we want
# and be relatively sure of what we're doing
It's not perfect, but it should be fairly effective, and more readable than using a single "monster regex" to try to match all of them. As a rule, don't use a monster regex to match all cases, use a series of specialized regexes to narrow many cases down to a few. It will be easier to understand.
Regexes (other than relatively simple ones) and readability rarely go hand-in-hand. Don't misunderstand me, I love them for the simplicity they usually bring, but they're not fit for all purposes.
If you want readability, just create an array of possible values and iterate through them, checking your field against them to see if there's a match.
Unless you're doing gene sequencing, the speed difference shouldn't matter. And it will be a lot easier to add a new one when you discover it. Adding an element to an array is substantially easier than reverse-engineering a regex.
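A sketch of that approach, with a made-up variant list for a single suffix (extend the array as new spellings turn up):

use strict;
use warnings;

# Hypothetical list of known spellings for one suffix
my @llc_variants = ('llc', 'llc.', 'l l c', 'l.l.c', 'l. l. c.', 'llc,');

my $field = 'Acme Widgets l. l. c.';
my ($hit) = grep { index(lc $field, $_) >= 0 } @llc_variants;

print $hit ? "matched variant: $hit\n" : "no match\n";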
The first two "l" parts can be simplified by [the first "l" part here]{2}.
You can squish periods and whitespace first, before matching; for instance, in Perl:
while (<>) {
    $Sq = $_;
    $Sq =~ s/[.\s]//g;                # squish away . and " " in the temporary save version
    $Sq = lc($Sq);
    $Sq =~ /^llc$/ and $_ = 'L.L.C.'; # try to match; if so, save the canonical version
    $Sq =~ /^ibm/  and $_ = 'IBM';    # a different match
    print $_;
}
Don't use regexes; instead, build up a map of all discovered (so far) entries and their 'canonical' (favourite) versions.
Also build a tool to discover possible new variants of postfixes by identifying common prefixes to a certain number of characters and printing them on the screen so you can add new rules.
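A minimal sketch of that map-based normalization (the entries are made up; in practice they would come from the discovery tool described above):

use strict;
use warnings;

# Hypothetical map from discovered spellings to a canonical form
my %canonical = (
    'llc'   => 'L.L.C.',
    'l l c' => 'L.L.C.',
    'l.l.c' => 'L.L.C.',
    'inc'   => 'Inc.',
    'inc.'  => 'Inc.',
);

for my $entry ('l.l.c', 'LLC', 'unknown co') {
    my $normalized = $canonical{ lc $entry } // $entry;   # fall back to the original spelling
    print "$entry -> $normalized\n";
}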
In Perl you can build up regular expressions inside your program using strings. Here's some example code:
#!/usr/bin/perl
use strict;
use warnings;
my @strings = (
    "l.l.c",
    "llc",
    "LLC",
    "lLc",
    "l,l,c",
    "L . L C ",
    "l W c"
);

my @seps = ('.', ',', '\s');
my $sep_regex = '[' . join('', @seps) . ']*';

my $regex_def = join '', (
    '[lL]',
    $sep_regex,
    '[lL]',
    $sep_regex,
    '[cC]'
);

print "definition: $regex_def\n";

foreach my $str (@strings) {
    if ( $str =~ /$regex_def/ ) {
        print "$str matches\n";
    } else {
        print "$str doesn't match\n";
    }
}
This regular expression could also be simplified by using case-insensitive matching (which means $match =~ /$regex/i ). If you run this a few times on the strings that you define, you can easily see cases that don't validate according to your regular expression. Building up your regular expression this way can be useful in only defining your separator symbols once, and I think that people are likely to use the same separators for a wide variety of abbreviations (like IRS, I.R.S, irs, etc).
You also might think about looking into approximate string matching algorithms, which are popular in a large number of areas. The idea behind these is that you define a scoring system for comparing strings, and then you can measure how similar input strings are to your canonical string, so that you can recognize that "LLC" and "lLc" are very similar strings.
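As one concrete option for such a scoring system, the CPAN module Text::Levenshtein exposes the edit distance directly (using it here is my suggestion, not something the paragraph above named):

use strict;
use warnings;
use Text::Levenshtein qw(distance);   # assumed CPAN module; not part of core Perl

my $canonical = 'llc';

for my $candidate ('lLc', 'l.l.c', 'inc') {
    my $d = distance(lc $candidate, $canonical);   # number of single-character edits
    printf "%-6s distance to %s: %d\n", $candidate, $canonical, $d;
}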
Alternatively, as other people have suggested you could write an input sanitizer that removes unwanted characters like whitespace, commas, and periods. In the context of the program above, you could do this:
my $sep_regex = '[' . join('', @seps) . ']*';

foreach my $str (@strings) {
    my $copy = $str;
    $copy =~ s/$sep_regex//g;
    $copy = lc $copy;
    print "$str -> $copy\n";
}
If you have control of how the data is entered originally, you could use such a sanitizer to validate input from the users and other programs, which will make your analysis much easier.