I am trying to write a git hook that will not allow a user to push a commit to a branch that does not follow our branch-prefix rules.
So, to match the branch name, I need to write a regular expression.
Here is an example list showing the ordering of the branch prefixes:
ex - experimental
d - devops
b - back-end
f - front-end
So, for instance, we have the following branch names that should match:
Branch names may have additional words after the prefixes, as below (e.g. -git-hooks, -api).
Each branch name has to contain at least one prefix (e.g. -ex).
<developer>-ex-d - a developer's experimental branch with a devops implementation of git hooks
ex-b-f - an experimental branch with back-end and front-end API implementation
Branch names that shouldn't match:
<developer>-d-ex-f-b - illegal prefixes order
exp-front-back - illegal prefixes
The most difficult part for me is to understand how to match the prefixes in the correct order without repetition.
Thanks in advance for answering!
Either I haven't understood you correctly, or this is what you want:
/^([^-]+)?(-ex)?(-d)?(-b)?(-f)?(?!((-ex)|(-d)|(-b)|(-f)))(-.*)?$/
See https://regexr.com/3l4t4 to see it working
This looks for your prefixes in sequence, then uses a negative lookahead to enforce no duplicates, then allows other name chunks subsequently.
Actually, there's a bug: this will disallow (e.g.) <developer>-ex-d-dev because it thinks there's a duplicate -d. I'll leave the answer here in case someone else can improve it.
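For anyone who wants to poke at it, here is a quick Python harness (my own addition; 'alice' stands in for the <developer> placeholder) that exercises the pattern against the question's examples, including the false negative just mentioned:

import re

# The answer's pattern, unchanged apart from dropping the JS-style delimiters.
pattern = re.compile(r'^([^-]+)?(-ex)?(-d)?(-b)?(-f)?(?!((-ex)|(-d)|(-b)|(-f)))(-.*)?$')

branches = [
    ('alice-ex-d', True),       # developer's experimental branch with devops work
    ('ex-b-f', True),           # experimental back-end/front-end branch
    ('alice-d-ex-f-b', False),  # illegal prefix order
    ('exp-front-back', False),  # illegal prefixes
    ('alice-ex-d-dev', True),   # the known false negative described above
]
for name, expected in branches:
    print('{}: matched={}, expected={}'.format(name, bool(pattern.match(name)), expected))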
I wasn't able to solve this with a regex alone, but in Python it can be done like this (I know it's not a pretty answer, but it works):
import re

def check_prefixes(string_to_check):
    lst_prefixes = ['-ex-', '-d-', '-b-', '-f-']
    # find the position of each prefix (-1 means the prefix is absent and can be skipped)
    lst_positions = [string_to_check.find(prefix) for prefix in lst_prefixes if string_to_check.find(prefix) != -1]
    string_correct = True
    # if the prefixes aren't in the right order, the string isn't correct
    if sorted(lst_positions) != lst_positions:
        string_correct = False
    else:
        # find all the prefixes
        # the string needs to be adjusted so that re.findall actually finds all the
        # prefixes and doesn't skip the even-numbered ones or the first/last
        string_to_check = '-' + string_to_check.split('>-')[-1].replace('-', '--') + '-'
        all_prefixes = re.findall('(-.*?-)', string_to_check)
        # check that all the prefixes are legal
        for prefix in all_prefixes:
            if prefix not in lst_prefixes:
                string_correct = False
                break
    return string_correct

to_check = ['<developer>-ex-d',
            'ex-b-f',
            '<developer>-d-ex-f-b',
            'exp-front-back']

for string_to_check in to_check:
    print('{} = {}'.format(string_to_check, check_prefixes(string_to_check)))
Perhaps you can make it work in whatever language you're most comfortable with.
To do it with a single regex, it may be easier to match the unwanted branches:
without any suffix: ^[-]*$
a token other than ex, d, b, f: -(?!(ex|d|b|f)(-|$))[^-]*
a prefix not followed by one of the expected ones:
-ex-(?!(d|b|f)(-|$))|
-d-(?!(b|f)(-|$))|
-b-(?!f(-|$))
This can be checked on regex101.
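OR-ing those fragments together gives a workable rejection test; here is a sketch in Python (my own assembly of the pieces above; note that the second alternative also flags the free-form suffix words like -git-hooks that the question wants to allow, so those would need separate handling):

import re

# A branch is invalid if any of the "unwanted" alternatives matches.
bad = re.compile(
    r'^[-]*$'                      # no real content at all
    r'|-(?!(ex|d|b|f)(-|$))[^-]*'  # a token other than ex, d, b, f
    r'|-ex-(?!(d|b|f)(-|$))'       # -ex- not followed by d, b or f
    r'|-d-(?!(b|f)(-|$))'          # -d- not followed by b or f
    r'|-b-(?!f(-|$))'              # -b- not followed by f
)

def is_valid(branch):
    return bad.search(branch) is None

print(is_valid('ex-b-f'))          # True
print(is_valid('exp-front-back'))  # False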
We can check to see if a digit is in a password, for example, by doing something like:
(?=.*\d)
Or if there's a digit and lowercase with:
(?=.*\d)(?=.*[a-z])
Such a lookahead basically scans "until the end" of the string to check whether the digit or letter appears anywhere in it.
However, I was wondering if it's possible in some sort of generic way to limit the scope of a lookahead. Here's a basic example which I'm hoping will demonstrate the point:
start_of_string;
middle_of_string;
end_of_string;
I want to use a single regular expression to match against start_of_string + middle_of_string + end_of_string.
Is it possible to use a lookahead/lookbehind in the middle_of_string section WITHOUT KNOWING WHAT COMES BEFORE OR AFTER IT? That is, not knowing the size or contents of the preceding/succeeding string component. And limit the scope of the lookahead to only what is contained in that portion of the string?
Let's take one example:
start_of_string = 'start'
middle_of_string = '123'
end_of_string = 'ABC'
Would it be possible to check the contents of each part but limit its scope, like this?
string = 'start123ABC'
# Check to make sure the first part has a letter, the second part has a number and the third part has a capital
((?=.*[a-z]).*) # limit scope to the first part only!!
((?=.*[0-9]).*) # limit scope to only the second part.
((?=.*[A-Z]).*) # limit scope to only the last part.
In other words, can lookaheads/lookbehinds be "chained" with other components of a regex without it screwing up the entire regex?
UPDATE:
Here would be an example, hopefully this is more helpful to the question:
START_OF_STRING = 'abc'
Does 'x' exist in it? (?=.*x) ==> False
END_OF_STRING = 'cdxoy'
Does 'y' exist in it? (?=.*y) ==> True
FULL_STRING = START_OF_STRING + END_OF_STRING
'abcdxoy'
Is it possible to chain the two regexes together in such a way that each only works on its 'substring' component?
For example, (?=.*x) applied at the start of the full string would now return True, but it should not.
`((?=.*x)(?=.*y)).*`
I think the short answer to this is "No, it's not possible.", but am looking to hear from someone who understands this to tell why it is or isn't.
In .NET and JavaScript you could use a positive lookahead at the start of your string component and a positive lookbehind at the end of it to "constrain" the match. Example:
.*(?=.*arrow)(?<middle>.*)(?<=.*arrow).*
helloarrowxyz
{'middle': 'arrow'}
In PCRE, Python, or others, you would need a fixed-width lookahead to constrain it from going too far forward, such as what Wiktor Stribiżew suggests above:
.*(?=.{0,5}arrow)(?<middle>.{0,5}).*
Otherwise, it wouldn't be possible without either a fixed-width lookahead or a variable-width lookbehind.
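For example, in Python's re module (which lacks variable-width lookbehind), the fixed-width workaround looks like this; note that Python spells the named group (?P<middle>...) rather than (?<middle>...):

import re

# The {0,5} bounds keep the lookahead (and the capture) from ranging
# over the whole string.
m = re.search(r'.*(?=.{0,5}arrow)(?P<middle>.{0,5}).*', 'helloarrowxyz')
print(m.group('middle'))  # 'arrow'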
I have a program written in python3 that should parse several domain names every day and extrapolate data.
Parsed data should serve as input for a search function, for aggregation (statistics and charts) and to save some time to the analyst that uses the program.
Just so you know: I don't really have the time to study machine learning (which seems like a pretty good solution here), so I chose to start with regex, which I already use.
I have already searched the regex documentation inside and outside Stack Overflow and worked with the debugger on regex101, and I still haven't found a way to do what I need.
Edit (24/6/2019): I mention machine learning because the reason I need a complex parser is to automate things as much as possible, e.g. to make automatic choices like blacklisting, whitelisting, etc.
The parser should consider a few things:
a maximum number of 126 subdomains plus the TLD
each subdomain must not be longer than 64 characters
each subdomain can contain only alphanumeric characters and the - character
each subdomain must not begin or end with the - character
the TLD must not be longer than 64 characters
the TLD must not contain only digits
but I want to go a little deeper:
the first string can (optionally) contain a "usage type" like cpanel., mail., webdisk., autodiscover. and so on (or maybe a simple www.)
the TLD can (optionally) contain a particle like .co, .gov, .edu and so on (.co.uk, for example)
the final part of the TLD is not currently checked against any list of ccTLDs/gTLDs, and I don't think it will be in the future
What I thought would be useful to solve the problem is a regex group for the optional usage type, one for each subdomain, and one for the TLD (the optional particle must be inside the TLD group).
With these rules in mind I came up with a solution:
^(?P<USAGE>autodiscover|correo|cpanel|ftp|mail|new|server|webdisk|webhost|webmail[\d]?|wiki|www[\d]?\.)?([a-z\d][a-z\d\-]{0,62}[a-z\d])?((\.[a-z\d][a-z\d\-]{0,62}[a-z\d]){0,124}?(?P<TLD>(\.co|\.com|\.edu|\.net|\.org|\.gov)?\.(?!\d+)[a-z\d]{1,64})$
The above solution doesn't return the expected results. Here are a couple of examples.
A couple of strings to parse
without.further.ado.lets.travel.the.forest.com
www.without.further.ado.lets.travel.the.forest.gov.it
The groups I expect to find:

FullMatch: without.further.ado.lets.travel.the.forest.com
group2: without
group3: further
group4: ado
group5: lets
group6: travel
group7: the
group8: forest
groupTLD: .com

FullMatch: www.without.further.ado.lets.travel.the.forest.gov.it
groupUSAGE: www.
group2: without
group3: further
group4: ado
group5: lets
group6: travel
group7: the
group8: forest
groupTLD: .gov.it

The groups I actually find:

FullMatch: without.further.ado.lets.travel.the.forest.com
group2: without
group3: .further.ado.lets.travel.the.forest
group4: .forest
groupTLD: .com

FullMatch: www.without.further.ado.lets.travel.the.forest.gov.it
groupUSAGE: www.
group2: without
group3: .further.ado.lets.travel.the.forest
group4: .forest
groupTLD: .gov.it
group6: .gov
As you can see from the examples, a couple of particles are found twice, which is not the behavior I was after. Any attempt to edit the formula results in unexpected output.
Any idea about a way to get the expected results?
This is a simple, well-defined task. There is no fuzziness, no complexity, no guessing, just a series of easy tests to figure out everything on your checklist. I have no idea how "machine learning" would be appropriate or helpful here. Even regex is completely unnecessary.
I haven't implemented everything you want to verify, but it's not hard to fill in the missing bits.
import string

double_tld = ['gov', 'edu', 'co', 'add_others_you_need']

# we'll use this instead of regex to check subdomain validity
valid_sd_characters = string.ascii_letters + string.digits + '-'
valid_trans = str.maketrans('', '', valid_sd_characters)

def is_invalid_sd(sd):
    return sd.translate(valid_trans) != ''

def check_hostname(hostname):
    subdomains = hostname.split('.')

    # each subdomain can contain only alphanumeric characters and
    # the - character
    invalid_parts = list(filter(is_invalid_sd, subdomains))
    # TODO react if there are any invalid parts

    # "the TLD can (optionally) contain a particle like
    # .co, .gov, .edu and so on (.co.uk for example)"
    if subdomains[-2] in double_tld:
        subdomains[-2] += '.' + subdomains[-1]
        subdomains = subdomains[:-1]

    # "a maximum number of 126 subdomains plus the TLD"
    # TODO check list length of subdomains

    # "each subdomain must not begin or end with the - character"
    # "the TLD must not be longer than 64 characters"
    # "the TLD must not contain only digits"
    # TODO write loop, check first and last characters, length, isnumeric

    # TODO return something
I don't know if it is possible to get the output exactly as you asked; I don't think a single pattern can capture repeated results in different groups (group2, group3, ...).
I did find a way to get almost the result you expect, using the regex module.
match = regex.search(r'^(?:(?P<USAGE>autodiscover|correo|cpanel|ftp|mail|new|server|webdisk|webhost|webmail[\d]?|wiki|www[\d]?)\.)?(?:([a-z\d][a-z\d\-]{0,62}[a-z\d])\.){0,124}?(?P<TLD>(?:co|com|edu|net|org|gov)?\.(?!\d+)[a-z\d]{1,64})$', 'www.without.further.ado.lets.travel.the.forest.gov.it')
Output:
match.captures(0)
['www.without.further.ado.lets.travel.the.forest.gov.it']
match.captures(1) or match.captures('USAGE')
['www.']
match.captures(2)
['without', 'further', 'ado', 'lets', 'travel', 'the', 'forest']
match.captures(3) or match.captures('TLD')
['gov.it']
Here, to avoid capturing the . in the groups, I moved it into a non-capturing group, like this:
(?:([a-z\d][a-z\d\-]{0,62}[a-z\d])\.)
Hope it helps.
I'm writing code that looks through a string and takes in words that are not considered "reserved keywords". I am new to regex, but I have spent quite some time learning what kind of structure I need to look for reserved words. So far, I've written something along the lines of this:
\b(import|false|int|etc)\b
I am going to use an array list to feed all of the reserved words into the pattern above, but I need it to work the opposite of how it works now. I've figured out how to get it to search for the specific words with the code above, but how can I get it to match the words that are NOT listed above? I've tried incorporating the ^ symbol, but I'm not having any luck there. Any regex veterans out there who see what I'm doing wrong?
There are two obvious possibilities, depending on what (else) you are doing.
Possibility 1: Use a dict or set:
You could just match words and then test for membership in a set or dictionary:
import re

Reserved_words = set('import false true int ...'.split())
word_rx = r'\b\w+\b'  # Or whatever rule you like for "words"

for m in re.finditer(...):
    word = m.group(0)
    if word in Reserved_words:
        print("Found reserved word:", word)
    else:
        print("Found unreserved word:", word)
This approach is frequently used in lexers, where it is easier to just write a catch-all "match a word" rule, and then check a matched word against a list of keywords, than it is to write a fairly complex rule for each keyword and a catch-all to deal with the rest.
You can use a dict if you want to associate some kind of payload with the keyword (such as a class handle for instantiating a particular AST node type, etc.).
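For instance, a minimal sketch (the token names are just illustrative):

# Hypothetical payload dict: map each keyword to a token type (or an AST
# node class, a handler function, ...); everything else is an identifier.
KEYWORDS = {
    "import": "KW_IMPORT",
    "int": "TYPE_INT",
    "false": "LIT_FALSE",
}

def classify(word):
    return KEYWORDS.get(word, "IDENTIFIER")

print(classify("int"))    # TYPE_INT
print(classify("apple"))  # IDENTIFIER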
Possibility 2: Use named groups:
Another possibility is that you could use named groups in your regex to capture keyword/nonkeyword values:
word_rx = r'\b(?P<keyword>import|int|true|false|\.\.\.)\b|\b(?P<nonkeyword>\w+)\b'

for m in re.finditer(...):
    word = m.group('keyword')
    if word:
        print("Found keyword:", word)
    else:
        word = m.group('nonkeyword')
        print("Found nonkeyword:", word)
This is going to be slower than the previous approach, because of prefixes: "int" matches a keyword, but "integral" starts to match an int, then fails, then backtracks to the other branch, then matches a nonkeyword. :-(
However, if you are strongly tied to a mostly-regex implementation, for example, if you have many other regex-based rules, and you are processing them in a loop, then go for it!
I have a helper method that goes through any given block of text and replaces substrings that are in the format '#something;' with a link. It works with all test cases I've tried, including
#user; #user name; #user.name; ##user.name; #user*name;
but it gets hung up on quotation marks, as in
#I'll fight you;
but still matches up until that point? Below, for hacky debugging purposes, I have the helper method putting three asterisks ('***') on either side of the assumed match, so the above tag results in
***#I'***ll fight you;
I can't figure it out.
(And if anyone has any additional tips and tricks on how to get it to match a tag like '#username;;', where the end character is also part of the name, let me know. I figured that might be too complicated and better done programmatically.)
module PostsHelper
  def tag_users(content)
    # User tagging in format '#multiword name;'
    # Regexp /(\#)(.*?)(\;)/ for debugging; user configurable eventually
    start_character = '#'
    end_character = ';'
    tag_pattern = eval('/(#{start_character})(.*?)(#{end_character})/')
    name_pattern = eval('/(?<=#{start_character})(.*?)(?=#{end_character})/')
    # Iterate through all tags and replace with link
    content.gsub(tag_pattern) do
      tag = Regexp.last_match(0)
      tagged_name = tag[name_pattern, 1]
      tagged_user = User.where('lower(name) = ?', tagged_name.downcase).first
      if tagged_user
        "<a href='#{user_path(tagged_user.id)}'>##{tagged_name}</a>"
      else
        '***' + tag + '***'
      end
    end
  end
end
Edit: I called a quotation mark a comma. I hate myself.
What about something like this?
/(?<=#).[^;]*/
It should match everything in between the # and the ; -- as tried out at http://www.rubular.com/.
I'd also caution against using the termination character within the username -- it will be difficult to differentiate #user;; from a mention of #user; in a sentence that happens to be followed by a semicolon.
At one point in my app, I need to match some strings against a pattern. Let's say that some of the sample strings look as follows:
Hi there, John.
What a lovely day today!
Lovely sunset today, John, isn't it?
Will you be meeting Linda today, John?
Most (not all) of these strings are from pre-defined patterns as follows:
"Hi there, %s."
"What a lovely day today!"
"Lovely sunset today, %s, isn't it?"
"Will you be meeting %s today, %s?"
This library of patterns is ever-expanding (currently at around 1,500 patterns) but is manually maintained. The input strings (the first group), though, are largely unpredictable. Though most of them will match one of the patterns, some of them will not.
So, here's my question: Given a string (from the first group) as input, I need to know which of the patterns (known second group) it matched. If nothing matched, it needs to tell me that.
I'm guessing the solution involves building a regex out of the patterns, and iteratively checking which one matched. However, I'm unsure what the code to build those regexes looks like.
Note: The strings I've given here are for illustration purposes. In reality, the strings aren't human generated, but are computer-generated human-friendly strings as shown above from systems I don't control. Since they aren't manually typed in, we don't need to worry about things like typos and other human errors. Just need to find which pattern it matches.
Note 2: I could modify the patterns library to be some other format, if that makes it easier to construct the regexes. The current structure, with the printf style %s, isn't set in stone.
I am looking at this as a parsing problem. The idea is that the parser function takes a string and determines if it is valid or not.
The string is valid if you can find it among the given patterns. That means you need an index of all the patterns. The index must be a full-text index, and it must match by word position: e.g. it should short-circuit if the first word of the input is not found among the first words of the patterns. It should also take care of the wildcard match, i.e. %s in the pattern.
One solution is to put the patterns in an in-memory database (e.g. Redis) and do a full-text index on it (this will not match by word position), but you should be able to narrow down to the correct pattern by splitting the input into words and searching. The searches will be very fast because you have a small in-memory database. Also note that you are looking for the closest match: one or more words will not match, and the pattern with the highest number of matching words is the one you want.
An even better solution is to generate your own index in a dictionary format. Here is an example index for the four patterns you gave as a JavaScript object.
{
  "Hi": {"there": {"%s": null}},
  "What": {"a": {"lovely": {"day": {"today": null}}}},
  "Lovely": {"sunset": {"today": {"%s": {"isnt": {"it": null}}}}},
  "Will": {"you": {"be": {"meeting": {"%s": {"today": {"%s": null}}}}}}
}
This index descends recursively by word position: search for the first word; if found, search for the next word within the object returned by the first, and so on. Identical words at a given level share a single key. You should also handle the %s (match-any) case. This should be blindingly fast in memory.
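A minimal sketch of that recursive descent in Python (my own illustration; it assumes punctuation has already been stripped and the input is split on whitespace):

# None marks the end of a pattern; "%s" is the wildcard node.
index = {
    "Hi": {"there": {"%s": None}},
    "What": {"a": {"lovely": {"day": {"today": None}}}},
    "Lovely": {"sunset": {"today": {"%s": {"isnt": {"it": None}}}}},
    "Will": {"you": {"be": {"meeting": {"%s": {"today": {"%s": None}}}}}},
}

def matches(words, node):
    if not words:
        return node is None   # valid only if a pattern ends exactly here
    if node is None:
        return False          # the pattern ended but the input continues
    head, rest = words[0], words[1:]
    # try the exact word first, then the %s wildcard
    return any(key in node and matches(rest, node[key])
               for key in (head, "%s"))

print(matches("Hi there John".split(), index))   # True
print(matches("Hi there".split(), index))        # False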
My first thought would be to let the regexp engine take all the trouble of handling this. Regexp engines are usually optimised to handle large amounts of text, so it shouldn't be that much of a performance hassle. It's brute force, but the performance seems to be okay, and you could split the input into pieces and have multiple processes handle them. Here's my moderately tested solution (in Python):
import random
import string
import re

def create_random_sentence():
    nwords = random.randint(4, 10)
    sentence = []
    for i in range(nwords):
        sentence.append("".join(random.choice(string.ascii_lowercase) for x in range(random.randint(3, 10))))
    ret = " ".join(sentence)
    print(ret)
    return ret

patterns = [r"Hi there, [a-zA-Z]+.",
            r"What a lovely day today!",
            r"Lovely sunset today, [a-zA-Z]+, isn't it?",
            r"Will you be meeting [a-zA-Z]+ today, [a-zA-Z]+\?"]

for i in range(95):
    patterns.append(create_random_sentence())

monster_pattern = "|".join("(%s)" % x for x in patterns)
print(monster_pattern)
print("--------------")

monster_regexp = re.compile(monster_pattern)

inputs = ["Hi there, John.",
          "What a lovely day today!",
          "Lovely sunset today, John, isn't it?",
          "Will you be meeting Linda today, John?",
          "Goobledigoock"] * 2000

for i in inputs:
    ret = monster_regexp.search(i)
    if ret:
        print(".", end="")
    else:
        print("x", end="")
I've created a hundred patterns. This is the maximum limit of the Python regexp library. 4 of them are your actual examples; the rest are random sentences, just to stress performance a little.
Then I combined them into a single regexp with 100 groups: (group1)|(group2)|(group3)|... I'm guessing you'll have to sanitise the inputs for things that can have meaning in regular expressions (like ?, etc.). That's the monster_regexp.
Testing one string against this tests it against 100 patterns in a single shot. There are methods that fetch out the exact group which was matched. I test 10000 strings, 80% of which should match and 20% of which will not. It short-circuits, so if there's a success it will be comparatively quick; failures have to run through the whole regexp, so they will be slower. You can order things based on the frequency of input to get some more performance out of it.
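One such method, as a sketch: since each pattern sits in its own capturing group (and the patterns here contain no nested groups), the match object's lastindex is the 1-based index of the group, and therefore of the pattern, that matched.

ret = monster_regexp.search("Hi there, John.")
if ret:
    print("matched pattern:", patterns[ret.lastindex - 1])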
I ran this on my machine and this is my timing.
python /tmp/scratch.py 0.13s user 0.00s system 97% cpu 0.136 total
which is not too bad.
However, running a pattern against such a large regexp and failing will take longer, so I changed the inputs to lots of randomly generated strings that won't match and tried again. With 10000 strings, none of which match the monster_regexp, I got this:
python /tmp/scratch.py 3.76s user 0.01s system 99% cpu 3.779 total
Similar to Noufal's solution, but returns the matched pattern or None.
import re

patterns = [
    "Hi there, %s.",
    "What a lovely day today!",
    "Lovely sunset today, %s, isn't it",
    "Will you be meeting %s today, %s?"
]

def make_re_pattern(pattern):
    # characters like . ? etc. have special meaning in regular expressions.
    # Escape the string to avoid interpreting them differently.
    # re.escape escapes even %, so replace %s with XXX first to protect it.
    p = re.escape(pattern.replace("%s", "XXX"))
    return p.replace("XXX", r"\w+")

# Join all the patterns into a single regular expression.
# Each pattern is enclosed in () to remember the match.
# This will help us to find the matched pattern.
rx = re.compile("|".join("(" + make_re_pattern(p) + ")" for p in patterns))

def match(s):
    """Given an input string, returns the matched pattern or None."""
    m = rx.match(s)
    if m:
        # Find the index of the matching group.
        index = next(i for i, group in enumerate(m.groups()) if group is not None)
        return patterns[index]

# Testing with a couple of patterns
print(match("Hi there, John."))
print(match("Will you be meeting Linda today, John?"))
Python solution. JS should be similar.
>>> re2.compile('^ABC(.*)E$').search('ABCDE') == None
False
>>> re2.compile('^ABC(.*)E$').search('ABCDDDDDDE') == None
False
>>> re2.compile('^ABC(.*)E$').search('ABX') == None
True
>>>
The trick is to use ^ and $ to bound your pattern and making it a "template". Use (.*) or (.+) or whatever it is that you want to "search" for.
The main bottleneck for you, imho, will be iterating through a list of these patterns. Regex searches are computationally expensive.
If you want a "does any pattern match" result, build a massive OR based regex and let your regex engine handle the 'OR'ing for you.
Also, if you have only prefix patterns, check out the TRIE data structure.
This could be a job for sscanf; there is an implementation in JS: http://phpjs.org/functions/sscanf/ (the function being ported is this: http://php.net/manual/en/function.sscanf.php).
You should be able to use it without changing the prepared strings much, but I have doubts about the performance.
The problem isn't clear to me. Do you want to take the patterns and build regexes out of them?
Most regex engines have a "quoted string" option (\Q ... \E). So you could take the string and make it:
^\QHi there,\E(?:.*)\Q.\E$
These will be regexes that match exactly the string you want outside your variables.
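Python's re has no \Q...\E, but re.escape gives the same effect; a small sketch of building such a template regex:

import re

# Escape the literal chunks of the template and join them with a
# wildcard wherever a %s placeholder was.
template = "Hi there, %s."
pattern = "^" + "(?:.*)".join(re.escape(part) for part in template.split("%s")) + "$"

print(bool(re.match(pattern, "Hi there, John.")))  # True
print(bool(re.match(pattern, "Goobledigoock")))    # False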
If you want to use a single regex to match just a single pattern, you can put the patterns in groups to find out which one matched, but that will not give you EVERY match, just the first one.
If you use a proper parser (I've used PEG.js), it might be more maintainable, though. So that's another option if you think you might get stuck in regex hell.