How can I use regular expression to search files in Unix? - regex

I have the following files from 2 different categories:
Category 1:
MAA
MAB
MAC
MAD
MAE
MAF
MAG
MAH
MAJ
MBA
MBB
MBC
MBD
MBE
MDA
MDD
and Category 2:
MCA
MCB
MCC
MCD
MCE
MCF
MCG
MDB
So my question is: how can I write a regular expression that finds files from category 1 only?
I don't want a hard-coded list; I'm hoping someone can suggest the underlying logic.
I am trying this :
find . -regex "*[M][A,B,D][A,B,C,D,E,F,J].txt"

It's quite simple :
ls -l | grep "MAA\|MAB\|MAC\|MAD\|MAE\|MAF\|MAG\|MAH\|MAJ\|MBA\|MBB\|MBC\|MBD\|MBE\|MDA\|MDD"
Ok, so you don't want it hardcoded. Then you should state the patterns which should NOT match, using -v:
ls -l | grep -v "MC." | grep -v "pattern2" | ....

Your question is not very precise, but from your attempt I conclude that you are looking for files whose names end in ....MAA.txt, ...MAB.txt and so on, located either in your working directory or somewhere below it.
You also didn't mention which shell you are using. Here is an example using zsh - no need to write a regular expression here:
ls ./**/*M{AA,AB,AC,AD,AE,AF,AG,AH,AJ,BA,BB,BC,BD,BE,DA,DD}.txt

I am trying this : find . -regex "*[M][A,B,D][A,B,C,D,E,F,J].txt"
The errors in this are:
The wildcard for any characters in a regex is .*, unlike just * in a normal filename pattern.
You forgot G and H in the third bracket expression.
You didn't exclude the category 2 name MDB.
Besides:
The characters of a bracket expression are not separated by commas; [A,B,D] also matches a literal , character.
A bracket expression with a single item ([M]) can be replaced by just the item (M).
This leads to:
find . -regex ".*M[ABD].*" -not -name "MDB*"
or, without regex:
find . -name "M[ABD]*" -not -name "MDB*"
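As a sanity check, the glob variant can be exercised in a throwaway directory; a sketch, with the file list taken from the question and a scratch directory created with mktemp:

```shell
# Recreate the question's file list in a scratch directory
dir=$(mktemp -d)
cd "$dir"
for n in MAA MAB MAC MAD MAE MAF MAG MAH MAJ MBA MBB MBC MBD MBE MDA MDD \
         MCA MCB MCC MCD MCE MCF MCG MDB; do
    touch "$n.txt"
done

# Category 1: second letter A, B or D, then exclude the stray MDB
matches=$(find . -name "M[ABD]*" -not -name "MDB*" | sort)
echo "$matches"
```

All 16 category-1 names come back, and neither MDB nor any MC name does.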

Related

regex quantifiers in bash --simple vs extended matching {n} times

I'm using the bash shell and trying to list files in a directory whose names match regex patterns. Some of these patterns work, while others don't. For example, the * wildcard is fine:
$ ls FILE_*
FILE_123.txt FILE_2345.txt FILE_789.txt
And the range pattern captures the first two of these with the following:
$ ls FILE_[1-3]*.txt
FILE_123.txt FILE_2345.txt
but not the filename with the "7" character after "FILE_", as expected. Great. But now I want to count digits:
$ ls FILE_[0-9]{3}.txt
ls: FILE_[0-9]{3}.txt: No such file or directory
Shouldn't this give me the filenames with three numeric digits following "FILE_" (i.e. FILE_123.txt and FILE_789.txt, but not FILE_2345.txt)? Can someone tell me how I should be using the {n} quantifier (i.e. "match this pattern n times")?
ls uses glob patterns, so you cannot use {3}. You have to use FILE_[0-9][0-9][0-9].txt. Or you could use the following command:
ls | grep -E "FILE_[0-9]{3}\.txt"
Edit:
Or you can also use the find command:
find . -regextype egrep -regex '.*/FILE_[0-9]{3}\.txt'
The .*/ prefix is needed because -regex matches against the complete path. On Mac OS X:
find -E . -regex ".*/FILE_[0-9]{3}\.txt"
Bash filename expansion does not use regular expressions. It uses glob pattern matching, which is distinctly different. In FILE_[0-9]{3}.txt the {3} is not even brace expansion (that requires a comma or a range), so bash looks for a file whose name literally contains {3}. Even bash's extended globbing feature doesn't have an equivalent to regular expression's {N}, so as already mentioned you have to use FILE_[0-9][0-9][0-9].txt
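The two working approaches can be compared side by side; a sketch using the file names from the question in a scratch directory:

```shell
# Set up the three sample files
dir=$(mktemp -d)
cd "$dir"
touch FILE_123.txt FILE_2345.txt FILE_789.txt

# Glob: no {n} quantifier exists, so each digit position is spelled out
by_glob=$(printf '%s\n' FILE_[0-9][0-9][0-9].txt)

# Regex: grep -E understands {3}; anchors make "exactly three" explicit
by_regex=$(ls | grep -E '^FILE_[0-9]{3}\.txt$')

echo "$by_glob"
echo "$by_regex"
```

Both print FILE_123.txt and FILE_789.txt, and neither matches FILE_2345.txt.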

Why can't `find` actually find all of the directories matching a pattern?

I have directories matching the pattern foo[0-9]+ and foo-bar. I want to remove all directories matching the former pattern. My plan is to do this with find, but when I try to find directories matching the former pattern, nothing comes back:
$ mkdir foo{1..15} foo-bar
$ # yields nothing
$ find . -name "foo[0-9]+"
When I try to find everything that matches foo[^-], only some of the directories appear:
$ find . -name "foo[^-]"
./foo9
./foo7
./foo6
./foo1
./foo8
./foo4
./foo3
./foo2
./foo5
I've played with the -regex flag and all available -regextypes, but can't seem to get the magic right.
How can I list all of these directories?
This should work:
find -E . -regex '.*/foo[0-9]+'
You might want to limit the type: find -E . -type d -regex '.*/foo[0-9]+'
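Note that -E is the BSD find spelling; GNU find spells the same thing with -regextype. A sketch, assuming GNU find and recreating a few of the question's directories:

```shell
# Recreate a handful of the question's directories
dir=$(mktemp -d)
cd "$dir"
mkdir foo1 foo2 foo15 foo-bar

# GNU find: select the extended-regex dialect explicitly; foo-bar is
# excluded because [0-9]+ requires at least one digit
found=$(find . -regextype posix-extended -type d -regex '.*/foo[0-9]+' | sort)
echo "$found"
```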
This works:
$ ls -F
foo-bar/ foo10/ foo12/ foo14/ foo2/ foo4/ foo6/ foo8/
foo1/ foo11/ foo13/ foo15/ foo3/ foo5/ foo7/ foo9/
$ find . -name "foo[^-]*"
./foo1
./foo2
./foo3
./foo4
./foo5
./foo6
./foo7
./foo8
./foo9
./foo10
./foo11
./foo12
./foo13
./foo14
./foo15
Alternatively, if your goal is to list all directories that don't match foo-bar then you can simply use the -not operator:
$ find . -not -name foo-bar
.
./foo1
./foo2
./foo3
./foo4
./foo5
./foo6
./foo7
./foo8
./foo9
./foo10
./foo11
./foo12
./foo13
./foo14
./foo15
By the way, you were using file globbing and not regexes when you weren't using the -regex flag.
To find the files using globbing:
find . -name "foo[1-9]" -o -name "foo1[0-5]" -o -name "foo-bar"
There we match any file named "foo" followed by a single digit between 1 and 9, or "foo1" followed by a single digit between 0 and 5, or the file named exactly "foo-bar".
Or if you know the directory won't have any numbered files aside from the ones you created:
find . -name "foo[1-9]*" -o -name "foo-bar"
Here we find all files named "foo" followed by one digit, followed by any number of any characters, or the file named exactly foo-bar. Globbing is not as precise as regexes, but it's often sufficient, and it's quick.
The * and ? in globbing are different from their regex counterparts. In globbing they themselves stand for the unknown characters in the string being matched, including how many of them there are. In regexes they modify the preceding atom and express how many times that atom may occur.
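The difference is easy to demonstrate; a sketch with made-up file names in a scratch directory:

```shell
dir=$(mktemp -d)
cd "$dir"
touch abc bbb

# Glob: * by itself stands for any run of characters, so a* matches "abc" only
glob_hits=$(printf '%s\n' a*)

# Regex: * quantifies the preceding atom, so a* means "zero or more a's" and
# matches every line; the regex spelling of the glob * is .*
regex_hits=$(printf 'abc\nbbb\n' | grep -c 'a*')

echo "$glob_hits $regex_hits"
```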

How to ignore digits

I have a file location
/appl/bcm_prod/u/scratch/markit/markitdownloader_20160420_25918.log
I know the variables for the year, month, and day; how do I ignore the rest of the string?
For example,
/appl/bcm_prod/u/scratch/markit/markitdownloader_%Y%m%d_25918.log
what do I put for the 25918 id, which can change every day?
What is the regex flavour?
There are examples below:
find /appl/bcm_prod/u/scratch/markit/ -regex ".*/markitdownloader_20160420_[0-9][0-9]*\.log" -type f
find /appl/bcm_prod/u/scratch/markit/ -type f | grep "markitdownloader_20160420_[0-9][0-9]*\.log"
ls /appl/bcm_prod/u/scratch/markit/markitdownloader_20160420_*.log
ls /appl/bcm_prod/u/scratch/markit/markitdownloader_20160420_+([[:digit:]]).log   # in bash this needs: shopt -s extglob
Typically you would use either * or .* depending on the flavour. In a glob, * is the wildcard all by itself; in a regex the wildcard is . and * means "zero or more of the previous item", so the glob * corresponds to the regex .*. So you would put the date as necessary and then *.log (glob) or .*\.log (regex). In bash I would say logname_date_*.log is what you are looking for.
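Putting the known date variables together with the glob from the answers; a sketch where the directory layout and id values are invented for the test:

```shell
# Recreate the layout: one file from an earlier date, two from the target date
dir=$(mktemp -d)
cd "$dir"
touch markitdownloader_20160419_11111.log \
      markitdownloader_20160420_25918.log \
      markitdownloader_20160420_31007.log

# Substitute the known date parts; the changing id is left to the glob
Y=2016 m=04 d=20
files=$(ls markitdownloader_"${Y}${m}${d}"_*.log)
echo "$files"
```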

Find and replace every second occurrence with sed

Hi all, I have the code below:
find . -type f -exec sed -i 's#EText-No.#New EText-No. #g' {} +
I have been using this script to find and replace some characters in multiple files in folders and subfolders.
I have discovered that some values occur more than twice. Hence I need to modify my script to replace only the second instance of an attribute:
find . -type f -exec sed -i '/Subject/{:a;N;/Subject.*Subject/!Ta;s/Subject/SecondSubject/2;}/g' {} +
I am trying to use the code above to achieve this, but it doesn't seem to work. I also need it to use "#" as the separator, like the first command, because I have slash characters in my files.
Any idea how I might make the code work using the separator #?
Thanks for your help.
ORIGINAL FILE BEFORE PROCESSING
<tr><th scope="row">Subject</th><td>United States -- Biography</td></tr><tr><th scope="row">Subject</th><td>United States -- Short Stories</td></tr><tr><th scope="row">EText-No.</th><td>24200</td></tr><tr><th scope="row">Release Date</th><td>2008-01-07</td></tr><tr>
After processing
<tr><th scope="row">Subject</th><td>United States -- Biography</td></tr><tr><th scope="row">SecondSubject</th><td>United States -- Short Stories</td></tr><tr><th scope="row">EText-No.</th><td>24200</td></tr><tr><th scope="row">Release Date</th><td>2008-01-07</td></tr><tr>
Please note that the second Subject is changed from 'Subject' to 'SecondSubject'
Try this:
sed -i '/Subject/{:a;s/\(Subject.*\)Subject/\1SecondSubject/;tb;N;ba;:b}'
If a line appended to the pattern space (with the N command) can itself contain more than one occurrence of the word "Subject", then you can use this command instead, which targets only the first occurrence on the appended line (i.e. the second occurrence in the pattern space):
sed -i '/Subject/{:a;/Subject.*Subject/!{N;ba;};s/Subject/newSubject/2;}'
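The pattern-space trick can be checked on a trimmed one-line sample from the question, writing to stdout instead of editing in place. Note that the s command's pattern contains no slashes, so "/" and "#" work equally well as delimiters here. A sketch, assuming GNU sed:

```shell
# A trimmed single-line sample with two occurrences of "Subject"
line='<th scope="row">Subject</th><td>A</td><th scope="row">Subject</th><td>B</td>'

# Collect lines until the pattern space holds two occurrences, then replace
# only the second; s#Subject#SecondSubject#2 is an equivalent "#" spelling
result=$(printf '%s\n' "$line" |
    sed '/Subject/{:a;/Subject.*Subject/!{N;ba;};s/Subject/SecondSubject/2;}')
echo "$result"
```

Only the second Subject becomes SecondSubject; the first is left alone.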

Shell Script - list files, read files and write data to new file

I have a special question about shell scripting.
Simple scripting is no problem for me, but I am new to this and want to build myself a simple database file.
So, what I want to do is:
- Search for file types (i.e. .nfo) <-- should be no problem :)
- Read each found file and extract some strings from it
- Write those strings to a new file, with the information from each found file on its own row
I hope I explained my "project" well.
My problem now is understanding how to tell the script to search for files, then read each of those files and use some information in them to write into a new file.
I will explain a bit better.
I am searching for files and that gives me back:
file1.nfo
file2.nfo
file3.nfo
Ok, now from each of those files I need the information between 2 tags, i.e.
file1.nfo:
<user>test1</user>
file2.nfo:
<user>test2</user>
so in the new file there should now be:
file1.nfo:test1
file2.nfo:test2
OK so:
find -name '*.nfo' > /test/database.txt
is printing out the list of files.
and
sed -n '/<user*/,/<\/user>/p' file1.nfo
gives me back the complete file and not only the information between <user> and </user>
I am trying to go step by step, and I am reading a lot, but it seems to be very difficult.
What am I doing wrong, and what would be the best way to list all the files and write each file name plus the content between the two strings to a new file?
EDIT - NEW:
OK, here is an update with more information.
I have learned a lot now and have searched the web for my problems. I can find plenty of information, but I don't know how to put it together so that I can use it.
Working with awk now, I get back the filename and the string.
Here is the complete information (I thought I could carry on by myself with a bit of help, but I can't :( )
Here is an example of: /test/file1.nfo
<string1>STRING 1</string1>
<string2>STRING 2</string2>
<string3>STRING 3</string3>
<string4>STRING 4</string4>
<personal informations>
<hobby>Baseball</hobby>
<hobby>Basketball</hobby>
</personal informations>
Here is an example of /test/file2.nfo:
<string1>STRING 1</string1>
<string2>STRING 2</string2>
<string3>STRING 3</string3>
<string4>STRING 4</string4>
<personal informations>
<hobby>Soccer</hobby>
<hobby>Traveling</hobby>
</personal informations>
The File i want to create has to look like this.
STRING 1:::/test/file1.nfo:::Date of file:::STRING 4:::STRING 3:::Baseball, Basketball:::STRING 2
STRING 1:::/test/file2.nfo:::Date of file:::STRING 4:::STRING 3:::Soccer, Traveling:::STRING 2
"Date of file" should be the creation date of the file, so that I can see how old the file is.
So, that's what I need, and it seems not easy.
Thanks a lot.
UPDATE: ERROR with -printf
find: unrecognized: -printf
Usage: find [PATH]... [OPTIONS] [ACTIONS]
Search for files and perform actions on them.
First failed action stops processing of current file.
Defaults: PATH is current directory, action is '-print'
-follow Follow symlinks
-xdev Don't descend directories on other filesystems
-maxdepth N Descend at most N levels. -maxdepth 0 applies
actions to command line arguments only
-mindepth N Don't act on first N levels
-depth Act on directory *after* traversing it
Actions:
( ACTIONS ) Group actions for -o / -a
! ACT Invert ACT's success/failure
ACT1 [-a] ACT2 If ACT1 fails, stop, else do ACT2
ACT1 -o ACT2 If ACT1 succeeds, stop, else do ACT2
Note: -a has higher priority than -o
-name PATTERN Match file name (w/o directory name) to PATTERN
-iname PATTERN Case insensitive -name
-path PATTERN Match path to PATTERN
-ipath PATTERN Case insensitive -path
-regex PATTERN Match path to regex PATTERN
-type X File type is X (one of: f,d,l,b,c,...)
-perm MASK At least one mask bit (+MASK), all bits (-MASK),
or exactly MASK bits are set in file's mode
-mtime DAYS mtime is greater than (+N), less than (-N),
or exactly N days in the past
-mmin MINS mtime is greater than (+N), less than (-N),
or exactly N minutes in the past
-newer FILE mtime is more recent than FILE's
-inum N File has inode number N
-user NAME/ID File is owned by given user
-group NAME/ID File is owned by given group
-size N[bck] File size is N (c:bytes,k:kbytes,b:512 bytes(def.))
+/-N: file size is bigger/smaller than N
-links N Number of links is greater than (+N), less than (-N),
or exactly N
-prune If current file is directory, don't descend into it
If none of the following actions is specified, -print is assumed
-print Print file name
-print0 Print file name, NUL terminated
-exec CMD ARG ; Run CMD with all instances of {} replaced by
file name. Fails if CMD exits with nonzero
-delete Delete current file/directory. Turns on -depth option
The pat1,pat2 notation of sed is line-based. Think of it like this: pat1 sets an enable flag for its commands and pat2 disables the flag. If both pat1 and pat2 are on the same line, the flag will be set, and thus in your case it prints everything following and including the <user> line. See Grymoire's sed howto for more.
An alternative to sed, in this case, would be to use a grep that supports look-around assertions, e.g. GNU grep:
find . -type f -name '*.nfo' | xargs grep -oP '(?<=<user>).*(?=</user>)'
If grep doesn't support -P, you can use a combination of grep and sed:
find . -type f -name '*.nfo' | xargs grep -o '<user>.*</user>' | sed 's:</\?user>::g'
Output:
./file1.nfo:test1
./file2.nfo:test2
Note, you should be aware of the issues involved with passing files on to xargs and perhaps use -exec ... instead.
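A -exec spelling of the same pipeline sidesteps those xargs issues; a sketch, with the two sample files invented for the test and the \? escape assuming GNU sed:

```shell
# Two sample .nfo files as described in the question
dir=$(mktemp -d)
cd "$dir"
printf '<user>test1</user>\n' > file1.nfo
printf '<user>test2</user>\n' > file2.nfo

# "+" batches file names onto one grep invocation, like xargs, but without
# word-splitting on spaces; grep prefixes each match with its file name
out=$(find . -type f -name '*.nfo' -exec grep -o '<user>.*</user>' {} + |
      sed 's:</\?user>::g' | sort)
echo "$out"
```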
It so happens that grep outputs in the format you need, and it is enough for a one-liner.
By default a grep '' *.nfo will output something like:
file1.nfo:random data
file1.nfo:<user>test1</user>
file1.nfo:some more random data
file2.nfo:not needed
file2.nfo:<user>test2</user>
file2.nfo:etc etc
By adding the -P option (Perl RegEx) you can restrict the output to matches only:
grep -P "<user>\w+<\/user>" *.nfo
output:
file1.nfo:<user>test1</user>
file2.nfo:<user>test2</user>
Now the -o option (only show what matched) saves the day, but we'll need a bit more advanced RegEx since the tags are not needed:
grep -oP "(?<=<user>)\w+(?=<\/user>)" *.nfo > /test/database.txt
output of cat /test/database.txt:
file1.nfo:test1
file2.nfo:test2
Explained RegEx here: http://regex101.com/r/oU2wQ1
And your whole script just became a single command.
Update:
If you don't have the --perl-regexp option try:
grep -oE "<user>\w+<\/user>" *.nfo|sed 's#</?user>##g' > /test/database.txt
All you need is:
find -name '*.nfo' | xargs awk -F'[><]' '{print FILENAME,$3}'
If you have more in your file than just what you show in your sample input then this is probably all you need:
... awk -F'[><]' '/<user>/{print FILENAME,$3}' file
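A quick check of that field splitting; a sketch where the noise lines stand in for the "more in your file" case:

```shell
dir=$(mktemp -d)
cd "$dir"
printf 'noise\n<user>test1</user>\nmore noise\n' > file1.nfo
printf '<user>test2</user>\n' > file2.nfo

# With -F'[><]' the line "<user>test1</user>" splits into
# $1="" $2="user" $3="test1" $4="/user"; the /<user>/ guard skips noise
out=$(awk -F'[><]' '/<user>/{print FILENAME":"$3}' file1.nfo file2.nfo)
echo "$out"
```

Using ":" as the separator reproduces the file:value format the question asked for.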
Try this (untested):
> outfile
find -name '*.nfo' -printf "%p %Tc\n" |
while read -r fname tstamp
do
awk -v tstamp="$tstamp" -F'[><]' -v OFS=":::" '
{ a[$2] = a[$2] sep[$2] $3; sep[$2] = ", " }
END {
print a["string1"], FILENAME, tstamp, a["string4"], a["string3"], a["hobby"], a["string2"]
}
' "$fname" >> outfile
done
The above will only work if your file names do not contain spaces. If they can, we'd need to tweak the loop.
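One way to tweak the loop for names containing spaces is NUL-terminated output; a sketch, assuming bash (for the process substitution) and a find with -print0:

```shell
# A file name with a space would break a whitespace-splitting read
dir=$(mktemp -d)
cd "$dir"
touch 'a file.nfo' plain.nfo

# -print0 terminates each name with a NUL byte; read -d '' consumes up to
# each NUL, so spaces (and even newlines) in names pass through intact
count=0
while IFS= read -r -d '' fname; do
    count=$((count + 1))
done < <(find . -name '*.nfo' -print0)
echo "$count"
```

Both files, including the one with the space, are seen exactly once.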
Alternative if your find doesn't support -printf (suggestion - seriously consider getting a modern "find"!):
> outfile
find -name '*.nfo' -print |
while IFS= read -r fname
do
tstamp=$(stat -c"%x" "$fname")
awk -v tstamp="$tstamp" -F'[><]' -v OFS=":::" '
{ a[$2] = a[$2] sep[$2] $3; sep[$2] = ", " }
END {
print a["string1"], FILENAME, tstamp, a["string4"], a["string3"], a["hobby"], a["string2"]
}
' "$fname" >> outfile
done
If you don't have "stat", then search for alternative ways to get a timestamp from a file, or consider parsing the output of ls -l - it's unreliable, but if it's all you've got...