Ignoring strings without using the -v flag - regex

I am trying to use egrep to find lines in a file that contain a certain word, but don't start with that word.
I am currently doing this:
egrep '^word|word' file.txt
I tried putting it in brackets with the ^ negation symbol, but brackets specify each letter individually, not the word as a whole.
egrep '^[^word]|word' file.txt
How can I do this, ignoring a certain first word? For example, I want to ignore every The at the beginning of a sentence but spot the other ones, without using the -v flag.

All you need is:
grep '..*word' file
or:
grep -E '.+word' file
to find lines that contain word at a location other than the start of a line.
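As a quick illustration (the sample lines here are invented), the pattern requires at least one character before word, so an occurrence at the very start of the line does not count:

```shell
# Hypothetical sample: "word" at line start vs. later in the line
printf 'word at the start\nfind the word here\nno match\n' > file.txt

# '..*word' needs at least one character before "word",
# so the first line is not matched
grep '..*word' file.txt
```

grep -E '.+word' behaves the same way; + is just the ERE shorthand for "one or more of the preceding".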

Related

Break line using regex

I have multiple xml files that look like this: <TEST><TEST><TEST><TEST><TEST><TEST><TEST><TEST><TEST><TEST>
I would like to break into a new line for every '<' and get rid of every '>'.
I want to do this via regex since what I'm working on is for *nix.
There is no need for regex to do such a simple search & replace. You want to replace < with \n< and > with an empty string.
Assuming your content is in file input.txt, this simple sed command line can do the job:
sed 's/</\n</g;s/>//g' input.txt
How it works
There are two sed commands separated by ;:
s/</\n</g
s/>//g
Both commands are s (search and replace). The s command takes a search pattern (a regex, although no special regex characters are used here), the replacement string and some optional flags, separated by /.
The first s searches for < and replaces it with \n<. \n is the usual notation for a newline character in regex and many Unix tools (even when no regex is involved).
The second s searches for > and replaces it with nothing.
Both s commands use the g (global) flag that tells them to do all the replacements they can do on each line. sed runs each command for every line of the input and by default, s stops after the first replacement (on a line).
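A quick run on made-up input shows the effect (GNU sed assumed, since BSD sed does not interpret \n in the replacement string):

```shell
# Hypothetical input with tags like those in the question
printf '<TEST><TEST><TEST>\n' > input.txt

# Newline before every '<', then strip every '>'
sed 's/</\n</g;s/>//g' input.txt
```

Note that the output starts with an empty line, because a newline is also inserted before the very first '<'.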

Grep for lines not beginning with "//"

I'm trying but failing to write a regex to grep for lines that do not begin with "//" (i.e. C++-style comments). I'm aware of the "grep -v" option, but I am trying to learn how to pull this off with regex alone.
I've searched and found various answers on grepping for lines that don't begin with a character, and even one on how to grep for lines that don't begin with a string, but I'm unable to adapt those answers to my case, and I don't understand what my error is.
> cat bar.txt
hello
//world
> cat bar.txt | grep "(?!\/\/)"
-bash: !\/\/: event not found
I'm not sure what this "event not found" is about. One of the answers I found used paren-question mark-exclamation-string-paren, which I've done here, and which still fails.
> cat bar.txt | grep "^[^\/\/].+"
(no output)
Another answer I found used a caret within square brackets and explained that this syntax meant "search for the absence of what's in the square brackets (other than the caret)". I think the ".+" means "one or more of anything", but I'm not sure if that's correct and if it is correct, what distinguishes it from ".*"
In a nutshell: how can I construct a regex to pass to grep to search for lines that do not begin with "//" ?
To be even more specific, I'm trying to search for lines that have "#include" that are not preceded by "//".
Thank you.
The first line tells you that the problem is from bash (your shell). Bash finds the ! and attempts to inject into your command the last command you entered that begins with \/\/. To avoid this you need to escape the ! or use single quotes. For an example of !, try !cat; it will execute the last command beginning with cat that you entered.
You don't need to escape /, it has no special meaning in regular expressions. You also don't need to write a complicated regular expression to invert a match. Rather, just supply the -v argument to grep. Most of the time simple is better. And you also don't need to cat the file to grep. Just give grep the file name. eg.
grep -v "^//" bar.txt | grep "#include"
If you're really hung up on using regular expressions then a simple one would look like this (match start of string ^, any amount of whitespace [[:space:]]*, exactly two slashes /{2}, any number of any characters .*, followed by #include):
grep -E "^[[:space:]]*/{2}.*#include" bar.txt
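A quick check of the two-step pipeline on a made-up file (the file name and contents here are hypothetical):

```shell
# Hypothetical file with one live and one commented-out include
printf '#include <stdio.h>\n// #include <disabled.h>\n' > bar.c

# Drop the comment lines first, then pick out the #include directives
grep -v "^[[:space:]]*//" bar.c | grep "#include"
# prints: #include <stdio.h>
```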
You're using a negative lookahead, which is a PCRE feature and requires the -P option.
Your negative lookahead won't work without a start anchor.
This will of course require GNU grep.
You must use single quotes to use ! in your regex; otherwise history expansion is attempted with the text after ! in your regex, which is the reason for the !\/\/: event not found error.
So you can use:
grep -P '^(?!\h*//)' file
hello
\h* matches zero or more horizontal whitespace characters.
Without -P or non-gnu grep you can use grep -v:
grep -v '^[[:blank:]]*//' file
hello
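Both forms can be checked against the bar.txt from the question (grep -P requires GNU grep built with PCRE support):

```shell
printf 'hello\n//world\n' > bar.txt

# PCRE negative lookahead: keep lines that do not start with
# optional horizontal whitespace followed by //
grep -P '^(?!\h*//)' bar.txt       # prints: hello

# Portable equivalent with an inverted match
grep -v '^[[:blank:]]*//' bar.txt  # prints: hello
```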
To find #include lines that are not preceded by // (or /* …), you can use:
grep '^[[:space:]]*#[[:space:]]*include[[:space:]]*["<]'
The regex looks for start of line, optional spaces, #, optional spaces, include, optional spaces and either " or <. It will find all #include lines except lines such as #include MACRO_NAME, which are legitimate but rare, and screwball cases such as:
#/*comment*/include/*comment*/<stdio.h>
#\
include\
<stdio.h>
If you have to deal with software containing such notations, (a) you have my sympathy and (b) fix the code to a more orthodox style before hunting the #include lines. The regex will also pick up false positives such as:
/* Do not include this:
#include <does-not-exist.h>
*/
You could omit the final [[:space:]]*["<] with minimal chance of confusion, which will then pick up the macro name variant.
To find lines that do not start with a double slash, use -v (to invert the match) and '^//' to look for slashes at the start of a line:
grep -v '^//'
You have to use the -P (perl) option and anchor the lookahead to the start of the line:
grep -P '^(?!//)' bar.txt
For the lines not beginning with "//", you could use (^([^/]|/[^/])): the first character is not a slash, or it is a single slash not followed by another.
If you don't like grep -v for this then you could just use awk:
awk '!/^\/\//' file
Since awk supports compound conditions instead of just regexps, it's often easier to specify what you want to match with awk than grep, e.g. to search for a and b in any order with grep:
grep -E 'a.*b|b.*a'
while with awk:
awk '/a/ && /b/'
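For instance, on a made-up three-line file, the awk form picks out only the line containing both letters, regardless of their order:

```shell
printf 'a only\nb only\nb then a\n' > f.txt

# Each /regex/ is a separate condition; && requires both on the same line
awk '/a/ && /b/' f.txt
# prints: b then a
```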

"partial grep" to accelerate grep speed?

This is what I am thinking: grep program tries to pattern-match every pattern occurrence in the line, just like:
echo "abc abc abc" | grep abc --color
the result is that all three occurrences of abc are colored red, so grep did a full pattern match on the line.
But consider this scenario: I have many big files to process, and the words I am interested in are very likely to occur among the first few words of a line. My job is to find the lines without the words in them. So if the grep program could continue to the next line as soon as the words have been found, without having to check the rest of the line, it would maybe be significantly faster.
Is there a partial match option maybe in grep to do this?
like:
echo abc abc abc | grep --partial abc --color
with only the first abc colored red.
See this nice introduction to grep internals:
http://lists.freebsd.org/pipermail/freebsd-current/2010-August/019310.html
In particular:
GNU grep AVOIDS BREAKING THE INPUT INTO LINES. Looking for newlines
would slow grep down by a factor of several times, because to find the
newlines it would have to look at every byte!
So instead of using line-oriented input, GNU grep reads raw data into
a large buffer, searches the buffer using Boyer-Moore, and only when
it finds a match does it go and look for the bounding newlines.
(Certain command line options like -n disable this optimization.)
So the answer is: No. It is way faster for grep to look for the next occurrence of the search string, rather than to look for a new line.
Edit: Regarding the speculation in the comments that --color=never would do the trick: I had a quick glance at the source code. The variable color_option is not used anywhere near the actual search for the regex or the previous and upcoming newline in case a match has been found.
It might be that one could save a few CPU cycles when searching those line terminators. Possibly a real world difference shows up with pathological long lines and a very short search string.
If your job is to find the lines without the words in them, you can give sed a try to delete the lines containing the specific word.
sed '/word/d' input_file
Sed will probably continue to the next line when the first occurrence is found on the current line.
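For example (the sample input is invented):

```shell
# Delete every line that contains "word", print the rest
printf 'keep this line\ndrop this word\n' | sed '/word/d'
# prints: keep this line
```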
If you want to find lines without specific words, you can use grep to do this.
Try grep -v "abc" which means do the inverse. In this case, find lines without the string "abc".
If I have a file that looks like this:
line one abc
line two abc
line three def
Doing grep -v "abc" file.txt will return line three def.

Caret symbol does not work with grep

This does not work
grep -h '^zip' log*
this works
grep -h '[^bg]zip' log*
The log* files definitely contain zip, because the second command prints matching lines. I tried several variations and saw that the caret symbol only works as negation inside brackets. Outside of the brackets, it does not seem to indicate that whatever follows it should be at the beginning of the word.
What is wrong here? I am using Ubuntu 12.04.
beginning of the word
^ marks the beginning of line, not word. "foo zip" will not match against ^zip, but "zip foo" will. If you want to match zip at the beginning of a word, use this:
grep \\bzip
\b marks a word boundary, but you need to double up on escapes because your shell will strip one. (grep '\bzip' also works.)
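The difference is easy to see on a few invented log lines (\b is a GNU grep extension):

```shell
printf 'zip first\nfoo zip\ngzip tool\n' > log1

grep '^zip' log1    # prints only: zip first
grep '\bzip' log1   # prints: zip first, foo zip (not gzip: 'g' is a word character, so no boundary)
```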

Regex (grep) for multi-line search needed [duplicate]

This question already has answers here:
How can I search for a multiline pattern in a file?
(11 answers)
Closed 1 year ago.
I'm running a grep to find any *.sql file that has the word select followed by the word customerName followed by the word from. This select statement can span many lines and can contain tabs and newlines.
I've tried a few variations on the following:
$ grep -liIr --include="*.sql" --exclude-dir="\.svn*" --regexp="select[a-zA-Z0-9+\n\r]*customerName[a-zA-Z0-9+\n\r]*from"
This, however, just runs forever. Can anyone help me with the correct syntax please?
Without the need to install the grep variant pcregrep, you can do a multiline search with grep.
$ grep -Pzo "(?s)^(\s*)\N*main.*?{.*?^\1}" *.c
Explanation:
-P activate perl-regexp for grep (a powerful extension of regular expressions)
-z Treat the input as a set of lines, each terminated by a zero byte (the ASCII NUL character) instead of a newline. That is, grep knows where the ends of the lines are, but sees the input as one big line. Beware this also adds a trailing NUL char if used with -o, see comments.
-o print only matching. Because we're using -z, the whole file is like a single big line, so if there is a match, the entire file would be printed; this way it won't do that.
In regexp:
(?s) activate PCRE_DOTALL, which means that . finds any character or newline
\N find anything except newline, even with PCRE_DOTALL activated
.*? find . in non-greedy mode, that is, stops as soon as possible.
^ find start of line
\1 backreference to the first group (\s*). This tries to find the same indentation as the method.
As you can imagine, this search prints the main method in a C (*.c) source file.
I am not very good at grep, but your problem can be solved using the awk command.
Just see:
awk '/select/,/from/' *.sql
The above command prints everything from the first occurrence of select through the first occurrence of from. Now you need to verify whether the returned statements contain customerName or not. For this you can pipe the result and use awk or grep again.
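On a small invented .sql file, the range pattern and a follow-up grep look like this:

```shell
# Hypothetical multi-line SELECT statement
printf 'select customerName\nfrom customers;\nother line\n' > q.sql

# Print from the first /select/ line through the first /from/ line,
# then count customerName in that block
awk '/select/,/from/' q.sql | grep -c 'customerName'
# prints: 1
```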
Your fundamental problem is that grep works one line at a time - so it cannot find a SELECT statement spread across lines.
Your second problem is that the regex you are using doesn't deal with the complexity of what can appear between SELECT and FROM - in particular, it omits commas, full stops (periods) and blanks, but also quotes and anything that can be inside a quoted string.
I would likely go with a Perl-based solution, having Perl read 'paragraphs' at a time and applying a regex to that. The downside is having to deal with the recursive search - there are modules to do that, of course, including the core module File::Find.
In outline, for a single file:
$/ = "\n\n";    # read paragraphs, not lines
while (<>)
{
    if ($_ =~ m/SELECT.*customerName.*FROM/smi)  # /s lets . span newlines
    {
        print "$ARGV\n";    # print the current file name
        close ARGV;         # go to the next file
    }
}
That needs to be wrapped into a sub that is then invoked by the methods of File::Find.