Bash: working with line numbers to use them in sed - regex

Basically I just need to uncomment two lines containing a specific string.
Therefore I grep the string to get the line numbers and use sed to uncomment
(sure, one might also use sed to get the line numbers, but the problem is the same).
The line numbers come out each on its own line, and I don't know how to work with them spread over several lines, so I'm trying to get them onto one line so I can handle them with bash variables:
$ cat configfile
some text
#a string foo
#b string bar
some other text
#more text
much more text
So my first try is:
linenr=$(grep -n string configfile | cut -d: -f1) # get line numbers (several lines)
linenr=(${linenr//\ / }) # put line numbers into one line
sed -i "${linenr[0]},${linenr[1]} s/##*//" configfile # uncomment lines
My second try is:
linenr=$(sed -n '/string/=' configfile) # get line numbers (several lines)
linenr=$(echo $linenr | sed -i 's/\n/ /' configfile) # put line numbers into one line
sed -i "${linenr[0]},${linenr[1]} s/##*//" configfile # uncomment lines
I need to do this twice, for two nearly identical configfiles, and for some reason I get different output for the line numbers although the code is the same for both configfiles (it works for configfile4 but not for configfile6; I assume the content of those files is irrelevant for the output of the found line numbers? I also checked the line endings, they are the same in both files):
configfile4lines:
44 45
configfile6lines:
54
55
How should one work with line numbers in such situations, or are there better ways to do this?

You can use a regexp match as the address in sed, instead of line numbers.
sed -i '/string/s/##*//' configfile
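For example, applied to the sample configfile shown above (a quick check of the idea; the -i flag is dropped here so the result is only printed, and the output assumes the exact contents shown):
$ sed '/string/s/##*//' configfile
some text
a string foo
b string bar
some other text
#more text
much more text
Only the lines containing string are changed; #more text stays commented because it does not match the address.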

Related

How to match a text with newlines after a word using bash

I need to match the end of a file after a word match in bash.
I have a text like this:
line 1
line 2
line 3
key line
line 5
line 6
I want to get everything after "key line", so my output would be:
line 5
line 6
How can I do this using bash?
I've tried grep -o -P '(?<=key line\n)[\s\S]*' but it didn't work, although it worked when I tested it on https://regexr.com/
grep is line-based. It doesn't work well when you want to search across lines.
sed is up for the job. This will delete all lines from line 1 to the one containing key line, leaving only the lines after it:
$ sed '1,/key line/d' test.txt
line 5
line 6
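If you prefer awk, an equivalent approach is possible (a sketch, assuming the same test.txt as above; the flag variable name found is arbitrary):
$ awk 'found; /key line/{found=1}' test.txt
line 5
line 6
The bare found pattern prints a line only once the flag has been set, and the flag is set on the line containing key line, so the matching line itself is not printed.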

Comment out file paths in a file matching lines in another file with sed and bash

I have a file (names.txt) with the following content:
/bin/pgawk
/bin/zsh
/dev/cua0
/dev/initctl
/root/.Xresources
/root/.esd_auth
... and so on. I want to read this file line by line, and use sed to comment out matches in another file. I have the code below, but it does nothing:
#!/bin/bash
while read line
do
    name=$line
    sed -e '/\<$name\>/s/^/#/' config.conf
done < names.txt
Lines from the input file need to be commented out in the config.conf file, like this:
config {
#/bin/pgawk
#/bin/zsh
#/dev/cua0
#/dev/initctl
#/root/.Xresources
#/root/.esd_auth
}
I don't want to do this by hand, because the file contains more then 300 file paths. Can someone help me to figure this out?
You need to use double quotes around your sed command, otherwise shell variables will not be expanded. Try this:
sed "/\<$name\>/s/^/#/" config.conf
However, I would recommend that you skip the bash loop entirely and do the whole thing in one go, using awk:
awk 'NR==FNR{a[$0];next}{for(i=1;i<=NF;++i)if($i in a)$i="#"$i}1' names.txt config.conf
The awk command stores all of the file names as keys in the array a and then loops through every word in each line of the config file, adding a "#" before the word if it is in the array. The 1 at the end means that every line is printed.
It is better not to use regular expression matching here, as some of the characters in your file names (such as .) will be interpreted by the regular expression engine. This approach does a simple string match, which avoids the problem.
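For example, with hypothetical sample files (the contents below are invented just for illustration; names.txt holds the paths and config.conf is the file to edit):
$ cat names.txt
/bin/zsh
/dev/cua0
$ cat config.conf
config {
/bin/zsh
/bin/bash
/dev/cua0
}
$ awk 'NR==FNR{a[$0];next}{for(i=1;i<=NF;++i)if($i in a)$i="#"$i}1' names.txt config.conf
config {
#/bin/zsh
/bin/bash
#/dev/cua0
}
To actually update config.conf, redirect the output to a temporary file and move it back, since standard awk does not edit files in place.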

How to delete a specific number of random lines matching a pattern

I have an svg file with a grid of dots represented by lines that have the word use in them. I would like to delete a specific number of random lines matching that use pattern, then save a new version of the file. This answer was very close.
So it will be a combination of this (delete one random line in a specific range):
sed -i '.svg' $((9 + RANDOM % 579))d /filename.svg
and this (delete all lines matching pattern use):
sed -i '.svg' /use/d /filename.svg
In other words, the logic would go something like this:
sed -i delete 'x' number of RANDOM lines matching 'use' from 'input.svg' and save to 'output.svg'
I'm running these commands from Terminal on a Mac and am inexperienced with syntax so formatting the command for that would be ideal.
Delete each line containing "use" with a probability of 10%:
awk '!/use/ || rand() > 0.10' file
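Note that many awk implementations seed rand() the same way on every run, so the set of deleted lines can repeat. Calling srand() first (a small tweak, not part of the one-liner above) makes the result vary between runs:
awk 'BEGIN{srand()} !/use/ || rand() > 0.10' file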
Randomly delete exactly one line containing "use":
awk -v n="$(( RANDOM % $(grep -c "use" file) ))" '!/use/ || n-- != 0' file
Here's an example invocation:
$ cat file
some string
a line containing "use"
another use-ful line
more random data
$ awk -v n="$(( RANDOM % $(grep -c "use" file) ))" '!/use/ || n-- != 0' file
some string
another use-ful line
more random data
One of the lines containing use was removed.
This might work for you (GNU sed & sort):
sed -n '/\<use\>/=' file | sort -R | head -5 | sed 's/$/d/' | sed -i.bak -f - file
Extract the line numbers of the lines containing the word use from the file, randomly sort those line numbers, take the first, say, 5, and build a sed script to delete them from the original file.
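Putting that idea together for the original question (a sketch only; input.svg, output.svg and the count 5 are placeholders, and it assumes GNU sed and GNU sort are installed, e.g. via Homebrew on a Mac):
$ sed -n '/use/=' input.svg | sort -R | head -5 | sed 's/$/d/' | sed -f - input.svg > output.svg
Writing to output.svg instead of editing in place matches the goal of saving a new version of the file.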

SED: addressing two lines before match

Print the line that is situated 2 lines before the match (pattern).
I tried this:
sed -n ': loop
/.*/h
:x
{n;n;/cen/p;}
s/./c/p
t x
s/n/c/p
t loop
{g;p;}
' datafile
The script:
sed -n "1N;2N;/XXX[^\n]*$/P;N;D"
works as follows:
Read the first three lines into the pattern space, 1N;2N
Search for the test string XXX anywhere in the last line, and if found print the first line of the pattern space, P
Append the next line input to pattern space, N
Delete first line from pattern space and restart cycle without any new read, D, noting that 1N;2N is no longer applicable
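For example (a quick sanity check with GNU sed and a throwaway five-line input where XXX appears on line 4):
$ printf 'one\ntwo\nthree\nXXX here\nfive\n' | sed -n '1N;2N;/XXX[^\n]*$/P;N;D'
two
two is the line two lines before the one containing XXX.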
This might work for you (GNU sed):
sed -n ':a;$!{N;s/\n/&/2;Ta};/\nPATTERN[^\n]*$/P;$!D' file
This will print the line 2 lines before the PATTERN throughout the file.
Here is a simpler, easier-to-read solution with grep (though it needs a pipe):
grep -B2 'pattern' file_name | sed -n '1,2p'
If you can use awk try this:
awk '/pattern/ {print b} {b=a;a=$0}' file
This will print the line two lines before the pattern.
I've tested your sed command, but the result is strange (and obviously wrong), and you didn't give any explanation. You will have to save three lines in a buffer (the hold space), do a pattern search against the newest line, and print the oldest one if it matches:
sed -n '
## At the beginning read three lines.
1 { N; N }
## Append them to "hold space". In following iterations it will append
## only one line.
H
## Get content of "hold space" to "pattern space" and check if the
## pattern matches. If so, extract content of first line (until a
## newline) and exit.
g
/^.*\nsix$/ {
s/^\n//
P
q
}
## Remove the oldest of the three lines saved and append the new one.
s/^\n[^\n]*//
h
' infile
Assuming an input file (infile) with the following content:
one
two
three
four
five
six
seven
eight
nine
ten
It will search for six and yield as output:
four
Here are some other variants:
awk '{a[NR]=$0} /pattern/ {f=NR} END {print a[f-2]}' file
This stores all lines in an array a. When the pattern is found, its line number is stored in f.
At the end, the line two positions before it, a[f-2], is printed.
PS: this may be slow with large files.
Here is another one:
awk 'FNR==NR && /pattern/ {f=NR;next} f-2==FNR' file{,}
This reads the file twice (file{,} is the same as file file)
On the first pass it finds the pattern and stores the line number in variable f.
On the second pass it prints the line two before the value in f.
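Applied to the same infile as above (with six substituted for pattern), the shorter awk variants give the same answer:
$ awk '/six/ {print b} {b=a;a=$0}' infile
four
$ awk '{a[NR]=$0} /six/ {f=NR} END {print a[f-2]}' infile
four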

Grep for a single line comment but exclude stuff like http://example.com

How can I use a regex with grep so that I can check whether what I'm looking for is the first thing on the line? What I want to do is find single-line comments in a file, but I don't want to grep stuff like http://path, so there can't be anything (other than whitespace) before the // on the line.
$ echo $'http://www.example.com\n // single line comment' | grep "^ *//.*"
// single line comment
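In the answer's pattern, ^ anchors the match to the start of the line, the space followed by * allows optional leading spaces, and // must then be the first non-space text, which is why the http:// line is filtered out. If the comments can also be indented with tabs (an assumption about the input, not covered above), a POSIX character class handles both:
grep "^[[:blank:]]*//" file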