The 9th column has multiple values separated by ";". I am trying to find the first occurrence of the string after "name_id" in column $9 of a tab-delimited file. The first line of the file looks like this:
1 NY state 3102016 3102125 . + . name_id "ENSMUSG8868"; trans_id "ENSMUST00000082908"; number "1"; id_name "Gm26206";ex_id "ENSMUSE000005";
There are multiple values separated by ";" in the 9th column. I could come up with this command, but it pulls out the last id, "ENSMUSE000005":
sed 's|.*"\([0-9_A-Z]\+\)".*|\1|' input.txt | head
Can it be done with a regex in awk? Thanks a lot!
echo "$x" | awk -F';' '{split($1,a," "); gsub(/"/, "", a[10]); print a[10]}'
ENSMUSG8868
Where $x is your line.
Based on OP's comments:
echo "$x" | awk -F';' '{split($1,a," "); gsub(/"/, "", a[10]); print a[1], a[10]}'
1 ENSMUSG8868
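For completeness, since the question asks about a regex in awk: a minimal sketch using POSIX awk's match() with RSTART/RLENGTH, assuming the file is named input.txt as in the question:
awk 'match($0, /name_id "[^"]*"/) {
    # skip the 9-character prefix (name_id ") and drop the closing quote
    print substr($0, RSTART+9, RLENGTH-10)
}' input.txt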
I have a file like this: a 7-column file whose separator is a single space (sep=" ").
However, the 4th column is a string of several words that itself contains spaces, and the last 3 columns are numbers.
test_find.txt
A UTR3 0.760 Sterile alpha motif domain|Sterile alpha motif domain;Sterile alpha motif domain . . 0.0007
G intergenic 0.673 BTB/POZ domain|BTB/POZ domain|BTB/POZ domain . . 0.0015
I want to replace the spaces inside that string with underscores (e.g. replace "Sterile alpha motif domain" with "Sterile_alpha_motif_domain"). First find the pattern that starts with letters and ends with "|", then treat it as one string and replace all its spaces with "_"; then move to the next line and find the next pattern. (Is there an easier way to do it?)
I was able to use sed -i -e 's/Sterile alpha motif domain/Sterile_alpha_motif_domain/g' test_find.txt on the first row only, but I cannot generalize it.
I tried to find all patterns using sed -n 's/^[^[a-z]]*{\(.*\)\\[^\|]*$/\1/p' test_find.txt, but it doesn't work.
Can anyone help me?
I want output like this:
A UTR3 0.760 Sterile_alpha_motif_domain|Sterile_alpha_motif_domain;Sterile_alpha_motif_domain . . 0.0007
G intergenic 0.673 BTB/POZ_domain|BTB/POZ_domain|BTB/POZ_domain . . 0.0015
Thank you!!!!
We'll need two processing steps: first extract the 4th column, which may
contain spaces; then replace the spaces in that column with underscores.
With GNU awk:
gawk '{
if (match($0, /^(([^ ]+ ){3})(.+)(( [0-9.]+){3})$/, a)) {
gsub(/ /, "_", a[3])
print a[1] a[3] a[4]
}
}' test_find.txt
Output:
A UTR3 0.760 Sterile_alpha_motif_domain|Sterile_alpha_motif_domain;Sterile_alpha_motif_domain . . 0.0007
G intergenic 0.673 BTB/POZ_domain|BTB/POZ_domain|BTB/POZ_domain . . 0.0015
The regex ^(([^ ]+ ){3})(.+)(( [0-9.]+){3})$ matches a line and captures each submatch.
The 3rd argument of match() (a GNU awk extension), a, is an array that receives the capture groups: a[1] holds the 1st-3rd columns (including the trailing space), a[3] holds the 4th column, and a[4] holds the 5th-7th columns (including the leading space).
The gsub function replaces each space with an underscore.
Then the pieces are concatenated and printed.
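As a minimal illustration of how the three-argument match() fills the array (the test line is made up):
echo 'a b c one two 1 2 3' | gawk '{
    if (match($0, /^(([^ ]+ ){3})(.+)(( [0-9.]+){3})$/, a))
        print a[1] "|" a[3] "|" a[4]   # prints: a b c |one two| 1 2 3
}'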
Assuming there is a special character at the end, before the final numeric columns, you can try this sed:
$ sed -E 's~([[:alpha:]/]+) ~\1_~g;s/_([[:punct:]])/ \1/g' input_file
0.760 Sterile_alpha_motif_domain|Sterile_alpha_motif_domain;Sterile_alpha_motif_domain . . 0.0007
0.673 BTB/POZ_domain|BTB/POZ_domain|BTB/POZ_domain . . 0.0015
Without making any assumptions about the content of each field, you can 'brute force' the expected result by counting the number of characters in each field (plus the number of field separators) at the beginning and at the end of the line, and using this to extract the '4th column', e.g.
awk '{start=length($1)+length($2)+length($3)+4; end=length($0)-length($NF)-length($(NF-1))-length($(NF-2))-length($1)-length($2)-length($3)-6; text=substr($0, start, end); gsub(" ", "_", text); print $1, $2, $3, text, $(NF-2), $(NF-1), $NF}' test.txt
'Neat' version:
awk '{
start=length($1)+length($2)+length($3)+4
end=length($0)-length($NF)-length($(NF-1))-length($(NF-2))-length($1)-length($2)-length($3)-6
text=substr($0, start, end)
gsub(" ", "_", text)
print $1, $2, $3, text, $(NF-2), $(NF-1), $NF
}' test.txt
A UTR3 0.760 Sterile_alpha_motif_domain|Sterile_alpha_motif_domain;Sterile_alpha_motif_domain . . 0.0007
G intergenic 0.673 BTB/POZ_domain|BTB/POZ_domain|BTB/POZ_domain . . 0.0015
Breakdown:
awk '{
# How many characters precede column 4 (lengths of the first 3 fields + 3 separators + 1 to land on column 4's first character, hence the "4")
start=length($1)+length($2)+length($3)+4;
# How many characters are there in column 4 (total - (first 3 fields + last 3 fields + total field separators (6)))
end=length($0)-length($NF)-length($(NF-1))-length($(NF-2))-length($1)-length($2)-length($3)-6;
# Use the substr function to define column 4
text=substr($0, start, end);
# Substitute spaces for underscores in column 4
gsub(" ", "_", text);
# Print everything
print $1, $2, $3, text, $(NF-2), $(NF-1), $NF
}' test.txt
Sorry for a really basic question: how do I replace a particular column in a CSV file with some string?
e.g.
id, day_model,night_model
===========================
1 , ,
2 ,2_DAY ,
3 ,3_DAY ,3_NIGHT
4 , ,
(4 rows)
I want to replace any non-empty value in columns 2 and 3 with true and any empty value with false, leaving the first two rows and the last row untouched.
Output:
id, day_model,night_model
===========================
1 ,false ,false
2 ,true ,false
3 ,true ,true
4 ,false ,false
(4 rows)
What I tried is the following sample code (only trying to replace the string with "true" in column 3):
awk -F, '$3!=""{$3="true"}' OFS=, file.csv > out.csv
But the out.csv is empty. Please give me some direction.
Many thanks!!
Your out.csv is empty because the action {$3="true"} has no print statement: awk prints a line by default only when a pattern appears with no action block. Separately, since your field separator is a comma, the "empty" fields may contain spaces, particularly the 2nd field, so they won't compare equal to the empty string.
I would do this:
awk -F, -v OFS=, '
# skip the two header lines and the trailing "(N rows)" line
NR>2 && !/^\([0-9]+ rows\)/ {
for (i=2; i<=NF; i++)
$i = ($i ~ /[^[:blank:]]/) ? "true" : "false"
}
{ print }
' file
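With the sample input above, this prints the following (note that assigning to a field makes awk rebuild the line with OFS, so the blank padding inside fields 2 and 3 is replaced outright):
id, day_model,night_model
===========================
1 ,false,false
2 ,true,false
3 ,true,true
4 ,false,false
(4 rows)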
Well, since you added sed in the tags and you have only three columns, here is a solution in four steps, because a single regex replacement could not handle all the cases in one go.
Since your 2nd and 3rd columns can be blank, I wrote four sed commands, one for each kind of row.
sed -E 's/^([0-9]+\s+,)\S+\s*,\S+\s*$/\1true,true/' file.txt
This will replace rows like 3 ,3_DAY ,3_NIGHT
sed -E 's/^([0-9]+\s+,)\S+\s*,\s*$/\1true,false/' file.txt
This will replace rows like 2 ,2_DAY ,
sed -E 's/^([0-9]+\s+,)\s*,\S+\s*$/\1false,true/' file.txt
This will replace rows like 5 , ,2_Day
sed -E 's/^([0-9]+\s+,)\s*,\s*$/\1false,false/' file.txt
This will replace rows like 1 , ,
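If you prefer a single invocation, the four commands can be chained with -e; in this order, no later command re-matches a line that an earlier one has already rewritten (GNU sed assumed, for \s and \S):
sed -E -e 's/^([0-9]+\s+,)\S+\s*,\S+\s*$/\1true,true/' \
       -e 's/^([0-9]+\s+,)\S+\s*,\s*$/\1true,false/' \
       -e 's/^([0-9]+\s+,)\s*,\S+\s*$/\1false,true/' \
       -e 's/^([0-9]+\s+,)\s*,\s*$/\1false,false/' file.txt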
For example, let's say there is a file called domains.csv with the following:
1,helloguys.ca
2,byegirls.com
3,hellohelloboys.ca
4,hellobyebyedad.com
5,letswelcomewelcomeyou.org
I'm trying to use Linux awk regex expressions to find the line that contains the longest repeated[1] word, so in this case, it will return the line
5,letswelcomewelcomeyou.org
How do I do that?
[1] Meaning "immediately repeated", i.e., abcabc, but not abcXabc.
A pure awk implementation would be rather long-winded as awk regexes don't have backreferences, the usage of which simplifies the approach quite a bit.
I've added one line to the example input file for the case of multiple longest words:
1,helloguys.ca
2,byegirls.com
3,hellohelloboys.ca
4,hellobyebyedad.com
5,letswelcomewelcomeyou.org
6,letscomewelcomewelyou.org
And this gets the lines with the longest repeated sequence:
cut -d ',' -f 2 infile | grep -Eo '(.*)\1' |
awk '{ print length(), $0 }' | sort -k 1,1 -nr |
awk 'NR==1 {prev=$1;print $2;next} $1==prev {print $2;next} {exit}' | grep -f - infile
Since this is pretty opaque, let's split up what it does and look at the output at each stage:
Remove the first column with the line number, to avoid matches for line numbers with repeating digits:
$ cut -d ',' -f 2 infile
helloguys.ca
byegirls.com
hellohelloboys.ca
hellobyebyedad.com
letswelcomewelcomeyou.org
letscomewelcomewelyou.org
Get all lines with a repeated sequence, extract just that repeated sequence:
... | grep -Eo '(.*)\1'
ll
hellohello
ll
byebye
welcomewelcome
comewelcomewel
Get the length of each of those lines:
... | awk '{ print length(), $0 }'
2 ll
10 hellohello
2 ll
6 byebye
14 welcomewelcome
14 comewelcomewel
Sort by the first column, numerically, descending:
...| sort -k 1,1 -nr
14 welcomewelcome
14 comewelcomewel
10 hellohello
6 byebye
2 ll
2 ll
Print the second of these columns for all lines where the first column (the length) has the same value as on the first line:
... | awk 'NR==1{prev=$1;print $2;next} $1==prev{print $2;next} {exit}'
welcomewelcome
comewelcomewel
Pipe this into grep, using the -f - argument to read the patterns from stdin:
... | grep -f - infile
5,letswelcomewelcomeyou.org
6,letscomewelcomewelyou.org
Limitations
While this can handle the bbwelcomewelcome case mentioned in comments, it will trip on overlapping patterns such as welwelcomewelcome, where it only finds welwel, but not welcomewelcome.
Alternative solution with more awk, less sort
As pointed out by tripleee in the comments, this can be simplified by combining the two awk steps and the sort step into a single awk step, likely improving performance:
$ cut -d ',' -f 2 infile | grep -Eo '(.*)\1' |
awk '{if (length()>ml) {ml=length(); delete a; i=1} if (length()>=ml){a[i++]=$0}}
END{for (i in a){print a[i]}}' |
grep -f - infile
Let's look at that awk step in more detail, with expanded variable names for clarity:
{
# New longest match: throw away stored longest matches, reset index
if (length() > max_len) {
max_len = length()
delete arr_longest
idx = 1
}
# Add line to longest matches
if (length() >= max_len)
arr_longest[idx++] = $0
}
# Print all the longest matches
END {
for (idx in arr_longest)
print arr_longest[idx]
}
Benchmarking
I've timed the two solutions on the top one million domains file mentioned in the comments:
First solution (with sort and two awk steps):
964438,abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk.com
real 1m55.742s
user 1m57.873s
sys 0m0.045s
Second solution (just one awk step, no sort):
964438,abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk.com
real 1m55.603s
user 1m56.514s
sys 0m0.045s
And the Perl solution by Casimir et Hippolyte:
964438,abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijk.com
real 0m5.249s
user 0m5.234s
sys 0m0.000s
What we learn from this: ask for a Perl solution next time ;)
Interestingly, if we know that there will be just one longest match and simplify the commands accordingly (just head -1 instead of the second awk command in the first solution, or not keeping track of multiple longest matches in the second), the time gained is only in the range of a few seconds.
Portability remark
Apparently, BSD grep can't do grep -f - to read from stdin. In this case, the output of the pipeline up to that point has to be redirected to a temp file, and that temp file then used with grep -f.
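For example, a sketch of that workaround using the pipeline above (the temp file name patterns.txt is arbitrary):
cut -d ',' -f 2 infile | grep -Eo '(.*)\1' |
awk '{ print length(), $0 }' | sort -k 1,1 -nr |
awk 'NR==1 {prev=$1;print $2;next} $1==prev {print $2;next} {exit}' > patterns.txt
grep -f patterns.txt infile
rm patterns.txt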
A way with perl:
perl -F, -ane 'if (@m = $F[1] =~ /(?=(.+)\1)/g) {
    @m = sort { length $b <=> length $a } @m;
    $cl = length $m[0];
    if ($l < $cl) { @res = ($_); $l = $cl; } elsif ($l == $cl) { push @res, ($_); }
}
END { print @res; }' file
The idea is to find, at each position of the second field, the longest overlapping repeated substring starting there; the match array is then sorted so that the longest substring overall becomes the first item in the array ($m[0]).
Once done, the length of the current longest repeated substring ($cl) is compared with the stored length (that of the previous longest substring). When the current substring is longer, the result array is overwritten with the current line; when the lengths are the same, the current line is pushed onto the result array.
details:
command line options:
-F, sets the field separator to ,
-ane (e: execute the following code; n: read a line at a time and put its content in $_; a: autosplit using the defined FS and put the fields in the @F array)
The pattern:
/
(?= # open a lookahead assertion
(.+)\1 # capture group 1, followed by a backreference to group 1
) # close the lookahead
/g # all occurrences
This is a well-known pattern to find all overlapping results in a string. The idea is to use the fact that a lookahead doesn't consume characters (a lookahead only means "check whether this subpattern follows at the current position"; it doesn't match any characters). To obtain the characters matched inside the lookahead, all you need is a capture group.
Since a lookahead matches nothing, the pattern is tested at each position (and doesn't care whether the characters have already been captured in group 1 before).
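A quick way to watch this idiom produce all overlapping captures, on a throwaway string:
perl -E '@m = "abcabcabc" =~ /(?=(.+)\1)/g; say for @m'
abc
bca
cab
abc
Each capture is the longest immediate repeat starting at one position of the string.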
I have a folder which contains subfolders and some more files in them.
The files are named in the following way
abc.DEF.xxxxxx.dat
I'm trying to find duplicate files by matching only the 'xxxxxx' part of the above pattern, ignoring the rest. The extension .dat doesn't change, but the lengths of abc and DEF might. The order of the period-separated parts also doesn't change.
I'm guessing I need to use find in the following way:
find -regextype posix-extended -regex '\w+\.\w+\.\w+\.dat'
I need help coming up with the regular expression. Thanks.
Example:
For a file named 'epg.ktt.crwqdd.dat', I need to find duplicate files containing 'crwqdd'.
You can use awk for that:
find /path -type f -name '*.dat' | awk -F. 'a[$4]++'
Explanation:
Let find give the following output:
./abd.DdF.TTDFDF.dat
./cdd.DxdsdF.xxxxxx.dat
./abc.DEF.xxxxxx.dat
./abd.DdF.xxxxxx.dat
./abd.DEF.xxxxxx.dat
Basically, spoken in the words of a computer, you want to count the occurrences of the pattern between the last two dots (the part just before .dat) and print those lines where the pattern appears for at least the second time.
To achieve this we split the file names on the ., which gives us 5(!) fields:
echo ./abd.DEF.xxxxxx.dat | awk -F. '{print $1 " " $2 " " $3 " " $4 " " $5}'
/abd DEF xxxxxx dat
Note the first, empty field. The pattern of interest is $4.
To count the occurrences of a pattern in $4 we use an associative array a and increment its value on each occurrence. Unoptimized, the awk command would look like:
... | awk -F. '{if (a[$4]++ > 0) {print}}'
However, you can write an awk program in the form:
CONDITION { ACTION }
which gives us:
... | awk -F. 'a[$4]++ > 0 {print}'
print is the default action in awk; it prints the whole current line, and being the default it can be omitted. The > 0 check can also be omitted, because awk treats integer values greater than zero as true. This gives us the final command:
... | awk -F. 'a[$4]++'
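To watch that counting idiom in isolation, keyed on the whole line rather than $4 (throwaway input):
printf 'x\nx\ny\nx\n' | awk 'a[$0]++'
x
x
The first x and the y are seen for the first time (count 0, i.e. false); the second and third x have been seen before, so those lines are printed.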
To generalize the command, we can say the pattern of interest isn't the 4th column, it is the next-to-last column. This can be expressed using awk's number-of-fields variable NF:
... | awk -F. 'a[$(NF-1)]++'
Output:
./abc.DEF.xxxxxx.dat
./abd.DdF.xxxxxx.dat
./abd.DEF.xxxxxx.dat
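A side benefit of $(NF-1) is that it keeps working when the search path itself contains dots, since only the last two dot-separated fields matter. A made-up example:
echo /data.v2/abc.DEF.xxxxxx.dat | awk -F. '{print $(NF-1)}'
xxxxxx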
I have contents in a file like:
asdfb ... 1
adfsdf ... 2
sdfdf .. 3
I want to write a Unix command that adds 1 + 2 + 3 and gives the result 6.
From what I am aware, grep and awk would be handy; any pointers would help.
I believe the following is what you're looking for. It will sum up the last field in each record for the data that is read from stdin.
awk '{ sum += $NF } END { print sum }' < file.txt
Some things to note:
With awk you don't need to declare variables; they are willed into existence by assigning values to them.
The variable NF is the number of fields in the current record. Prefixing it with $ gives the field at that position, so $NF is the last field.
The END { } block runs only once, after all records have been processed by the other blocks.
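A quick way to see NF and $NF in action (throwaway input):
printf 'a b 1\nc d 2\n' | awk '{ print NF, $NF }'
3 1
3 2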
An awk script is all you need for that, since it has grep facilities built in as part of the language.
Let's say your actual file consists of:
asdfb zz 1
adfsdf yyy 2
sdfdf xx 3
and you want to sum the third column. You can use:
echo 'asdfb zz 1
adfsdf yyy 2
sdfdf xx 3' | awk '
BEGIN {s=0;}
{s = s + $3;}
END {print s;}'
The BEGIN clause is run before processing any lines, the END clause after processing all lines.
The other clause happens for every line but you can add more clauses to change the behavior based on all sorts of things (grep-py things).
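For instance, a sketch of such a grep-py clause that only sums lines matching a pattern (the pattern /sdf/ is chosen purely for illustration):
echo 'asdfb zz 1
adfsdf yyy 2
other xx 3' | awk '/sdf/ { s += $3 } END { print s }'
3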
This might not exactly be what you're looking for, but I wrote a quick Ruby script to accomplish your goal:
#!/usr/bin/env ruby
total = 0
while gets
total += $1.to_i if $_ =~ /([0-9]+)$/
end
puts total
Here's one in Perl.
$ cat foo.txt
asdfb ... 1
adfsdf ... 2
sdfdf .. 3
$ perl -a -n -E '$total += $F[2]; END { say $total }' foo.txt
6
Golfed version:
perl -anE'END{say$n}$n+=$F[2]' foo.txt
6