I have an import error somewhere in my code but can't find it. I can use the search function in my editor (Sublime Text) to search for a regular expression in all the files inside the project. So I would like to search for the terms 'import' and 'views' with anything in between, before, or after; I just want to match any line that contains these two words.
I'm not familiar with 'Sublime Text', but the following regex works well in 'Notepad++':
(.*)(import)(.*)(views)(.*)
In the matched line, $1, $2, $3, $4 and $5 may then be used to refer to the text before "import", the word "import" itself, the text between "import" and "views", the word "views" itself, and the text after "views", respectively.
EDIT - 1
It works well with 'Sublime Text' as well.
For example, for the text,
asdf1234 ..import.fghj4567 views...hjkl7890
Find What (as above):
(.*)(import)(.*)(views)(.*)
Replace With:
$2, $4, $5, $3, $1
Result:
import, views, ...hjkl7890, .fghj4567 , asdf1234 ..
For the text,
asdf1234 views fghj4567 import hjkl7890
Find What (as above):
(.*)(views)(.*)(import)(.*)
Replace With:
$2, $4, $5, $3, $1
Result:
views, import, hjkl7890, fghj4567 , asdf1234
EDIT - 2
It seems to be working well for me, though.
I have an old app that generates something like:
USERLIST (
"jasonr"
"jameso"
"tommyx"
)
ROLELIST (
"op"
"admin"
"ro"
)
I need some form of regex that changes ONLY the USERLIST section to USERLIST("jasonr", "jameso", "tommyx") and leaves the rest of the text intact:
USERLIST("jasonr", "jameso", "tommyx")
ROLELIST (
"op"
"admin"
"ro"
)
In addition to the multiline issue, I don't know how to handle the replacement in only part of the string. I've tried perl (-0pe) and sed, but can't find a solution. I don't want to write an app to do this; surely there is a way...
perl -0777 -wpe'
s{USERLIST\s*\(\K ([^)]+) }{ join ", ", $1 =~ /("[^"]+")/g }ex' file
This prints the desired output for the shown input file. The command is broken over two lines for easier viewing.
With the -0777 switch the whole file is read at once into a string ("slurped") and is thus in $_. With the /x modifier literal spaces in the pattern are ignored, so they can be used for readability.
Explanation
Capture what follows USERLIST (, up to the first closing parenthesis. This assumes there is no such parenthesis inside USERLIST ( ... ). With \K, everything matched before it is kept (not "consumed" out of the string) and is excluded from $&, so we don't have to re-insert it on the replacement side.
The replacement side is evaluated as code, courtesy of the /e modifier. In it we capture all double-quoted substrings from the initial $1 capture (assuming no nested quotes) and join that list with ", ". The resulting string replaces what was inside the parentheses following USERLIST.
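A quick way to see \K in isolation (a minimal made-up example; any Perl 5.10+ should do):
echo 'USERLIST (x y z)' | perl -pe 's/USERLIST\s*\(\K[^)]+/REPLACED/'
This prints USERLIST (REPLACED): the USERLIST ( part had to match but, thanks to \K, is left in place by the substitution.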
With your shown samples, in GNU awk you could try the following code.
awk -v RS='(^|\n)USERLIST \\(\n[^)]*\\)\n' '
RT{
  sub(/[[:space:]]+\(\n[[:space:]]+/,"(",RT)   # collapse " (" plus newline/indentation into "("
  sub(/[[:space:]]*\n\)\n/,")",RT)             # pull the closing ")" up onto the last entry
  gsub(/"\n +"/,"\", \"",RT)                   # turn line breaks between entries into ", "
  print RT                                     # print the reformatted USERLIST block
}
END{
  printf("%s",$0)                              # print the remaining text unchanged
}
' Input_file
Explanation: The record separator RS is set to (^|\n)USERLIST \\(\n[^)]*\\)\n, so every USERLIST block acts as a record separator and GNU awk stores the text that RS matched in RT. In the main program, whenever RT is non-empty we reformat it: the first sub collapses "USERLIST (" plus the following newline and indentation into "USERLIST(", the second sub pulls the closing ")" up onto the last entry, and the gsub replaces the line breaks between the quoted entries with ", "; then RT is printed. The END block prints $0, i.e. whatever text followed the last USERLIST block, unchanged.
Output will be as follows:
USERLIST("jasonr", "jameso", "tommyx")
ROLELIST (
"op"
"admin"
"ro"
)
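As an aside, a regex RS and the RT variable are GNU awk extensions. A minimal, made-up demo of RT holding whatever text RS matched for each record:
printf 'a--b---c' | gawk -v RS='-+' '{print NR, $0, "RT=" RT}'
1 a RT=--
2 b RT=---
3 c RT=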
This might work for you (GNU sed):
sed '/USERLIST/{:a;N;/^)$/M!ba;s/(\n\s*/(/;s/\n)/)/;s/\n\s*/, /g}' file
If a line contains USERLIST, gather up the list and format as required.
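The same script spread over lines, one command per line with comments (GNU sed reads # lines as comments; the behavior is identical):
sed '/USERLIST/{
# gather lines into the pattern space until one consists of just ")"
:a
N
/^)$/M!ba
# join "(" with the first entry
s/(\n\s*/(/
# pull the closing ")" up onto the last entry
s/\n)/)/
# turn the remaining line breaks into ", "
s/\n\s*/, /g
}' file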
I'm trying to convert a MySQL dump into an SQLite database for a migration. I need to edit the dates to append a time, so for example 2018-09-19 should be converted to 2018-09-19 00:00:00.00. The reason for this format has to do with how our application works. This is the solution I came up with, but it doesn't work.
#!/usr/bin/perl
while (<>){
<Other Stuff>
....
s/([12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]))\[/$1[00:00:00.00]][/
print;
}
For testing I created a test.txt file containing just:
2019-03-06
On the command line I used the following command to test whether the append works.
perl -pe 's/([12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]))\[/$1[00:00:00.00]][/' < test.txt > testout.txt
This gives a clear error of:
syntax error at -e line 1, near "00:" Execution of -e aborted due to compilation errors.
Using @dada's suggested solution gives no error, but it also doesn't append the 00:00:00.00 at the end of the line.
The Expected output should be
2019-03-06 00:00:00.00
Your problem statement says you want to turn:
2018-09-19
into:
2018-09-19 00:00:00.00
However, your code is:
s/([12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]))\[/$1[00:00:00.00]][/
Using /x we can write it a bit more legibly as:
s/
(
[12]\d{3} # year
- # hyphen
( 0[1-9] | 1[0-2] ) # month (saved as $2)
- # hyphen
( 0[1-9] | [12]\d | 3[01] ) # day (saved as $3)
) # save this as $1
\[ # square bracket
/$1[00:00:00.00]][/x
From this, it is clear that 2018-09-19 does not match because it does not end with a square bracket.
The replacement value is:
$1[00:00:00.00]][
This (tries to) say:
look up index 00:00:00.00 in array @1 and substitute its value
append ][
However, this is not valid Perl, and not what you wanted anyway.
What is happening is that instead of $x + [y] (a scalar followed by literal text), perl is seeing $x[y] (an element of array @x). To prevent this, either use braces ( ${x}[y] ) or escape the bracket ( $x\[y] ). This results in:
${1}[00:00:00.00]][
which is still not what the problem said was needed as the zeros are wrapped in brackets.
To get what you say you want, remove the \[ from the end of the search part and remove the unnecessary brackets from the replacement part:
s/
(
[12]\d{3}
- ( 0[1-9] | 1[0-2] )
- ( 0[1-9] | [12]\d | 3[01] )
)
# no bracket here
/$1 00:00:00.00/x; # no brackets here
Note that your code as given has another bug: the s/// needs to be terminated with a semicolon to separate it from the final print.
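Applied to the one-liner from the question (a sketch; the $ anchor is my addition, so that only a date at the end of a line is extended):
perl -pe 's/([12]\d{3}-(0[1-9]|1[0-2])-(0[1-9]|[12]\d|3[01]))$/$1 00:00:00.00/' < test.txt > testout.txt
For the test.txt above, testout.txt then contains 2019-03-06 00:00:00.00.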
I am trying to grab the word from a text file AFTER the IP regex match ($2):
fileName.txt:
IP hostname blah blah blah...
blah blah..
IP hostname blah blah blah...
.
.
.
I want the hostname for each instance of an IP (which I found with a grep regex and stored in $var). I want to assign each found hostname to $host and print them out to a text file along with the IPs (which are already done).
I have tried multiple methods from online answers but they all printed blanks.
Thank you!
See BashFAQ #1 for guidance on how best to read from a stream.
#!/bin/bash
# ^^^^ important, not /bin/sh
while read -r -a words; do
# put the array words into $1, $2, $3, etc.
set -- "${words[#]}"
# put $1 -- the first word -- into the variable named "ip"
ip=$1
# remove $1, leaving only hostnames in $1, $2, etc
shift
echo "IP address $ip has the following hostnames:"
for hostname; do # by default, a for loop iterates over "$@"
echo "- ${hostname}"
done
done < <(grep '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' test_amy_hostrun.txt)
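Since the goal is to print the results to a text file, the whole loop's output can be redirected in one go, i.e. by changing the script's last line to (hostnames.txt is a made-up name):
done < <(grep '[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}' test_amy_hostrun.txt) > hostnames.txt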
awk to the rescue!
$ awk '/^([0-9]{1,3}\.){3}[0-9]{1,3}/{print $1, $2}'
will give you the IP and hostname for the lines starting with a match of the regex.
If your awk doesn't support regex intervals, you need to add the --re-interval option.
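For example, against the file from the question, writing the result to a new file (ip_hosts.txt is a made-up name):
awk '/^([0-9]{1,3}\.){3}[0-9]{1,3}/{print $1, $2}' fileName.txt > ip_hosts.txt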
I have a file that contains the output of another command of the form:
aaaaaaaa (paramA 12.4) param2: 14, some text 25.55
bbbbbb (paramA 5.1) param2: 121, some text2 312.1
I want to pick the values aaaaaaaa, 12.4, 14, 25.55 from the first row, and similarly bbbbbb, 5.1, 121, 312.1 from row 2, and so on, and dump them in a different format (maybe CSV).
I want to use a regular expression in some command (sed, awk, grep, etc.) and assign the matched patterns to, say, $1, $2, etc., so that I can dump them in the desired format.
What I am not clear on is which command to learn for this. While searching around, sed, awk, and grep all seem capable of doing it, but I could not find a ready-made answer. I plan to learn each of these commands, but which one should I start with to solve the problem at hand?
For an input exactly like that, you can use
awk -F' +|)|,' -vOFS=", " '{print $1, $3, $6,$10}' file
which produces
aaaaaaaa, 12.4, 14, 25.55
bbbbbb, 5.1, 121, 312.1
However, that fails if you have more or less than two words in the last field, or if you have more than one word in the others.
Otherwise, you would have to look for numbers and distinguish them from text, or better characterize your input (fixed width, tab separated, or based on some regex with sed, as sketched below).
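For instance, a sketch of the sed route, anchoring on the literal paramA and param2 markers instead of on field positions (GNU sed; assumes the numeric values contain only digits and dots):
sed -E 's/^([^ ]+) \(paramA ([0-9.]+)\) param2: ([0-9]+), .* ([0-9.]+)$/\1, \2, \3, \4/' file
This produces the same comma-separated output as the awk command above.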
You can do this in bash:
# Regex for lines like: aaaaaaaa (paramA 12.4) param2: 14, some text 25.55
regex='(.*) +\(paramA (.*)\) +param2: (.*), +.* +(.*)'
while IFS= read -r line; do
[[ $line =~ $regex ]] || continue
# Captured groups are:
# ${BASH_REMATCH[1]} - aaaaaaaa
# ${BASH_REMATCH[2]} - 12.4
# ${BASH_REMATCH[3]} - 14
# ${BASH_REMATCH[4]} - 25.55
done < file.txt
However, it will be relatively slow; using another tool like awk will probably be more efficient. It all depends on what you actually want to do with the extracted text.
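For comparison, a sketch of the same regex in GNU awk, using match() with a capture array (the three-argument match() is gawk-only):
awk 'match($0, /^(.*) +\(paramA (.*)\) +param2: (.*), +.* +(.*)$/, m) { print m[1] ", " m[2] ", " m[3] ", " m[4] }' file.txt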
I have two pipe-delimited text files (say, A.txt and B.txt).
Below is the A.txt file format, a standard format (think of it as the universal set):
"EmpId"|"FName"|"LName"|"Sex"|"DOB"|"SSN"|"TagId1"|"TagId2"
Below is the B.txt file (think of it as a subset), with a header line and 2 records.
"SSN"|"LName"|"FName"|"DOB"|"Sex"|"EmpId"
"123"|"Barrat"|"Alanzon"|"1983"|"F"|"4455"
"678"|"Alexia"|"Timothy"|"1975"|"M"|"2222"||"baz"
I need to convert B.txt into the column order of the A.txt format.
Expected result is:
"EmpId"|"FName"|"LName"|"Sex"|"DOB"|"SSN"|"TagId1"|"TagId2"
"4455"|"Alanzon"|"Barrat"|"F"|"1983"|"123"|||
"2222"|" Timothy "|" Alexia"|"M"|"1975"|"678"||"baz"
How to go about it?
Since the two formats are basically the same, except the columns are reordered, I would recommend just reading B.txt into a dictionary (with the column name as the key) and then printing this dictionary to a file, with the columns in the right order.
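A sketch of that idea in awk (file and column names as in the question; the assumption that B's unnamed trailing columns line up positionally with TagId1/TagId2 is mine):
awk -F'|' -v OFS='|' '
BEGIN {
    # the target column order, taken from the A.txt header
    n = split("EmpId FName LName Sex DOB SSN TagId1 TagId2", name, " ")
}
FNR == 1 {
    # map each column name in the B.txt header to its position
    for (i = 1; i <= NF; i++) { h = $i; gsub(/"/, "", h); pos[h] = i }
    # columns that A has but the B header lacks are assumed to follow
    # positionally after the named columns (so "baz" lands in TagId2)
    extra = NF
    for (i = 1; i <= n; i++) if (!(name[i] in pos)) pos[name[i]] = ++extra
    # emit the A-format header, then move on to the data rows
    hdr = ""
    for (i = 1; i <= n; i++) hdr = hdr (i > 1 ? OFS : "") "\"" name[i] "\""
    print hdr
    next
}
{
    # rebuild each record in the target order; absent fields print as empty
    row = ""
    for (i = 1; i <= n; i++) row = row (i > 1 ? OFS : "") $(pos[name[i]])
    print row
}' B.txt
With the sample B.txt this reproduces the expected A-format output, including the empty TagId columns.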
Alternatively, since the column positions in B.txt are fixed, a hard-coded one-liner suffices:
awk -F'|' -v OFS='|' '{print $6, $3, $2, $5, $4, $1, $7, $8}' B.txt > A.txt
Each $N picks the B.txt column that belongs at that position in the A.txt layout, and the references past B's named columns ($7, $8) pass the trailing extras such as "baz" through to TagId1 and TagId2.