Use cases for regular expression find/replace - regex
I recently discussed editors with a co-worker. He uses one of the less popular editors and I use another (I won't say which ones since it's not relevant and I want to avoid an editor flame war). I was saying that I didn't like his editor as much because it doesn't let you do find/replace with regular expressions.
He said he's never wanted to do that, which was surprising since it's something I find myself doing all the time. However, off the top of my head I wasn't able to come up with more than one or two examples. Can anyone here offer some examples of times when they've found regex find/replace useful in their editor? Here's what I've been able to come up with since then as examples of things that I've actually had to do:
Strip the beginning of a line off of every line in a file that looks like:
Line 25634 :
Line 632157 :
Taking a few dozen files with a standard header that is slightly different for each file and stripping the first 19 lines from all of them at once.
Piping the result of a MySQL select statement into a text file, then removing all of the formatting junk and reformatting it as a Python dictionary for use in a simple script.
In a CSV file with no escaped commas, replace the first character of the 8th column of each row with a capital A.
Given a bunch of GDB stack traces with lines like
#3 0x080a6d61 in _mvl_set_req_done (req=0x82624a4, result=27158) at ../../mvl/src/mvl_serv.c:850
strip out everything from each line except the function names.
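(For the first and last of those, the find/replace pairs are roughly as follows; these are sketches that assume the lines match the formats shown above exactly and that the editor supports \d, \w and backreferences.)
Strip the line-number prefixes:
Search: ^Line \d+ :\s*
Replace with nothing.
Keep only the function name from each GDB frame:
Search: ^#\d+\s+0x[0-9a-f]+ in (\w+).*$
Replace: \1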
Does anyone else have any real-life examples? The next time this comes up, I'd like to be more prepared to list good examples of why this feature is useful.
Just last week, I used regex find/replace to convert a CSV file to an XML file.
Simple enough to do really, just chop up each field (luckily it didn't have any escaped commas) and push it back out with the appropriate tags in place of the commas.
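A sketch of the kind of pair that does it, assuming three unquoted columns and made-up tag names (row, id, name, value):
Search: ^([^,]*),([^,]*),([^,]*)$
Replace: <row><id>\1</id><name>\2</name><value>\3</value></row>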
Regexes make it easy to replace whole words using word boundaries.
(\b\w+\b)
So you can replace unwanted words in your file without disturbing words like Scunthorpe.
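For example (a made-up case), replacing the standalone word cat without touching concatenate or category:
Search: \bcat\b
Replace: dog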
Yesterday I took a create table statement I made for an Oracle table and converted the fields to setString() method calls using JDBC and PreparedStatements. The table's field names were mapped to my class properties, so regex search and replace was the perfect fit.
Create Table text:
...
field_1 VARCHAR2(100) NULL,
field_2 VARCHAR2(10) NULL,
field_3 NUMBER(8) NULL,
field_4 VARCHAR2(100) NULL,
....
My Regex Search:
/([a-z_0-9]+) .*$/
My Replacement:
pstmt.setString(1, \1);
The result:
...
pstmt.setString(1, field_1);
pstmt.setString(1, field_2);
pstmt.setString(1, field_3);
pstmt.setString(1, field_4);
....
I then went through and manually set the position int for each call and changed the method to setInt() (and others) where necessary, but it came in very handy. I actually used it three or four times for similar field-to-method-call conversions.
I like to use regexps to reformat lists of items like this:
int item1
double item2
to
public void item1(int item1){
}
public void item2(double item2){
}
This can be a big time saver.
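A sketch of the pair that does this, assuming each input line is just a type and a name separated by a single space, and an editor that accepts \n in the replacement:
Search: ^(\w+) (\w+)$
Replace: public void \2(\1 \2){\n}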
I use it all the time when someone sends me a list of patient visit numbers in a column (say 100-200) and I need them in a '0000000444','000000004445' format. It works wonders for me!
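Something along these lines handles the quoting and the commas, assuming one number per line (the zero-padding itself would have to already be there or be added separately):
Search: ^(\d+)$
Replace: '\1',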
I also use it to pull out email addresses in an email. I send out group emails often and all the bounced returns come back in one email. So, I regex to pull them all out and then drop them into a string var to remove from the database.
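A rough search pattern for pulling the addresses out (good enough for bounce messages, nowhere near a full RFC-compliant email matcher):
Search: [\w.+-]+@[\w.-]+\.\w+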
I even wrote a little dialog program to apply regexes to my clipboard. It grabs the contents, applies the regex, and then loads the result back into the clipboard.
One thing I use it for in web development all the time is stripping some text of its HTML tags. This might need to be done to sanitize user input for security, or to display a preview of a news article. For example, if you have an article with lots of HTML tags for formatting, you can't just do LEFT(article_text,100) + '...' (plus a "read more" link) and render that on a page, because you risk breaking the page by splitting an HTML tag in half.
Also, I've had to strip img tags in database records that link to images that no longer exist. And let's not forget web form validation. If you want to make sure a user has entered a correct email address (syntactically speaking) into a web form, this is about the only way of checking it thoroughly.
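The usual quick-and-dirty pattern for stripping tags is something like the following; it's fine for trimming previews, though for genuine security sanitization a real HTML parser is the safer bet:
Search: <[^>]*>
Replace with nothing.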
I've just pasted a long character sequence into a string literal, and now I want to break it up into a concatenation of shorter string literals so it doesn't wrap. I also want it to be readable, so I want to break only after spaces. I select the whole string (minus the quotation marks) and do an in-selection-only replace-all with this regex:
/.{20,60} /
...and this replacement:
/$0"¶ + "/
...where the pilcrow is an actual newline, and the number of spaces varies from one instance to the next. Result:
String s = "I recently discussed editors with a co-worker. He uses one "
+ "of the less popular editors and I use another (I won't say "
+ "which ones since it's not relevant and I want to avoid an "
+ "editor flame war). I was saying that I didn't like his "
+ "editor as much because it doesn't let you do find/replace "
+ "with regular expressions.";
The first thing I do with any editor is try to figure out its regex oddities. I use it all the time. Nothing really crazy, but it's handy when you've got to copy/paste stuff between different types of text - SQL <-> PHP is the one I do most often - and you don't want to fart around making the same change 500 times.
Regex is very handy any time I am trying to replace a value that spans multiple lines. Or when I want to replace a value with something that contains a line break.
I also like that you can match things in a regular expression and, instead of replacing the full match, use the $1, $2, ... syntax to output just the portion of the match you want to keep.
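For example (a made-up CSS cleanup), keeping the captured hex value while rewriting the property name around it:
Search: colour: (#[0-9a-fA-F]{6});
Replace: color: $1;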
I agree with you on points 3, 4, and 5 but not necessarily points 1 and 2.
In some cases 1 and 2 are easier to achieve using an anonymous keyboard macro.
By this I mean doing the following:
Position the cursor on the first line
Start a keyboard macro recording
Modify the first line
Position the cursor on the next line
Stop recording.
Now all that is needed to modify the next line is to repeat the macro.
I could live without support for regex but could not live without anonymous keyboard macros.
Related
Notepad++ - Selecting or Highlighting multiple sections of repeated text IN 1 LINE
I have a text file in Notepad++ that contains about 66,000 words all in 1 line. It is a set of 200 "lines" of output that are all unique and placed in 1 line in the basic JSON form {output:[{output1},{output2},...}]}. There is a set of characters matching the regex "id":.........,"kind":"track" that occurs about 285 times in total, and I am trying to either single them out or copy all of them at once.

Basically, without some super complicated regex terms, I am stuck because I can't figure out how to highlight all of them at once, and the Remove Unbookmarked Lines feature does not apply because this is all in one line. I have only managed to Mark every single occurrence. So does this require a large number of steps to get the file into multiple lines and work from there, or is there something else I am missing?

Edit: I have come up with a set of macro schemes that make the process of doing this manually work much faster. It's another alternative, but it still takes a few steps and quite some time.

Edit 2: I intended there to be an answer for actually just highlighting the different sections all at once, but I guess that is not possible. The answer here turns out to be more useful in my case, allowing me to have a list of IDs without everything else.
You seem to already have a regex that matches single instances of your pattern, so assuming it works and that we must use Notepad++ for this, replace
.*?("id":.........,"kind":"track").*?(?="id".........,"kind":"track"|$)
with
\1
If this text file is valid JSON, that opens you up to other, non-Notepad++ options, like using Python with the json module.
Search/replace in block selection in Notepad++
Is there a way to limit search/replace to only a columnar block selection in Notepad++? Here is what I am trying to do:

I am bulk-editing metadata extracted from large numbers of photos. The metadata comes to me as a csv file with no quotes around the fields in the header line and no quotes around the first field in each succeeding line. I edit this file in OpenOffice Calc, which exports with quotes around all fields. I can easily edit the header row, but the problem comes in stripping quotes from only the first field in the succeeding lines. I can use Notepad++ in columnar mode but, after selecting the first column, the 'search only in selection' option box is greyed out. I can do this by hand, but it means lots of hand-work and an increased chance of error.
I know, this probably won't help you any more, but I just had the same problem and stumbled across this question. I found moving the block in question to a new file and performing the find/replace there works quite decently. When moving the block back, be sure to select it in block mode (see this question).
No. Another editor may have this feature.
Sort of a late reply, but... I had the same problem when I moved to a new machine with Notepad++ installed. Previously, I was using a text editor called Boxer that had this feature, which I found invaluable. It's not freeware, however.
You may not be able to search/replace within a columnar selection, but you can easily carry out your task within Notepad++. Use the Find and Replace feature with the Regular expression box checked.

If you want to remove quotes only from a target column, use the following regular expression in the Find field:
(^([^,]*,){i})"([^,\n\r]*)"(.*$)
Replace i with the position of the target column minus 1 (i.e. use 2 to target the third column, 0 for the first column, etc.). In the Replace field use:
\1\3\4
Clicking "Replace All" will strip quotes from the target column.

If you want to blow away all quotes surrounding each element in your csv without prejudice, use the following regular expression in the Find field:
((?<=,)|(?<=^))"(.*?)"((?=$|,))
In the Replace field use:
\1\2\3
Clicking Replace All will strip the quotes from the columns.

Example
Since you didn't provide an example csv file, I'll walk through my own working example. Below is my csv:
"0","1","2","3","4","5","6","7","8","9"
"10","11","12","13","14","15","16","17","18","19"
"20","21","22","23","24","25","26","27","28","29"
"30","31","32","33","34","35","36","37","38","39"
"40","41","42","43","44","45","46","47","48","49"
"50","51","52","53","54","55","56","57","58","59"
"60","61","62","63","64","65","66","67","68","69"
"70","71","72","73","74","75","76","77","78","79"
"80","81","82","83","84","85","86","87","88","89"
"90","91","92","93","94","95","96","97","98","99"
"100","101","102","103","104","105","106","107","108","109"
"110","111","112","113","114","115","116","117","118","119"
"120","121","122","123","124","125","126","127","128","129"
"130","131","132","133","134","135","136","137","138","139"
"140","141","142","143","144","145","146","147","148","149"
"150","151","152","153","154","155","156","157","158","159"
"160","161","162","163","164","165","166","167","168","169"
"170","171","172","173","174","175","176","177","178","179"
"180","181","182","183","184","185","186","187","188","189"
"190","191","192","193","194","195","196","197","198","199"
If I wanted to remove quotes from the second column, I would use the below Find and Replace fields:
(^([^,]*,){1})"([^,\n\r]*)"(.*$)
\1\3\4
Clicking Replace All yields the below result:
"0",1,"2","3","4","5","6","7","8","9"
"10",11,"12","13","14","15","16","17","18","19"
"20",21,"22","23","24","25","26","27","28","29"
"30",31,"32","33","34","35","36","37","38","39"
"40",41,"42","43","44","45","46","47","48","49"
"50",51,"52","53","54","55","56","57","58","59"
"60",61,"62","63","64","65","66","67","68","69"
"70",71,"72","73","74","75","76","77","78","79"
"80",81,"82","83","84","85","86","87","88","89"
"90",91,"92","93","94","95","96","97","98","99"
"100",101,"102","103","104","105","106","107","108","109"
"110",111,"112","113","114","115","116","117","118","119"
"120",121,"122","123","124","125","126","127","128","129"
"130",131,"132","133","134","135","136","137","138","139"
"140",141,"142","143","144","145","146","147","148","149"
"150",151,"152","153","154","155","156","157","158","159"
"160",161,"162","163","164","165","166","167","168","169"
"170",171,"172","173","174","175","176","177","178","179"
"180",181,"182","183","184","185","186","187","188","189"
"190",191,"192","193","194","195","196","197","198","199"
My search on the internet to see whether Notepad++ supports this brought me here. I have used TextPad and can confirm that it supports find-and-replace within a column-selected block. TextPad is also free for personal use.
EditPad: Need a regex that handles multiple possible data formats
First, I'm using EditPad Pro for my regex cleaning, so any answers given should work within that environment.

I get a large spreadsheet full of data that I have to clean every day. I've managed to get it down to a couple of different regexes that I run, and this works... but I'm curious to see if it's possible to reduce it down to a single regex. Here is some sample data:
3-CPC_114851_70095_70095_CAN-bre
3-CPC_114851_70095_70095_CAN
b11-ao1-113775-bre
b7-ao-114441
b7-ao-114441-bre
b7-ao1-114441
b7-ao1-114441-bre
http://go.nlvid.com/results1/?http://bo
go.nlv/results1/?click
b4-sm-1359
b6-sm-1356-bre
1359_195_1453814569-bre
1356_104_1456856729
b15-rad-8905
b15-rad-8905-bre
Here is how the above data needs to end up:
114851-bre
114851
113775-bre
114441
114441-bre
114441
114441-bre
http://go.nlvid.com/results1/
go.nlv/results1/
sm-1359
sm-1356-bre
sm-1359-bre
sm-1356
rad-8905
rad-8905-bre
So, there are numerous rules, such as:
In cases of more than 2 underscores, the result needs to contain only the value immediately after the first underscore, and everything from the dash onwards.
In cases where the string contains "-ao-" or "-ao1-", everything prior to the final numeric string should be removed.
If a question mark is present, everything from the mark onwards should be removed.
If the string contains "-sm-" or "-rad-", everything prior to those alpha strings should be removed.
If the string contains 2 underscores, everything after the first numeric string up to a dash (if present) should be removed, and the string "sm-" should be prepended.
Additionally there is other data that must be left untouched, including but not limited to:
113535|24905|24905
as well as many variations on this pattern of xxxxxx|yyyyy|zzzzz (and not always those string lengths).

This may be asking way too much of regex, I'm not sure, as I'm not great with it. But I've seen some pretty impressive things done with it, so I thought I'd put this out to the community and see what you come back with.
Jonathan, I can wrap all of those into one regex, except the last one (where you prepend sm- to a string that does not contain sm). It is not possible in this context, because we cannot capture "sm" to reuse in the replacement, and because there is no "conditional replacement" syntax in EPP. That being said, you can achieve what you want in EPP with two regexes and one macro to chain the two. Here is how. The solution below is tested in EPP.

Regex 1
Press Ctrl + Shift + F to enter Search / Replace mode.
Enter the following Search and Replace in the appropriate boxes.
At the top right of the Search bar, click the Favorite Searches pull-down, select "Add", and give it a name, e.g. Regex 1.
Search:
(?mx)^ (?=(?:[^_\r\n]*?_){3})[^_\r\n]+?_([^_\r\n]+)[^-\r\n]+(-[^\r\n]+)? | [^\r\n]*?-ao1?-\D*([^\r\n]+) | ([^\r\n?]*)(?=\?)[^\r\n]+ | [^\r\n]*?-((?:sm|rad)-[^\r\n]+)
Replace:
\1\2\3\4\5

Regex 2
Same 1-2-3 steps as above.
Search:
^(?!(?:[^_\r\n]*?_){3})(?=(?:[^_\r\n]*?_){2})(\d+)(?:[^-\r\n]+(-[^\r\n]+)?)
Replace:
sm-\1\2

Chaining Regex 1 and Regex 2
Top menu: Macros, Record Macro, give it a name.
Click the Favorite Searches pull-down, select Regex 1.
Hit Replace All.
Click the Favorite Searches pull-down, select Regex 2.
Hit Replace All.
Macros, Stop Recording.
Whenever you want to do your sequence of replacements, pull it up by name under the Macros menu.

Testing
I have tested my "Jonathan macro" on your input. Here is the result:
114851-bre
114851
113775-bre
114441
114441-bre
114441
114441-bre
http://go.nlvid.com/results1/
go.nlv/results1/
sm-1359
sm-1356-bre
sm-1359-bre
sm-1356
rad-8905
rad-8905-bre
Try this:
Toggle the Search Panel: SHIFT+CTRL+F
SEARCH: .*?((?:sm-|rad-)?(?:(?:\d+|[\w\.]+\/.*?))(?:-\w+)?$)
REPLACE: $1
Check REGEX and WORDS, then click Replace All or hit CTRL+ALT+F3.
notepad++ regular expressions to convert lines for SPSS syntax editor
I am currently busy building a syntax document in SPSS and have a column of variable strings that consists of approximately 40 lines (it will be much, much more in the coming week). SPSS has a nice way of creating it (it can be seen here: http://vault.hanover.edu/~altermattw/methods/stats/reliable/reliability-1.html), but it can only be done for one variable at a time, which should be possible to automate.

I am a total beginner (I wouldn't mind if you would call me a n00b) at search & replace with regular expressions in Notepad++, but I can use the extended search function as a basic user :P

The data contains scores on a Likert scale (from 1-7) and I would like to reverse it to do some tests. For example: my variable name on the line is q_4_SQ001 and the line in the syntax editor is COMPUTE q_4_SQ001r=8-q_4_SQ001.

My question so far is thus: how can I convert a line containing a unique variable name into its reverse formula? So in this case, how can I replace the following lines:
q_4_SQ001
q_4_SQ002
q_4_SQ003
q_4_SQ004
with the syntax given below:
COMPUTE q_4_SQ001r=8-q_4_SQ001.
COMPUTE q_4_SQ002r=8-q_4_SQ002.
COMPUTE q_4_SQ003r=8-q_4_SQ003.
COMPUTE q_4_SQ004r=8-q_4_SQ004.
Please note the dots at the end of each line. I did this manually to give you an impression of what I would like to achieve. My data set has different questions and different variable strings, so I would like to make my life a bit easier right now :P

I also tried recording and running a macro as described here (http://stackoverflow.com/questions/2467875/notepad-replace-all-regular-expression-start-of-the-line-and-end-of-the-line), but that is still pretty time consuming since I have to do each line manually and clean up with extended search at the end. Wouldn't it be easier to convert each line?

Thanks a bunch in advance :)
Funny, Notepad++ works under Wine, as I just found out ;)

New file, inserted:
q_4_SQ001
q_4_SQ002
q_4_SQ003
q_4_SQ004
Select all (CTRL+A), replace (CTRL+R). Tick Regular Expr, stick
^(.*)$
in the "find" bit (first textbox), and
COMPUTE \1r=8-\1.
in the "replace" bit (second textbox). Hit the Find button, and then the Replace Rest button.

Parentheses () around a pattern cause the pattern to be "memorised"; each set of parentheses is available to the replacement pattern via \1, \2, etc.

After the replace, I got:
COMPUTE q_4_SQ001r=8-q_4_SQ001.
COMPUTE q_4_SQ002r=8-q_4_SQ002.
COMPUTE q_4_SQ003r=8-q_4_SQ003.
COMPUTE q_4_SQ004r=8-q_4_SQ004.
Which I assume is what you wanted. Enjoy.
Find Lines with N occurrences of a char
I have a txt file that I'm trying to import as a flat file into SQL2008 that looks like this:
"123456","some text"
"543210","some more text"
"111223","other text"
etc...
The file has more than 300,000 rows and the text is large (usually 200-500 chars), so scanning the file by hand is very time consuming and prone to error. Other similar (and even more complex) files were successfully imported. The problem with this one is that "some lines" contain quotes in the text... (this came from an export from an old SuperBase DB that didn't let you specify a text qualifier; there's nothing I can do with the file other than clean it and try to import it). So the "offending" lines look like this:
"123456","this text "contains" a quote"
"543210","And the "above" text is bad"
etc...
You can see the problem here. Now, 300,000 is not too much if I could perform a search using a text editor that can use regex; I'd manually remove the quotes from each line. The problem is not the number of offending lines, but the impossibility of finding them with a simple search. I'm sure there are less than 500, but spread those over a 300,000-line txt file and you know what I mean.

Based upon that, what would be the best regex I could use to identify these lines? My first thought is: tell me which lines contain more than 4 quotes ("). But I couldn't come up with anything (I'm not good at regex beyond the basics).
This pattern:
^("[^"]+){4,}
will match "lines containing more than 4 quotes". You can experiment with replacing 4 with 5 or more, depending on your data.
I think that you can be more direct with a Regex than you're planning to be. Depending on your dialect of Regex, something like this should do it: ^"\d+",".*".*"
You could also use a regex to remove the outside quotes and use a better delimiter instead. For example, search for ^"([0-9]+)","(.*)"$ and replace it with \1+++++DELIM+++++\2. Of course, this doesn't directly answer your question, but it might solve the problem.