I want to delete everything from the start of the document up to some regex match, such as _tmm. I wrote the following custom command:
command! FilterTmm exe 'g/^_tmm\\>/,/^$/mo$' | norm /_tmm<CR> | :0,-1 d
This doesn't work as expected. But when I execute these commands directly using the command line, they work.
Do you have any alternative suggestions to accomplish this job using custom commands?
It seems that you want to remove everything from the beginning of the file to the line above the matched line.
A /pattern search can take an offset, as in /pattern/{offset} (:h / for details). For your needs, you could do this (no matter where your cursor is):
ggd/_tmm/-1<cr>
EDIT
I read your question twice; it seems that you want to do it in a single command line.
Your script has a problem: :normal doesn't support |, that is, it must be the last command on the line.
Try this line and see if it works for you:
exe 'norm gg'|/_tmm/-1|0,.d
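If you still want this wrapped in a user-defined command as in your question, here is a minimal sketch (the FilterTmm name and the _tmm pattern are taken from your example; :command keeps everything after the name, bars included, as the command's definition):
command! FilterTmm exe 'norm gg' | /_tmm/-1 | 0,.d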
Attempting this substitution on a sufficiently large file (say 80,000+ lines and about 500 KB+) will crash or stall eventually, both on my server and on my local Mac.
I've tried this at the command line as well, with the same result:
vim -es -c '%s/\n/\\n/g' -c wq $file
Also, the problem appears to be with the pattern being matched (\n) and not the replacement (\\n).
For my larger files I can of course split them and cat them back when finished, but the split points cannot be arbitrary in my case and must be adjusted manually for each and every split.
I appreciate that there are other ways to do this -- sed, etc. -- but I have similar and additional problems there, and I would like to be able to do this with vim.
I'm adding my comment as an answer:
Text editors usually don't like 'gigantic' lines (which is what you'll get with that replacement).
To test whether this is due to the 'big line' and not the substitution itself, I did the following:
I created a simple ~500KB file with a script. No new line characters, just a single line. Then I tried to load the file with vim. Result? I had to kill it :-).
However, if in the same script I write some newlines every now and then, I have no problems opening the file.
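For instance, test files like the ones described can be generated from a shell along these lines (the ~500 KB size and the file names are illustrative):
head -c 500000 /dev/zero | tr '\0' 'a' > oneline.txt      # one giant line, no newlines
head -c 500000 /dev/zero | tr '\0' 'a' | fold -w 80 > manylines.txt      # same data with a newline every 80 characters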
Also, one thing you could try is the following: in vim, replace \n by \n\n instead; if that is fast, it should also confirm the 'big line' issue.
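And to run that check non-interactively like the command in the question, a sketch (note that on the replacement side of a Vim substitution \r stands for a newline, while \n there inserts a NUL byte):
vim -es -c '%s/\n/\r\r/g' -c wq "$file"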
I have a few regex expressions that I use with xVim for Xcode. Rather than repeatedly typing them out in the command bar with /<Regex>, I'd like to be able to invoke them with a custom command, like :Regex1. So I've added command Regex1 "/-\s*\(" to my xvimrc file and restarted Xcode. When I run :Regex1, however, nothing happens.
Your command wouldn't even work in original Vim. I don't know xVim, but try something along these lines:
" With cursor moving to match.
command Foo /foo/
" Just updating the search pattern (but less likely to be portable to xVim).
command Foo let @/ = 'foo'
If none of that works, try defining a mapping instead. As this is just translating keys, it has the highest chance of being supported.
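For instance, a minimal sketch of such a mapping (the <Leader>r key and the foo pattern are arbitrary placeholders, not anything xVim documents):
" run the saved search with one key sequence
nnoremap <Leader>r /foo<CR>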
I would suggest using this Perl-style regex plugin, since it already does what you want.
https://github.com/othree/eregex.vim
Abbreviations ...
I understand you often use the same regex. You can use abbreviations instead of a command to do a search:
ab re -\s*(
Then type / + re + space, and your long regex (here just "-\s*(") should expand.
... not user-defined commands
User-defined commands are not available in ed, nor in vi, nor in Vim compiled without the +eval flag (:h user-commands, and scroll one line up).
For a list of ex commands: http://www.csb.yale.edu/userguides/wordprocess/vi-summary.html
For a list of ed commands: http://pubs.opengroup.org/onlinepubs/7908799/xcu/ed.html
Two of my favorite Vim features are the ability to apply standard operators to lines matching a regex, and the ability to filter a selection or range of lines through an external command. But can these two ideas be combined?
For example, I have a text file that I use as a lab notebook, with notes from different dates separated by a line of dashes. I can delete all the dash lines with something like :% g/^-/d. But let's say I wanted to resize all the actual text lines, without touching those dash lines.
For a single paragraph, this would be something like {!}fmt. But how can this be applied to all the non-dash paragraphs? When I try what seems the logical thing, and just chain these two together with :% v/^-/!fmt, that doesn't work. (In fact, it seems to crash Vim...)
Is there a way to connect these two ideas, and only pass lines (not) matching a pattern into an external command like fmt?
Consider how the :global command works.
:global (and :v) make two passes through the buffer, first marking each line that matches, then executing the given command on the marked lines.
Thus if you can come up with a command – be it an Ex command or a command-line tool – and an associated range that can be applied to each matching line (and range), you have a winner.
For example, assuming that your text is soft-wrapped and your paragraphs are simply lines that don't begin with minus, here's how to reformat the paragraphs:
:v/^-/.!fmt -72
Here we used the range . ("current line") and thus filtered every matching line through fmt. More complicated ranges work, too. For instance, if your text were hard-wrapped and paragraphs were defined as "from a line beginning with minus, up until the next blank line", you could instead use this:
:g/^-/.,'}!fmt -72
Help topics:
:h multi-repeat
:h :range!
:h :range
One way to do it may be to apply the command to the lines matching the pattern 'not containing only dashes'.
The solution I would try is something like this (not tested):
:g/\v^(-+)@!/normal V!fmt
EDIT: I was doing some experiments and I think a recursive macro should work for you.
First of all, set nowrapscan:
set nowrapscan
This prevents the recursive macro from executing more times than you want.
Then make a search:
/\v^(-+)@!
Test whether pressing n and N works with your pattern, and tune it up if needed.
After that, start recording the macro (press q again at the end to stop recording):
qqn:.!awk '{print $2}'^M$
In this case I use awk as an example; .! means filter the current line through an external program.
Then, to make the macro recursive, just append the string '@q' to the register q:
let @q .= '@q'
And move to the beginning of the buffer to apply the recursive macro and make the modifications:
gg@q
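For comparison, the same per-line awk filter can be run in a single pass with the :v form from the other answer, no macro needed (a sketch; it treats every line not starting with a dash as a target, which is close to, but not exactly, the search pattern above):
:v/^-/.!awk '{print $2}'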
Then you are done. Hope this helps
I need to be able to only extract the filename (info.txt) from a line like:
07/01/2010 07:25p 953 info.txt
I've tried using this: /\d+\s+\d+\s+\d+\s+(?.?)/, but it doesn't seem to work ...
How about
/\S+$/
I.e. the longest possible string of non-whitespace at the end of the line.
(Hard to know for sure without more info about the possible inputs.)
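For a quick check of that idea from a shell, with grep used only as a convenient host for the regex (a sketch):
printf '%s\n' '07/01/2010 07:25p 953 info.txt' | grep -oE '[^[:space:]]+$'
The output is info.txt.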
As @J V pointed out, filenames with spaces in them (like his username) will not be parsed properly by the above regexp. We don't know from the question whether that's possible.
But I have a suspicion that we're looking at the output of Windows DIR command, or something very similar. In that case, the most reliable approach might be just to hack off the first 39 characters and keep the rest:
/^.{39}(.+)$/
Then $1 will contain the filename.
Better option:
But if you are using Windows DIR (as per your new comment), and you can control the DIR command, try
DIR /b
which removes the unneeded cruft (assuming you don't need the date, size etc.) and gives you one filename per line.
OK, you're using a Unix dir (per newer comment). The CentOS dir I have outputs one file per line, nothing else, when you give it no command line options. Chances are very good that whichever dir you're using can be persuaded to output filenames like that... then you wouldn't have to worry about using a regex that may or may not be correct for every possible input. Try man dir or dir --help to find out what command-line options to use.
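For example, GNU dir takes the same options as ls, so you can force one name per line explicitly (a sketch; the path is a placeholder):
dir -1 /path/to/files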
\d\d:\d\d\w\s+\d+\s+(.*?)$
$1 will be the file name
The problem with your original regex is that it forgets the special characters : and /, and (?.?) means nothing...
Assuming that the files have a .txt extension, you can try:
(?<=(\s)*)\w*.txt
Why not just use the following regex:
\w+\.\w+
Is there a way to run a regexp string replace on the current line in bash?
I find myself rather often in the situation where I have typed a long command line and then realize that I would like to change a word somewhere in the line.
My current approach is to finish the line, press Ctrl+A (to get to the start of the line), insert a # (to comment out the line), press enter and then use the ^oldword^newword syntax (^oldword^newword executes the previous command after substituting oldword by newword).
But there has to be a better (faster) way to achieve this. (Using the mouse is not an option, since I am in an ssh session most of the time.)
Probably there is some emacs-like key command for this that I don't know about.
Edit: I have tried using vi mode. Something strange happened. Although I am a loving vim user, I had serious trouble using my beloved bash. All those finger movements that have been burned into my subconscious suddenly stopped working. I quickly returned to emacs mode and considered giving emacs a try as my favorite editor (although I guess the same thing might happen again).
In ksh, in vi mode, if you hit 'v' while in command mode it will spawn a full vi session on the contents of your current command line. You can then edit using the full range of vi commands (global search and replace in your case). When you :wq from vi, the edited command is executed. I'm sure something similar exists for bash; since bash tends to extend its predecessors, there probably is.
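A sketch of the bash counterpart (this relies on readline's edit-and-execute-command, which bash binds to v in vi command mode and to C-x C-e in emacs mode by default):
set -o vi    # switch the current shell to vi editing mode
# type a command, press Esc and then v: the line opens in $VISUAL/$EDITOR,
# and saving and quitting the editor runs the edited command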
G'day,
What about using vi mode instead? Just enter set -o vi
Then you can go to the word you want to change and just do a cw or cW depending on what's in the word?
Oops, forgot to add that you enter ESC k to go to the previous line in the command history.
What do you normally use for an editor?
cheers,
Rob
Edit: What I forgot to say in my original reply was that you need to think of the vi command line in bash using the commands you enter when you are in "ex" mode in vi, i.e. after you've entered the colon.
The worst thing is that you need to move around the command line using the ancient vi commands of h (to the left) and l (to the right). You can use w (or W) to bounce across words, though.
Once you get used to it though, you have all sorts of commands available, e.g. entering ESC /my_command will look back through your history, most recent first, to find the first occurrence of a command line containing the text my_command. Once it has found that, you can then use n to find the next occurrence, etc., and N to reverse the direction of the search.
I'd go have a read of the man page for bash to see what's available under vi mode. Once you get over the fact that up-arrow and down-arrow are replaced by ESC k and j, you'll see that vi mode offers more than emacs mode for command line editing in bash.
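If vi mode suits you, you can make it the default for bash and any other readline-based program via ~/.inputrc rather than typing set -o vi each time (a minimal sketch):
set editing-mode vi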
IMHO natchurly! (-:
Emacs? Eighty megs and constantly swapping!
cheers,
Rob
Unfortunately, no, there's not really a better way. If you're just tired of making the keystrokes, you can use macros to trim them down. Add the following to your ~/.inputrc:
"\C-x6": "\C-a#\C-m^"
"\C-x7": "\C-m\C-P\C-a\C-d\C-m"
Now, in a new bash instance (or after reloading .inputrc in your current shell by pressing C-x C-r), you can do the following:
Type a bogus command (e.g., ls abcxyz).
Press Ctrl-x, then 6. The macro inserts a # at the beginning of the line, executes the commented line, and types your first ^.
Type your correction (e.g., xyz^def).
Press Ctrl-x, then 7. The macro completes your substitution, then goes up to the previous (commented) line, removes the comment character, and executes it again.
It's not exactly elegant, but I think it's the best you're going to get with readline.