I want to append comments to gdb commands at the end of the line in the command prompt, for two use cases:
Writing a tutorial
Documenting a gdb command file for automatic execution
While gdb command files seem to support comments via the hash (#) symbol,
I cannot find any comment marker for the gdb command line:
(gdb) print 1
$2 = 1
(gdb) print 1 // comment
A syntax error in expression, near `/ comment'.
(gdb) print 1 -- comment
A syntax error in expression, near `comment'.
(gdb) print 1 # comment
Invalid character '#' in expression.
(gdb) print 1 ; comment
Invalid character ';' in expression.
(gdb) print 1 /* comment */
No symbol "comment" in current context.
How can I append comments?
Edit: According to the gdb documentation
Any text from a # to the end of the line is a comment;
it does nothing.
This is useful mainly in command files
I am not sure why it doesn't work as documented...
You can use it on its own line, but not at the end of a line. E.g.:
(gdb) # foo
(gdb) print "foo"
$1 = "foo"
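For the command-file use case this behavior is enough, as long as each comment sits on its own line (leading whitespace is fine). A minimal sketch of such a file, with made-up commands:

```
# break early so we can poke around
break main
run
    # indented comments are accepted too
print 1
```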
Related
I know I can use info line *<addr> within gdb to decode a single address. But a stack trace has 10+ addresses. Is there an efficient way to copy and paste a list of addresses (also without the leading *) and have it list the filenames and line numbers, much like the bt command but without the core dump?
Based on the suggestion of @ssbssa, I wrote the following user-defined command that works for me.
define decode
    set $i = 0
    while $i < $argc
        eval "info line *($arg%d)", $i
        set $i = $i + 1
    end
end
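Usage is then a single line with the pasted addresses, no leading * needed (the addresses below are made up for illustration):

```
(gdb) decode 0x55555555a7d2 0x55555555b100
```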
I want to delete the header from all the files, and the header has the lines starting with //.
If I want to delete all the lines that start with //, I can do the following:
sed '/^\/\//d'
But, that is not something I need to do. I just need to delete the lines in the beginning of the file that starts with //.
Sample file:
// This is the header
// This should be deleted
print "Hi"
// This should not be deleted
print "Hello"
Expected output:
print "Hi"
// This should not be deleted
print "Hello"
Update:
If there is a blank line at the beginning or in between, it doesn't work. Is there any way to take care of that scenario?
Sample file:
< new empty line >
// This is the header
< new empty line >
// This should be deleted
print "Hi"
// This should not be deleted
print "Hello"
Expected output:
print "Hi"
// This should not be deleted
print "Hello"
Can someone suggest a way to do this? Thanks in advance!
Update: The accepted answer works well for whitespace at the beginning or in between.
Could you please try the following. This also takes care of the blank-line scenario; written and tested at https://ideone.com/IKN3QR
awk '
(NF == 0 || /^[[:blank:]]*\/\//) && !found{
next
}
NF{
found=1
}
1
' Input_file
Explanation: while the variable found is unset, skip every line that is either empty or starts with // (with optional leading blanks). The first line that survives this filter sets found on the NF check, and from that point on every line through to the end of Input_file is printed.
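As a sanity check, the one-liner can be run against a file reproducing the sample above (the /tmp path is just for illustration):

```shell
# Build the sample input, including leading and interspersed blank lines
printf '\n// This is the header\n\n// This should be deleted\nprint "Hi"\n// This should not be deleted\nprint "Hello"\n' > /tmp/Input_file

# Skip blank or // lines until the first real line; print everything after
awk '(NF == 0 || /^[[:blank:]]*\/\//) && !found{next} NF{found=1} 1' /tmp/Input_file
# Output:
#   print "Hi"
#   // This should not be deleted
#   print "Hello"
```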
With sed:
sed -n '1{:a; /^[[:space:]]*\/\/\|^$/ {n; ba}};p' file
print "Hi"
// This should not be deleted
print "Hello"
Slightly shorter version with GNU sed:
sed -nE '1{:a; /^\s*\/\/|^$/ {n; ba}};p' file
Explanation:
1 { # execute this block on the first line only
:a; # this is a label
/^\s*\/\/|^$/ { n; # on lines matching `^\s*\/\/` or `^$`, do: read the next line
ba } # and go to label :a
}; # end block
p # print line unchanged:
# we only get here after the header or when it's not found
sed -n makes sed not print any lines without the p command.
Edit: updated the pattern to also skip empty lines.
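The same loop in a newline-separated form (GNU sed assumed, for the \| alternation), run against the sample with blank lines:

```shell
printf '\n// This is the header\n\nprint "Hi"\n// This should not be deleted\nprint "Hello"\n' > /tmp/demo_file
sed -n '1{
:a
/^[[:space:]]*\/\/\|^$/ {
n
ba
}
}
p' /tmp/demo_file
# Output:
#   print "Hi"
#   // This should not be deleted
#   print "Hello"
```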
It sounds like you just want to start printing from the first line that's neither blank nor just a comment:
$ awk 'NF && ($1 !~ "^//"){f=1} f' file
print "Hi"
// This should not be deleted
print "Hello"
The above simply sets a flag f when it finds such a line and prints every line from then on. It will work using any awk in any shell on every UNIX box.
Note that, unlike some of the potential solutions posted, it doesn't store more than 1 line at a time in memory and so will work no matter how large your input file is.
It was tested against this input:
$ cat file
// This is the header
// This should be deleted
print "Hi"
// This should not be deleted
print "Hello"
To run the above on many files at once, modifying each file as you go, use this with GNU awk:
awk -i inplace 'NF && ($1 !~ "^//"){f=1} f' *
and this with any awk:
ip_awk() { local f t=$(mktemp) && for f in "${@:2}"; do awk "$1" "$f" > "$t" && mv -- "$t" "$f"; done; }
ip_awk 'NF && ($1 !~ "^//"){f=1} f' *
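A POSIX-sh variant of the same idea, using shift instead of bash's "${@:2}" slice (the function and file names here are illustrative):

```shell
# Run an awk program over each named file "in place" via a temp file
ip_awk() {
  prog=$1; shift
  t=$(mktemp) || return 1
  for f in "$@"; do
    awk "$prog" "$f" > "$t" && mv -- "$t" "$f"
  done
}

printf '// header\nprint "Hi"\n' > /tmp/one.txt
ip_awk 'NF && ($1 !~ "^//"){f=1} f' /tmp/one.txt
cat /tmp/one.txt
# Output:
#   print "Hi"
```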
If perl is available, then this may also work in slurp mode:
perl -0777 -pe 's~\A(?:\h*(?://.*)?\R+)+~~' file
\A will only match start of the file and (?:\h*(?://.*)?\R+)+ will match 1 or more lines that are blank or have // with optional leading spaces.
With GNU sed:
sed -i -Ez 's/^((\/\/[^\n]*|\s*)\n)+//' file
The ^((\/\/[^\n]*|\s*)\n)+ expression will match one or more lines starting with //, also matching blank lines, only at the start of the file.
Using ed (the file editor that the stream editor sed is based on),
printf '1,/^[^/]/ g|^\(//.*\)\{0,1\}$| d\nw\n' | ed tmp.txt
Some explanations are probably in order.
ed takes the name of the file to edit as an argument, and reads commands from standard input. Each command is terminated by a newline. (You could also read commands from a here document, rather than from printf via a pipe.)
1,/^[^/]/ addresses the first lines in the file, up to and including the first one that does not start with /. (All the lines you want to delete will be included in this set.)
g|^\(//.*\)\{0,1\}$|d deletes all the addressed lines that are either empty or do start with //.
w saves the changes.
Step 2 is a bit ugly; unfortunately, ed does not support regular expression operators you may take for granted, like ? or |. Breaking the regular expression down a bit:
^ matches the start of the line.
//.* matches // followed by zero or more characters.
\(//.*\)\{0,1\} matches the preceding regular expression 0 or 1 times (i.e., optionally)
$ matches the end of the line.
Input text file:
This is a simple test file.
#BEGIN
These lines should be extracted by our script.
Everything here will be copied.
#END
That should be all.
#BEGIN
Nothing from here.
#END
Desired output:
These lines should be extracted by our script.
Everything here will be copied.
My awk script is:
#!/usr/bin/awk -f
$1 ~ /#BEGIN/{a=1;next};a;$1 ~ /#END/ {exit}
and my current output is:
These lines should be extracted by our script.
Everything here will be copied.
#END
The only problem I'm having is that I'm still printing the "#END". I've been trying for a long time to somehow eliminate that. Not sure how to exactly do it.
This becomes obvious, IMO, if we comment each command in the script. The script can be written like this:
#!/usr/bin/awk -f
$1 ~ /#BEGIN/ { # If we match the BEGIN line
a=1 # Set a flag to one
next # skip to the next line
}
a != 0 { # if the flag is not zero
print $0 # print the current line
}
$1 ~ /#END/ { # if we match the END line
exit # exit the process
}
Note that I expanded a to the equivalent form a!=0{print $0}, to make the point clearer.
So the script starts printing each line when the flag is set, and when it reaches the END line, it has already printed the line before it exits. Since you don't want the END line to be printed, you should exit before you print the line. So the script should become:
#!/usr/bin/awk -f
$1 ~ /#BEGIN/ { # If we match the BEGIN line
a=1 # Set a flag to one
next # skip to the next line
}
$1 ~ /#END/ { # if we match the END line
exit # exit the process
}
a != 0 { # if the flag is not zero
print $0 # print the current line
}
In this case, we exit before the line is printed. In a condensed form, it can be written as:
awk '$1~/#BEGIN/{a=1;next}$1~/#END/{exit}a' file
or a bit shorter
awk '$1~/#END/{exit}a;$1~/#BEGIN/{a=1}' file
Regarding the additional constraints raised in the comments, to avoid skipping any BEGIN blocks within the block that is to be printed, we should remove the next statement, and rearrange the lines like in the example right above. In an expanded form it would be like this:
#!/usr/bin/awk -f
$1 ~ /#END/ { # if we match the END line
exit # exit the process
}
a != 0 { # if the flag is not zero
print $0 # print the current line
}
$1 ~ /#BEGIN/ { # If we match the BEGIN line
a=1 # Set a flag to one
}
To also avoid exiting if an END line is found before the block to be printed, we can check if the flag is set before exiting:
#!/usr/bin/awk -f
$1 ~ /#END/ && a != 0 { # if we match the END line and the flag is set
exit # exit the process
}
a != 0 { # if the flag is not zero
print $0 # print the current line
}
$1 ~ /#BEGIN/ { # If we match the BEGIN line
a=1 # Set a flag to one
}
or in a condensed form:
awk '$1~/#END/&&a{exit}a;$1~/#BEGIN/{a=1}' file
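Running that condensed form against the sample input confirms the #END line is no longer printed (the file path is illustrative):

```shell
printf 'This is a simple test file.\n#BEGIN\nThese lines should be extracted by our script.\nEverything here will be copied.\n#END\nThat should be all.\n' > /tmp/blocks.txt
# Exit on #END (only once the flag is set), print while the flag is set,
# and set the flag on #BEGIN after the print rule, so #BEGIN itself is skipped
awk '$1~/#END/&&a{exit}a;$1~/#BEGIN/{a=1}' /tmp/blocks.txt
# Output:
#   These lines should be extracted by our script.
#   Everything here will be copied.
```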
Try the sed command below to get the desired output:
vipin@kali:~$ sed '/#BEGIN/,/#END/!d;/END/q' kk.txt|sed '1d;$d'
These lines should be extracted by our script.
Everything here will be copied.
vipin@kali:~$
Explanation -
!d deletes every line outside the /#BEGIN/,/#END/ range (d would delete the content between the two expressions; negating it keeps only that range), and q quits once a line containing END is found.
1d;$d in the second sed deletes the first and last lines of that output, in our case #BEGIN and #END.
Is there a cleaner (preferably gdb-only) way to do the following (setting breakpoints on every line in the code where a certain pattern appears),
grep -Irni some_var_or_pattern ~/work/proj | cut -f1,2 -d":" | sed -e 's/^/b /' > ~/moo.txt
and load the results with
(gdb) source ~/moo.txt
rbreak (unfortunately only for functions).
Unfortunately there is no built-in command for setting breakpoints on all lines matching a certain pattern. But there are built-in commands for setting breakpoints on all functions matching a regular expression:
rbreak regex and rbreak file:regex (short tutorial).
gdb doesn't do that by itself, but you can put your grep and break command generating code into a shell script and call it from gdb with a single command.
Put this in a file in your $PATH as, say, greptobp:
#!/bin/sh
# usage: greptobp pattern dir outputfile
pattern="$1"
dir="$2"
outputfile="$3"
if [ "${dir#/}" = "$dir" ]
then
# if dir isn't absolute, make it absolute, as requested by OP
dir="$PWD/$dir"
fi
grep -HIirn "$pattern" "$dir" |
awk -F: '{ print "break \"" $1 "\":" $2; }' > "$outputfile"
and add this to gdb:
(gdb) define patbreak
Type commands for definition of "patbreak".
End with a line saying just "end".
>shell greptobp $arg0 $arg1 /tmp/gdbtmp
>source /tmp/gdbtmp
>shell rm /tmp/gdbtmp
>end
(gdb) document patbreak
Type documentation for "patbreak".
End with a line saying just "end".
>Set breakpoints at lines matching specified pattern in a specified directory.
>Usage: patbreak pattern dir
>end
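The grep|awk stage of greptobp can be sanity-checked on its own, outside gdb (the directory and file names below are made up):

```shell
mkdir -p /tmp/proj
printf 'int x;\nint some_var = 1;\n' > /tmp/proj/a.c
# -H forces filename:line: prefixes, which awk splits on ":"
grep -HIirn "some_var" /tmp/proj |
awk -F: '{ print "break \"" $1 "\":" $2 }'
# Output:
#   break "/tmp/proj/a.c":2
```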
I have a group of text files that each contain one instance of the following line somewhere, where the "(MR #" is the best way to identify the right line...
Smith, John A (MR # MR123456)
I'd like to get three sed statements that, when run against a given file, populate the following three variables: Plname, Pfname and MRnum. Using the example above, I'd like the variables to end up holding the following after running the sed statements.
Plname=Smith
Pfname=John
MRnum=MR123456
Despite many sed examples out there, I'm new to this, and currently struggling with the exact syntax. Thanks for your help.
Something like this:
$ cat t
bla-bla-bla
Smith, John A (MR # MR123456)
bla-bla-bla
$ Plname="$(sed -n '/(MR #/{s/^\([^,]\+\),.*/\1/p;q}' t)"
$ Pfname="$(sed -n '/(MR #/{s/^[^,]\+,[ ]\?\([^(]\+\).*/\1/p;q}' t)"
$ MRnum="$(sed -n '/(MR #/{s/^[^(]\+(MR # \([^)]\+\).*/\1/p;q}' t)"
$ printf "Plname = %s, Pfname = %s, MRnum = %s\n" "$Plname" "$Pfname" "$MRnum"
Plname = Smith, Pfname = John A , MRnum = MR123456
Little explanation:
/(MR #/{s/^\([^,]\+\),.*/\1/p;q} is a short form of
/(MR #/ #1
{
s/^\([^,]\+\),.*/\1/; #2
p; #3
q #4
}
sed parses the file line by line
the -n flag says that sed won't print each line (it does so by default), because we will do this manually when needed
#1 (//) finds a line which contains (MR #
and if the line matches, then we perform the following actions:
#2 (s///) rewrites its content using the regular expression
#3 (p) prints the result
#4 (q) stops processing the file, because we have already found what we were looking for
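Putting the three extractions together against a sample file (GNU sed assumed, for the \+ and \? operators; note that Pfname keeps the middle initial and a trailing space):

```shell
printf 'bla-bla-bla\nSmith, John A (MR # MR123456)\nbla-bla-bla\n' > /tmp/t
Plname="$(sed -n '/(MR #/{s/^\([^,]\+\),.*/\1/p;q}' /tmp/t)"
Pfname="$(sed -n '/(MR #/{s/^[^,]\+,[ ]\?\([^(]\+\).*/\1/p;q}' /tmp/t)"
MRnum="$(sed -n '/(MR #/{s/^[^(]\+(MR # \([^)]\+\).*/\1/p;q}' /tmp/t)"
printf 'Plname = %s, Pfname = %s, MRnum = %s\n' "$Plname" "$Pfname" "$MRnum"
# Output:
#   Plname = Smith, Pfname = John A , MRnum = MR123456
```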