GDB loop_continue produces warning message

In order to practice loop_continue, I wrote the following user-defined GDB command:
#
# Print all odd numbers less than a given number.
#
define print_all_odds_less_than
    set $max = $arg0
    set $i = 0
    while $i < ($max - 1)
        set $i = $i + 1
        if ($i % 2) == 0
            loop_continue
        end
        printf "Odd number %u is less than %u\n", $i, $max
    end
    printf "Done\n"
end
Sample run:
(gdb) print_all_odds_less_than 10
Odd number 1 is less than 10
Odd number 3 is less than 10
Odd number 5 is less than 10
Odd number 7 is less than 10
Odd number 9 is less than 10
Done
However, when I provide a value such that the while loop does not execute again after the last loop_continue, a warning message is produced and my user-defined command aborts execution.
(gdb) print_all_odds_less_than 9
Odd number 1 is less than 9
Odd number 3 is less than 9
Odd number 5 is less than 9
Odd number 7 is less than 9
warning: Error executing canned sequence of commands.
Documentation for loop_continue:
This command skips the execution of the rest of the body of commands
in the while loop in whose body it is included. Execution branches to
the beginning of the while loop, where it evaluates the controlling
expression.
From the documentation, I am unable to understand the mistake in my code.
My GDB version:
bash$ gdb --version
GNU gdb (Ubuntu 7.11.1-0ubuntu1~16.5) 7.11.1
Any help would be appreciated :)
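One way to sidestep the problem, assuming the goal is only the printed output, is to invert the test so that loop_continue is never needed at all (a sketch of the same command, not an explanation of the warning):

```
define print_all_odds_less_than
    set $max = $arg0
    set $i = 0
    while $i < ($max - 1)
        set $i = $i + 1
        # Print directly when $i is odd instead of skipping when it is even.
        if ($i % 2) != 0
            printf "Odd number %u is less than %u\n", $i, $max
        end
    end
    printf "Done\n"
end
```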

Related

How can I make GDB print all breakpoint executions without stopping on them?

Use case: I want to study a program's execution by logging breakpoint hits. The hit count is more than 100.
I can't use continue 1 by hand, because the hit count is too large. I can't use continue <some big number>, because GDB then skips all occurrences without writing them to the log.
What can I do in this case?
define print_all_br
    set $i = 0
    while $i < $arg0
        continue 1
        set $i = $i + 1
    end
end
This might work as print_all_br <some_big_number> (user-defined GDB commands take space-separated arguments, not parentheses).
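An alternative that avoids a counting loop entirely: attach a command list to the breakpoint so GDB logs each hit and resumes on its own. A sketch, where the location my_function and the log file name are placeholders (on recent GDB versions the last logging command is spelled set logging enabled on):

```
set pagination off
set logging file hits.log
set logging on
break my_function
commands
    # silent suppresses the usual stop message; continue resumes automatically.
    silent
    printf "my_function hit\n"
    continue
end
```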

Keeping specific rows with grep function

I have a large data set and the variable comes in different formats:
Subject Result
1 3
2 4
3 <4
4 <3
5 I need to go to school<>
6 I need to <> be there
7 2.3 need to be< there
8 <.3
9 .<9
10 ..<9
11 >3 need to go to school
12 <16.1
13 <5.0
I just want to keep the rows which include "< number" or "> number" and not the rows containing text (for example, I want to exclude ">3 need to go to school" and "I need to go to school<>"). The problem is that some records look like .<3, ..<9, >9., >:9. So how can I remove ".", "..", ":" from the data set and then keep the rows with the "< a number" notation? How can I use the "grep" function?
Again, I just want to keep the following rows
Subject Result
> 3 <4
> 4 <3
> 8 <.3
> 9 .<9
> 10 ..<9
> 12 <16.1
> 13 <5.0
You can simply apply two greps: one to find lines containing "<" or ">", and then one to eliminate lines that contain letters:
grep "[><]" | grep -v "[A-Za-z]"
If you want to be pedantic, you can also apply another grep to keep only the lines that contain a number:
grep "[><]" | grep -v "[A-Za-z]" | grep "[0-9]"
"grep -v" inverts the match, returning only the lines that do not match, by the way.
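As a quick sanity check, here is the pipeline run on a few of the sample Result values (printf merely stands in for your real data source):

```shell
# Rows with letters or without digits are filtered out.
printf '3\nI need to <> be there\n<4\n..<9\n<16.1\n' \
  | grep "[><]" | grep -v "[A-Za-z]" | grep "[0-9]"
# <4
# ..<9
# <16.1
```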
Assuming you're certain that [.,:;] are the only problematic punctuation:
df$Result<-gsub("[.,;:]","", df$Result) # remove any cases of [.,;:] from your results column
df[grep("^\\s*[<>][0-9]+$", df$Result),] # find all cases of numbers preceded by < or > (with possible spaces) and succeeded by nothing else.

AWK: Pattern match multiline data with variable line number

I am trying to write a script which will analyze data from a pipe. The problem is, a single element is described in a variable number of lines. Look at the example data set:
3 14 -30.48 17.23
4 1 -18.01 12.69
4 3 -11.01 2.69
8 12 -21.14 -8.76
8 14 -18.01 -5.69
8 12 -35.14 -1.76
9 2 -1.01 22.69
10 1 -88.88 17.28
10 1 -.88 14.28
10 1 5.88 1.28
10 1 -8.88 -7.28
In this case, the first field is what defines the event to which the following data belong. In the case of event number 8, we have data in 3 lines. To simplify the rather complex problem that I am trying to solve, let us imagine that I want to calculate the following expression:
sum_i($2 * ($3 + $4))
Where i is taken over all lines belonging to a given element. The output I want to produce would then look like:
3=-185.5 [14(-30.48+17.23) ]
4=-30.28 [1(-18.01+12.69) + 3(-11.01+2.69)]
8=-1106.4 [...]
I thus need a script which reads all the lines that have the same index entry.
I am an AWK newbie; I started learning the language a couple of days ago. I am now uncertain whether I will be able to achieve what I want. Therefore:
Is this doable with AWK?
If not, with what? sed?
If yes, how? I would be grateful if one provided a link describing how this can be implemented.
Finally, I know that there is a similar question: Can awk patterns match multiple lines?, however, I do not have a constant pattern which separates my data.
Thanks!
You could try this:
awk '{ ar[$1] += $2 * ($3 + $4) }
     END { for (key in ar) print key "=" ar[key] }' inputFile
For each input line, we do the desired calculation and accumulate the result in an array, with $1 serving as the key.
When the entire file is read, we print the results in the END{...}-block.
The output for the given sample input is:
4=-30.28
8=-1133.4
9=43.36
10=-67.2
3=-185.5
If sorting of the output is required, you might want to have a look at gawk's asorti function or the sort command (e.g. awk '{...}' inputFile | sort -n).
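For instance, piping the first few sample lines through the script and sort -n (the data are inlined via printf purely for demonstration):

```shell
printf '3 14 -30.48 17.23\n4 1 -18.01 12.69\n4 3 -11.01 2.69\n' \
  | awk '{ ar[$1] += $2 * ($3 + $4) }
         END { for (key in ar) print key "=" ar[key] }' \
  | sort -n
# 3=-185.5
# 4=-30.28
```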
This solution does not require that the input is sorted.
awk 'id!=$1{if(id){print id"="sum;sum=0};id=$1}{sum+=$2*($3+$4)} END{print id"="sum}' file
3=-185.5
4=-30.28
8=-1133.4
9=43.36
10=-67.2
yet another similar awk
$ awk -v OFS="=" 'NR==1{p=$1}
p!=$1{print p,s; s=0; p=$1}
{s+=$2*($3+$4)}
END{print p,s}' file
3=-185.5
4=-30.28
8=-1133.4
9=43.36
10=-67.2
P.S. Your calculation for "8" seems off: 12(-21.14-8.76) + 14(-18.01-5.69) + 12(-35.14-1.76) = -358.8 - 331.8 - 442.8 = -1133.4, not -1106.4.

awk: Handle positions with NR in an if loop. Next and previous position

I have this line in my bash script.
#Trying to find the FIRST maximum in the column $10
awk 'BEGIN{max=0} {if($10>=max){max=$10} else{exit}} END{print NR}'
And it works.
But I need something more sophisticated (for another purpose). I need awk to check whether the next and the previous row values are higher than the current one (something like this):
awk 'BEGIN{max=0} {if($10[NR]>=max && $10[NR-1]>=$10[NR] && $10[NR+1]>=$10[NR] ){max=$10} else{exit}} END{print NR}'
But it doesn't work, probably because I don't know how to index positions in the column. Can you help me, please?
Clarification:
I just want to read column 10 completely and find the row number whose value is lower than both the previous row's and the next row's. For instance, if the column has the values 1,2,3,4,1,2 then I want to get the row number "5" (corresponding to the second 1 in the data), because that row has a higher value on both sides of it.
awk '{ if ($10 > old1 && old1 < old2) print NR-1; old2 = old1; old1 = $10; }'
To test, I replaced $10 with $1. Run on this data:
1
2
3
4
3
4
5
4
3
2
1
2
3
4
5
4
5
4
3
2
1
It produces this output:
5
11
16
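Applying the same program (again with $1 instead of $10) to the sequence from the clarification, 1,2,3,4,1,2, prints the expected row number:

```shell
# old1/old2 hold the previous two values; a local minimum at row NR-1
# is reported once the following (higher) value arrives at row NR.
printf '1\n2\n3\n4\n1\n2\n' \
  | awk '{ if ($1 > old1 && old1 < old2) print NR-1; old2 = old1; old1 = $1; }'
# 5
```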

Improve regex to match javascript comments

I used the regex given in perlfaq6 to match and remove javascript comments, but it results in a segmentation fault when the string is too long. The regex is:
s#/\*[^*]*\*+([^/*][^*]*\*+)*/|//([^\\]|[^\n][\n]?)*?\n|("(\\.|[^"\\])*"|'(\\.|[^'\\])*'|.[^/"'\\]*)#defined $3 ? $3 : ""#gse;
Can it be improved to avoid the segmentation fault?
[EDIT]
Long input:
<ent r=\"6\" t=\"259\" w=\"252\" /><ent r=\"6\" t=\"257\" w=\"219\" />
repeated about 1000 times.
I suspect the trouble is partly that your 'C code' isn't very much like C code. In C, you can't have the sequence \" outside a pair of quotes, single or double, for example.
I adapted the regex to make it readable and wrapped into a trivial script that slurps its input and applies the regex to it:
#!/usr/bin/env perl
### Original regex from PerlFAQ6.
### s#/\*[^*]*\*+([^/*][^*]*\*+)*/|//([^\\]|[^\n][\n]?)*?\n|("(\\.|[^"\\])*"|'(\\.|[^'\\])*'|.[^/"'\\]*)#defined $3 ? $3 : ""#gse;
undef $/;     # Slurp input
while (<>)
{
    print "raw: $_";
    s%
        /\*[^*]*\*+([^/*][^*]*\*+)*/   # Simple C comments
      | //([^\\]|[^\n][\n]?)*?\n       # C++ comments, allowing for backslash-newline continuation
      | (
            "(\\.|[^"\\])*"            # Double-quoted strings
          | '(\\.|[^'\\])*'            # Single-quoted characters
          | .[^/"'\\]*                 # Anything else
        )
    % defined $3 ? $3 : ""
    %egsx;
    print "out: $_";
}
I took your line of non-C code, and created files data.1, data.2, data.4, data.8, ..., data.1024 with the appropriate number of lines in each. I then ran a timing loop.
$ for x in 1 2 4 8 16 32 64 128 256 512 1024
> do
> echo
> echo $x
> time perl xx.pl data.$x > /dev/null
> done
$
I've munged the output to give just the real time for the different file sizes:
1 0m0.022s
2 0m0.005s
4 0m0.007s
8 0m0.013s
16 0m0.035s
32 0m0.130s
64 0m0.523s
128 0m2.035s
256 0m6.756s
512 0m28.062s
1024 1m36.134s
I did not get a core dump (Perl 5.16.0 on Mac OS X 10.7.4; 8 GiB main memory). It does begin to take a significant amount of time. While it was running, it was not growing; during the 1024-line run, it was using about 13 MiB of 'real' memory and 23 MiB of 'virtual' memory.
I tried Perl 5.10.0 (the oldest version I have compiled on my machine), and it used slightly less 'real' memory, essentially the same 'virtual' memory, and was noticeably slower (33.3s for 512 lines; 1m 53.9s for 1024 lines).
Just for comparison purposes, I collected some C code that I had lying around in the test directory to create a file of about 88 KiB, with 3100 lines of which about 200 were comment lines. This compares with the size of the data.1024 file which was about 77 KiB. Processing that took between 10 and 20 milliseconds.
Summary
The non-C source you have makes a very nasty test case. Perl shouldn't crash on it.
Which version of Perl are you using, and on which platform? How much memory does your machine have? The total quantity of memory is unlikely to be the issue, though (24 MiB is not a problem on most machines that run Perl). If you have a very old version of Perl, the results might be different.
I also note that the regex does not handle some pathological C comments that a C compiler must handle, such as:
/\
\
* Yes, this is a comment *\
\
/
/\
\
/ And so is this
Yes, you'd be right to reject any code submitted for review that contained such comments.