GDB is trying to be helpful by labeling what I believe are global variables. In this case, however, each global is more than 0x10 bytes, so the second part of the variable is printed on the next line with an offset added to its label, which throws off the alignment of the whole printout (generated by executing x/50wx 0x604130):
Is there a command to disable these labels while examining bytes?
Edit: to be more specific, I would like to print out exactly what is shown in the screenshot, just without the <n1> / <n1+16> labels that are throwing off the alignment of the columns.
Is there a command to disable these labels while examining bytes?
I don't believe there is.
One might expect that set print symbol off would do it, but it doesn't.
The closest I can suggest is this answer.
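One workaround (a sketch, not from the linked answer) is to dump the raw bytes with gdb's dump binary memory command and format them outside gdb, mimicking x/Nwx output without symbol labels. The address range below is illustrative; it assumes a little-endian target:

```python
import struct

def format_words(data: bytes, start_addr: int, per_line: int = 4) -> str:
    """Format little-endian 32-bit words like gdb's x/Nwx, minus symbol labels."""
    n_words = len(data) // 4
    words = struct.unpack("<%dI" % n_words, data[: n_words * 4])
    lines = []
    for i in range(0, len(words), per_line):
        addr = start_addr + i * 4
        row = "\t".join("0x%08x" % w for w in words[i : i + per_line])
        lines.append("0x%x:\t%s" % (addr, row))
    return "\n".join(lines)

# After `(gdb) dump binary memory mem.bin 0x604130 0x6041f8`:
# print(format_words(open("mem.bin", "rb").read(), 0x604130))
```

Every line then starts with a bare address, so the columns stay aligned regardless of what symbols the addresses fall inside.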
I found that WinPrint or Var.WRITE can only write up to 4095 bytes to the file at a time. For larger structures, the data beyond that limit is lost. To avoid this, we can write multiple times, member by member.
(If we only know the name of a structure and load the ELF through T32, we can find it in the symbol list and view all its members. So, can we get the member names of the structure with some T32 command and then log them to a file by name, like Var.WRITE #1 StructA.memberName?)
WinPrint.<command>
The WinPrint pre-command is used to generate a hardcopy or a file from one command. The number of columns and lines in the window is adapted to the capabilities of the printer. Printer selection can be done with the PRinTer command.
Thus, the output can also be re-routed to a file. For some commands, extended parameters are available for printing more than one page.
WinPrint.var.view %m.3 %r.3 structName
This command can output all the contents of the structure to a file. Because var.view is not restricted by 4095, all the contents can be saved to a file.
%m stands for multiline. It displays the structure elements in multi-line format. If the elements are in a multidimensional array, the numeric parameter defines the number of levels displayed.
%r stands for recursive. The optional number defines the depth of recursion to be displayed. The command SETUP.VarPtr defines the valid address range for pointers; the contents of pointers outside this range are not displayed.
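If WinPrint.var.view is not an option, the member-by-member approach from the question can be sketched as a PRACTICE fragment like the one below. The member names are placeholders, and the OPEN/CLOSE file syntax is an assumption to check against the PRACTICE manual; the file channel #1 matches the question's Var.WRITE #1:

```
; sketch: log a large structure member by member (names are hypothetical)
OPEN #1 struct_dump.txt /Create
Var.WRITE #1 StructA.member1
Var.WRITE #1 StructA.member2
CLOSE #1
```

Each Var.WRITE then stays under the 4095-byte limit as long as no single member exceeds it.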
As I run more commands in Stata, the earlier output disappears from the window (i.e., if I scroll to the top, the earlier output is no longer there, suggesting that there is a set height, or number of rows, for the output window).
Is it possible to change this setting, i.e., to increase the amount of output that is displayed?
Thanks to the suggestion in the comments: in case it is relevant to anyone else, this can be achieved with the command:
set scrollbufsize 2000000
(or any value up to 2000000) - this takes effect the next time Stata is opened.
What I'm really doing is trying to set a watchpoint on the setting or clearing of a single bit. I do that by setting a watchpoint on the word containing the bit, then making it conditional on *word & mask (for setting) or (~*word) & mask (for clearing).
The problem is that some other bit in the same word may be modified, and the condition may happen to already match. If I had the old and new values, I could set a condition of (($old ^ $new) & mask).
I looked at the python gdb.Breakpoint class, but it doesn't seem to receive this information either.
I suppose I could go crazy and set a command list that records the current value whenever the value of *word changes, and use that as $old. But half the time I'm using this, I'm actually using it through rr, so I might be going backwards.
There's no direct way to get these values in gdb; it's been a wish-list bug (with your exact case as the example...) for years. The information is stored in the old_val field of the struct bpstats object associated with the breakpoint, but it is only used to print the old value and is not exposed elsewhere.
One option might be to change gdb to expose this value via a convenience variable or via Python.
I suppose I could go crazy and set a command list that records the current value whenever the value of *word changes, and use that as $old. But half the time I'm using this, I'm actually using it through rr, so I might be going backwards.
This seems doable. Your script could check the current execution direction. The main difficulty is remembering to reset the saved value when making this watchpoint, or after disabling and then re-enabling it.
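That command-list idea can be sketched directly in gdb. This is a hedged sketch: word and mask stand in for the real address and bit mask, and it has not been tested under rr's reverse execution, where the saved value may go stale:

```
# seed the saved value before creating the watchpoint
set $prev = *word
watch *word
commands
  silent
  set $cur = *word
  # resume silently unless the watched bit itself changed
  if ((($prev ^ $cur) & mask) == 0)
    set $prev = $cur
    continue
  end
  set $prev = $cur
end
```

Disabling and re-enabling the watchpoint (or reversing direction) would require re-seeding $prev by hand, which is exactly the bookkeeping difficulty mentioned above.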
In one of my Nim projects I'm having performance issues. I'm now trying to use nimprof to see what's going on. I have an import nimprof in my main source file, and I'm compiling with --profiler:on. When I run the program I can see the messages:
writing profile_results.txt...
... done
However, profile_results.txt only contains this:
total executions of each stack trace:
Entry: 1/1 Calls: 2741/2741 = 1.0e+02% [sum: 2741; 2741/2741 = 1.0e+02%]
The run time was about 1 minute -- so I don't think it is just not enough time to sample anything. Is there any way to get something more meaningful out of nimprof?
You need to add the compiler flag --stackTrace:on, or there won't be any function names or line numbers to analyze.
1.0e+02% is just an odd way of saying 100%. It means the profiler took a lot of stack samples and they were all the same, which is not surprising.
What you need is to actually see the sample.
It should appear below that summary line, and it will show you what the problem is.
As an aside, it should show line numbers as well as function names, and it shouldn't just sort the stacks by frequency.
The reason is that a guilty line of code can easily appear on a large fraction of stacks that are otherwise different; if whole stacks are sorted and counted, that line's cost is never aggregated.
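That per-line aggregation can be sketched in Python as an illustration of the idea (not of nimprof's actual output format; the sample data is made up):

```python
from collections import Counter

def line_fractions(stack_samples):
    """For each source line, the fraction of samples whose stack contains it.

    stack_samples: list of stacks; each stack is a list of 'file:line' strings.
    A line is counted once per sample even if it appears in several frames.
    """
    counts = Counter()
    for stack in stack_samples:
        for line in set(stack):
            counts[line] += 1
    n = len(stack_samples)
    return {line: c / n for line, c in counts.items()}

samples = [
    ["main.nim:10", "util.nim:42"],
    ["main.nim:10", "other.nim:7"],
    ["main.nim:10", "util.nim:42", "other.nim:7"],
]
# util.nim:42 shows up in 2 of 3 samples even though the full stacks differ
fractions = line_fractions(samples)
```

Sorting whole stacks by frequency would treat those three samples as three distinct entries, hiding the fact that util.nim:42 accounts for two thirds of them.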
I have a statically allocated structure containing a couple of arrays. When I inspect these arrays in the debugger, the values are displayed, but they are all zero (0). When I write the arrays to a file, the values are correct, yet watching them in the debugger at the same time still shows zeros. I also used TRACE to print the values to the output window, and those are correct too.
So the program is doing all its computation correctly, but the debugger shows the variables' values as all zero. I am using VS2010 and C++. Is there a way to fix this?
P.S. I have tried other solutions suggested elsewhere, such as typing array_name,number in the debugger, but that doesn't work for me either.