I am trying to use gdb with the MySQL source code, which is written in C/C++. In mysql-test/t, I create a custom test case file, say example.test, and then debug it using the following command:
./mysql-test-run --gdb example
Now I want to see the flow of execution as it moves from a function in one file to a function in some other file. I am not sure how the execution flows, so I can't predefine the breakpoints. How can I see the flow of execution across multiple source files?
You can use the next command to step through the source line by line. When appropriate, you can use the step command to step "into" the function(s) being called on the current line.
A reasonable method is to next until you think you have only just passed the externally visible behavior you're looking for. Then start over, stopping at the line right before where you saw the behavior last time, and step this time. Repeat as necessary until you find the code you're looking for. If you believe it's encountering some sort of deadlock, it's significantly easier -- just interrupt (Ctrl-C) the program when you think it's stuck and it should stop right in the interesting spot.
In general, as you walk through the source you'll build up a set of places you think are interesting. You can note the source file and line number and/or function names as appropriate and set those breakpoints directly in the future to avoid the tedious next/next/next business.
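For illustration, a session might look roughly like this (mysql_execute_command is only an example symbol -- pick whichever function you suspect is close to the behavior you care about):

(gdb) break mysql_execute_command   # stop when this function is reached
(gdb) continue                      # run until the breakpoint is hit
(gdb) next                          # execute one line, stepping over calls
(gdb) step                          # step into the function called on this line
(gdb) backtrace                     # show the chain of calls (and files) that got you here
(gdb) finish                        # run until the current function returns

backtrace is particularly handy once the flow has crossed into another file and you want to see how you got there.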
I am writing my first Roslyn analyzers. I have basically followed the tutorial https://learn.microsoft.com/en-us/archive/msdn-magazine/2014/special-issue/csharp-and-visual-basic-use-roslyn-to-write-a-live-code-analyzer-for-your-api , and then proceeded by adding a second analyzer class which should do language-agnostic analysis, similar to what is presented in https://www.meziantou.net/writing-a-language-agnostic-roslyn-analyzer-using-ioperation.htm .
So, I have one analyzer class which initializes itself with
context.RegisterSymbolAction(AnalyzeSymbol, SymbolKind.NamedType);
and a second one which uses
context.RegisterOperationAction(AnalyzeConversionOperation, OperationKind.Conversion);
context.RegisterOperationAction(AnalyzeInvocationOperation, OperationKind.Invocation);
To test my analyzers, I have the generated .Vsix project set as the startup project, and pressing F5 gets me to a separate instance of Visual Studio where I write some code and want to see whether my analyzers work as intended. I set breakpoints at the beginning of my analyzer actions - AnalyzeSymbol, AnalyzeConversionOperation and AnalyzeInvocationOperation - in the original VS instance.
The analyzer actions are not called as I would expect them to be. In fact, they are not called at all as I write the code to be analyzed. Only if I place the cursor on some type name (which relates to the SymbolKind.NamedType action) and click on the light bulb that appears do I get the calls - and not only to the AnalyzeSymbol action; AnalyzeConversionOperation and AnalyzeInvocationOperation are also called as needed for various operations in the code.
So, the actions that get called are all fine - but they are not called when I want them to be. I would expect them to be called as needed - basically, almost continuously as I edit the code, or at least upon build, or via some explicit "Analyze Now" command. But I am not aware of anything like that. The only way I found to trigger them is the way I described, which does not seem right to me.
I tried Googling but could not find the solution; or possibly I have some misconception and it is supposed to work differently?
Analyzers are run out of process, so it's expected that your breakpoints are not hit in the instance you launched - those lines do get hit, just in a different process.
You have three options:
Disable running analyzers out-of-process.
Attach the debugger to the ServiceHub.RoslynCodeAnalysisService process.
Debug through a unit test. This is my preferred approach.
So, I have a C++ program and I use Visual Studio 2010. My program is mostly procedural rather than object-oriented, though. The first part of my program does something; then the second half does something else that uses the information from the first half. The first half takes a while (~20 minutes) to run (I know this from running it in debug mode with a breakpoint right after the end of that first half).
The thing is that I am experimenting with different ideas for that second half. Now, whenever I write the code for a new idea, I have to run the whole program from scratch, and thus have to wait the 20 minutes before the new second half runs. This is very inconvenient and inefficient, since I will be doing this for a while. I also cannot really write all my ideas at once and run different programs (with the same first half and different second halves corresponding to each idea) simultaneously, because I only get each new idea after I run the previous one and understand something about the behavior of my algorithm.
So, is there any way I can start running the code right after the first part whenever I change something in the second part, instead of having to recompile and run it from scratch each time? And how, if it is possible?
Since you are using Visual Studio, you should look into Edit and Continue:
Edit and Continue is a time-saving feature that enables you to make changes to your source code while your program is in break mode. When you resume execution of the program by choosing an execution command like Continue or Step, Edit and Continue automatically applies the code changes, with some limitations. This allows you to make changes to your code during a debugging session, instead of having to stop, recompile your entire program, and restart the debugging session.
But please pay attention to the limitations (the "Unsupported Scenarios" section); you might have to structure your code changes to fit within what's supported.
I'm trying to learn the level format of one of my favourite games, which is almost totally undocumented. Basically the only document that describes the level format does so by saying things like "First 12 bytes: header; 4 following bytes: number of materials; x next bytes: array of materials", and things like that.
I'm very inexperienced with hex and don't completely understand what it's saying. However, there is a level editor, and the source is freely available on Google Code. I was thinking of adding it to my Visual Studio and trying to learn the level format by reading how the level editor opens the files.
However, there's another problem: I don't know C++ (I know Python). This means I probably won't be able to locate which part of the code reads the bytes and whatnot.
What I'm looking for, is something that will allow me to follow the flow of the code, in its execution. Essentially something that acts similar to setting a breakpoint on every line, and having it show me what specific portion of code is executing when reading the file contents.
However, obviously setting breakpoints on every line is very messy and slow. I'm looking for something that will simply show me what code is being run when I open the file in the editor.
Does anyone know what I could do? Thanks.
You're looking for a feature to step from one statement to the next; every debugger I know has such a feature. You start by setting a single breakpoint at the beginning of the interesting region, and starting from there you "step" through your code.
E.g. in Visual C++ 2010, pressing F10 steps over one statement; you can also "step into" the next statement (e.g. a method call) with F11.
In your case, set the breakpoint where the reading of the level file starts, and continue from there. Finding the place where the file is read can be a hard problem in itself, depending on how clear the code is; but if it's well-written code, there should be a method with "read" or "load" or something similar in its name - you'll figure it out!
You might have to know at least some basic C++ syntax to be able to follow what's going on, though.
I would also recommend reading up on debugging how-tos (e.g. this one).
The document which you find so obscure is just the level format specification, and in most cases the specification is all you need, plus a little extra experience with file reading.
When reading a file you have to worry about a few things:
1) When reading byte by byte (8 bits at a time), the byte order does not change.
2) When reading 32 bits at a time, the byte order can change according to the endianness of the machine (for example, 0x12345678 becomes 0x78563412 when the endianness changes), as in the sketch below.
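As a minimal sketch of what a loader for the format described in the question might look like in C++ (the file name, the per-material size and the swap decision are guesses for illustration, not the game's real layout):

#include <cstdint>
#include <cstdio>
#include <vector>

// Swap the byte order of a 32-bit value, e.g. 0x12345678 -> 0x78563412.
static uint32_t swap32(uint32_t v) {
    return (v >> 24) | ((v >> 8) & 0x0000FF00u) |
           ((v << 8) & 0x00FF0000u) | (v << 24);
}

int main() {
    std::FILE* f = std::fopen("level.dat", "rb");   // hypothetical file name
    if (!f) return 1;

    unsigned char header[12];                        // "first 12 bytes: header"
    std::fread(header, 1, sizeof(header), f);

    uint32_t materialCount = 0;                      // "4 following bytes: number of materials"
    std::fread(&materialCount, sizeof(materialCount), 1, f);
    // Only needed if the file was written with the opposite endianness:
    // materialCount = swap32(materialCount);

    // "x next bytes: array of materials" -- the size of one material is a guess here.
    std::vector<unsigned char> materials(materialCount * 16u);
    std::fread(materials.data(), 1, materials.size(), f);

    std::printf("number of materials: %u\n", (unsigned)materialCount);
    std::fclose(f);
    return 0;
}

The level editor's loading code will do essentially the same reads, just spread over its own classes, so once you can spot the fread/ifstream calls you can follow the format along.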
There is a very old tutorial about loading 3D models that helped me start working with files:
http://www.spacesimulator.net/wiki/index.php?title=Tutorials:3ds_Loader
This is useful because you have part of the specification (as in the original documentation) and it shows how you can create a loader starting just from the specification. That's all you need. It's in C, but there is no big difference from C++ in this case.
If you need another simple file format specification with a related loader to make things clearer, you can also look at libktx and the KTX specification:
http://www.khronos.org/opengles/sdk/tools/KTX/file_format_spec/
If I remember correctly there's also an unofficial C++ KTX loader you can look at if you intend to write object-oriented C++ code rather than C.
I am involved in a C++ refactoring project, and sometimes there are differences in the results when there should be none. Currently, what I do is basically set a breakpoint at some place and then go through the program with F10/F11. The first problem is the size of the projects; traversing them takes a lot of time. Second, sometimes the differences are only at the end of a very big test sentence (say, 600 words), so just getting to the differing word is painfully slow.
1. Is it possible to write some kind of macro for Visual Studio which starts from the breakpoint and then goes step by step through the program until the end, printing some fields along the way?
2. Are there any neat tricks or tools to simplify the task?
Thanks!
You can create macros by using Tools > Macros > Macro IDE.
I prefer the following method because it's faster for me.
You can record macros using Tools > Macros > Record Temporary Macro.
Everything you type will then be recorded into a macro.
After you have recorded what you want to automate, you can edit the generated code using View > Other Windows > Macro Explorer. Your macro will be recorded under MyMacros > RecordingModule > TemporaryMacro in the Macro Explorer window; right-click it and select Edit.
One way to test, inside the macro, whether the debugged program has terminated:
While Not DTE.Debugger.CurrentProgram Is Nothing
I frequently encounter the following debugging scenario:
A tester provides some repro steps for a bug. To find out where the problem is, I play with these steps to get the minimum necessary repro steps. Sometimes, luckily, I find that when I make a minor change to the steps, the problem is gone.
Then the job turns into finding the difference in code flow between these two sets of repro steps. This job is tedious and painful, especially when you are working on a large code base and the scenario goes through a lot of code and involves lots of state changes you are not familiar with.
So I was wondering whether there are any tools available to compare "code flow". Having learned about the "wt" command in WinDbg, I thought it might be possible: for example, I could run "wt" on some outermost function with the two different sets of repro steps and then compare the outputs. Then it should be easy to find where the code flow starts to diverge.
But the problem with WinDbg is that "wt" is quite slow (maybe I should write to a log file instead of the screen) and not very user-friendly (compared with the Visual Studio debugger). So I want to ask: are there any existing tools available, or would it be possible (and how difficult) to develop a plug-in for the Visual Studio debugger to support this functionality?
Thanks
I'd run it under a profiler in "coverage" mode, then use diff on the results to see which parts of the code were executed in one run but not the other.
Sorry, I don't know of a tool which can do what you want, but even if it existed it doesn't sound like the quickest approach to finding out where the lower layer code is failing.
I would recommend instrumenting your code with high-level logging so you can tell which module fails, stalls, etc. In debug builds, your logger can write to a file, to the debug output window, etc.
In general, failing fast and using exceptions are good ways to find out easily where things go bad.
Doing something after the fact is not going to cut it, since your problem is reproducing it.
The issue with bugs is seldom some internal wackiness but usually what the user is actually doing. If you log all the commands that the user enters, they can simply send you the log. You can substitute button clicks, mouse selections, etc. This will have some cost, but certainly much less than something that keeps track of every method visited.
I am assuming that if you have a large application, you have good logging or tracing.
I work on a large server product with over 40 processes and over one million lines of code. Most of the time the error in the trace file is enough to identify the location of the problem. However, sometimes the error I see in the trace file is caused by some earlier code, and the reason for this can be hard to spot. Then I use a comparative debugging technique:
Reproduce the first scenario and copy the trace to a new file (if the application is multi-threaded, ensure you only keep the trace for the thread that does the work).
Reproduce the second scenario, copy the trace to a new file.
Remove the timestamps from the log files (I use awk or sed for this; see the sketch below).
Compare the log files with WinMerge or a similar tool, to see where and how they diverge.
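For example, if each line begins with a single space-delimited timestamp (treat this as a sketch and adjust the pattern to your actual trace format):

sed 's/^[^ ]* //' trace1.log > trace1.clean
sed 's/^[^ ]* //' trace2.log > trace2.clean

The two cleaned files can then be diffed or loaded into the merge tool directly.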
This technique can be a little time consuming, but it is much quicker than stepping through thousands of lines in the debugger.
Another useful technique is producing UML sequence diagrams from trace files. For this you need the function entry and exit positions logged consistently. Then write a small script to parse your trace files and use sequence.jar to produce UML diagrams as PNG files. This is a great way to understand the logic of code you haven't touched in a while. I wrapped a small awk script in a batch file: I just provide the trace file and the line number to start at, then it untangles the threads, generates the input text for sequence.jar, and runs it to create the UML diagram.