Qt Creator performance tuning - C++

I use Qt Creator whenever I can, although its performance is sometimes not great.
I have a feeling the situation gets worse with large source files; to put a number on it, I'll say over 1000 lines.
It seems that disabling a couple of helper plugins makes it use less CPU.
Is there a way to know the CPU usage of each plugin? Which plugins are the most CPU-hungry?
For now I'm going with the following, and CPU usage seems good (close to 1% almost all the time).

You can disable the ClangCodeModel plugin and Cppcheck to reduce CPU usage, but most of the processing is done by a background parser that tokenizes your source files and reads their symbols. Sometimes a third-party library contains a myriad of files and makes Qt Creator slow. You can also reduce the set of files that must be parsed in the "Clang Code Model" panel (Tools > Options > C++ > Code Model).
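For what it's worth, plugins can also be disabled per session from the command line; a rough sketch (the -noload option is real, but plugin names differ between Qt Creator versions, so check the list printed by qtcreator -version):

    # Hypothetical example: start Qt Creator without the Clang code model and
    # Cppcheck plugins for this session only (plugin names are version-dependent).
    qtcreator -noload ClangCodeModel -noload Cppcheck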

Related

Eclipse freeze on opening declaration in long file

I am using Eclipse CDT on Linux. I have a long header file with 5k lines of code. When I try to open the declaration of some variable in this file by pressing F3, Eclipse freezes for about 20 seconds and then opens the declaration. This makes code navigation unusable in a long file; in shorter files the declaration opens almost instantly.
I tried to restart Eclipse and rebuild the index but this did not help.
My Eclipse version is:
Version: Neon.1 (4.6.1)
Build id: Z20161111-1340
How can I work around this issue?
Due to the way CDT is architected, operations on larger files will be slower than operations on smaller files.
CDT obtains semantic information about the code for operations like Open Declaration from two places:
For the currently open file: from the AST (abstract syntax tree) it builds for that file.
For included header files and other files in the project: from the index, which is a searchable database of semantic information about the project.
The index is initially built by creating an AST for every file in the project, and storing information from them into a database. This is a time-consuming process, but it only has to be done once (and then it's incrementally updated every time you save a file), and once it's built, querying the index is fast (querying is about O(log n) in the size of the index).
On the other hand, since the AST represents code that is (potentially) being currently edited, it is constantly being rebuilt "as you type". Since building an AST is at least O(n) in the length of the file (possibly worse; I haven't done a careful analysis), operations that depend on the AST get slower as the length of the file you're editing increases.
Now, for workarounds:
Enabling some of the scalability settings in Preferences | C/C++ | Editor | Scalability may help, by restricting the kinds of operations that require building an AST for large files (notice you get to define the threshold for "large"). It's not immediately clear to me whether it will make Open Declaration faster; try it and see.
Your best bet, however, is to break your header up into smaller headers. This has the added advantage of reducing compile times (since not all translation units may need to include all parts of the header), and organizing your code better (this last one is a matter of taste; feel free to disagree).
Looks like this is Eclipse bug 455467.
The reason for the freeze is high CPU usage when opening a declaration.
I applied the workaround from Comment 5 and the freeze dropped to 1-5 seconds:
Changing all settings in
.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.cdt.codan.core.prefs
from RUN_AS_YOU_TYPE\=>true to RUN_AS_YOU_TYPE\=>false seems to help
us out of this but this is not really what we want.
As I understand it, this workaround partially disables Codan, the CDT static analysis framework.
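If you prefer to script the workaround from Comment 5 rather than editing the prefs file by hand, a hedged sketch (back the file up first; the exact key spelling in your workspace may differ from what is quoted above):

    # Flip every RUN_AS_YOU_TYPE entry from true to false in the Codan prefs.
    # The path and key text are taken from the workaround quoted above; verify
    # them against your own workspace before running this.
    PREFS=.metadata/.plugins/org.eclipse.core.runtime/.settings/org.eclipse.cdt.codan.core.prefs
    cp "$PREFS" "$PREFS.bak"
    sed -i 's/RUN_AS_YOU_TYPE\\=>true/RUN_AS_YOU_TYPE\\=>false/g' "$PREFS"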

Trying to draw a sequence diagram of a portion of a huge codebase

Goal
"drawing a sequence diagram about a small portion of huge code".
Background info
I have a huge source tree that is bigger than 2 GB.
The code is written in C/C++.
I have reviewed/understood less than 1% of the code.
I am using Eclipse / Vim on Ubuntu 12.10.
What I would like to know
Is there any automatic sequence diagram generator that could be used in the above case?
If I have to draw it manually, is there an easy way to figure out the messages between lifelines?
I tried to put logs here and there, but the code was too big -> fail.
I tried to follow the code by jumping through it in Eclipse (function A calls function B, B calls function C, and so on) -> also fail; too much code.
Doxygen can generate call graphs, which aren't the same as sequence diagrams but might actually be more helpful at this scale. You'll probably have to customize the configuration to get the right things to show up without Doxygen choking, but at least it's a tool that is designed to do this on C/C++ and has been used on production size code.
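If it helps, here is a sketch of a minimal Doxygen setup with call graphs enabled; these are standard Doxyfile options, but you will almost certainly need to tune the limits (and probably restrict the input) for a codebase this size:

    # Generate a default Doxyfile, then override the options needed for call
    # graphs. Later assignments in a Doxyfile override earlier ones, so simply
    # appending works.
    doxygen -g Doxyfile
    cat >> Doxyfile <<'EOF'
    # Appended overrides: enable call/caller graphs (HAVE_DOT needs Graphviz)
    # and keep the graphs small enough to stay readable on a huge codebase.
    EXTRACT_ALL         = YES
    RECURSIVE           = YES
    HAVE_DOT            = YES
    CALL_GRAPH          = YES
    CALLER_GRAPH        = YES
    DOT_GRAPH_MAX_NODES = 100
    EOF
    doxygen Doxyfile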

DWARF diff tool for debug info file

I have a binary that is tens of MB without debugging symbols, but with debugging symbols it's hundreds of MB. In the normal development cycle I copy the several-hundred-MB binary (with debug symbols) over a very slow link repeatedly. I am trying to minimize the amount of information I need to send to speed up the transfer.
I have researched binary diff tools like bsdiff and courgette, but the time and memory they take is prohibitive for me given the size of the binary and the frequency I'd like to be able to transfer it. As pointed out in responses, there are ways to mitigate the problem of needing to send the debug info to the remote host. gdbserver won't work in my use case because we'd also like the application to be able to log backtrace information with symbols. Using PC values and addr2line was considered, but keeping the source binary around can be confusing if trying to make forward progress while also testing on a remote machine. Additionally, with multiple developers, having access to debug info on some other developer's machine isn't always easy.
strip can separate out the binary from the debug info, so I was wondering if there are tools to compare and "diff" two debug info files, since that's where 95% of my space is anyway. Between iterations, a lot of the raw content in the debug info file is the same (i.e. the names and relationships, which are painfully verbose in C++).
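For context, the usual GNU binutils recipe for pulling the debug info out into its own file looks roughly like this ("app" is a placeholder name):

    # Copy the DWARF data into app.debug, strip it from the binary that ships,
    # and leave a .gnu_debuglink section so gdb can find the separate file.
    objcopy --only-keep-debug app app.debug
    objcopy --strip-debug app
    objcopy --add-gnu-debuglink=app.debug app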
Using the suggestion from user657267 I have also investigated using -gsplit-dwarf to separate out the .dwo files. This may work, but my concern is that over time core headers will change and cause minor changes to every .dwo file, so I'll end up transferring everything anyway assuming my "base" stays the same, even though most of the content of the .dwo file is unchanged. This could possibly be worked around in interesting ways (e.g. repository of .dwo files), but I'd like to avoid it if possible.
So, assuming I have a DWARF debug info file from a previous compilation, is there a way to compare it to the DWARF debug info file from the current compilation and get something smaller to transfer?
As an absolute last resort, I can write some kind of lookup and translation code. But are there convenient tools for viewing, modifying, and then "unmodifying" a DWARF debug info file? I have found tools like pyelftools and DWARF utilities; the former only reads the DIEs, and too slowly in my case, while the latter doesn't work well with C++, and I'm still investigating building it from the latest source.
Along these lines, I have investigated what the dwz tool announced here is doing, to see if it can be tweaked to borrow DIEs from an already existing (but stale) debug info file. Any tips, documents, or pseudo-code in this direction would also be helpful.
In the normal development cycle I have to copy my several 100 MB binary (with debug symbols) over a very slow link over and over again.
Have to?
Your use case screams for using remote debugging, where all the debug info stays on the development system, and you only have to transfer the stripped binary to the target.
Info about using gdbserver is here.
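A minimal sketch of that setup (host name, port, and binary names are placeholders; the unstripped binary never leaves the development machine):

    # On the target: run the stripped binary under gdbserver on TCP port 2345.
    gdbserver :2345 ./app.stripped

    # On the development machine: debug using the binary that still has symbols.
    gdb -ex "target remote target-host:2345" ./app.withsymbols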
If, for some reason you can't use gdbserver ...
From this link gcc.gnu.org/wiki/DebugFission, I still don't understand how having a separate dwarf file is going to help me diff easier?
With separate debug info, in a usual compile/debug cycle, you'll be recompiling only a few files, and relinking the final binary.
That means that most of the .o and .dwo files will not be rebuilt, which means that you wouldn't have to re-send the unchanged .dwo files to the target, i.e. you get incremental debug info updates "for free".
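Roughly, that workflow looks like this (the -gsplit-dwarf flag is real; the paths and the use of rsync for the incremental copy are my assumptions):

    # Compile with split DWARF: the debug info for each object goes into a
    # sibling .dwo file instead of the final binary.
    g++ -g -gsplit-dwarf -c foo.cpp -o foo.o     # also writes foo.dwo
    g++ foo.o bar.o -o app

    # Between iterations only the .dwo files of rebuilt objects change, so an
    # rsync of just the .dwo files transfers only the deltas.
    rsync -avm --include='*/' --include='*.dwo' --exclude='*' build/ target:/build/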
Update:
We also use the debug symbols to generate backtraces for exceptions in the running application. I think having the symbols locally is the only option for that use case.
Only if you insist on the backtrace being fully-symbolized with file/line info.
The usual way to deal with this is to have the backtrace contain only PC values (and perhaps function names), and use addr2line on the development system to recover file/line info when necessary.
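A sketch of that flow, assuming the application logs raw PC values (the address and binary name are placeholders; for position-independent executables you would first subtract the load base):

    # Resolve a logged PC value against the unstripped binary kept on the
    # development machine: print function, file:line, demangled, with inlines.
    addr2line -e app.withsymbols -f -C -i 0x4d2f31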

Handling really large multi-language projects

I am working on a really large multi-language project (1000+ classes, plus configs and scripts), with files distributed over network drives. I am having trouble fighting through the code, since the available tools are not helping. The main problem is finding things. For the C++ part: VS with VAX can only find files and symbols that are in the solution, and a lot of them are not. Same problem with ReSharper. Right now I am stuck doing unindexed string and file searches, which is highly inefficient on a network drive. I heard that Source Insight would be an option, since it allows you to just specify the folders that are part of the project and then indexes them, but my company won't spend money on it.
So my question is: what tools are available for fighting through an incredibly large amount of code? If possible, they should be low-cost or even free/open source.
Check out -
ctags
cscope
idutils
snavigator
In every one of these tools, you would have to invest(*) some time in reading the documentation, and then building your index. Consider switching to an editor that will work with these tools.
(*): I do mean invest, because it will reap dividends once you do.
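For example, building the indexes for the first two is a one-liner each when run from the source root (a sketch; most editors, e.g. Vim, pick the resulting files up with little or no configuration):

    # Build a tags file for the whole tree (jump-to-definition and the like).
    ctags -R .

    # Build a cscope database: recurse, build only (no interactive UI), and
    # create an inverted index for faster symbol lookups.
    cscope -R -b -q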
hope this helps,
If you need to maintain a large amount of code, you really should have a source code management system; many of them will help you find text by indexing all the files.
And most of them will work with various languages.
Otherwise you can install an indexer like Apache Lucene and index all your files...
You should take a look at LXR. This is used by many Linux kernel source listings.
Try ndexer (http://code.google.com/p/ndexer/), which promises to handle extremely large codebases!
The Perl program ack is also worth a look -- think of it as multi-file grep on steroids. The new version (in what I would call late beta) even lets you specify regexes for the files to process as well as regexes to search for -- a feature I've used extensively since it came out (I've got a subproject with 30k lines in 300+ classes, where this feature has been very helpful). You can even chain the new ack with itself so you can subselect the files to process.
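For instance (the identifier and filename regex are made up, and the exact flags vary a little between ack versions):

    # Search only C++ files for a pattern.
    ack --cpp 'setGeometry'

    # Select files by a filename regex first, then search only those files
    # (the -g / -x chaining needs a reasonably recent ack).
    ack -g 'Widget.*\.cpp$' | ack -x 'setGeometry'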
VS with VAX can only find files and symbols which are in the solution. A lot of them are not.
You can add all the files that are not in your solution and set them to not build in the settings. Your VS build will not be affected by this, but now VS knows about those files and you can search them along with your VS native files.

How to quickly debug when something is wrong in the code workflow?

I frequently encounter the following debugging scenario:
A tester provides reproduction steps for a bug. To find out where the problem is, I play with these steps to get the minimum necessary reproduction. Sometimes, luckily, I find that a minor change to the steps makes the problem go away.
Then the job becomes finding the difference in code workflow between the two sets of reproduction steps. This is tedious and painful, especially when you are working on a large code base and the flow goes through a lot of code and involves lots of state changes you are not familiar with.
So I was wondering whether there are any tools available to compare "code workflow". Having learned the "wt" command in WinDbg, I thought it might be possible: for example, I could run "wt" on some outermost function for the two different sets of reproduction steps and then compare the outputs. Then it should be easy to find where the code flow starts to diverge.
But the problem with WinDbg's "wt" is that it is quite slow (maybe I should write to a log file instead of the screen) and not very user-friendly compared with the Visual Studio debugger... So I want to ask: are there any existing tools available, or is it possible (and how difficult would it be) to develop a plug-in for the Visual Studio debugger to support this functionality?
Thanks
I'd run it under a profiler in "coverage" mode, then use diff on the results to see which parts of the code were executed in one run but not the other.
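One concrete way to do that with gcc's coverage instrumentation (a sketch under my own assumptions about file names and arguments; any profiler with a coverage mode works the same way):

    # Build with coverage instrumentation.
    g++ --coverage -O0 -g -c app.cpp            # writes app.o and app.gcno
    g++ --coverage app.o -o app

    # Run 1: the failing reproduction steps.
    ./app --repro-a                             # hypothetical arguments
    gcov app.cpp && mv app.cpp.gcov run_a.gcov
    rm -f app.gcda                              # reset counters for the next run

    # Run 2: the slightly changed steps that do not fail.
    ./app --repro-b
    gcov app.cpp && mv app.cpp.gcov run_b.gcov

    # Lines executed in one run but not the other show where the flows diverge.
    diff run_a.gcov run_b.gcov | less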
Sorry, I don't know of a tool which can do what you want, but even if it existed it doesn't sound like the quickest approach to finding out where the lower layer code is failing.
I would recommend instrumenting your layer's code with high-level logs so you can tell which module fails, stalls, etc. In debug builds, your logger can write to a file, to the output debug window, etc.
In general, failing fast and using exceptions are good ways to find out easily where things go bad.
Doing something after the fact is not going to cut it, since your problem is reproducing it.
The issue with bugs is seldom some internal wackiness but usually what the user is actually doing. If you log all the commands that the user enters, then they can simply send you the log. You can substitute button clicks, mouse selections, etc. This has some cost, but certainly much less than something that keeps track of every method visited.
I am assuming that if you have a large application, you have good logging or tracing.
I work on a large server product with over 40 processes and over one million lines of code. Most of the time the error in the trace file is enough to identify the location of the problem. However, sometimes the error I see in the trace file is caused by some earlier code, and the reason for this can be hard to spot. Then I use a comparative debugging technique:
Reproduce the first scenario and copy the trace to a new file (if the application is multi-threaded, make sure you only keep the trace for the thread that does the work).
Reproduce the second scenario and copy the trace to a new file.
Remove the timestamps from the log files (I use awk or sed for this; a sketch follows below).
Compare the log files with WinMerge or similar to see where and how they diverge.
This technique can be a little time-consuming, but it is much quicker than stepping through thousands of lines in the debugger.
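A sketch of steps 3 and 4 with command-line tools (the timestamp format is a placeholder, so adjust the regex to your traces; any diff or merge tool will do in place of WinMerge):

    # Strip a leading "HH:MM:SS.mmm " timestamp from every trace line so the
    # two runs can be compared structurally rather than by time.
    sed -E 's/^[0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]+ //' run1.trace > run1.txt
    sed -E 's/^[0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]+ //' run2.trace > run2.txt

    # The first point of divergence is where the two code flows split.
    diff -u run1.txt run2.txt | less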
Another useful technique is producing UML sequence diagrams from trace files. For this you need the function entry and exit points logged consistently. Then write a small script to parse your trace files and use sequence.jar to produce UML diagrams as PNG files. This is a great way to understand the logic of code you haven't touched in a while. I wrapped a small awk script in a batch file: I just provide the trace file and a line number to start, and it untangles the threads, generates the input text for sequence.jar, then runs it to create the UML diagram.