Is there a way to speed up GDB when it loads a very large C++ binary?
The binary in question is about 50MB in a release build (no debug symbols).
A debug build is bigger than 400MB, and GDB needs more than 2 minutes to load it.
Are there any settings that speed up loading the application into GDB?
Any help is much appreciated!
Environment: Ubuntu Karmic 64-bit with GDB v6.8, 32GB RAM, 8 CPU cores
Begin by upgrading to GDB 7.0.1. The GDB developers spent quite a bit of effort in 2009 making GDB 7.0 faster (and there is more to come in 7.1 :-)
Related
A Windows Qt C++ application crashes sometimes, and it seems to happen more often on slower machines.
On my main development machine (a 2019 MacBook Pro running Windows on Boot Camp) it almost never happens.
But on another laptop (8GB RAM, Intel Core i5 1.60GHz) it happens pretty much every time.
So much so that I replicated the development environment there to run the Qt Creator project in debug mode.
Then I got the following picture
What is the disassembler telling me?
Disassembler (RtlIsZeroMemory)
How can I further debug this?
I have Kubuntu 14.10 and 15.04 installed on my four computers, all with different hardware (the oldest machine was assembled in 2007 and the newest just a month ago; I have both 32- and 64-bit OSs installed; the amount of RAM varies from 4 to 32 GB). I have been using Code::Blocks on them for a few months, and I experience the same problem on all four machines: the integrated debugger is painfully slow when debugging a C++ program.
After the debugger stops at a breakpoint, it takes anywhere from 10 seconds to 5 minutes to step through a single line of code. While the debugger is performing a step, GDB loads one of my CPU cores at 100%. Often, trying to step through a line of code hangs forever; after that I have to kill GDB and the process being debugged.
Some time ago I updated GDB to version 7.9 (from 7.8) but this did not fix the problem. And I have no slowdown when debugging with GDB from command line, so I suspect that the problem is in the Code::Blocks debugger plugin.
I saw many complaints regarding similar problems, some of which were allegedly caused by outdated libc6-dbg (more exactly, by the fact that debug symbols were not shipped with Ubuntu and other Debian-based distributions), but reinstalling libc6-dbg did not help either.
I am afraid that after a day or two of trying to fix this problem I will give up and switch to Eclipse or some other IDE. It looks like Code::Blocks and its debugger plugin have not been updated for a couple of years (at least, their Linux versions). So maybe I should not use Code::Blocks at all, because its future is unclear (while Eclipse is likely to be in service for a long time).
I wonder whether anybody else experiences this problem and whether there are solutions. Overall, the Code::Blocks IDE looks decent and rather convenient, but this debugger problem prevents me from using it for anything other than writing code and compiling.
An update:
I ended up installing Eclipse for C++ (Luna release). It took some time to learn how to use it. It is slow, buggy, glitchy and uses a lot of RAM, but it at least allows me to debug my applications in IDE. Now I am 100% sure that the problem is in Code::Blocks debugger plugin.
I also tried NetBeans, and it seems to work fine, but it is even slower than Eclipse and looks really ugly. So I am going to stick with Eclipse for now, because no one seems willing to fix the debugger plugin in Code::Blocks.
The problem turned out to be with stepping through lines that declare uninitialized std::string objects. A similar (or the same) problem is described here:
https://sourceware.org/bugzilla/show_bug.cgi?id=12555
The problem with debugging in Code::Blocks was suddenly fixed when I followed these instructions:
http://wiki.eclipse.org/CDT/User/FAQ#How_can_I_inspect_the_contents_of_STL_containers.3F
on how to enable pretty-printing in Eclipse CDT.
I still need to follow these instructions on my other machines to make sure they fix the problem.
You can try turning off Code::Blocks pretty-printing: Settings->Debugger->Default->Enable Watch Scripts = Unchecked
I recently noticed that running a program inside gdb in windows makes it a lot slower, and I want to know why.
Here's an example:
It is a pure C++03 project, compiled with mingw32 (gcc 4.8.1, 32 bits).
It is statically linked against libstdc++ and libgcc, no other lib is used.
It is a CPU- and memory-intensive non-parallel process (a mesh editing operation, with lots of news and deletes and queries to data structures involved).
The problem is not start-up time, the whole process is painfully slow.
Debug build (-O0 -g2) runs in 8 secs outside gdb, but in 140 secs within gdb.
Tested from command line, just launching gdb and just typing "run" (no breakpoints defined).
I also tested a release build (optimized, and without debugging information), and it is still much slower inside gdb (3 secs vs 140 secs; yes, it takes the same time as the not optimized version inside gdb).
Tested with gdb 7.5 and 7.6 from mingw32 project, and with a gdb 7.8 compiled by me (all of them without python support).
I usually develop on a GNU/Linux box, and there I can't notice any speed difference between running with or without gdb.
I want to know what gdb is doing that makes the program run so slowly. I have some basic understanding of how a debugger works, but I cannot figure out what it is doing here, and googling didn't help me this time.
I've finally found the problem, thanks to greatwolf for asking me to test other debuggers. OllyDbg takes the same time as gdb, so it's not a gdb problem, it's a Windows problem. This tip changed my search criteria, and then I found this article* that explains the problem very well and gives a really simple solution: define an environment variable _NO_DEBUG_HEAP and set it to 1. This disables the special debug heap that Windows gives to processes launched under a debugger.
* Here's the link: http://preshing.com/20110717/the-windows-heap-is-slow-when-launched-from-the-debugger/
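For completeness, here is a sketch of how to apply the workaround, either from a Windows cmd prompt before launching gdb or from inside gdb itself (the program name is just an example):

```shell
rem From a Windows cmd prompt, before starting the debugger:
set _NO_DEBUG_HEAP=1
gdb myprogram.exe

rem Or, equivalently, from an already-running gdb session,
rem before typing "run":
rem (gdb) set environment _NO_DEBUG_HEAP 1
```

Note that the variable only affects processes launched by the debugger; attaching to an already-running process does not use the debug heap in the first place.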
I once had issues with gdb being incredibly slow and I remember disabling nls (native language support, i.e. the translations of all the messages) would remedy this.
The configure-time option is --disable-nls. I might have been mistaken about the true cause, but it's worth a shot for you anyway.
My bug report from back then is here, although the conclusion there was that I was mistaken. If you can provide further insight into this, that would be great!
I am running a heavy, memory-intensive job on Windows with 12 GB of RAM. By my computations, 4 GB of memory should be enough to run the program. I am running the program I've written with dynamic memory allocation (I have two versions of the program, in C and C++, using malloc/free and new/delete respectively) in Code::Blocks.
When I pull up Task Manager, I see that the program only seems to use about 2 GB of RAM, even though I have a lot more available, and the pagefile size is currently set to 30 GB. Is there any way I can get Code::Blocks to use more memory? I also tried Dev-C++, and I get the same bad_alloc error in the C++ code.
Any ideas? Thanks in advance.
Oh and I am using a 64-bit Windows 7.
Look at this page for memory limits based on architecture (x86, 64-bit) and Windows version. Some workarounds are mentioned:
https://learn.microsoft.com/en-us/windows/win32/memory/memory-limits-for-windows-releases#memory_limits
First you have to make sure you are building a 64-bit executable and not 32-bit.
If using g++, make sure you use option -m64.
As for large address awareness mentioned in the MSDN page, it should be active by default on 64-bit Windows systems.
Still, the Visual C++ linker has an option to explicitly ask for it: /LARGEADDRESSAWARE
Now, if you don't use the Visual C++ linker, it appears you can always run this as an extra step if you want to activate large address awareness for your executable:
editbin /LARGEADDRESSAWARE your_executable
(editbin is a Microsoft Visual Studio tool)
Thanks for all the help so far. There was a simple workaround: I installed the 64-bit MinGW compiler, pointed Code::Blocks to that compiler, and everything worked like a charm. Yay!
I've finally managed to run the QtCreator debugger on Windows after struggling with the Comodo Firewall incompatibilities.
I was hoping to switch from an older version of Qt and Visual C++ to the newest version of Qt and QtCreator, but the debugger performance is atrocious.
I have created a simple GUI with one window that does nothing else but display the window. After starting up QtCreator takes ~60MB RAM (Private bytes in Sysinternals process explorer).
When I start debugging, GDB is using 180MB. When I start examining the main window pointer, it jumps to 313MB. Every time I try to inspect something, one of the cores jumps to 100% usage and I have to wait a few seconds for the information to show. This is just a toy program, and I'm afraid that the real program I want to switch over will be much worse.
Is this kind of performance normal for MinGW? Would changing to the latest MinGW release improve things?
The Visual C++ IDE + debugger + a real-world program take close to just 100MB of RAM, and examining local variables is instantaneous.
Yesterday I built a copy of the Qt 4.5.2 libraries using MSVC 2008 and am using the QtCreator 1.2 MS CDB (Microsoft Console Debugger) support. It seems much faster than gdb. Building Qt for MSVC takes a few hours, but it might be worth trying.
Also, that means smaller Qt DLLs and EXEs as the MS compiler/linker is much better at removing unused code. Some of the Qt DLLs are less than half the size of their MinGW equivalents. Rumour has it that the C++ code the MS compiler generates is faster too.
I had to work with QtCreator a month ago. Its performance is awful: after 30 minutes of working with it, it starts to respond very slowly to everything. Maybe that's because it's still early in its development.