Valgrind on Code::Blocks (Linux)

I have already used Valgrind on small programs to check for memory leaks and it works well.
Now I have a big program with many classes and .cpp and .h files, and I'm trying to use Valgrind to check for memory leaks because I use a lot of pointers, memory, etc.
I'm using Linux and Code::Blocks 16.01 with gcc, and when I try to run Valgrind directly from Code::Blocks I get the following error:
--------------- Application output --------------
valgrind: /myPathToTheProject/ValgrindOut.xml: No such file or directory
If I test with a small project containing only a single .cpp file with main, it works fine and Valgrind generates the ValgrindOut.xml. With this big project I always get this error. Does anyone have an idea what is wrong, or know another way or tool to check for memory leaks?
EDIT - LEAK SUMMARY after running Valgrind
Leak summary:
definitely lost: 673 bytes in 6 blocks.
indirectly lost: 89,128 bytes in 68 blocks.
possibly lost: 232 bytes in 2 blocks.
still reachable: 80,944 bytes in 6 blocks.
suppressed: 0 bytes in 0 blocks.

I am not sure how to run Valgrind directly from Code::Blocks. I suggest you build your project in Code::Blocks and then run the executable under Valgrind with the command below.
Command
valgrind --tool=memcheck --leak-check=full --show-leak-kinds=all --log-file=leak.txt ./myexecutable <my command line arguments>
Example
valgrind --tool=memcheck --leak-check=full --show-leak-kinds=all --log-file=leak.txt ./myexecutable -i 192.168.1.10 -p 5000
This way you generate a Valgrind output file, leak.txt, that contains the memory leaks etc.
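If you specifically need the ValgrindOut.xml file that the Code::Blocks Valgrind plugin looks for, Valgrind can also write its report as XML directly. A sketch, assuming the plugin simply expects that file in your project directory (the exact options the plugin passes are not documented in the question):
valgrind --tool=memcheck --leak-check=full --xml=yes --xml-file=/myPathToTheProject/ValgrindOut.xml ./myexecutable <my command line arguments>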

valgrind does not show memory leak in log [duplicate]

This question already has an answer here:
valgrind, gcc 6.2.0 and "-fsanitize=address"
I have a service running locally. When I stop the service, ASAN gives me memory leak messages, so I tried to use Valgrind to find where the leak occurs, but it reports no such errors.
I run it with
valgrind --leak-check=full --show-leak-kinds=all --verbose --log-file=out.txt /my/path/to/myshell -m myservice.py
"/my/path/to/myshell -m myservice.py" is the way to start my service in my local.
myshell invokes a Python customer interpreter with os.execve
and after I stopped my service, I see ASAN is giving me a lot of messages about memory leak, but in out.txt, I see the pid, which is the same process as I run ps -ef, but there is no memory leak info at all. where is wrong?
If you intend to run the program under valgrind, then don't compile with ASAN. They do not work together.
Recompile without -fsanitize=address and try again.
You also need the --trace-children=yes flag to valgrind in order for it to check subprocesses executed by execve.
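Putting both suggestions together, a run could look like the following; this is only a sketch, assuming the service has been rebuilt without -fsanitize=address (keep -g so valgrind can still show line numbers):
# Follow the process started via os.execve as well; %p puts each PID in its own log file
valgrind --leak-check=full --show-leak-kinds=all --trace-children=yes --log-file=out.%p.txt /my/path/to/myshell -m myservice.py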

ctest does not find valgrind

Calling
ctest -j4 -DCTEST_MEMORYCHECK_COMMAND="/usr/bin/valgrind" -DMemoryCheckCommand="/usr/bin/valgrind" --output-on-failure -T MemCheck
says
Memory checker (MemoryCheckCommand) not set, or cannot find the specified program.
Why doesn't it find valgrind automatically, or even when it is specified manually?
As described on the CTest Wiki page, CTest reads the location of the memory check command (among other settings) from a file DartConfiguration.tcl in the build directory. One way to create the dart configuration file is to simply include the CTest CMake module in your CMakeLists.txt:
include (CTest)
The CTest module will find your valgrind installation in /usr/bin and put a variable MemoryCheckCommand pointing to it in the DartConfiguration.tcl file.
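A minimal CMakeLists.txt along these lines might look as follows; the project and target names are made up for illustration:
cmake_minimum_required(VERSION 3.10)
project(cpp_test_project CXX)

# Enables testing and writes DartConfiguration.tcl into the build directory,
# including a MemoryCheckCommand entry pointing at the valgrind found on the system.
include(CTest)

add_executable(cpp_test main.cpp)
add_test(NAME cpp_test COMMAND cpp_test)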
After installing valgrind with "apt-get install valgrind", the error is gone.
The error was
"Memory checker (MemoryCheckCommand) not set, or cannot find the specified program.
Errors while running CTest"
New result
Processing memory checking output:
1/1 MemCheck: cpp_test Defects: 1
MemCheck log files can be found here: (<#> corresponds to test number)
Memory checking results:
Memory Leak - 1
The detailed log is written under Testing/Temporary:
/build/Testing/Temporary# cat MemoryChecker.1.log
It shows the detailed leak information:
HEAP SUMMARY:
in use at exit: 8,000 bytes in 1 blocks
total heap usage: 2 allocs, 1 frees, 80,704 bytes allocated
You should then start over with cmake .., cmake --build . and ctest -T memcheck.
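For reference, a leak like the one reported above ("in use at exit: 8,000 bytes in 1 blocks") could come from something as simple as the following; this is a hypothetical reconstruction, not the actual cpp_test source:
// hypothetical main.cpp: allocates a buffer and never frees it
#include <iostream>

int main() {
    int* data = new int[2000];   // 2000 * 4 bytes = 8,000 bytes on a typical platform
    data[0] = 42;
    std::cout << data[0] << std::endl;
    return 0;                    // missing delete[] data, so memcheck reports a leak
}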

valgrind error with __builtin_ctz

I'm trying to profile my code but I am running into problems.
If I run the following code:
#include <cstddef>   // for size_t
#include <iostream>

int main() {
    size_t val = 8;
    std::cout << sizeof(val) << std::endl;
    std::cout << __builtin_ctz(val) << std::endl;
}
It returns as expected
8
3
If I run valgrind on it it returns:
==28602== Memcheck, a memory error detector
==28602== Copyright (C) 2002-2011, and GNU GPL'd, by Julian Seward et al.
==28602== Using Valgrind-3.7.0 and LibVEX; rerun with -h for copyright info
==28602== Command: ./test
==28602==
8
vex amd64->IR: unhandled instruction bytes: 0xF3 0xF 0xBC 0xC0 0x89 0xC6 0xBF 0x60
==28602== valgrind: Unrecognised instruction at address 0x400890.
==28602== at 0x400890: main (in /home/magu_/sod/test/test)
==28602== Your program just tried to execute an instruction that Valgrind
==28602== did not recognise. There are two possible reasons for this.
==28602== 1. Your program has a bug and erroneously jumped to a non-code
==28602== location. If you are running Memcheck and you just saw a
==28602== warning about a bad jump, it's probably your program's fault.
==28602== 2. The instruction is legitimate but Valgrind doesn't handle it,
==28602== i.e. it's Valgrind's fault. If you think this is the case or
==28602== you are not sure, please let us know and we'll try to fix it.
==28602== Either way, Valgrind will now raise a SIGILL signal which will
==28602== probably kill your program.
==28602==
==28602== Process terminating with default action of signal 4 (SIGILL)
==28602== Illegal opcode at address 0x400890
==28602== at 0x400890: main (in /home/magu_/sod/test/test)
==28602==
==28602== HEAP SUMMARY:
==28602== in use at exit: 0 bytes in 0 blocks
==28602== total heap usage: 0 allocs, 0 frees, 0 bytes allocated
==28602==
==28602== All heap blocks were freed -- no leaks are possible
==28602==
==28602== For counts of detected and suppressed errors, rerun with: -v
==28602== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 2 from 2)
Illegal instruction (core dumped)
Is this a bug in valgrind, or should I not use __builtin_ctz on my machine? __builtin_popcount does not raise any errors.
My system:
g++ (Ubuntu 4.8.1-2ubuntu1~12.04) 4.8.1
CPU : Intel Core Duo T7500
You need to upgrade valgrind to at least 3.8.1 or use a gcc older than v4.8.
The opcode you ran into -- F3 0F BC -- is the TZCNT instruction, introduced in BMI1, which your CPU doesn't implement. However, it is also REP;BSF (F3 is REP), and older CPUs, including yours, ignore the REP prefix for this opcode; the same holds for the similar LZCNT == REP;BSR pair. There is very little difference between TZCNT and BSF (they differ in how they handle 0).
Older gcc versions used BSF for older CPUs and TZCNT for newer ones, but since the opcode is relatively rare, in newer gcc versions the logic was simplified and TZCNT is always used, since both older and newer CPUs understand it.
Unfortunately, valgrind did not correctly fall back from TZCNT to BSF until version 3.8.1. See bug 295808.
On Debian/Sid/x86-64 (Intel i4750HQ processor) with gcc 4.9.1 (Debian 4.9.1-4) and valgrind 3.9.0 your test works fine (and valgrind runs successfully without reporting any errors).
So I suggest you upgrade your GCC compiler and, most importantly, valgrind. Start by compiling valgrind from its valgrind-3.9.0 source tarball (and run aptitude build-dep valgrind beforehand).
BTW, your distribution version is quite old. Did you consider upgrading to Ubuntu 14.04 LTS?
If you don't have root access, consider passing an explicit --prefix (e.g. $HOME/pub/) to valgrind-3.9.0/configure.
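A source build along those lines might look like this; the prefix directory is only an example:
sudo aptitude build-dep valgrind       # install the build dependencies
tar xjf valgrind-3.9.0.tar.bz2         # unpack the source tarball
cd valgrind-3.9.0
./configure --prefix=$HOME/pub         # custom install location if you lack root access
make
make install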

MPI and Valgrind not showing line numbers

I've written a large program and I'm having a really hard time tracking down a segmentation fault. I posted a question, but I didn't have enough information to go on (see the link below; if you follow it, note that I spent almost an entire day trying several times to come up with a minimal, compilable version of the code that reproduced the error, to no avail).
https://stackoverflow.com/questions/16025411/phantom-bug-involving-stdvectors-mpi-c
So now I'm trying my hand at valgrind for the first time. I just installed it (simply "sudo apt-get install valgrind") with no special installation to account for MPI (if there is any). I'm hoping for concrete information including file names and line numbers (I understand it's impossible for valgrind to provide variable names). While I am getting useful information, including
Invalid read of size 4
Conditional jump or move depends on uninitialised value(s)
Uninitialised value was created by a stack allocation
4 bytes in 1 blocks are definitely lost
in addition to this magical thing
Syscall param sched_setaffinity(mask) points to unaddressable byte(s) at 0x433CE77: syscall (syscall.S:31) Address 0x0 is not stack'd, malloc'd or (recently) free'd
I am not getting file names and line numbers. Instead, I get
==15095== by 0x406909A: ??? (in /usr/lib/openmpi/lib/libopen-rte.so.0.0.0)
Here's how I compile my code:
mpic++ -Wall -Wextra -g -O0 -o Hybrid.out (…file names)
Here are two ways I've executed valgrind:
valgrind --tool=memcheck --leak-check=full --track-origins=yes --log-file=log.txt mpirun -np 1 Hybrid.out
and
mpirun -np 1 valgrind --tool=memcheck --leak-check=full --track-origins=yes --log-file=log4.txt -v ./Hybrid.out
The second version based on instructions in
Segmentation faults occur when I run a parallel program with Open MPI
which, if I'm understanding the chosen answer correctly, appears to be contradicted by
openmpi with valgrind (can I compile with MPI in Ubuntu distro?)
I am deliberately running valgrind on one processor because that's the only way my program will execute to completion without the segmentation fault. I have also run it with two processors, and my program seg faulted as expected, but the log I got back from valgrind seemed to contain essentially the same information. I'm hoping that by resolving the issues valgrind reports on one processor, I'll magically solve the issue happening on more than one.
I tried to include "-static" in the program compilation as suggested in
Valgrind not showing line numbers in spite of -g flag (on Ubuntu 11.10/VirtualBox)
but the compilation failed, saying (in addition to several warnings)
dynamic STT_GNU_IFUNC symbol "strcmp" with pointer equality in '…' can not be used when making an executable; recompile with -fPIE and relink with -pie
I have not looked into what "-fPIE" and "-pie" mean. Also, please note that I am not using a makefile, nor do I currently know how to write one.
A few more notes: My code does not use the commands malloc, calloc, or new. I'm working entirely with std::vector; no C arrays. I do use commands like .resize(), .insert(), .erase(), and .pop_back(). My code also passes vectors to functions by reference and constant reference. As for parallel commands, I only use MPI_Barrier(), MPI_Bcast(), and MPI_Allgatherv().
How do I get valgrind to show the file names and line numbers for the errors it is reporting? Thank you all for your help!
EDIT
I continued working on it, and a friend of mine pointed out that the reports without line numbers are all coming from the MPI libraries, which I did not compile from source; since I did not compile them, they weren't built with -g and hence don't show line numbers. So I tried valgrind again based on this command,
mpirun -np 1 valgrind --tool=memcheck --leak-check=full --track-origins=yes --log-file=log4.txt -v ./Hybrid.out
but now for two processors, which is
mpirun -np 2 valgrind --tool=memcheck --leak-check=full --track-origins=yes --log-file=log4.txt -v ./Hybrid.out
The program ran to completion (I did not see the seg fault reported in the command line), but this execution of valgrind did give me line numbers within my files. The line valgrind points to is a line where I call MPI_Bcast(). Is it safe to say that this appeared because the memory problem only manifests itself on multiple processors (since I've run it successfully with -np 1)?
It sounds like you are using the wrong tool. If you want to know where a segmentation fault occurs, use gdb.
Here's a simple example. This program will segfault at *b=5
// main.c
int main(int argc, char** argv)
{
    int* b = 0;
    *b = 5;
    return *b;
}
To see what happened using gdb (the <---- parts explain the input lines):
svengali ~ % g++ -g -c main.c -o main.o # include debugging symbols in .o file
svengali ~ % g++ main.o -o a.out # executable is linked (no -g here)
svengali ~ % gdb a.out
GNU gdb (GDB) 7.4.1-debian
<SNIP>
Reading symbols from ~/a.out...done.
(gdb) run <--------------------------------------- RUNS THE PROGRAM
Starting program: ~/a.out
Program received signal SIGSEGV, Segmentation fault.
0x00000000004005a3 in main (argc=1, argv=0x7fffffffe2d8) at main.c:5
5 *b = 5;
(gdb) bt <--------------------------------------- PRINTS A BACKTRACE
#0 0x00000000004005a3 in main (argc=1, argv=0x7fffffffe2d8) at main.c:5
(gdb) print b <----------------------------------- EXAMINE THE CONTENTS OF 'b'
$2 = (int *) 0x0
(gdb)
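Since the crash in the question only shows up with more than one MPI rank, one common way to apply the same gdb approach (a suggestion beyond the answer above) is to start one gdb per rank, each in its own terminal window; this assumes an X session and that xterm is installed:
mpirun -np 2 xterm -e gdb ./Hybrid.out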

Is valgrind catching Qt 4.8 on Debian Wheezy leaking memory in minimalist app?

I've read several questions here where people run minimal Qt programs through valgrind, and post the results. The general verdict from looking over the output is "well, there are no actual leaks, it's just how Qt uses memory".
However, what I'm getting with a basically empty application looks...worse. I'm getting "definitely lost" leaks, for instance:
https://gist.github.com/3204769
==32147== LEAK SUMMARY:
==32147== definitely lost: 848 bytes in 11 blocks
==32147== indirectly lost: 1,756 bytes in 53 blocks
==32147== possibly lost: 1,720 bytes in 9 blocks
==32147== still reachable: 121,019 bytes in 2,257 blocks
==32147== suppressed: 0 bytes in 0 blocks
Running with:
valgrind --tool=memcheck --leak-check=yes --show-reachable=yes --num-callers=20 --track-fds=yes ./testing 2> valgrind.log
I'm a little bit bleeding edge with this setup, to try and get a relatively recent gcc that compiles C++11:
Debian Wheezy 3.2.0-2-686-pae
gcc (Debian 4.7.1-2) 4.7.1
If I do sudo kwrite --version I get:
Qt: 4.8.1
KDE Development Platform: 4.8.4 (4.8.4)
KWrite: 4.8.3 (4.8.3)
Anyone in a similar situation, or know what is going on here? :-/
Most of that stuff seems to be internal 'global' state of the libraries you are using. One can argue whether it is good style to clean up global resources at program termination, but it is probably OK if done right. I personally don't like it, as it makes detection of real leaks harder...
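If the remaining reports really are library-internal, one practical option (a suggestion beyond the answer above) is to suppress them so that leaks in your own code stand out. Valgrind can print a ready-made suppression block after each error it reports:
# Print a suppression block after each reported error; collect the blocks into qt.supp
valgrind --tool=memcheck --leak-check=yes --gen-suppressions=all ./testing 2> valgrind-raw.log
# Re-run with the suppressions applied
valgrind --tool=memcheck --leak-check=yes --suppressions=qt.supp ./testing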