valgrind - different results with helgrind vs leak-check - c++

I have some strange behaviour that I do not understand. The code is a bit complex, so I will refrain from posting it here and instead describe the behaviour, hoping that somebody who knows how valgrind works has an idea I can pursue despite this little information.
Background:
I am developing some additional functionality for an open-source, C/C++-based agent-based modelling platform (a fork; see my GitHub). Compilation is fine. Everything seems to work as it should so far, based on my validation with test programs. Also, valgrind does not report any errors of relevance. But reproducibility (which is crucial) is strange.
Within the framework one defines a model file (basically, the initialisation of a simulation run). Based on this file, one should be able to reproduce the exact same output (and platform-independently). In a way this works: if I start the simulation environment (GUI version), load the file and run it, it produces the same result each time. Also, using the command-line version, I get the same results each time.
But if, from a running instance of the simulation environment, I run the same model more than once, then the strange behaviour occurs - sometimes...
Compiler options used:
CC=g++
GLOBAL_CC=-march=native -std=gnu++14
SSWITCH_CC=-fnon-call-exceptions -Og -ggdb3 -Wall
The set-up:
I run the compiled program and, internally within that single run, execute a fixed simulation set-up three times. It should produce the exact same results each time, which I check by printing random numbers at different stages.
The strange behaviour:
Option #1:
When I run the program in valgrind using the options:
valgrind --leak-check=full --leak-resolution=high --show-reachable=yes
I do not get the same results internally
Report from Option 1:
Finished processing sim1
==6206==
==6206== HEAP SUMMARY:
==6206== in use at exit: 43 bytes in 1 blocks
==6206== total heap usage: 4,124,309 allocs, 4,124,308 frees, 888,390,511 bytes allocated
==6206==
==6206== 43 bytes in 1 blocks are still reachable in loss record 1 of 1
==6206== at 0x4C2DDCF: realloc (vg_replace_malloc.c:785)
==6206== by 0x5BE7FB2: getcwd (getcwd.c:84)
==6206== by 0x143391: lsdmain(int, char**) (lsdmain.cpp:203)
==6206== by 0x10C37D: main (main_gnuwin.cpp:29)
==6206==
==6206== LEAK SUMMARY:
==6206== definitely lost: 0 bytes in 0 blocks
==6206== indirectly lost: 0 bytes in 0 blocks
==6206== possibly lost: 0 bytes in 0 blocks
==6206== still reachable: 43 bytes in 1 blocks
==6206== suppressed: 0 bytes in 0 blocks
==6206==
==6206== For counts of detected and suppressed errors, rerun with: -v
==6206== ERROR SUMMARY: 0 errors from 0 contexts (suppressed: 0 from 0)
Option #2:
However, when I use the following option instead:
valgrind --tool=helgrind
I do get the same results each time, as with the command-line version. Interestingly, the first results with option #1 are the same as the results with option #2.
I would be happy for any suggestions. And, I am not a trained computer scientist... I am using an mt19937 (reinitialised each time) - but the initial random numbers between the simulations are the same, so I do not think the error resides there. However, later within the run the random numbers diverge in option #1 (this is my test, besides the time the simulation needs to find an equilibrium).
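For reference, the reinitialisation looks roughly like this (simplified; the seed value here is made up):
#include <random>

std::mt19937 rng;

void init_simulation_run()
{
    rng.seed( 12345 );   // same fixed seed before every run -> identical sequence of draws
}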

Finally, I could find the issue: at two points in the program I sort a temporary vector of pairs of distance values and pointers to objects located on a 2d space:
std::sort( vector.begin(), vector.end() ); // vector of std::pair<double, pointer>
The solution, obviously, is to only sort by the first item of the pair:
std::sort( vector.begin(),vector.end(), [](auto const &A, auto const &B ){return A.first < B.first; } );
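For illustration, here is a minimal, self-contained sketch of the two variants (the Agent type, the distances and the rest are made up for the example, not taken from the actual model code). With the default pair comparison, ties in the distance are broken by comparing the pointers, i.e. by addresses picked by the allocator, so the order can differ between runs; with the lambda, the result depends only on the distances and the (deterministic) insertion order:
#include <algorithm>
#include <cstdio>
#include <utility>
#include <vector>

struct Agent { int id; };   // stand-in for the objects on the 2d space

int main()
{
    std::vector<Agent*> agents = { new Agent{0}, new Agent{1}, new Agent{2} };
    std::vector<std::pair<double, Agent*>> byDist =
        { { 1.0, agents[0] }, { 1.0, agents[1] }, { 0.5, agents[2] } };

    // Problematic: operator< of std::pair also compares .second, i.e. the pointers.
    // std::sort( byDist.begin(), byDist.end() );

    // Deterministic (given a deterministic insertion order): compare only the distance.
    std::sort( byDist.begin(), byDist.end(),
               [](auto const &A, auto const &B ){ return A.first < B.first; } );

    for (auto const &p : byDist)
        std::printf("%.2f -> agent %d\n", p.first, p.second->id);

    for (Agent *a : agents)
        delete a;
}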
Some remarks on why I did not find this issue directly:
When I implemented this sort, I intended to make it "stable": the pointers of the objects are unique, so in different subsets the ordering would be the same, independently of how I add the items to the set.
I did not consider that pointer values are (not precisely, but in effect) random numbers outside of my control.
I did not see this because, somehow, the OS (or whatever) always assigns the same pointer values between different invocations of the program (I suppose there is a "virtual" address space that is initialised the same way each time). Because of this, I did not suspect that the pointers were the issue.
Curiously, when I ran the program with the --tool=helgrind option, the issue did not show up. One suggestion I got (offline) was that memcheck preinitialises memory with a given pattern; that would have explained it if uninitialised variables had been the cause. As it seems, helgrind also manages memory differently across the different scopes, giving each of my subsequent simulations a "fresh" virtual memory layout, such that my pointer sorting happened to be stable across the repeated loop.
I hope this helps somebody who runs into the same problem. Thanks for all the suggestions!

Related

Valgrind error with new array [duplicate]

I am getting an invalid read error when the src string ends with \n; the error disappears when I remove the \n:
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main (void)
{
    char *txt = strdup ("this is a not socket terminated message\n");
    printf ("%d: %s\n", strlen (txt), txt);
    free (txt);
    return 0;
}
valgrind output:
==18929== HEAP SUMMARY:
==18929== in use at exit: 0 bytes in 0 blocks
==18929== total heap usage: 2 allocs, 2 frees, 84 bytes allocated
==18929==
==18929== All heap blocks were freed -- no leaks are possible
==18929==
==18929== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
==18929==
==18929== 1 errors in context 1 of 1:
==18929== Invalid read of size 4
==18929== at 0x804847E: main (in /tmp/test)
==18929== Address 0x4204050 is 40 bytes inside a block of size 41 alloc'd
==18929== at 0x402A17C: malloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
==18929== by 0x8048415: main (in /tmp/test)
==18929==
==18929== ERROR SUMMARY: 1 errors from 1 contexts (suppressed: 0 from 0)
How to fix this without sacrificing the new line character?
It's not about the newline character, nor the printf format specifier. You've found what is arguably a bug in strlen(), and I can tell you must be using gcc.
Your program code is perfectly fine. The printf format specifier could be a little better, but it won't cause the valgrind error you are seeing. Let's look at that valgrind error:
==18929== Invalid read of size 4
==18929== at 0x804847E: main (in /tmp/test)
==18929== Address 0x4204050 is 40 bytes inside a block of size 41 alloc'd
==18929== at 0x402A17C: malloc (in /usr/lib/valgrind/vgpreload_memcheck-x86-linux.so)
==18929== by 0x8048415: main (in /tmp/test)
"Invalid read of size 4" is the first message we must understand. It means that the processor ran an instruction which would load 4 consecutive bytes from memory. The next line indicates that the address attempted to be read was "Address 0x4204050 is 40 bytes inside a block of size 41 alloc'd."
With this information, we can figure it out. First, if you replace that '\n' with a '$', or any other character, the same error will be produced. Try it.
Secondly, we can see that your string has 40 characters in it. Adding the \0 termination character brings the total bytes used to represent the string to 41.
Because we have the message "Address 0x4204050 is 40 bytes inside a block of size 41 alloc'd," we now know everything about what is going wrong.
strdup() allocated the correct amount of memory, 41 bytes.
strlen() attempted to read 4 bytes, starting at the 40th, which would extend to a non-existent 43rd byte.
valgrind caught the problem.
This is a glibc bug. Once upon a time, a project called Tiny C Compiler (TCC) was starting to take off. Coincidentally, glibc was completely changed so that the normal string functions, such as strlen(), no longer existed in their simple form. They were replaced with optimized versions which read memory using various methods, such as reading four bytes at a time. gcc was changed at the same time to generate calls to the appropriate implementations, depending on the alignment of the input pointer, the hardware compiled for, etc. The TCC project was abandoned when this change to the GNU environment made it so difficult to produce a new C compiler, by taking away the ability to use glibc for the standard library.
If you report the bug, glibc maintainers probably won't fix it. The reason is that, under practical use, this will likely never cause an actual crash. The strlen function reads bytes 4 at a time because it sees that the addresses are 4-byte aligned, and a 4-byte read from a 4-byte-aligned address never crosses a page boundary, so it cannot fault if reading 1 byte from that address would succeed. Therefore, the warning from valgrind doesn't reveal a potential crash, just a mismatch in assumptions about how to program. I consider valgrind technically correct, but I think there is zero chance that glibc maintainers will do anything to squelch the warning.
The error message seems to indicate that it is strlen that read past the malloc'ed buffer allocated by strdup. On a 32-bit platform, an optimal strlen implementation could read 4 bytes at a time into a 32-bit register and do some bit-twiddling to see if there is a null byte in there. If, near the end of the string, there are fewer than 4 bytes left but 4 bytes are still read to perform the null-byte check, then I could see this error getting printed. In that case, presumably the strlen implementer would know whether it is "safe" to do this on the particular platform, in which case the valgrind error is a false positive.
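To make the "4 bytes at a time" idea concrete, here is a sketch of the word-at-a-time trick such an implementation could use (illustrative only, not glibc's actual code; it assumes the input pointer is already 4-byte aligned, which a real implementation ensures with a preliminary byte loop):
#include <stddef.h>
#include <stdint.h>

size_t strlen_wordwise(const char *s)
{
    const uint32_t *w = (const uint32_t *) s;
    for (;;) {
        uint32_t v = *w;  /* reads 4 bytes at once; may touch up to 3 bytes past the '\0' */
        if ((v - 0x01010101u) & ~v & 0x80808080u)
            break;        /* at least one byte in this word is zero */
        ++w;
    }
    const char *p = (const char *) w;
    while (*p)            /* locate the exact position of the terminator */
        ++p;
    return (size_t) (p - s);
}
The 4-byte load on the last iteration is exactly the access valgrind reports as an "Invalid read of size 4", even though it never crosses a page boundary.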

How do I understand Invalid read in Valgrind, where address is bigger than the alloc'd block size

I am new to Valgrind. I got this Valgrind message:
==932767== Invalid read of size 16
==932767== at 0x3D97D2B9AA: __strcasecmp_l_sse42 (in /lib64/libc-2.12.so)
...
==932767== Address 0x8c3e170 is 9 bytes after a block of size 7 alloc'd
==932767== at 0x6A73B4A: malloc (vg_replace_malloc.c:296)
==932767== by 0x34E821195A: ???
Here I have two questions:
the allocated block is 7 bytes, so how come the address 0x8c3e170 is 9 bytes beyond it? Normally the accessed offset is smaller than the allocated size. Under what circumstances do we run into the above issue?
the invalid read size is 16 bytes. Does it include the 2 extra bytes implied by "Address 0x8c3e170 is 9 bytes after a block of size 7 alloc'd"?
If it weren't for the ellipsis, I would say the "Address 0x8c3e170..." message is directly related to the "Invalid read of size 16", because it is indented further.
It's possible to get false positives, so don't rule that out. For example, it is possible that strcasecmp reads more than it strictly needs to as an optimization: the _sse42 variant in your trace processes the strings in 16-byte SSE chunks, which matches the "read of size 16".
I read the 2nd message as the address being read from starts 9 bytes after the end of a block of size 7.
I have two suggestions, either of which will probably help you track this down:
1) Run your application under valgrind such that you can attach in a separate terminal window with gdb:
~ valgrind --vgdb=yes --vgdb-error=0 your_program
in another window:
~ gdb your_program
(gdb) target remote | vgdb
This option makes valgrind halt as though a breakpoint were set at every problem it finds.
2) Compile with the undefined and/or memory sanitizers, with either clang or gcc (4.9 or higher). They catch the same sorts of issues, but I find the error messages more informative.
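For example (file names hypothetical; MemorySanitizer is clang-only):
g++ -g -Og -fsanitize=address,undefined -o your_program your_program.cpp
clang++ -g -Og -fsanitize=memory -fno-omit-frame-pointer -o your_program your_program.cpp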

Regarding Possible Lost in Valgrind

What is wrong if we push strings into a vector like this:
globalstructures->schema.columnnames.push_back("id");
When I applied valgrind to my code, it showed
27 bytes in 1 blocks are possibly lost in loss record 7 of 19
and it shows "possibly lost" like that in so many places. Because of this, the allocations and frees do not match, which results in some strange error like
malloc.c:No such file or directory
Although I am using calloc to allocate memory everywhere in my code, I am getting warnings like
Syscall param write(buf) points to uninitialised byte(s)
The code causing that error is
datapage *dataPage = (datapage *)calloc(1, PAGE_SIZE);
writePage(dataPage, dataPageNumber);

int writePage(void *buffer, long pagenumber)
{
    int fd;

    fd = open(path, O_WRONLY, 0644);
    if (fd < 0)
        return -1;
    lseek(fd, pagenumber * PAGE_SIZE, SEEK_SET);
    if (write(fd, buffer, PAGE_SIZE) == -1)
        return false;
    close(fd);
    return true;
}
The exact error which I am getting when running it through gdb is...
Breakpoint 1, getInfoFromSysColumns (tid=3, numColumns=#0x7fffffffdf24: 1, typesVector=..., constraintsVector=..., lengthsVector=...,
columnNamesVector=..., offsetsVector=...) at dbheader.cpp:1080
Program received signal SIGSEGV, Segmentation fault.
_int_malloc (av=0x7ffff78bd720, bytes=8) at malloc.c:3498
3498 malloc.c: No such file or directory.
When I run the same thing through valgrind, it works fine...
Well,
malloc.c:No such file or directory
can occur while you are debugging with gdb and you use the command "s" instead of "n" near malloc, which essentially means you are trying to step into malloc, the source of which may not be available on your Linux machine.
That is perhaps the same reason why it is working fine with valgrind.
Why the error is in malloc: the problem is that you overwrote some memory buffer and corrupted one of the structures used by the memory manager.
Try running valgrind with --track-origins=yes to see where that uninitialized access comes from. If you believe the data should be initialized and it is not, maybe it came through a bad pointer; valgrind will show you exactly where the values were created. Probably those uninitialized values overwrote your buffer, including the memory manager's bookkeeping bytes.
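For example (program name hypothetical):
valgrind --track-origins=yes --leak-check=full ./your_program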
Also, review all valgrind warnings before the crash.

What do the numbers in the valgrind's outputs mean?

I have this output from valgrind:
==4836== 10,232 bytes in 1 blocks are still reachable in loss record 1 of 1
==4836== at 0x4C2779D: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==4836== by 0x401865: thrt() (main.cpp:221)
==4836== by 0x4048B1: main (tester.cpp:35)
what does ==4836== mean?
what does 0x4C2779D mean?
The answer to your first question: that number represents the process ID.
Look at the official source.
From this same source, we can see the answer to your second question:
The code addresses (eg. 0x804838F) are usually unimportant, but occasionally crucial for tracking down weirder bugs.

C/C++ maximum stack size of program on mainstream OSes

I want to do DFS on a 100 x 100 array. (Say the elements of the array represent graph nodes.) Assuming the worst case, the depth of recursive function calls can go up to 10000, with each call taking up to, say, 20 bytes. So is it feasible, i.e. is there a possibility of stack overflow?
What is the maximum size of stack in C/C++?
Please specify for gcc for both
1) cygwin on Windows
2) Unix
What are the general limits?
In Visual Studio the default stack size is 1 MB, I think, so with a recursion depth of 10,000 each stack frame can be at most ~100 bytes, which should be sufficient for a DFS algorithm.
Most compilers, including Visual Studio, let you specify the stack size. On some (all?) Linux flavours the stack size isn't part of the executable but a setting in the OS environment. You can then check the stack size with ulimit -s and set it to a new value with, for example, ulimit -s 16384.
Here's a link with default stack sizes for gcc.
DFS without recursion:
std::stack<Node> dfs;
dfs.push(start);
do {
    Node top = dfs.top();
    if (top is what we are looking for) {
        break;
    }
    dfs.pop();
    for (outgoing nodes from top) {
        dfs.push(outgoing node);
    }
} while (!dfs.empty());
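The pseudocode above omits a visited check, which a grid search needs in order to terminate. Here is a minimal, self-contained C++ sketch for a grid such as the 100 x 100 array in the question (the function and variable names are made up):
#include <stack>
#include <utility>
#include <vector>

constexpr int N = 100;                        // grid dimension from the question

// Visits every cell reachable from (startRow, startCol) via 4-neighbour moves.
void dfs_grid(int startRow, int startCol)
{
    std::vector<char> visited(N * N, 0);      // lives on the heap, not the call stack
    std::stack<std::pair<int, int>> stk;
    stk.push({ startRow, startCol });

    while (!stk.empty())
    {
        const std::pair<int, int> cell = stk.top();
        stk.pop();
        const int r = cell.first, c = cell.second;
        if (r < 0 || r >= N || c < 0 || c >= N || visited[r * N + c])
            continue;                         // off the grid or already seen
        visited[r * N + c] = 1;
        // ... process cell (r, c) here, e.g. test whether it is the target ...
        stk.push({ r + 1, c });
        stk.push({ r - 1, c });
        stk.push({ r, c + 1 });
        stk.push({ r, c - 1 });
    }
}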
Stacks for threads are often smaller.
You can change the default at link time, or at run time as well.
For reference, some defaults are:
glibc i386, x86_64: 7.4 MB
Tru64 5.1: 5.2 MB
Cygwin: 1.8 MB
Solaris 7..10: 1 MB
MacOS X 10.5: 460 KB
AIX 5: 98 KB
OpenBSD 4.0: 64 KB
HP-UX 11: 16 KB
Platform-dependent, toolchain-dependent, ulimit-dependent, parameter-dependent.... It is not at all specified, and there are many static and dynamic properties that can influence it.
Yes, there is a possibility of stack overflow. The C and C++ standards do not dictate things like stack depth; those are generally an environmental issue.
Most decent development environments and/or operating systems will let you tailor the stack size of a process, either at link or load time.
You should specify which OS and development environment you're using for more targeted assistance.
For example, under Ubuntu Karmic Koala, the default for gcc is 2M reserved and 4K committed but this can be changed when you link the program. Use the --stack option of ld to do that.
I just ran out of stack at work; it was a database and it was running some threads. Basically, the previous developer had put a big array on the stack, and the stack was low anyway. The software was compiled using Microsoft Visual Studio 2015.
Even though the thread had run out of stack, it silently failed and continued on; it only stack-overflowed when it came to access the contents of the data on the stack.
The best advice I can give is to not declare arrays on the stack - especially in complex applications and particularly in threads - instead, use the heap. That's what it's there for ;)
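As a small illustration of that advice (the size and names here are made up):
#include <vector>

void worker_thread()
{
    // double big[1000000];              // ~8 MB in this thread's stack frame: likely overflow
    std::vector<double> big(1000000);    // ~8 MB on the heap instead: fine
    big[0] = 42.0;
}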
Also, keep in mind that it may not fail immediately when the array is declared, but only on access. My guess is that the compiler reserves stack under Windows "optimistically", i.e. it assumes the stack space has been declared and is sufficiently sized until it comes to use it, and then finds out that the stack isn't there.
Different operating systems may have different stack declaration policies. Please leave a comment if you know what these policies are.
I am not sure what you mean by doing a depth first search on a rectangular array, but I assume you know what you are doing.
If the stack limit is a problem, you should be able to convert your recursive solution into an iterative solution that pushes intermediate values onto a stack which is allocated from the heap.
(Added 26 Sept. 2020)
On 24 Oct. 2009, as @pixelbeat first pointed out here, Bruno Haible empirically discovered the following default thread stack sizes for several systems. He said that in a multithreaded program, "the default thread stack size is" as follows. I added the "Actual" size column because @Peter.Cordes indicates in his comments below my answer that the odd tested numbers shown below do not include all of the thread stack, since some of it was used in initialization. If I run ulimit -s to see "the maximum stack size" that my Linux computer is configured for, it outputs 8192 kB, which is exactly 8 MB, not the odd 7.4 MB listed in the table below for my x86-64 computer with the gcc compiler and glibc. So, you can probably add a little to the numbers in the table below to get the actual full stack size for a given thread.
Note also that the "Tested" column units below are all MB and KB (base-1000 numbers), NOT MiB and KiB (base-1024 numbers). I've proven this to myself by verifying the 7.4 MB case.
Thread stack sizes
System and std library Tested Actual
---------------------- ------ ------
- glibc i386, x86_64 7.4 MB 8 MiB (8192 KiB, as shown by `ulimit -s`)
- Tru64 5.1 5.2 MB ?
- Cygwin 1.8 MB ?
- Solaris 7..10 1 MB ?
- MacOS X 10.5 460 KB ?
- AIX 5 98 KB ?
- OpenBSD 4.0 64 KB ?
- HP-UX 11 16 KB ?
Bruno Haible also stated that:
32 KB is more than you can safely allocate on the stack in a multithreaded program
And he said:
And the default stack size for sigaltstack, SIGSTKSZ, is
only 16 KB on some platforms: IRIX, OSF/1, Haiku.
only 8 KB on some platforms: glibc, NetBSD, OpenBSD, HP-UX, Solaris.
only 4 KB on some platforms: AIX.
Bruno
He wrote the following simple Linux C program to empirically determine the above values. You can run it on your system today to quickly see what your maximum thread stack size is, or you can run it online on GDBOnline here: https://onlinegdb.com/rkO9JnaHD.
Explanation: it creates a single new thread, so as to check the thread stack size and NOT the program stack size, in case they differ. That thread then repeatedly allocates 128 bytes of memory on the stack (NOT the heap) using the alloca() call, writes a 0 to the first byte of each new block, and prints out how many total bytes it has allocated. It repeats this, allocating 128 more bytes on the stack each time, until the program crashes with a Segmentation fault (core dumped) error. The last value printed is the estimated maximum thread stack size allowed for your system.
Important note: alloca() allocates on the stack. Even though this looks like dynamic memory allocation onto the heap, similar to a malloc() call, alloca() does NOT dynamically allocate onto the heap. Rather, alloca() is a specialized function to "pseudo-dynamically" (I'm not sure what I'd call this, so that's the term I chose) allocate directly onto the stack as though it was statically-allocated memory. Stack memory used and returned by alloca() is scoped at the function level, and is therefore "automatically freed when the function that called alloca() returns to its caller." That's why memory allocated by alloca() is NOT freed each time a for-loop iteration completes and the end of the for-loop scope is reached: the enclosing function has not returned yet. See man 3 alloca for details. Here's the pertinent quote (emphasis added):
DESCRIPTION
The alloca() function allocates size bytes of space in the stack frame of the caller. This temporary space is automatically freed when the function that called alloca() returns to its caller.
RETURN VALUE
The alloca() function returns a pointer to the beginning of the allocated space. If the allocation causes stack overflow, program behavior is undefined.
Here is Bruno Haible's program from 24 Oct. 2009, copied directly from the GNU mailing list here:
Again, you can run it live online here.
// By Bruno Haible
// 24 Oct. 2009
// Source: https://lists.gnu.org/archive/html/bug-coreutils/2009-10/msg00262.html
// =============== Program for determining the default thread stack size =========
#include <alloca.h>
#include <pthread.h>
#include <stdio.h>

void* threadfunc (void*p) {
    int n = 0;
    for (;;) {
        printf("Allocated %d bytes\n", n);
        fflush(stdout);
        n += 128;
        *((volatile char *) alloca(128)) = 0;
    }
}

int main()
{
    pthread_t thread;
    pthread_create(&thread, NULL, threadfunc, NULL);
    for (;;) {}
}
When I run it on GDBOnline using the link above, I get the exact same results each time I run it, as both a C and a C++17 program. It takes about 10 seconds or so to run. Here are the last several lines of the output:
Allocated 7449856 bytes
Allocated 7449984 bytes
Allocated 7450112 bytes
Allocated 7450240 bytes
Allocated 7450368 bytes
Allocated 7450496 bytes
Allocated 7450624 bytes
Allocated 7450752 bytes
Allocated 7450880 bytes
Segmentation fault (core dumped)
So, the thread stack size is ~7.45 MB for this system, as Bruno mentioned above (7.4 MB).
I've made a few changes to the program, mostly just for clarity, but also for efficiency, and a bit for learning.
Summary of my changes:
[learning] I passed in BYTES_TO_ALLOCATE_EACH_LOOP as an argument to the threadfunc() just for practice passing in and using generic void* arguments in C.
Note: This is also the required function prototype, as required by the pthread_create() function, for the callback function (threadfunc() in my case) passed to pthread_create(). See: https://www.man7.org/linux/man-pages/man3/pthread_create.3.html.
[efficiency] I made the main thread sleep instead of wastefully spinning.
[clarity] I added more-verbose variable names, such as BYTES_TO_ALLOCATE_EACH_LOOP and bytes_allocated.
[clarity] I changed this:
*((volatile char *) alloca(128)) = 0;
to this:
volatile uint8_t * byte_buff =
(volatile uint8_t *)alloca(BYTES_TO_ALLOCATE_EACH_LOOP);
byte_buff[0] = 0;
Here is my modified test program, which does exactly the same thing as Bruno's, and even has the same results:
You can run it online here, or download it from my repo here. If you choose to run it locally from my repo, here are the build and run commands I used for testing:
Build and run it as a C program:
mkdir -p bin && \
gcc -Wall -Werror -g3 -O3 -std=c11 -pthread -o bin/tmp \
onlinegdb--empirically_determine_max_thread_stack_size_GS_version.c && \
time bin/tmp
Build and run it as a C++ program:
mkdir -p bin && \
g++ -Wall -Werror -g3 -O3 -std=c++17 -pthread -o bin/tmp \
onlinegdb--empirically_determine_max_thread_stack_size_GS_version.c && \
time bin/tmp
It takes < 0.5 seconds to run locally on a fast computer with a thread stack size of ~7.4 MB.
Here's the program:
// =============== Program for determining the default thread stack size =========
// Modified by Gabriel Staples, 26 Sept. 2020
// Originally by Bruno Haible
// 24 Oct. 2009
// Source: https://lists.gnu.org/archive/html/bug-coreutils/2009-10/msg00262.html
#include <alloca.h>
#include <pthread.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h> // sleep
/// Thread function to repeatedly allocate memory within a thread, printing
/// the total memory allocated each time, until the program crashes. The last
/// value printed before the crash indicates how big a thread's stack size is.
///
/// Note: passing in a `uint32_t` as a `void *` type here is for practice,
/// to learn how to pass in ANY type to a func by using a `void *` parameter.
/// This is also the required function prototype, as required by the
/// `pthread_create()` function, for the callback function (this function)
/// passed to `pthread_create()`. See:
/// https://www.man7.org/linux/man-pages/man3/pthread_create.3.html
void* threadfunc(void* bytes_to_allocate_each_loop)
{
    const uint32_t BYTES_TO_ALLOCATE_EACH_LOOP =
        *(uint32_t*)bytes_to_allocate_each_loop;

    uint32_t bytes_allocated = 0;
    while (true)
    {
        printf("bytes_allocated = %u\n", bytes_allocated);
        fflush(stdout);
        // NB: it appears that you don't necessarily need `volatile` here,
        // but you DO definitely need to actually use (ex: write to) the
        // memory allocated by `alloca()`, as we do below, or else the
        // `alloca()` call does seem to get optimized out on some systems,
        // making this whole program just run infinitely forever without
        // ever hitting the expected segmentation fault.
        volatile uint8_t * byte_buff =
            (volatile uint8_t *)alloca(BYTES_TO_ALLOCATE_EACH_LOOP);
        byte_buff[0] = 0;
        bytes_allocated += BYTES_TO_ALLOCATE_EACH_LOOP;
    }
}

int main()
{
    const uint32_t BYTES_TO_ALLOCATE_EACH_LOOP = 128;

    pthread_t thread;
    pthread_create(&thread, NULL, threadfunc,
                   (void*)(&BYTES_TO_ALLOCATE_EACH_LOOP));

    while (true)
    {
        const unsigned int SLEEP_SEC = 10000;
        sleep(SLEEP_SEC);
    }

    return 0;
}
Sample output (same results as Bruno Haible's original program):
bytes_allocated = 7450240
bytes_allocated = 7450368
bytes_allocated = 7450496
bytes_allocated = 7450624
bytes_allocated = 7450752
bytes_allocated = 7450880
Segmentation fault (core dumped)