OMNeT++: std::bad_alloc - C++

I run simulations based on ad hoc networks, and when the number of nodes is large (100 nodes) and the simulation time is long (more than 300 s), I get the following error:
Error in module (MobileOverlay)
MobilePeerNetwork.MobilePeer[73].overlay.moverlay (id=3023) at event
#508013243, t=372.42387824: std::bad_alloc: std::bad_alloc.
I would like to know if there is a way to find the exact position of the problem without using Valgrind, because I work on Windows 7.

You are getting this error because new (or new[]) failed to allocate the requested storage during object creation.
So either your hardware cannot support the simulation you are trying to run, or you forgot to free memory allocated earlier, and your machine runs out of memory.
Here is a useful post: "std::bad_alloc": am I using too much memory?.
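If you cannot run Valgrind, one cheap way to at least see where the failing allocation happens is to install a new-handler and abort inside it; the debugger then stops with the call stack of the allocation that failed. A minimal sketch (the handler name is my own):

#include <cstdlib>
#include <iostream>
#include <new>

// Called by the runtime whenever operator new cannot satisfy a request,
// just before std::bad_alloc would be thrown.
void outOfMemory()
{
    std::cerr << "operator new failed: out of memory\n";
    std::abort(); // abort here so the debugger shows the failing call stack
}

int main()
{
    std::set_new_handler(outOfMemory);
    // ... run the simulation; any failing new ends up in outOfMemory() ...
}

This will not tell you which object is leaking, only where allocation finally fails, but it is a starting point on Windows.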
Based on the OMNeT++ guide:
Profiling support is based on the valgrind program,
http://valgrind.org. Valgrind is a suite of tools for debugging and
profiling on Linux. It can automatically detect various memory access
and memory management bugs, and perform detailed profiling of your
program. Valgrind support is brought into the OMNeT++ IDE by the Linux
Tools Project of Eclipse, currently in incubation state.
So the initial suggestion would be to switch to a Linux machine (maybe a virtual machine) to get your work done. In the long run, using OMNeT++ on Linux will benefit you much more.
Obviously, you are looking for a quick (and possibly dirty) solution, so you can refer to this: Windows Eclipse CDT profiler

Related

Troubleshoot C++ program memory usage issue

I am writing a C++ program and find that it consumes too much memory. I would like to know which part of the program consumes the most memory; ideally, I would like to know what percentage of memory is consumed by which kinds of C++ objects at a particular moment.
In Java, I know of tools like Eclipse Memory Analyzer (https://www.eclipse.org/mat/) which can take a heap dump and show/visualize such memory usage, and I wonder if the same can be done for a C++ program. For example, I would expect a tool/approach that lets me know a particular vector<shared_ptr<MyObject>> is holding 30% of the memory.
Note:
I develop the program mainly on macOS (compiling with Apple Clang), so it would be better if the approach works on macOS. But I deploy to Linux as well (compiling with GCC), so approaches/tools on Linux are okay too.
I tried using Apple's Instruments for this purpose, but so far I can only use it to find memory allocation issues. I have no idea how to figure out the program's memory consumption at a particular moment (the consumption should be tied back to the C++ objects in the program, so that I can take action to reduce it accordingly).
I haven't found an easy way to visualize/summarize each part of my program's memory yet. So far, the best tool/approach I have found is Apple's Instruments (if you are on macOS).
In Instruments, use the Allocations profiling template. With that template selected, choose File ==> Recording Options ==> and check the Discard events for freed memory option.
You will then be able to see the unfreed memory (i.e. the data still resident in memory) during the allocation recording. With your program's debug symbols loaded, you can see which functions performed the allocations that remain.
Although this doesn't address all the issues, it does help to identify part of the problem.
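If you need numbers grouped by your own categories rather than by call stack, another option is to tally allocations yourself by replacing the global allocation functions. This is a minimal sketch of the idea (it ignores the aligned-new overloads and is not a substitute for a real profiler):

#include <atomic>
#include <cstddef>
#include <cstdio>
#include <cstdlib>
#include <new>

std::atomic<std::size_t> g_live_bytes{0}; // bytes currently allocated

void* operator new(std::size_t size)
{
    // Over-allocate so the block size can be stored in front of the
    // user data; the max_align_t-sized header keeps alignment intact.
    void* raw = std::malloc(size + alignof(std::max_align_t));
    if (!raw) throw std::bad_alloc{};
    *static_cast<std::size_t*>(raw) = size;
    g_live_bytes += size;
    return static_cast<char*>(raw) + alignof(std::max_align_t);
}

void operator delete(void* ptr) noexcept
{
    if (!ptr) return;
    void* raw = static_cast<char*>(ptr) - alignof(std::max_align_t);
    g_live_bytes -= *static_cast<std::size_t*>(raw);
    std::free(raw);
}

// Call this at interesting moments to see how much heap is live.
void report_live_bytes()
{
    std::printf("live heap: %zu bytes\n", g_live_bytes.load());
}

The default operator new[] and operator delete[] forward to these two, so array allocations are counted as well.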

How to get Valgrind to not instrument a specific shared object?

I'm using a proprietary library to import data, it uses the GPU (OpenGL) to transform the data.
Running my program through Valgrind (memcheck) causes the data import to take 8-12 hours (instead of a fraction of a second). I need to do my Valgrind sessions overnight (and leave my screen unlocked all night, since the GPU stuff pauses while the screen is locked). This is causing a lot of frustration.
I'm not sure if this is related, but Valgrind shows thousands of out-of-bound read/write errors in the driver for my graphics card:
==10593== Invalid write of size 4
==10593== at 0x9789746: ??? (in /usr/lib/x86_64-linux-gnu/dri/i965_dri.so)
(I know how to suppress those warnings).
I have been unable to find any way of selectively instrumenting code, or of excluding certain shared libraries from instrumentation. I remember using a tool on Windows 20 or so years ago that could skip instrumenting selected binaries. It seems this is not possible with Memcheck:
Is it possible to make valgrind ignore certain libraries? -- 2010, says this is not possible.
Can I make valgrind ignore glibc libraries? -- 2010, solutions are to disable warnings.
Restricting Valgrind to a specific function -- 2011, says it's not possible.
using valgrind at specific point while running program -- 2013, no answers.
Valgrind: disable conditional jump (or whole library) check -- 2013, solutions are to disable warnings.
...unless things have changed in the last 6 or 7 years.
My question is: Is there anything at all that can be done to speed up the memory check? Or to not check memory accesses in certain parts of the program?
Right now the only solution I see is to modify the program to read data directly from disk, but I'd rather test the actual program I'm planning to deploy. :)
No, this is not possible. When you run an application under Valgrind it is not running natively under the OS but rather in a virtual environment.
Some of the tools like Callgrind have options to control the instrumentation. However, even with the instrumentation off the application under test is still running under the Valgrind virtual environment.
There are a few things you can do to make things less slow:
Test an optimized build of your application. You may lose some line-number accuracy as a result, however.
Turn off leak detection (--leak-check=no).
Avoid costly options like --track-origins=yes.
The sanitizers (such as AddressSanitizer) are faster and can also detect stack overflows, but at the cost of requiring recompilation with instrumentation.
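For example, AddressSanitizer pinpoints a use-after-free at the faulting access, reporting both the access site and the free site (assumed build flags shown in the comment):

// Build: g++ -g -fsanitize=address asan_demo.cpp
#include <iostream>

int main()
{
    int* p = new int[8];
    delete[] p;
    std::cout << p[0] << '\n'; // heap-use-after-free: reported with both
                               // this access and the original delete[] site
    return 0;
}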

Memory counter - Collision Detection Project

I thought I would ask the experts - see if you can help me :o)
My son has written C++ code for Collision Detection using Brute Force and Octree algorithms.
He has used Debug etc., and to collect stats on memory usage he has used Windows Task Manager, which has given him all the end results he has needed so far. The results are not yet as they were expected to be (that the Octree would use more memory overall).
His tutor has suggested he check memory once each algorithm is "initialised" and then plot it at points through the test.
He was pointed in the direction of Valgrind... but it looked quite complicated, and because he has autism, he is worried that it might affect his programmes :o)
Can anyone suggest a simple way to grab the information on memory, if not also frame rate and CPU usage?
Any help gratefully received, as I know nothing so can't help him at all, except for typing this on here - as it's the "social" environment he can't deal with.
Thanks
Rosalyn
For the memory leaks:
If you're on Windows, Visual C++ by Microsoft (the Express version is free) has a nice tool for debugging that is easy to set up; instructions can be found here. Otherwise, if you're on Linux, Valgrind is one of the standards. I have used the Visual C++ tool often, and it's a nice verification that you have no memory leaks. You can also use it to make your program break on allocation numbers taken from the memory-leak log, so it quickly points you to when and where the leaking memory is being assigned. Again, it's easy to implement (just a few header files and then a single function call where you want to dump the leaks).
I have found the best way to use the VC++ tool is to make the call that dumps the memory leaks to the output window right before main returns. That way, you can catch leaks from absolutely everything in your program. This works very well, and I have used it for some advanced software.
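For reference, a minimal sketch of that setup (assuming a Visual C++ debug build, /MDd or /MTd):

// The macro must be defined before the includes so malloc/free report
// file and line information where possible.
#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>

int main()
{
    // From an earlier leak report you can take an allocation number and
    // break on it the next run, e.g. _CrtSetBreakAlloc(123);

    int* leaked = new int[100]; // deliberately never freed
    (void)leaked;

    _CrtDumpMemoryLeaks(); // dump the leak report to the Output window,
    return 0;              // right before main returns
}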
For the framerate and CPU usage:
I usually use my own tools for benchmarking, since they're not difficult to code once you learn which functions to call; this usually requires OS API calls, but I think Boost has cross-platform equivalents. There might be other tools out there that can track the process in the OS to gather benchmarking data as well, but I'm not certain whether they would be free.
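For the frame-rate part, a minimal portable sketch using std::chrono (the function name is my own) could look like this; call it once per frame from the main loop:

#include <chrono>
#include <cstdio>

void count_frame()
{
    using clock = std::chrono::steady_clock;
    static clock::time_point window_start = clock::now();
    static int frames = 0;

    ++frames;
    auto elapsed = clock::now() - window_start;
    if (elapsed >= std::chrono::seconds(1)) {
        double secs = std::chrono::duration<double>(elapsed).count();
        std::printf("%.1f FPS\n", frames / secs); // print once per second
        frames = 0;
        window_start = clock::now();
    }
}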
It looks like you're running on a Windows system. This isn't a programming solution, and you may have already tried it (so feel free to ignore this), but if not, you should take a look at Performance Monitor (one of the tools that ships with Windows). It lets you track all sorts of useful stats about individual processes and the system as a whole (CPU, commit size, etc.). It plots the results as a graph while the program is running, and you can save the results for future viewing.
On Windows 7, you get to it from here:
Control Panel\All Control Panel Items\Performance Information and Tools\Advanced Tools
Then Open Performance Monitor.
For older versions of Windows, it used to be one of the Administrative Tools options.

Finding heap corruption

This is an extension of my previous question, Application crash with no explanation.
I have a lot of crashes that are presumably caused by heap corruption on an application server. These crashes only occur in production; they cannot be reproduced in a test environment.
I'm looking for a way to track down these crashes.
Application Verifier was suggested, and it would be fine, but it's unusable with our production server. When we try to start the application in production under Application Verifier, it becomes so slow that it's completely unusable, even though this is a fairly powerful server (64-bit application, 16 GB memory, 8 processors). Running without Application Verifier, it uses only about 1 GB of memory and no more than 10-15% of any processor's cycles.
Are there any other tools that will help find heap corruption, without adding a huge overhead?
Use the debug version of the Microsoft runtime libraries. Turn on red-zoning and get your heap automatically checked every 128 (say) heap operations by calling _CrtSetDbgFlag() once during initialisation.
_CRTDBG_DELAY_FREE_MEM_DF can be quite useful for finding use-after-free bugs, but your heap size grows monotonically while using it.
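A minimal sketch of turning those checks on once during initialisation (debug CRT build assumed):

#include <crtdbg.h>

void enable_heap_checks()
{
    int flags = _CrtSetDbgFlag(_CRTDBG_REPORT_FLAG); // read current flags
    flags |= _CRTDBG_CHECK_EVERY_128_DF;  // validate the heap every 128 ops
    flags |= _CRTDBG_DELAY_FREE_MEM_DF;   // keep freed blocks around to
                                          // catch use-after-free (heap grows)
    flags |= _CRTDBG_LEAK_CHECK_DF;       // leak report at process exit
    _CrtSetDbgFlag(flags);
}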
Would there be any benefit in running it virtualized and taking scheduled snapshots, so that you can hopefully get a snapshot from just a little before it actually crashes? Then take the pre-crash snapshot and start it in a lab environment. If you can get it to crash again there, restore the snapshot and start inspecting your server process.
Mudflap with GCC. It instruments the code and is designed to be usable on production builds.
You have to compile your software with -fmudflap. It will check any invalid pointer access (heap/stack/static), with a modest slowdown (between 1.5x and 5x). You can also disable checks on read accesses for a speedup.

Identifying Major Page Fault cause

I've been asked to look at an internal application, written in C++ and running on Linux, that's having some difficulties.
Periodically it will incur a large number of major page faults (~200k), which cause the wall-clock run time to increase by 10x or more; then on some runs it will have none.
I've tried isolating different pieces of the code but am struggling to reproduce the page faults when testing.
Does anyone have any suggestions for getting more information out of the application/Linux on major page faults? All I really have is a total.
You may like to consider Valgrind, described on its home page as:
Valgrind is an instrumentation framework for building dynamic analysis tools. There are Valgrind tools that can automatically detect many memory management and threading bugs, and profile your programs in detail. You can also use Valgrind to build new tools.
Specifically, Valgrind contains a tool called Massif, for which the following (paraphrased) overview is given in the manual:
Massif is a heap profiler. It measures how much heap memory your program uses. [..]
Heap profiling can help you reduce the amount of memory your program uses. On modern machines with virtual memory, this provides the following benefits:
It can speed up your program -- a smaller program will interact better with your machine's caches and avoid paging.
If your program uses lots of memory, it will reduce the chance that it exhausts your machine's swap space.
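In practice (assuming Valgrind is installed), a typical workflow is: run valgrind --tool=massif ./yourapp, which writes a massif.out.<pid> file, then feed that file to the bundled ms_print tool to get a textual graph of heap usage over time, with the dominant allocation call stacks at each snapshot.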