Maybe someone here can shed some light on why this is the case, or suggest other calls to use for walking the heap for debugging purposes in an application. If it turns out that there is no way of getting Page Heap and/or Application Verifier to work with an application that uses HeapWalk(), and there is no replacement, at least I can stop looking for one :)
So confirmation from people with more experience on the matter that these will never play together would be appreciated.
Nothing I could find in the documentation on HeapWalk() mentions problems with Page Heap / Application Verifier or suggests a replacement for this use case.
Consider the small sample code below:
If the code is executed "as is" it works as expected. If I turn on either Page Heap or Application Verifier (or both) for it, the call to HeapWalk() fails. The error code returned by GetLastError() in this case is always 0x00000001, which according to the Microsoft documentation is ERROR_INVALID_FUNCTION.
Underlying motivation for the question:
Track down heap corruption and potential memory leaks in a legacy application with custom heap management; running with Page Heap and/or Application Verifier was intended to help with that.
Thanks for reading, and any comments that could shed some light on the issue are welcome.
#include <Windows.h>
#include <iostream>
#include <sstream>
#include <iomanip>
#include <string>
size_t DummyHeapWalk(void* heapHandle)
{
    size_t commitSize = 0;

    PROCESS_HEAP_ENTRY processHeapEntry = {};
    processHeapEntry.lpData = NULL; // NULL starts the walk at the beginning of the heap

    HeapLock(heapHandle);
    if (HeapWalk(heapHandle, &processHeapEntry))
    {
        commitSize = processHeapEntry.Region.dwCommittedSize;
    }
    else
    {
        DWORD lastError = GetLastError(); // GetLastError() returns a DWORD
        std::ostringstream errorMsg;
        errorMsg << "DummyHeapWalk failed on heapHandle [0x" << std::setfill('0') << std::setw(16) << std::hex << (size_t)(heapHandle) << "] last error: " << lastError << std::endl;
        OutputDebugStringA(errorMsg.str().c_str());
        DebugBreak();
    }
    HeapUnlock(heapHandle);
    return commitSize;
}

int main(void)
{
    HANDLE myProcessHeap = GetProcessHeap();
    std::cout << "Process Heap Commit size: " << DummyHeapWalk(myProcessHeap) << std::endl;
    return 0;
}
I couldn't come up with a better title, so feel free to give suggestions.
I tried to follow OneLoneCoder's tutorial on sound synthesis; I'm only halfway through the first video and my code already throws an exception.
All I did was download his olcNoiseMaker.h from his GitHub and copy the entry point:
#include <iostream>
#include "olcNoiseMaker.h"

double make_noise(double time)
{
    return 0.5 * sin(440.0 * 2 * PI * time);
}

int main()
{
    std::wcout << "Synthesizer, part 1" << std::endl;

    std::vector<std::wstring> devices = olcNoiseMaker<short>::Enumerate();
    for (auto d : devices)
    {
        std::wcout << "Found output device: " << d << std::endl;
    }

    olcNoiseMaker<short> sound(devices[0], 44100, 1, 8, 512);
    sound.SetUserFunction(make_noise);

    while (1) { ; }

    return EXIT_SUCCESS;
}
In the video he runs this just fine; for me, it starts producing a sound, then after 60-80 iterations of the while (1) loop, it stops and raises this:
Unhandled exception thrown: write access violation.
std::_Atomic_address_as<long,std::_Atomic_padded<unsigned int> >(...) was 0xB314F7CC.
(from the <atomic> header file, line 1474.)
Stepping through the code with VS didn't tell me much, except that it happens at a different point during every run, which may mean it has something to do with multithreading; I'm not sure, since I'm not very familiar with the topic.
I found this question which is similar, but even though it says [SOLVED] it doesn't show me the answers.
Can anyone help me get rid of that exception?
I am quite new to Boost, as well as to multithreading and launching applications using libraries. For my desired functionality, a colleague recommended that I use the boost::process library.
But the documentation for this part of Boost is quite thin, so I could not determine from it which function suits my task best. I therefore started to try several functions there, but none has all the desired properties.
However, there is one I cannot figure out how to use properly. I cannot even compile it, let alone run it. That function is boost::process::async_system. I could not find a step-by-step guide anywhere on the internet explaining how to use this function and what its individual components mean and do.
Could someone explain to me in detail the individual arguments and template arguments of the function? Or provide a link to a detailed manual?
I like the examples here: https://theboostcpplibraries.com/boost.thread-futures-and-promises
For example, look at Example 44.16; it clearly shows how to use async:
#define BOOST_THREAD_PROVIDES_FUTURE
#include <boost/thread.hpp>
#include <boost/thread/future.hpp>
#include <iostream>

int accumulate()
{
    int sum = 0;
    for (int i = 0; i < 5; ++i)
        sum += i;
    return sum;
}

int main()
{
    boost::future<int> f = boost::async(accumulate);
    std::cout << f.get() << '\n';
}
Waiting happens at the get() call, not before. You could use a non-waiting mechanism, too.
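If you want that non-waiting behaviour, here is a minimal sketch (assuming a Boost version whose futures provide wait_for, which recent ones do) that polls the future instead of blocking on it:

#define BOOST_THREAD_PROVIDES_FUTURE
#include <boost/thread.hpp>
#include <boost/thread/future.hpp>
#include <boost/chrono.hpp>
#include <iostream>

int accumulate()
{
    int sum = 0;
    for (int i = 0; i < 5; ++i)
        sum += i;
    return sum;
}

int main()
{
    boost::future<int> f = boost::async(accumulate);

    // Poll instead of blocking: a short timeout reports whether the
    // result is ready without waiting indefinitely for it.
    while (f.wait_for(boost::chrono::milliseconds(10)) != boost::future_status::ready)
    {
        std::cout << "still working...\n";   // do other useful work here
    }
    std::cout << f.get() << '\n';            // now get() returns immediately
}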
As for compiling, you first need to build Boost. Building is explained in detail here: https://www.boost.org/doc/libs/1_62_0/more/getting_started/windows.html
Most parts of the library work header-only. For asio, building the binary libraries (also explained in the link) is necessary. In your project (i.e. a Visual Studio project, an Xcode project, or just some makefiles), you need to set Boost's include and library paths to use it. The link above helps with this as well.
I'm just ramping up on Boost.Process, but the sample code I have working might be helpful here.
boost::process::async_system() takes 3 parameters: a boost::asio::io_context object, an exit-handler function, and the command you want to run (just like system(); it can be either a single line or more than one arg).
After it's invoked, you use the io_context object from the calling thread to manage and monitor the async task. I use the run_one() method, which will "run the io_context object's event processing loop to execute at most one handler", but you can also use other methods to run for a duration, etc.
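As an aside, if you'd rather pump the context for a bounded time slice than loop over run_one(), something along these lines should work (a sketch, assuming a Boost.Asio version that provides io_context::run_for):

#include <boost/asio.hpp>
#include <chrono>

// Run pending handlers for at most 100 ms, then return control to the caller.
void pump_for_a_while(boost::asio::io_context& ioctx)
{
    ioctx.run_for(std::chrono::milliseconds(100));
}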
Here's my working code:
#include <boost/process.hpp>
#include <iostream>

using namespace boost;

namespace {
    // declare exit handler function
    void _exitHandler(boost::system::error_code err, int rc) {
        std::cout << "DEBUG async exit error code: "
                  << err << " rc: " << rc << std::endl;
    }
}

int main() {
    // create the io_context
    asio::io_context ioctx;

    // call async_system
    process::async_system(ioctx, _exitHandler, "ls /usr/local/bin");
    std::cout << "just called 'ls /usr/local/bin', async" << std::endl;

    int breakout = 0; // safety for weirdness
    do {
        std::cout << " - checking to see if it stopped..." << std::endl;
        if (ioctx.stopped()) {
            std::cout << " * it stopped!" << std::endl;
            break;
        } else {
            std::cout << " + calling io_context.run_one()..." << std::endl;
            ioctx.run_one();
        }
        ++breakout;
    } while (breakout < 1000);

    return 0;
}
The only thing my example lacks is how to use boost::asio::async_result to capture the result - the samples I've seen (including here on slashdot) still don't make much sense to me, but hopefully this much is helpful. A rough sketch of one way to capture the output follows after the program output below.
Here's the output of the above on my system:
just called 'ls /usr/local/bin', async
- checking to see if it stopped...
+ calling io_context.run_one()...
- checking to see if it stopped...
+ calling io_context.run_one()...
VBoxAutostart easy_install pybot
VBoxBalloonCtrl easy_install-2.7 pyi-archive_viewer
((omitted - a bunch more files from the ls -l command))
DEBUG async exit error code: system:0 rc: 0
- checking to see if it stopped...
* it stopped!
Program ended with exit code: 0
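Regarding capturing the command's output: I haven't got the async_result route working either, but as a sketch using the same Boost.Process API, you can redirect the child's stdout into a pipe stream and read it from the parent. bp::child, bp::ipstream and bp::std_out are the documented names; the command and the "captured:" prefix are just for illustration:

#include <boost/process.hpp>
#include <iostream>
#include <string>

namespace bp = boost::process;

int main()
{
    // Redirect the child's stdout into a pipe we can read from.
    bp::ipstream out;
    bp::child c("ls /usr/local/bin", bp::std_out > out);

    std::string line;
    while (c.running() && std::getline(out, line) && !line.empty())
        std::cout << "captured: " << line << '\n';

    c.wait();                                  // reap the child
    std::cout << "exit code: " << c.exit_code() << '\n';
    return 0;
}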
I'm getting a memory leak when I create a vector of 50,000 doubles, and I don't know why.
#include "stdafx.h"
#include <Windows.h>
#include <psapi.h>
#include <iostream>
#include <vector>

#define MEMLOGINIT double mem1, mem2;\
    PROCESS_MEMORY_COUNTERS_EX pmc;\
    GetProcessMemoryInfo(GetCurrentProcess(), (PROCESS_MEMORY_COUNTERS*)&pmc, sizeof(pmc));\
    SIZE_T virtualMemUsedByMe = pmc.PrivateUsage;\
    mem1 = virtualMemUsedByMe / 1024.0;\
    std::cout << "1st measure \n Memory used : " << mem1 << " KB.\n\n";

#define MEMLOG(stepName) GetProcessMemoryInfo(GetCurrentProcess(), (PROCESS_MEMORY_COUNTERS*)&pmc, sizeof(pmc));\
    virtualMemUsedByMe = pmc.PrivateUsage; \
    mem2 = virtualMemUsedByMe / 1024.0; \
    std::cout << stepName << "\n Memory used : " << mem2 << " KB.\n Difference with previous measure : " << mem2 - mem1 << " KB.\n\n";\
    mem1 = mem2;

int _tmain(int argc, _TCHAR* argv[])
{
    MEMLOGINIT;
    {
        std::vector<double> spTestmemo(50000, 100.0);
        MEMLOG("measure before destruction");
    }
    MEMLOG("measure after destruction");
}
output with 50k values
Clearly, the roughly 400 KB allocated by the vector aren't released here.
However, the destructor works as expected with a vector of 500,000 values.
int _tmain(int argc, _TCHAR* argv[])
{
    MEMLOGINIT;
    {
        //std::vector<double> spTestmemo(50000, 100.0);
        std::vector<double> spTestmemo(500000, 100.0); //instead of the line above
        MEMLOG("measure before destruction");
    }
    MEMLOG("measure after destruction");
}
output with 500k values
Here, a vector ten times bigger than the previous one is almost completely released (a small difference of 4 KB remains).
Thanks for your help.
As NathanOlivier and PaulMcKenzie pointed out in their comments, this is not a memory leak.
The C++ standard library may not release all of the memory back to the OS when you free it, but the memory is still accounted for.
So don't worry too much about what you see as the OS reported virtual memory usage of your program as long as it is not abnormally high or continuously increasing while your program runs.
--- begin Visual Studio specific:
Since you seem to be building your code with Visual Studio, its debug runtime library has facilities for doing what you are doing with your MEMLOGINIT and MEMLOG macros, see https://msdn.microsoft.com/en-us/library/974tc9t1.aspx#BKMK_Check_for_heap_integrity_and_memory_leaks
Basically, you can use _CrtMemCheckpoint to get the status of what has been allocated, and _CrtMemDifference and _CrtMemDumpStatistics to compare and log the difference between 2 checkpoints.
The debug version of the runtime library also automatically dumps leaked memory to the debugger console when the program exits. If you define new as DEBUG_NEW, it will even log the source file and line number where each leaked allocation was made, which is often very valuable when hunting down memory leaks.
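A minimal sketch of those CRT calls (they only do something in debug builds; the vector just mirrors the one from the question):

#define _CRTDBG_MAP_ALLOC
#include <stdlib.h>
#include <crtdbg.h>
#include <vector>

int main()
{
    _CrtMemState before, after, diff;

    _CrtMemCheckpoint(&before);                // snapshot of the debug heap
    {
        std::vector<double> spTestmemo(50000, 100.0);
    }
    _CrtMemCheckpoint(&after);                 // snapshot after destruction

    // Returns TRUE (and fills 'diff') if the two snapshots differ.
    if (_CrtMemDifference(&diff, &before, &after))
        _CrtMemDumpStatistics(&diff);

    _CrtDumpMemoryLeaks();                     // report anything still allocated
    return 0;
}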
This program works as expected:
#include <iostream>
#include <string>
#include <vector>

using namespace std;

struct Thumbnail
{
    string tag;
    string fileName;
};

int main()
{
    {
        Thumbnail newThumbnail;
        newThumbnail.tag = "Test_tag";
        newThumbnail.fileName = "Test_filename.jpg";

        std::vector<Thumbnail> thumbnails;
        for(int i = 0; i < 10; ++i) {
            thumbnails.push_back(newThumbnail);
        }
    }
    return 0;
}
If I copy and paste the main block of code in another project (still single threaded), inside any function, I get this exception from the line commented // <-- crash at the 2nd loop:
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
If I clear the vector before every push_back, everything is all right (but of course this is not the desired behaviour); this makes me think that it is as if the vector could not store more than one such object.
This is the function where the code is crashing:
int ImageThumbnails::Load(const std::string &_path)
{
    QDir thumbDir(_path.c_str());
    if(!thumbDir.exists())
        return errMissingThumbPath;

    // Set a filter
    thumbDir.setFilter(QDir::Files);
    thumbDir.setNameFilters(QStringList() << "*.jpg" << "*.jpeg" << "*.png");
    thumbDir.setSorting(QDir::Name);

    // Delete previous thumbnails
    thumbnails.clear();

    Thumbnail newThumbnail;

    ///+TEST+++
    {
        Thumbnail newThumbnail;
        newThumbnail.tag = "Test_tag";
        newThumbnail.fileName = "Test_filename.jpg";

        std::vector<Thumbnail> thumbnails;
        for(int i = 0; i < 10; ++i)
        {
            TRACE << i << ": " << sizeof(newThumbnail) << " / " << newThumbnail.tag.size() << " / " << newThumbnail.fileName.size() << std::endl;
            //thumbnails.clear(); // Ok with this decommented
            thumbnails.push_back(newThumbnail); // <-- crash at the 2nd loop
        }
        exit(0);
    }
    ///+TEST+END+++
    ...
This is the output:
> TRACE: ImageThumbnails.cpp:134:Load
0: 8 / 8 / 17
> TRACE: ImageThumbnails.cpp:134:Load
1: 8 / 8 / 17
terminate called after throwing an instance of 'std::bad_alloc'
what(): std::bad_alloc
Why do I get this different behaviour for the same piece of code in two different projects?
Platform: Windows 7, MinGW 4.4, GCC
To add to this (because this is the first Google result): in my scenario I got bad_alloc even when I still had a few GB of RAM available.
If your application needs more than 2 GB of memory, you have to enable the /LARGEADDRESSAWARE option in the linker settings.
If you need more than 4 GB, you have to set your build target to x64 (in the project settings and the build configuration).
Due to how the automatic resizing of vectors works, you will hit these limits at roughly 1 GB / 2 GB of vector size.
If it is crashing when using the exact same code in another application, there is the possibility that the program is out of memory (std::bad_alloc exceptions can be caused by this). Check how much memory your other application is using.
Another thing: use the reserve() method when using std::vector and you know ahead of time how many elements are going to be pushed into the vector. It looks like you are pushing the exact same element 10 times; why not use the resize() overload that takes a fill-value parameter, as sketched below?
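For instance, a rough sketch of those two options, reusing the Thumbnail struct from the question:

#include <vector>
#include <string>

struct Thumbnail { std::string tag; std::string fileName; };

int main()
{
    Thumbnail newThumbnail = { "Test_tag", "Test_filename.jpg" };

    std::vector<Thumbnail> thumbnails;
    thumbnails.reserve(10);                    // one allocation up front
    thumbnails.resize(10, newThumbnail);       // fill with 10 copies

    // Or build it in one go with the fill constructor:
    std::vector<Thumbnail> thumbnails2(10, newThumbnail);
    return 0;
}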
Is there any way to easily limit a C/C++ application to a specified amount of memory (30 MB or so)? E.g. if my application tries to load a 50 MB file completely into memory, it should die / print a message and quit / etc.
Admittedly I could just constantly check the application's memory usage, but it would be a bit easier if it would just die with an error when I went above the limit.
Any ideas?
Platform isn't a big issue, windows/linux/whatever compiler.
Read the manual page for ulimit on Unix systems. There is a shell builtin you can invoke before launching your executable, or (in section 3 of the manual) an API call of the same name.
On Windows, you can't set a quota for the memory usage of a process directly. You can, however, create a Windows job object, set the quota on the job object, and then assign the process to that job object; a sketch of this follows below.
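A minimal sketch of that approach, applying a self-imposed quota to the current process (error handling trimmed; the 30 MB figure is just the number from the question):

#include <Windows.h>
#include <iostream>

int main()
{
    // Create an anonymous job object and cap per-process committed memory.
    HANDLE job = CreateJobObject(NULL, NULL);
    if (!job) return 1;

    JOBOBJECT_EXTENDED_LIMIT_INFORMATION limits = {};
    limits.BasicLimitInformation.LimitFlags = JOB_OBJECT_LIMIT_PROCESS_MEMORY;
    limits.ProcessMemoryLimit = 30 * 1024 * 1024;   // ~30 MB, in bytes

    if (!SetInformationJobObject(job, JobObjectExtendedLimitInformation,
                                 &limits, sizeof(limits)))
        return 1;

    // Put this process under the job; allocations beyond the quota will fail
    // (malloc returns NULL, operator new throws std::bad_alloc).
    if (!AssignProcessToJobObject(job, GetCurrentProcess()))
        return 1;

    std::cout << "memory quota applied" << std::endl;
    return 0;
}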
Override all the malloc APIs, and provide handlers for new/delete, so that you can bookkeep the memory usage and throw exceptions when needed.
I'm not sure whether this is actually easier or less effort than just monitoring memory through OS-provided APIs.
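A rough sketch of that bookkeeping idea, covering only the plain operator new/delete pair (array forms and over-aligned allocations are left out; the 30 MB budget is just the figure from the question):

#include <atomic>
#include <cstdlib>
#include <new>

namespace {
    std::atomic<std::size_t> g_used(0);
    const std::size_t kLimit = 30 * 1024 * 1024;    // 30 MB budget
}

// Prepend the allocation size so operator delete can decrement the counter.
void* operator new(std::size_t size)
{
    if (g_used.load() + size > kLimit)
        throw std::bad_alloc();                     // over budget: refuse

    void* raw = std::malloc(size + sizeof(std::size_t));
    if (!raw)
        throw std::bad_alloc();

    *static_cast<std::size_t*>(raw) = size;
    g_used += size;
    return static_cast<char*>(raw) + sizeof(std::size_t);
}

void operator delete(void* p) noexcept
{
    if (!p) return;
    void* raw = static_cast<char*>(p) - sizeof(std::size_t);
    g_used -= *static_cast<std::size_t*>(raw);
    std::free(raw);
}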
In bash, use the ulimit builtin:
bash$ ulimit -v 30000
bash$ ./my_program
The -v takes 1K blocks.
Update:
If you want to set this from within your app, use setrlimit. Note that the man page for ulimit(3) explicitly says that it is obsolete.
You can limit the size of the virtual memory of your process using the system limits. If your process exceeds this amount, it will be killed with a signal (SIGBUS, I think).
You can use something like:
#include <sys/resource.h>
#include <iostream>

using namespace std;

class RLimit {
public:
    RLimit(int cmd) : mCmd(cmd) {
    }

    void set(rlim_t value) {
        clog << "Setting " << mCmd << " to " << value << endl;
        struct rlimit rlim;
        rlim.rlim_cur = value;
        rlim.rlim_max = value;
        int ret = setrlimit(mCmd, &rlim);
        if (ret) {
            clog << "Error setting rlimit" << endl;
        }
    }

    rlim_t getCurrent() {
        struct rlimit rlim = {0, 0};
        if (getrlimit(mCmd, &rlim)) {
            clog << "Error in getrlimit" << endl;
            return 0;
        }
        return rlim.rlim_cur;
    }

    rlim_t getMax() {
        struct rlimit rlim = {0, 0};
        if (getrlimit(mCmd, &rlim)) {
            clog << "Error in getrlimit" << endl;
            return 0;
        }
        return rlim.rlim_max;
    }

private:
    int mCmd;
};
And then use it like this:

RLimit dataLimit(RLIMIT_DATA);
dataLimit.set(128 * 1024); // 128 KB; setrlimit values are in bytes
clog << "soft: " << dataLimit.getCurrent() << " hard: " << dataLimit.getMax() << endl;
This implementation seems a bit verbose but it lets you easily and cleanly set different limits (see ulimit -a).