I threw the lodePNG sample files into a blank project in Visual C++ 2008 Express, along with a 7 KB PNG file I made, but I'm getting this memory allocation error at runtime:
Invalid allocation size: 4294967295 bytes.
After breaking on the error and backtracking through the stack frames, I think it's being caused by a bad argument being passed to the resize function in std::vector. This project was recently updated (April 2012) and is pretty thoroughly documented, so it's possible that I'm doing something wrong (or don't have the right compilation options). Would someone please take a look at my project?
Here's a ZIP file of the project folder: http://www.mediafire.com/file/791b9z9ld74n3eu/TestLodePNG.zip
You most likely have the PNG file in the wrong place. When running under the debugger, the default working directory is the directory containing the project file, not the one containing the solution file. When I moved the file to the project file's directory it worked fine.
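If in doubt, you can have the sample print its working directory at startup. A quick check along these lines (Windows-specific, using _getcwd from the CRT):

#include <cstdio>
#include <cstdlib>    // _MAX_PATH
#include <direct.h>   // _getcwd (Windows CRT)

int main()
{
    char cwd[_MAX_PATH];
    if (_getcwd(cwd, sizeof(cwd)))
        std::printf("Working directory: %s\n", cwd);

    // ... rest of the lodePNG sample ...
    return 0;
}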
You might consider adding some error checking to the file-opening code, like this:
void load_file(std::vector<unsigned char>& buffer, const std::string& filename)
{
    std::ifstream file(filename.c_str(), std::ios::in | std::ios::binary | std::ios::ate);
    if(!file)
    {
        // Do something about the error and don't crash
    }
    ...
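For context, a complete loader along these lines might look like the sketch below (not lodePNG's exact code). Note the connection to the allocation error: the stream is opened with std::ios::ate, so tellg() reports the file size; if the open failed and you don't bail out, tellg() returns -1, and passing that to resize() is the classic source of an "invalid allocation size" of 4294967295 bytes.

#include <cstddef>
#include <fstream>
#include <string>
#include <vector>

// Sketch of a defensive file loader.
bool load_file(std::vector<unsigned char>& buffer, const std::string& filename)
{
    std::ifstream file(filename.c_str(), std::ios::in | std::ios::binary | std::ios::ate);
    if (!file)
        return false;                        // report the failure instead of crashing

    std::streamsize size = file.tellg();     // opened with ios::ate, so this is the file size
    if (size < 0)
        return false;                        // tellg() failed; don't pass -1 to resize()

    file.seekg(0, std::ios::beg);
    buffer.resize(static_cast<std::size_t>(size));
    if (size > 0)
        file.read(reinterpret_cast<char*>(&buffer[0]), size);
    return !file.fail();
}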
I'm creating a project with openframeworks (the full source is here: https://github.com/morphogencc/ofxAsio/tree/master/example-udpreceiver), and the empty project seems to compile fine.
I added the ASIO library and a few header classes, and now the project gives me the following error:
1>------ Build started: Project: example-udpreceiver, Configuration: Debug x64 ------
1> main.cpp
1>cl : Command line error D8049: cannot execute 'C:\Program Files (x86)\Microsoft Visual Studio 14.0\VC\bin\x86_amd64\c1xx.dll': command line is too long to fit in debug record
1>cl : Command line error D8040: error creating or communicating with child process
I couldn't find any examples of error D8049 on Stack Overflow or even on Microsoft's pages, and Google turned up painfully few results. The only remotely useful one was this GitHub issue:
https://github.com/deplinenoise/tundra/issues/270
But I'm still not sure what's causing the problem. Is anyone familiar with this error, and can recommend a method for troubleshooting what's causing it?
thanks in advance!
For me, working with UE4, this was an intermittent error.
I added "bLegacyPublicIncludePaths = false;" to the innermost block of project.Build.cs and recompiled without errors.
Then I removed that line and compiled again without errors.
The error message suggested adding "DefaultBuildSettings = BuildSettingsVersion.V2;" to project.Target.cs, which worked.
This is a bit of a weird-sounding error, since it comes from essentially internally generated data. However, you do have control over that. Taking the error message at face value, you probably have a large number of symbols defined on the command line (or the ones you do have have lengthy definitions), or you may have some lengthy file paths.
If you look under the project properties, one of the selections under the C++ section is "Command Line", which will show you exactly what gets passed to the compiler. When you view that you can see where you have many or lengthy parameters, and then make changes to shorten them.
Too many defines? Put them in a header (possibly stdafx.h) and include them that way (see the sketch after these suggestions).
Long file paths? Shorten the paths, put the files somewhere else, or set up file system aliases to your real directories that use shorter paths.
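As a sketch of the defines suggestion (file and macro names here are purely illustrative), gather the definitions in one header and pull it in from stdafx.h, or force-include it with the /FI compiler option, instead of passing each one as a /D switch:

// project_defines.h (illustrative; replaces a long list of /D switches)
#pragma once

#define USE_FEATURE_A 1
#define USE_FEATURE_B 1
#define NETWORK_BACKEND_UDP 1
// ... the rest of what used to be command-line defines ...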
I am working in OptiX using Visual Studio 2013. I have been working for over a month when I suddenly got this error: "memcpy.asm not found".
I found this file in the Visual Studio folder, but it says "The source file is different from when the module was built".
Look carefully at the error; it often comes up as a secondary message because the debugger is trying to load "memcpy.asm" in order to show you debugging information after a memcpy failed, not because your program is actually missing "memcpy.asm".
Here's someone with a similar issue. In short: check the stack trace to see where the issue originated in your code, probably an attempt to copy memory to an uninitialized pointer or something similar.
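As a purely illustrative example of the kind of bug that produces this, a memcpy through a null or uninitialized pointer crashes inside the CRT, and the debugger then asks for memcpy.asm to display the frame it stopped in:

#include <cstring>

int main()
{
    char* dst = 0;                        // never points at valid storage
    const char src[] = "some data";
    std::memcpy(dst, src, sizeof(src));   // access violation inside the CRT's memcpy
    return 0;
}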
I am programming a DirectShow filter that detects objects with an OpenCV Haar cascade classifier. It works fine in Debug mode but not in Release mode, and I'm not sure whether there is a memory leak in the OpenCV function (VC 2010 binaries of the opencv_249 libs) or whether there is something wrong with my project (settings).
I am loading the filter in GraphStudio, a tool to build a DirectShow filter graph easily. I'm not sure whether the filter DLL is assumed to be compiled in Debug mode or not.
I'm basically doing the following, after some preprocessing:
std::vector<cv::Rect> objects;
mClassifier.detectMultiScale(inputGray, objects, 1.3);
for (unsigned int i = 0; i < objects.size(); ++i)
{
    cv::rectangle(outputImage, objects[i], cv::Scalar(255, 255, 255));
}
So within the function block I am preprocessing, followed by the shown part of code and followed by writing the data to the DirectShow Buffer.
If I use the DLL in Release mode, I get the following error message AFTER the whole function has terminated (so probably somewhere else inside the DirectShow filter graph):
Debug Assertion Failed!
Program: C:\Program Files (x86)\Graphstudio\graphstudio.exe
File: f:\dd\vctools\crt_bld\self_x86\crt\src\dbgdel.cpp
Line: 52
Expression: _BLOCK_TYPE_IS_VALID(pHead->nBlockUse)
For information [...]
followed by a
Debug Assertion Failed!
Program: C:\Program Files (x86)\Graphstudio\graphstudio.exe
File: f:\dd\vctools\crt_bld\self_x86\crt\src\dbgdel.cpp
Line: 1322
Expression: _CrtlsValidHeapPointer(pUserData)
When I comment mClassifier.detectMultiScale(inputGray, objects, 1.3); out, the filter doesn't crash. Since things might simply be optimized away in that case, I also replaced the detectMultiScale call with a loop that inserts random cv::Rect objects (seeded beforehand with time(NULL)) into the vector. The filter does not crash and displays the random rectangles the way I would expect.
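The stand-in loop looked roughly like this (a sketch; the exact counts and sizes don't matter, only that the vector gets filled without calling into OpenCV's detector, and the frame is assumed to be larger than the rectangles):

#include <cstdlib>
#include <vector>
#include <opencv2/core/core.hpp>

// Fill 'objects' with random rectangles inside the frame instead of calling
// detectMultiScale (srand(time(NULL)) is assumed to have been called earlier).
void fakeDetections(std::vector<cv::Rect>& objects, const cv::Size& frameSize)
{
    for (int i = 0; i < 5; ++i)
    {
        int w = 20 + std::rand() % 60;
        int h = 20 + std::rand() % 60;
        int x = std::rand() % (frameSize.width  - w);
        int y = std::rand() % (frameSize.height - h);
        objects.push_back(cv::Rect(x, y, w, h));
    }
}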
I've read that others have observed (with valgrind) cv::CascadeClassifier::detectMultiScale producing memory leaks. And I've found a link where someone had a problem with detectSingleScale and an OpenCV committer marked it as fixed ( http://code.opencv.org/issues/2628 ).
Questions:
Is there a chance that this exact problem (see previous link) is (still) within detectMultiScale?
Is there a chance that the problem is not within my project, but in the OpenCV library?
Why does this problem only occur in Release mode?
Why does this problem only occur in the DirectShow filter? (If I run the "same" code/functionality in Release mode in a stand-alone project, I don't get Debug Assertion Failed errors, though there might still be an unrecognized memory corruption.)
I hope someone has an idea. Thanks in advance!
EDIT:
OK... I had linked against msvcrtd.lib. I removed that lib from my project entirely (so presumably the default libs are used instead; it seems I didn't even need it) and it "works" now. The question remains whether there is still some kind of memory leak, or whether linking against that lib was the whole problem.
I have this piece of code working on Linux with g++:
GLuint Shader::initShader_(GLenum shaderType, const std::string& shaderFilename)
{
    std::ifstream inputFile(shaderFilename.c_str());
    if (inputFile.is_open() == false)
    {
        std::ostringstream oss;
        oss << "Shader " << shaderFilename << " doesn't exist!";
        print(LOG_LEVEL::ERROR, oss.str());
    }
    ...
}
where the three dots represent some code. The code compiles with both g++ and Visual Studio (2012), but with Visual Studio the first line throws an access violation exception. It actually happens when opening the file, and the debugger drops me into do_always_noconv, but I do not understand the problem.
The string containing the filename is valid, the file the program is trying to open is in the right directory, and the debugger runs in that directory. I don't think the problem comes from the file itself, because even if the stream could not open it, I should still reach the next line without an access violation.
Has anyone encountered this problem before, or has an idea? Again, it works without any problem on Linux with g++.
Thanks for your help.
An access violation exception doesn't indicate a problem with the file, but with the in-memory representation of the ifstream object or the string. Start looking for memory corruption.
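As a purely illustrative example (not your code), an out-of-bounds write on a neighbouring array can trample an unrelated object, so a later, innocent-looking call such as constructing or opening an ifstream is merely where the damage becomes visible:

#include <string>

struct Example
{
    int indices[4];
    std::string shaderFilename;   // typically laid out right after the array
};

void resetIndices(Example& e)
{
    for (int i = 0; i <= 4; ++i)  // off-by-one: the last iteration writes indices[4]
        e.indices[i] = 0;         // undefined behaviour; may corrupt the string's internals
}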
Be sure you're referencing the correct GLSDK libraries for your build type. e.g. debug builds should reference debug libraries and release builds should reference release libraries.
As PaulH suggested above, I checked some array code that I wrote recently, and the error came from some wrong indices and pointers. However, I still do not understand why the errors in the array code have anything to do with the ifstream. Thanks to PaulH!
This is a followup to Using #include to load OpenCL code
I've noticed that when you use the method described by grrussel (and used in Bullet Physics), the created string has all newlines stripped (comments seem to be stripped too, but I'm not too worried about that). For the most part this is fine if the included OpenCL code doesn't have any preprocessor definitions in it, but if it does, the code will fail to compile with the OpenCL compiler.
Is there a way to get the #include to keep the newlines, or is there a better method for embedding the OpenCL code into my executable (other than copying the string into a cpp file and putting quotes around everything)?
I tested this in Visual Studio 2010; I'm not sure whether other compilers exhibit the same behavior. I would prefer a method that doesn't require any external tools and works with a variety of compilers/platforms.
Copied code from other answer:
In C++ / C source
#define MSTRINGIFY(A) #A
char* stringifiedSourceCL =
#include "VectorAddKernels.cl"
In the OpenCL source
MSTRINGIFY(
__kernel void VectorAdd(__global float8* c)
{
    // snipped out OpenCL code...
    return;
}
);
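To see why the newlines disappear: the preprocessor's stringizing operator (#) replaces every run of whitespace between tokens, including line breaks, with a single space. A minimal demonstration:

#include <cstdio>

#define MSTRINGIFY(A) #A

const char* demo = MSTRINGIFY(
    first line
    second line
);

int main()
{
    std::puts(demo);   // prints: first line second line   (all on one line)
    return 0;
}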
I'd write a small tool that generates a cpp with a constant string from the text file, which you would then compile along with the other stuff.
This is what Qt's resource compiler (rcc) does (and more, since it has a whole API around it), and there's also the option of Windows's resource kit (see this answer below).
Anyway, a small build tool that walks the files in a directory and builds a cpp from them should be enough, and something you can put together in 15 minutes (given suitable scripting skills).
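A minimal sketch of such a generator (the tool name, symbol handling, and escaping policy are illustrative): it reads a text file and writes a .cpp defining its contents as a string constant, keeping the real newlines as \n escapes. Run it as a pre-build step and compile the generated .cpp into your project.

#include <fstream>
#include <iostream>

int main(int argc, char* argv[])
{
    if (argc != 4)
    {
        std::cerr << "usage: embedcl <input.cl> <output.cpp> <symbol>\n";
        return 1;
    }

    std::ifstream in(argv[1], std::ios::binary);
    std::ofstream out(argv[2]);
    if (!in || !out)
        return 1;

    out << "const char* " << argv[3] << " =\n    \"";
    char c;
    while (in.get(c))
    {
        if (c == '\\')      out << "\\\\";
        else if (c == '"')  out << "\\\"";
        else if (c == '\n') out << "\\n\"\n    \"";   // keep the newline, start a new literal
        else if (c != '\r') out << c;                 // drop CR from CRLF line endings
    }
    out << "\";\n";
    return 0;
}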
Update: As per Henry's comment, on Linux (or with MSYS/Cygwin) you could use xxd with the -i flag (i.e. xxd -i), which reads a file and generates a C include file with the exact contents defined properly.
Excellent question!
One thing that works on Windows is to treat your OpenCL C files as Windows resources. The source code then gets built into your executable. In your resource (.rc) file, you would add something like this:
IDR_OPENCL_FILE_1 OPENCL_SOURCE "mydir\\myfile.cl"
Then, in the host source, something like this:
// Load the resource memory
HRSRC resource = FindResource(NULL, L"IDR_OPENCL_FILE_1", L"OPENCL_SOURCE");
HGLOBAL resMem = LoadResource(NULL, resource);
char* source = (char*)LockResource(resMem);

// Build an STL string out of it (the resource data is not NUL-terminated,
// so the size is needed)
DWORD resSize = SizeofResource(NULL, resource);
std::string cl_source = std::string(source, resSize);
On Linux, objcopy will let you do something similar: you can build an ELF .o with the source file exported as a symbol.
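Roughly like this (a sketch, untested here; the exact -O/-B values depend on your target, and objcopy derives the symbol names by mangling the input file name):

// Embed the kernel source, e.g. with:
//   objcopy -I binary -O elf64-x86-64 -B i386:x86-64 VectorAddKernels.cl VectorAddKernels.o
// objcopy then exports _binary_VectorAddKernels_cl_start/_end for the blob.
#include <string>

extern "C" const char _binary_VectorAddKernels_cl_start[];
extern "C" const char _binary_VectorAddKernels_cl_end[];

std::string loadEmbeddedKernelSource()
{
    return std::string(_binary_VectorAddKernels_cl_start,
                       _binary_VectorAddKernels_cl_end);
}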
I wish there were a better platform-independent answer. If you come across it, let me know.