Program works fine in Debug build, but fails in Release build - c++

I am facing a problem in the release build of Visual Studio. Pseudocode is given below:
#include "lib/A/inc/A.h"

int main()
{
    A a;
    a.f1(); // fails in release build, works fine in debug build
    a.f2(); // fails in release build, works fine in debug build
}
A is derived from B, which lives in lib/B/inc/B.h:
class A : public B
{
    virtual void f2();
};
B has a pure virtual function f2() and a non-virtual function f1():
class B {
private:
    std::string name;
public:
    void f1();
    virtual void f2() = 0;
};
I stepped into the f1() function. At that moment the this pointer of B has the value 0x00000000 and __vfptr is invalid. But in main(), object a is valid and its __vfptr is valid too. Any idea why this happens in the release build?

Have a look through some of the differences between a debug and a release build, and my tips for finding the bug:
Common reasons for bugs in release version not present in debug mode

Related

How to correctly expose interfaces from a dll in the presence of VC++ 2015 whole program optimization

Recently, while porting our legacy code from VS2010 to VS2015, we encountered an interesting effect. Unfortunately I couldn't create a small example that reproduces it, but I'll describe it as accurately as I can.
We have two DLLs (I'll call them DLL A and DLL B). The project for DLL A defines the interface IFoo and a derived interface IFxFoo:
class __declspec(novtable) IFoo {
public:
    virtual int GetType() = 0;
    virtual ~IFoo() {}
};

class __declspec(novtable) IFxFoo : public IFoo {
public:
    virtual int GetSlot() = 0;
};
In DLL B, both interfaces are used:
class CBImpl : public IFxFoo {
public:
    ...
    void processFoo(IFoo* f) {
        ...
        if (f->GetType() == IFXFOO) {
            IFxFoo* fx = static_cast<IFxFoo*>(f); // downcast
            fill(fx);
        }
    }
    void fill(IFxFoo* fx) {
        m_slot = fx->GetSlot();
    }
private:
    int m_slot;
};
processFoo() is called with different implementations of IFoo, some from DLL A and some from DLL B.
What happened was the following: if we turned on whole program optimization when compiling DLL B, the call to the virtual function GetSlot() in fill() was de-virtualized by Visual C++. This caused our program to crash.
We can fix this behavior if we either
- turn off whole program optimization,
- turn off optimization for fill(), or
- mark our interfaces with __declspec(dllimport) / __declspec(dllexport).
The questions that I have now are:
- Is our assumption correct that the de-virtualization happened because the optimizer saw only one implementation of IFxFoo in DLL B, and assumed it was the only one because IFxFoo was not marked as coming from a different DLL?
- What is the best way to create "interfaces" in header files? We used to write them like the above, but that seems to lead to problems.
- Do other compilers (gcc / clang) exhibit similar behavior?
Thank you for your help
Tobias
Using LTO lets the compiler make drastic adjustments to any function for which it can see the complete call graph.
What you are seeing is expected. Using __declspec(dllexport) or extern on the functions that need to be used from a separate module, or explicitly declaring them in a DLL .def file, is the expected way to resolve the problem: the compiler will then no longer consider those functions internal-only.

pure virtual method called without active exception - run-time error

This is very basic code; after running it, I get this run-time error:
class A {
public:
    A() { /*...*/ }
    ~A() {
        /*...*/
        t.detach();
    }
    void start_thread() {
        t = std::thread(&A::back_ground_job, this);
    }
    void back_ground_job() { /*...*/ }
private:
    std::thread t;
};

// in main:
A a;
a.start_thread();
// just a skeleton
This code runs fine on Windows under VS and MinGW.
On Linux with g++ I get this run-time error. I read something about a bug, but that was in g++ 4.6 and I am using g++ 4.9.
What am I missing, and how do I fix this?

MFC-related crash when calling constructors

I'm currently writing an application using MFC and the CLR in Visual Studio, and my program crashes whenever I call the constructor of a class I've written (the class controls a camera over USB).
I've got a base class CameraBase:
class CameraBase
{
public:
virtual bool getFrame(cv::Mat& outImage) { return true; };
};
and a derived class LumeneraCamera (for the specific camera):
class LumeneraCamera : public CameraBase
{
public:
DLL_API LumeneraCamera();
DLL_API bool connect(int cameraNum);
DLL_API bool disconnect();
DLL_API bool getFrame(cv::Mat& outImage);
private:
//Bunch of misc variables
};
These classes are compiled into a DLL and accessed from another program:
int main()
{
cout << "Initing camera" << endl;
camera = new LumeneraCamera();
//More operations
}
When I run the program, it prints Initing camera and then fails on an assertion in dllinit.cpp (line 133: VERIFY(AfxInitExtensionModule(controlDLL, hInstance));). It crashes before executing anything in the constructor. I'm not sure what the problem is, but it seems tied to MFC, so I'm currently looking into untangling my project from MFC entirely. Any suggestions or fixes are appreciated!
According to MSDN, if your DLL is dynamically linked against the MFC DLLs, each function exported from the DLL that calls into MFC must have the AFX_MANAGE_STATE macro at its very beginning:
AFX_MANAGE_STATE(AfxGetStaticModuleState());
I eventually solved it by disabling MFC - a library I was using suggested MFC, but as far as I can tell it works fine without it.

dynamic_cast issue in Xcode

I am porting a game from Visual Studio to Xcode. The game is written entirely in C++, and I am having trouble with dynamic casting that I never had in Visual Studio. I am wondering if it is a compiler issue, or whether some things are simply not supported on the Mac; any help will be greatly appreciated. Here is a stripped-down version of the code I am running in Xcode, which crashes on the dynamic_cast:
class base {
public:
    int dm;
    virtual void vm() {}
    base() {}
};

class specific : public base {
public:
    virtual void vm() { dm++; }
    specific() {}
};

specific* sp = new specific();
base* b = (base*)sp;
specific* s = dynamic_cast<specific*>(b);
You can try setting "Enable C++ Runtime Types" = Yes under Build Settings > Apple LLVM 5.0 - Language - C++ in your Xcode project.
Hope this helps.

Crash in program when trying to access a base class vector member from a derived class instance loaded from a DLL

I'm running into a strange crash. I am trying to separate various modules of my application (MFC based, developed in VS2005) into DLLs. The following is skeletal code showing how I'm trying to achieve it:
In a common header file (say base.h):
class Base {
vector<message> messages;
...
...
};
In a header file in the DLL source code (say class.h):
class Derived : public Base {
private:
int hoo();
...
public:
void foo();
int goo();
...
};
extern "C" __declspec (dllexport) Derived* CreateDerived();
In class.cpp:
Derived* CreateDerived()
{
return new Derived;
}
In a file in the main application code:
#include "base.h"
#include "class.h"
typedef Derived* (*DerivedCreator)();
...
...
void LoadDll()
{
    //DLL load code...
    ...
    ...
    DerivedCreator creator = reinterpret_cast<DerivedCreator>(::GetProcAddress(dllHandle, "CreateDerived"));
    Derived* pDerived = creator();
    pDerived->messages.push_back(message("xyz")); //Crashes here...
}
The problem is that the code crashes the moment I try to access the vector member of the Base class. This only happens in Release mode; it works fine in Debug mode. The error message I get when I run it from Visual Studio in Release mode is:
"Microsoft Visual Studio C Runtime Library has detected a fatal error in Samsung SSD Magician.exe.
Press Break to debug the program or Continue to terminate the program."
But when I execute the release binary directly and attach the debugger to it, I get an access violation. If I inspect the vector in the debugger at that point, it shows a 6-digit number of entries, none of them readable. I can see correct values for the rest of the Base class members through the Derived pointer.
Any help would be much appreciated.
It's dangerous to pass STL containers across a DLL boundary.
The reason is that each module (the main application and the DLL) has its own instance of the heap. If you allocate dynamic memory in the context of the DLL, then pass the pointer to the application and release or reallocate that memory in the context of the application, you get heap corruption.
That is exactly what happens in your example.
Derived* pDerived = creator();
CreateDerived is called.
Derived* CreateDerived()
{
return new Derived;
}
new Derived allocates memory on the DLL heap.
pDerived->messages.push_back(message("xyz"));
Inside push_back, additional memory is allocated for Base::messages, and that allocation is done on the application heap. Crash!
The conclusion is that you need to rethink the DLL interface so that all operations on the vector are performed only inside the DLL.