Weird Intel C++ compiler error

I'm working on this VST convolution plugin (Windows 7 64-bit, VS2010) and I decided to try the Intel C++ compiler. I was in the process of optimizing the algorithm, so I had a backup project in case of any screw-ups and another one I was doing experiments on. Both projects compiled and ran with no problems. After installing the Intel compiler, though, the project I was experimenting on started causing a heap corruption error, so I started debugging to track down the problem. I can't find the line of code that causes it, since the heap corruption error is not triggered during execution but after the termination of the DLL (the debugger also shows no access violations).
At this point I started cutting out parts of the code to see if I could isolate the problem, and I discovered (obviously) that it was the class I was experimenting on. Now here comes the weird part: I can change the code inside the methods, but as soon as I add a variable to the backup class (the one that works fine), even an int, I get the heap corruption error. A variable that is merely declared and never referenced is enough.
This is the class CRTConvolver:
class CRTConvolver
{
public:
    CRTConvolver();
    ~CRTConvolver();

    bool Init(float* Imp, unsigned ImpLen, unsigned DataLen);
    void doConv(float* input);

    Buff Output;
    int debug_test;

private:
    void ZeroVars();
    int Order(int sampleFrames);
    template <class T> void swap ( T& a, T& b );

    Buff *Ir_FFT,*Input_FFT,Output2,Tmp,Prev,Last;
    float *Tail;
    unsigned nBlocks,BlockLen,Bl_Indx;
    IppsFFTSpec_R_32f* spec;
};
that "int debug_test;" makes the difference between a perfectly working VST module and a program that crashes on initialization from Cubase.
Also, for debugging purposes, here are the constructor and destructor:
CRTConvolver::CRTConvolver()
{
    //IppStatus status=ippInit();
    //ZeroVars();
}

CRTConvolver::~CRTConvolver()
{
    //Init(NULL,NULL,NULL);
}
Here is what class Buff looks like:
class Buff {
public:
    Buff();
    Buff(unsigned len);
    ~Buff();

    float* buff;
    unsigned long length;

private:
    void Init(unsigned long len);
    void flush();

    friend class CRTConvolver;
};
Buff::Buff()
{
    length=NULL;
    buff=NULL;
}

Buff::~Buff()
{
    // flush();
}
Basically, this class does absolutely nothing when constructed and destructed; it just contains the length and buff variables. If I also bypass those two variable initializations, the heap error goes away.
The software crashes on simple construction and subsequent destruction of the CRTConvolver class, even though it effectively does nothing. This is the part that really doesn't make sense to me...
As a side note, I create my CRTConvolver class like this:
ConvEng = new CRTConvolver[NCHANNELS];
If I declare it like this instead:
CRTConvolver ConvEng[NCHANNELS];
I get a stack corruption error around variable ConvEng.
If I switch back to the Microsoft compiler, the situation stays the same, even when compiling and running the exact same version that ran without errors before...
I can't stress enough that before installing the Intel compiler everything was running just fine. Is it possible that something went wrong, or that there's an incompatibility somewhere?
I'm really running out of ideas here; I hope someone will be able to help.
Thanks

I'm going to guess, since the problem is most likely undefined behavior in some other place in your code:
Obey the rule of three. You should have a copy constructor and an assignment operator. If you're using std containers, or making copies or assignments, you will run into trouble without these whenever you delete memory inside the destructor.
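For illustration, here is a minimal sketch of what that could look like for the Buff class above, assuming buff is allocated with new[] inside Init (the allocation code isn't shown in the question):

class Buff {
public:
    Buff() : buff(NULL), length(0) {}

    // Rule of three: deep-copy the float buffer instead of copying the pointer.
    Buff(const Buff & other) : buff(NULL), length(0) { *this = other; }

    Buff & operator=(const Buff & other)
    {
        if (this != &other) {
            delete [] buff;
            length = other.length;
            buff = length ? new float[length] : NULL;
            for (unsigned long i = 0; i < length; ++i)
                buff[i] = other.buff[i];
        }
        return *this;
    }

    ~Buff() { delete [] buff; }

    float* buff;
    unsigned long length;
};

With those in place, an accidental copy (for example a Buff assigned or passed by value) no longer leaves two objects deleting the same buffer.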

It looks to me like the CRTConvolver default constructor (used in creating an array) is writing to memory it doesn't own. If the Intel compiler has different class layout rules (or data alignment rules), it might be unmasking a bug that was benign under the Microsoft compiler's rules.
Does the CRTConvolver class contain any instances of the Buff class?
Updated to respond to code update:
The CRTConvolver class contains four instances of Buff, so I suspect that is where the problem lies. It could be a version mismatch -- the CRTConvolver class thinks that Buff is smaller than it really is. I suggest you recompile everything and get back to us.

Related

Netbeans 8.2 code assistance issue with unique_ptr (not shared_ptr)

Using Netbeans 8.2 on Linux with GCC 8.1, unique_ptr::operator->() gives an erroneous warning:
Unable to resolve template based identifier {var}
and, more importantly, will not autocomplete member names. The code compiles and runs just fine, and amazingly, the autocomplete still works with no warnings if a shared_ptr is used instead. I have no idea how that is possible. Here is a problematic example, for reference:
#include <iostream>
#include <memory>

struct C { int a; };

int main () {
    std::unique_ptr<C> foo (new C);
    std::shared_ptr<C> bar (new C);

    foo->a = 10; // Displays warning, does not auto-complete the members of C
    bar->a = 20; // No warning, auto-completes members of C

    return 0;
}
I've tried the following to resolve this issue with no luck:
Code Assistance > Reparse Project
Code Assistance > Clean C/C++ cache and restart IDE
Manually deleting cache in ~/.cache/netbeans/8.2
Looking through View > IDE Log for anything that may help
Setting both the C and C++ compilers to C11 and C++11 in project properties
Changing the pre-processing macro __cplusplus to both 201103L and 201402L
Creating a new Netbeans project and trying the above
A large variety of permutations of the above options in different orders
Again, everything compiles and runs just fine; it's just Code Assistance that is giving me an issue. I've run out of things that I've found in other Stack Overflow answers. My intuition tells me that shared_ptr working while unique_ptr doesn't is a helpful clue, but I don't know enough about C++ to make use of that information. Please help me, I need to get back to work...
Edit 1
This Stack Overflow question, although referencing Clang and the libc++ implementation, suggests that it may be an implementation issue within GCC 8.1's libstdc++ unique_ptr.
TLDR Method 1
Adding a using pointer = {structName}* alias declaration to your structure fixes code assistance, and the code compiles and runs as intended, like so:
struct C { using pointer = C*; int a;};
TLDR Method 2
The answer referenced in my edit does in fact work for libstdc++ as well. Simply changing the return type of unique_ptr::operator->() from pointer to element_type* fixes code assistance, and everything compiles and runs as expected (the same thing can be done to unique_ptr::get()). However, changing things in the implementation of the standard library worries me greatly.
More Information
As someone who is relatively new to C++ and barely understands the power of template specializations, reading through unique_ptr.h was scary, but here is what I think is tripping up Netbeans:
Calling unique_ptr::operator->() calls unique_ptr::get()
unique_ptr::get() calls the private implementation's (__unique_ptr_impl) pointer function __unique_ptr_impl::_M_ptr(). All these calls return the __unique_ptr_impl::pointer type.
Within the private implementation, the type pointer is defined within an even more private implementation, _Ptr. The struct _Ptr has two template definitions: one that yields a raw pointer to the first template parameter of unique_ptr, and a second that seems to strip any reference off that parameter and then look up its member type named pointer. I think this is where Netbeans messes up.
So my understanding is that when you call unique_ptr<elementType, deleterType>::operator->(), it goes to __unique_ptr_impl<elementType, deleterType>, where the internal pointer type is found by stripping elementType of any references and then getting the type named dereferenced(elementType)::pointer. So by including the using pointer = alias, Netbeans gets what it wants when it goes looking for the dereferenced(elementType)::pointer type.
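For what it's worth, here is a rough, hand-rolled sketch of that kind of member-type detection. This is not the actual libstdc++ source (and in the real unique_ptr the type that is inspected for a nested pointer is the deleter rather than the element type); it only shows the general idiom: fall back to T* unless the inspected type X declares a nested type named pointer.

#include <type_traits>

template <class...> struct make_void { typedef void type; };

// Primary template: no nested X::pointer, so fall back to T*.
template <class T, class X, class = void>
struct detected_pointer { typedef T* type; };

// Specialization chosen by SFINAE when X::pointer exists.
template <class T, class X>
struct detected_pointer<T, X, typename make_void<typename X::pointer>::type> {
    typedef typename X::pointer type;
};

struct Plain { };                          // no nested pointer type
struct Fancy { typedef int* pointer; };    // declares a nested pointer type

static_assert(std::is_same<detected_pointer<Plain, Plain>::type, Plain*>::value,
              "falls back to T*");
static_assert(std::is_same<detected_pointer<Fancy, Fancy>::type, int*>::value,
              "nested pointer typedef is picked up");

With the using pointer = C*; line from Method 1, C behaves like Fancy here, which would explain why the IDE's lookup suddenly succeeds.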
Including the using declaration is entirely superficial, as evidenced by the fact that things compile without it, and by the following example:
#include <memory>
#include <iostream>

struct A {
    using pointer = A*;
    double memA;
    A() : memA(1) {}
};

struct B {
    using pointer = A*;
    double memB;
    B() : memB(2) {}
};

int main() {
    std::unique_ptr<A> pa(new A);
    std::unique_ptr<B> pb(new B);
    std::cout << pa->memA << std::endl;
    std::cout << pb->memB << std::endl;
}
outputs
1
2
As it should, even though struct B contains using pointer = A*. Netbeans actually tries to autocomplete pb-> to pb->memA, which is further evidence that Netbeans is using the logic proposed above.
This solution is contrived as all hell, but at least it works (in the wildly specific context where I've used it) without making changes to the standard library implementation. Who knows; I'm still confused by the convoluted type machinery inside unique_ptr.

Internal Compiler Error on Array Value-Initialization in VC++14 (VS2015)

I'm getting an ICE (internal compiler error) on Visual Studio 2015 CTP 6. Unfortunately, this is happening in a large project; I can't post the whole code here, and I have been unable to reproduce the problem in a minimal sample. What I'm hoping for is help in constructing such a sample (to submit to Microsoft), or possibly some illumination regarding what's happening and/or what I'm doing wrong.
This is a mock-up of what I'm doing. (Note that the code I'm presenting here does NOT generate an ICE; I'm merely using this simple example to explain the situation.)
I have a class A which is not copyable (it has a couple of "reference" members) and doesn't have a default constructor. Another class, B, holds an array of As (a plain C array of A values, no references/pointers), and I'm initializing this array in the constructor of B using uniform initialization syntax. See the sample code below.
struct B;

struct A
{
    int & x;
    B * b;

    A (B * b_, int & x_) : x (x_), b (b_) {}
    A (A const &) = delete;
    A & operator = (A const &) = delete;
};

struct B
{
    A a [3];
    int foo;

    B ()
        : a {{this,foo},{this,foo},{nullptr,foo}} // <-- THE CULPRIT!
        , foo (2)
    { // <-- This is where the compiler says the error occurs
    }
};

int main ()
{
    B b;
    return 0;
}
I can't use std::array because I need to construct the elements in their final place (I can't copy them). I can't use std::vector because I need B to contain the As.
Note that if I don't use an array and instead use individual variables (e.g. A a0, a1, a2;, which I can do because the array is small and fixed in size), the ICE goes away. But this is not what I want, since I'd lose the ability to access the elements by index, which I need. I could use a union of the loose variables and the array to solve my ICE problem and keep indexing (construct using the variables, access using the array), but I think that would result in undefined behavior, and it seems convoluted.
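For what it's worth, one more possible workaround (only a sketch, not code from the project) is to replace the array member with raw aligned storage and construct the elements in place with placement new in the constructor body. That keeps index access, avoids copies, and sidesteps the array member initializer entirely, at the cost of a manual destructor and an accessor:

#include <new>
#include <type_traits>

struct B;

struct A
{
    int & x;
    B * b;
    A (B * b_, int & x_) : x (x_), b (b_) {}
    A (A const &) = delete;
    A & operator = (A const &) = delete;
};

struct B
{
    // Uninitialized, suitably aligned storage instead of A a[3].
    std::aligned_storage<sizeof(A), alignof(A)>::type storage [3];
    int foo;

    A & a (int i) { return *reinterpret_cast<A *>(&storage[i]); }

    B () : foo (2)
    {
        // Construct each element in its final place; no copies involved.
        new (&storage[0]) A (this, foo);
        new (&storage[1]) A (this, foo);
        new (&storage[2]) A (nullptr, foo);
    }

    ~B ()
    {
        for (int i = 2; i >= 0; --i)
            a(i).~A();
    }
};

int main ()
{
    B b;
    return b.a(0).x;   // indexed access still works
}

Whether that is nicer than the union trick or reordering the members is debatable, but it does not rely on the compiler accepting the array brace-initializer at all.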
The obvious differences between the first sample above and my actual code (aside from the scale) are that A and B are classes instead of structs, each is declared/defined in its own source/header file pair, and none of the constructors is inline. (I duplicated all of this and still couldn't reproduce the ICE.)
For my actual project, I've tried cleaning the built files and rebuild, to no avail. Any suggestions, etc.?
P.S. I'm not sure if my title is suitable. Any suggestions on that?!?!
UPDATE 1: This is the compiler file referenced in the C1001 fatal error message: (compiler file 'f:\dd\vctools\compiler\utc\src\p2\main.c', line 230).
UPDATE 2: I forgot to mention that the codebase compiles cleanly (and correctly) under GCC 4.9.2 in C++14 mode.
Also, I'm compiling with all optimizations disabled.
UPDATE 3: I have found that if I rearrange the member data in B and put the array at the very end, the code compiles. I've tried several other permutations; it sometimes compiles and sometimes doesn't. I can't see any pattern in which other members placed before the array make the compiler go full ICE (UDTs or primitives, with or without constructors, POD or not, reference, pointer, or value types, ...).
This means that I have sort of a solution for my problem. Although the internal class layout is important to me and to this application, I can tolerate the performance hit (due to cache misses from putting some hot data apart from the rest) in order to get past this.
However, I would still really like a minimal repro of the ICE to submit to Microsoft. I don't want to be stuck with this for the next two years (at least!)
UPDATE 4: I have tried VS2015 RC and the ICE is still there (although the error message refers to a different internal line of code, line 247 in the same "main.c" file.)
And I have opened a bug report on Microsoft Connect.
I did report this to Microsoft, and after sharing some of my project code with them, it seems that the problem has been tracked down and fixed. They said that the fix will be included in the final VC14 release.
Thanks for the comments and pointers.

Segmentation fault when declaring variable inside struct

I've recently come across a very strange segfault while developing my application. Basically, if I add another variable to one of my structs, a segfault occurs at runtime for no apparent reason. Removing this variable immediately solves the problem. The struct is as follows:
typedef struct Note {
    char cNote;
    unsigned int uiDuration;
    unsigned int uiVelocity;
};
As soon as I add a
long lStartTime;
variable anywhere in the struct, the code compiles as usual but throws a segmentation fault at runtime. GDB's backtrace gets lost somewhere in obscure WIN methods that I don't even use.
Any ideas?
Thanks!
I see several possible explanations:
Something somewhere assumes that the struct is of a certain size. Changing the size breaks things (see the sketch after this list).
You might have a memory bug of some sort, which is brought to the surface by you changing the layout of things in memory. Try a tool like valgrind or Purify.
You are changing the struct in a header file, but are failing to rebuild all source files that use the struct.
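To illustrate the first point, here is a contrived sketch (hypothetical code, not taken from the question) of the kind of hidden size assumption that silently breaks once a member is added:

#include <cstdlib>
#include <cstring>

struct Note {
    char cNote;
    unsigned int uiDuration;
    unsigned int uiVelocity;
    long lStartTime;            // the newly added member
};

int main() {
    // BUG: 12 bytes happened to be enough before lStartTime was added,
    // but the hard-coded size was never updated, so the allocation is
    // now too small and the memset writes past the end of the block.
    Note * n = static_cast<Note *>(std::malloc(12));
    std::memset(n, 0, sizeof(Note));   // heap corruption, crashes later
    std::free(n);
    return 0;
}

A tool like valgrind (explanation 2) flags exactly this kind of invalid write, which is another reason to run one.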

Code breaking when trying to set member variable in constructor

I have a class like this:
class Player
{
public:
    Player(Board * someBoard);
    void setSide(char newSide);

protected:
    Board * board;
    char side;
};
and its implementation is as such:
Player::Player(Board * someBoard)
{
    board = someBoard;
    side = '0';
}

void Player::setSide(char newSide)
{
    side = newSide;
}
Now I have another class inheriting from it:
class HumanPlayer : public Player
{
public:
    HumanPlayer(Board * someBoard);
};
And its short implementation is this:
HumanPlayer::HumanPlayer(Board * someBoard) : Player(someBoard)
{
}
Now the problem is that the side = '0'; line makes the program freeze (the window turns white in Windows 7; I'm not sure if that means it froze or crashed). Commenting it out makes the program run fine (and it's okay to comment it out, because the variable isn't used anywhere yet).
What is causing the error and how can I fix it?
EDIT!
After printing out some stuff to an fstream, all of a sudden the program worked. I tried commenting out the printing. It still worked. I tried deleting the debugging code I had added. It still worked. So now my code is exactly the same as listed above, but it magically works.
So what do I do now? Ignore the anomaly? Could it have been a compiler mistake?
I suspect the problem isn't what you think it is.
It sounds like a memory corruption issue of some sort, but it's really impossible to tell based on the information provided. I have two suggestions for you:
Either post the smallest complete program that demonstrates the problem, or
Try a valgrind-type tool to see if it helps you figure out what's going on.
Or, better yet, start by looking at the state of the program in the debugger (once it's hung.)
Is it possible that you are using two incompatible definitions of your Player class, defined in different header files? If the definitions have different sizes, then the 'side' member might lie outside the memory block allocated for the class instance.
Edit: I see that your problem has disappeared. So it was probably a module that didn't get re-compiled after a change in the class definition; now that you've re-compiled everything, the problem has fixed itself.

Remove never-run call to templated function, get allocation error at runtime

I have a piece of templated code that is never run, but is compiled. When I remove it, another part of my program breaks.
First off, I'm a bit at a loss as to how to ask this question. So I'm going to try throwing lots of information at the problem.
Ok, so, I went to completely redesign my test project for my experimental core library thingy. I use a lot of template shenanigans in the library. When I removed the "user" code, the tests gave me a memory allocation error. After quite a bit of experimenting, I narrowed it down to this bit of code (out of a couple hundred lines):
void VOODOO(components::switchBoard &board) {
    board.addComponent<using_allegro::keyInputs<'w'> >();
}
Fundamentally, what's weirding me out is that it appears the mere act of compiling this function (and the template function it then uses, and the template functions those then use...) makes this bug not appear. This code is not being run. Similar code (the same, but for different key values) occurs elsewhere, but it is inside Boost TDD code.
I realize I certainly haven't given enough information for you to solve this for me; I tried, but it more or less spirals into most of the code base. I think I'm mostly looking for "here's what the problem could be", "here's where to look", etc. Something is happening at compile time because of this line, but I don't know enough about that step to begin looking.
Sooo, how can a (presumably) compiled, but never actually run, bit of templated code, when removed, cause another part of the code to fail?
Error:
Unhandled exception at 0x6fe731ea (msvcr90d.dll) in Switchboard.exe:
0xC0000005: Access violation reading location 0xcdcdcdc1.
Callstack:
operator delete(void * pUserData)
allocator< class name related to key inputs callbacks >::deallocate
vector< same class >::_Insert_n(...)
vector< " " >::insert(...)
vector<" ">::push_back(...)
It looks like maybe the vector isn't valid, because _MyFirst and similar data members are showing values of 0xcdcdcdcd in the debugger. But the vector is a member variable...
Update: The vector isn't valid because it's never made. I'm getting a channel ID value stomp, which is making me treat one type of channel as another.
Update:
Searching through with the debugger again, it appears that my method for giving each "channel" its own unique ID isn't actually giving me a unique ID:
inline static const char channel<template args>::idFunction() {
    return reinterpret_cast<char>(&channel<CHANNEL_IDENTIFY>::idFunction);
};
Update 2: These two are giving the same value:
slaveChannel<switchboard, ALLEGRO_BITMAP*, entityInfo<ALLEGRO_BITMAP*> >
slaveChannel<key<c>, char, push<char> >
Sooo, having another compiled channel type change things makes sense, because it shifts around the values of the idFunctions. But why are there two idFunctions with the same value?
You seem to be returning the address of the function as a char? That looks weird. A char has far fewer bits than a pointer, so it's quite possible you get the same values. That could be the reason why changing the code layout fixes/breaks your program.
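A hedged sketch of one way to get genuinely distinct per-channel values (hypothetical code, not from the question): keep the full pointer width instead of truncating it to char. The address of a function-local static is different for every template instantiation, so it can serve as a cheap ID:

template <class Channel>
const void * channelId() {
    static const char tag = 0;   // one distinct object per instantiation
    return &tag;                 // full pointer width, no truncation to char
}

Comparing channelId<X>() results then compares whole addresses, so two different channel types can no longer collide the way the reinterpret_cast<char> version does.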
As a general answer (though aaa's comment alludes to this): When something like this affects whether a bug occurs, it's either because (a) you're wrong and it is being run, or (b) the way that the inclusion of that code happens to affect your code, data, and memory layout in the compiled program causes a heisenbug to change from visible to hidden.
The latter generally occurs when something involves undefined behavior. Sometimes a bogus pointer value will cause you to stomp on a bit of your code (which might or might not be important depending on the code layout), or sometimes a bogus write will stomp on a value in your data stack that might or might not be a pointer that's used later, or so forth.
As a simple example, suppose you have a stack that looks like this:
float data[10];
int never_used;
int *important_pointer;
And then you erroneously write
data[10] = 0;
Then, assuming that stack got allocated in linear order, you'll stomp on never_used, and the bug will be harmless. However, if you remove never_used (or change something so the compiler knows it can remove it for you -- maybe you remove a never-called function call that would use it), then it will stomp on important_pointer instead, and you'll now get a segfault when you dereference it.