I have a problem with defining some constant Eigen (eigen.tuxfamily.org) vectors in a header file, but it's obviously more a compiler problem than an Eigen-specific one.
Defining this in a header file:
const double hardcodedData[] = {1,2,3};
const Vector3d myConstVector(hardcodedData);
works perfectly using Microsoft VC2010 via cython/distutils (which I use for testing).
Once the header file is included, I can access myConstVector from every function/method/whatever and use it for calculations.
Using the same code with:
Microsoft (R) C/C++ Optimizing Compiler Version 17.00.50727.1 for x64
which is called by ABAQUS 6.13-2 , a finite element software,
every const vector is initialized with zeros! Until now, I have found no workaround except using something like this:
const Vector3d myConstVector()
{
    static const Vector3d vec(hardcodedData);
    return vec;
}
This workaround is OK, but not really what I intended to do. It also has some overhead.
Is there a clean solution to get the "hardcoded" option working? Thanks in advance!
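The function-based workaround can at least be made cheaper by returning a const reference to the function-local static, which guarantees construction on first use (sidestepping static-initialization-order problems across translation units) and avoids the per-call copy. A minimal sketch of that pattern, with `Eigen::Vector3d` replaced by a stand-in type so the snippet is self-contained:

```cpp
#include <cassert>

// Stand-in for Eigen::Vector3d so this sketch compiles without Eigen;
// with Eigen you would use Eigen::Vector3d and its (const double*) constructor.
struct Vec3 {
    double v[3];
    explicit Vec3(const double* p) : v{p[0], p[1], p[2]} {}
};

const double hardcodedData[] = {1, 2, 3};

// Meyers-singleton style accessor: the static local is constructed on the
// first call, and returning a const reference avoids copying the vector on
// every use, unlike the return-by-value workaround in the question.
inline const Vec3& myConstVector()
{
    static const Vec3 vec(hardcodedData);
    return vec;
}
```

Since the same object is returned every time, repeated calls are cheap; callers just write `myConstVector()` wherever the constant is needed.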
EDIT: The title was
How do eigen expressions and link-time-optimization interact
But this issue has nothing to do with LTO.
I just introduced and then fixed the following bug:
Eigen::Vector4d some_function(double arg) {
    Eigen::Matrix4d mat;
    mat << 2,2,3,4,5,6,7,8,9,10,11,2,13,14,15,16;
    Eigen::Vector4d spline_constraints(a(arg), b(arg), c, d);
    // BUG: auto deduces an expression-template type that holds a reference
    // to the temporary returned by mat.lu(), which dies at the end of this line.
    auto coefficients = mat.lu().solve(spline_constraints);
    return coefficients;
}
This is pseudocode. In actuality, both the function and the matrix are static members of a class, and a(double arg) and b(double arg) are also static member functions.
I hope it is sufficient. If not, I will try to cook up a minimal working example.
The behaviour is very compiler dependent:
GCC
When I compile it into a static library and link the library elsewhere, it SEEMS to work fine, regardless of debug or release build (i.e. with non-LTO optimizations).
If I compile and link it with link-time optimization, it compiles and links fine, but the runtime behaves incorrectly:
With GCC, a sparse matrix of size 1350x1350, whose construction depends on the code above, is claimed to be non-decomposable by the LU decomposition (without LTO it doesn't claim so, although I'm not sure the decomposition is correct in that case).
Clang
In release build with or without lto I get a segmentation fault.
The debug build works fine.
Fix
The code is fixed by one of the following:
Eigen::Vector4d some_function() {
    Eigen::Matrix4d mat;
    mat << 2,2,3,4,5,6,7,8,9,10,11,2,13,14,15,16;
    Eigen::Vector4d spline_constraints(a, b, c, d);
    auto coefficients = mat.lu().solve(spline_constraints).eval();
    return coefficients;
}
that is, by adding .eval(),
or
Eigen::Vector4d some_function() {
    Eigen::Matrix4d mat;
    mat << 2,2,3,4,5,6,7,8,9,10,11,2,13,14,15,16;
    Eigen::Vector4d spline_constraints(a, b, c, d);
    return mat.lu().solve(spline_constraints);
}
that is, by returning the expression itself.
My question is: What exactly happened here?
Related (would be nice to know, but not really part of the question): Is there a compiler warning or other feature that could prevent me from making this mistake in the future?
Also (likewise): Is there some part of the Eigen docs that would have warned me of this, had I found it before?
My impression from the docs was that failing to add .eval() somewhere could result in decreased performance, but not in undefined behaviour.
EDIT: I was wrong: the docs state that incorrect use of eval() can lead to segfaults:
Eigen docs on eval()
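What happened can be sketched without Eigen, using a minimal stand-in for an expression template (all names here are illustrative, not Eigen's API): `solve()` returns a lightweight expression object that merely references the decomposition it came from, so storing it with `auto` past the decomposition's lifetime leaves a dangling reference.

```cpp
#include <cassert>

// Illustrative stand-in (not Eigen's API): a decomposition object, and an
// expression that only *references* it instead of storing a result.
struct Decomposition {
    double factor;
    explicit Decomposition(double f) : factor(f) {}
};

struct SolveExpression {
    const Decomposition& decomp;  // reference into the decomposition object
    double rhs;
    double eval() const { return rhs / decomp.factor; }
};

// Safe: evaluation happens inside the full expression, while the temporary
// Decomposition is still alive, and a plain double is stored.
double solve_eager(double factor, double rhs)
{
    double result = SolveExpression{Decomposition(factor), rhs}.eval();
    return result;
}

// The bug pattern from the question, in these terms:
//   auto expr = SolveExpression{Decomposition(factor), rhs};
//   return expr.eval();  // UB: the temporary Decomposition is already gone
```

This mirrors the pitfall the Eigen docs warn about with `auto`: the deduced type is the expression, not the evaluated result, and the expression may reference temporaries that no longer exist.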
I'm getting an ICE on Visual Studio 2015 CTP 6. Unfortunately, this is happening in a large project, and I can't post the whole code here, and I have been unable to reproduce the problem on a minimal sample. What I'm hoping to get is help in constructing such a sample (to submit to Microsoft) or possibly illumination regarding what's happening and/or what I'm doing wrong.
This is a mock-up of what I'm doing. (Note that the code I'm presenting here does NOT generate an ICE; I'm merely using this simple example to explain the situation.)
I have a class A which is not copyable (it has a couple of "reference" members) and doesn't have a default constructor. Another class, B, holds an array of As (a plain C array of A values, no references/pointers), and I'm initializing this array in the constructor of B using uniform initialization syntax. See the sample code below.
struct B;

struct A
{
    int& x;
    B* b;
    A(B* b_, int& x_) : x(x_), b(b_) {}
    A(A const&) = delete;
    A& operator=(A const&) = delete;
};

struct B
{
    A a[3];
    int foo;
    B()
        : a {{this,foo},{this,foo},{nullptr,foo}} // <-- THE CULPRIT!
        , foo(2)
    { // <-- This is where the compiler says the error occurs
    }
};

int main()
{
    B b;
    return 0;
}
I can't use std::array because I need to construct the elements in their final place (can't copy.) I can't use std::vector because I need B to contain the As.
Note that if I don't use an array and use individual variables (e.g. A a0, a1, a2;, which I can do because the array is small and fixed in size) the ICE goes away. But this is not what I want, since I'd lose the ability to access them by index, which I need. I could use a union of the loose variables over the array to solve my ICE problem and keep indexing (construct using the variables, access using the array), but I think that would result in undefined behavior, and it seems convoluted.
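One way to keep indexed access over individually constructed, non-copyable elements without the union punning is placement new into raw storage plus an indexing accessor. This is a hypothetical sketch (not from the original post) of that layout, using renamed stand-ins for A and B:

```cpp
#include <cassert>
#include <cstddef>
#include <new>

struct B2;

// Same shape as the question's A: reference member, no copy, no default ctor.
struct A2 {
    int& x;
    B2* b;
    A2(B2* b_, int& x_) : x(x_), b(b_) {}
    A2(const A2&) = delete;
    A2& operator=(const A2&) = delete;
};

struct B2 {
    alignas(A2) unsigned char storage[3 * sizeof(A2)];  // raw space for 3 A2s
    int foo;

    B2() : foo(2)
    {
        // Construct each element in its final location; no copies involved.
        new (storage + 0 * sizeof(A2)) A2(this, foo);
        new (storage + 1 * sizeof(A2)) A2(this, foo);
        new (storage + 2 * sizeof(A2)) A2(nullptr, foo);
    }

    // Indexed access, replacing the plain-array subscript.
    A2& at(std::size_t i) { return *reinterpret_cast<A2*>(storage + i * sizeof(A2)); }

    ~B2()
    {
        for (std::size_t i = 0; i < 3; ++i) at(i).~A2();  // manual destruction
    }
};
```

It trades the convenient brace syntax for explicit construction/destruction (and strictly would want std::launder in C++17), but it avoids both the copy requirement and the aggregate-init path that triggers the ICE.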
The obvious differences between the above sample and my actual code (aside from the scale) are that A and B are classes instead of structs, each is declared/defined in its own source/header file pair, and none of the constructors is inline. (I duplicated these and still couldn't reproduce the ICE.)
For my actual project, I've tried cleaning the build files and rebuilding, to no avail. Any suggestions, etc.?
P.S. I'm not sure if my title is suitable. Any suggestions on that?
UPDATE 1: This is the compiler file referenced in the C1001 fatal error message: (compiler file 'f:\dd\vctools\compiler\utc\src\p2\main.c', line 230).
UPDATE 2: Since I had forgotten to mention, the codebase compiles cleanly (and correctly) under GCC 4.9.2 in C++14 mode.
Also, I'm compiling with all optimizations disabled.
UPDATE 3: I have found out that if I rearrange the member data in B and put the array at the very end, the code compiles. I've tried several other permutations, and it sometimes compiles and sometimes doesn't. I can't see any pattern in which other members coming before the array make the compiler go full ICE (whether they are UDTs or primitives, have constructors or not, are POD or not, are reference, pointer or value types, ...).
This means that I have sort of a solution for my problem. Although my internal class layout is important to me, for this application I can tolerate the performance hit (due to cache misses from putting some hot data apart from the rest) in order to get past this.
However, I would still really like a minimal repro of the ICE to submit to Microsoft. I don't want to be stuck with this for the next two years (at least!)
UPDATE 4: I have tried VS2015 RC and the ICE is still there (although the error message refers to a different internal line of code, line 247 in the same "main.c" file.)
And I have opened a bug report on Microsoft Connect.
I did report this to Microsoft, and after sharing some of my project code with them, it seems that the problem has been tracked down and fixed. They said that the fix will be included in the final VC14 release.
Thanks for the comments and pointers.
Briefly, I try to initialize a matrix as follows:
struct MyClass {
    MyClass();
    arma::mat _mymat;
};

MyClass::MyClass()
    : _mymat(0, 0)
{
}
but in the VS2010 debugger, the properties are
{n_rows=0 n_cols=14829735428352901220 n_elem=7925840 ... }
Later I try to set the dimensions again to 3x3, but then the properties change to
{n_rows=3435973836 n_cols=3435973836 n_elem=3435973836 ... }
and when I use the _mymat member in a multiplication, the program throws an exception at runtime complaining that the matrix dimensions are not equal.
The platform is VS2010, 64-bit, with Armadillo 4.200.
I have also tried this with previous versions of Armadillo to the same effect.
This error does not occur under Win32 32-bit.
I found the answer.
TL;DR: ARMA_64BIT_WORD was not defined for the source file I was using, but it was defined for other object files, thus creating an unstable mix of 32-bit and 64-bit word sizes in the Armadillo library.
The simple fix was to add ARMA_64BIT_WORD as a preprocessor macro in the configuration properties for the project.
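In source form, the fix is a configuration fragment rather than code to run: the word-size macro has to be seen consistently by every translation unit that includes Armadillo, which is why a project-wide preprocessor definition is the robust choice.

```cpp
// Either define the word size before *every* include of Armadillo...
#define ARMA_64BIT_WORD 1
#include <armadillo>  // now compiled with 64-bit words in this TU

// ...or, more robustly, add ARMA_64BIT_WORD to the project-wide preprocessor
// definitions (or enable it in Armadillo's config header) so that no object
// file can be built with a mismatched word size.
```

Mixing object files built with and without this define produces exactly the garbage dimensions seen in the debugger, since the two halves disagree about the size of Armadillo's internal word type.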
I am using Visual Studio 2012. When I try this:
std::unordered_set<std::shared_ptr<A>> myset;
I get this error:
error C2338: The C++ Standard doesn't provide a hash for this type.
According to the standard and this error report (https://connect.microsoft.com/VisualStudio/feedback/details/734888#tabs), this should compile, and Microsoft has implemented support in VC++11. So why doesn't this work?
EDIT: HOW do I make this work? I have tried the workaround on the linked page and it simply gives me an error saying that the hash function has already been defined. Sure enough, on line 1803 of the "memory" file in the VC directory there is this:
template<class _Ty>
struct hash<shared_ptr<_Ty> >
    : public unary_function<shared_ptr<_Ty>, size_t>
{   // hash functor
    typedef shared_ptr<_Ty> _Kty;
    typedef _Ty *_Ptrtype;

    size_t operator()(const _Kty& _Keyval) const
    {   // hash _Keyval to size_t value by pseudorandomizing transform
        return (hash<_Ptrtype>()(_Keyval.get()));
    }
};
I'm usually reluctant to blame a compiler because it's usually my fault, but this time it really seems like they messed up...
I guess your version of MSVC doesn't support it, but there's a workaround that seems usable in the "Workarounds" tab on the page you linked: basically, you implement std::hash yourself for this type (on this compiler).
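That workaround might look like the following sketch. The `Colony` definition and the compiler-version guard are assumptions for illustration; the guard keeps the hand-rolled specialization out of the way on compilers that already ship one, and hashing the raw pointer matches what the standard specialization does.

```cpp
#include <cstddef>
#include <functional>
#include <memory>
#include <unordered_set>

struct Colony { int id; };  // hypothetical element type, as in the question

// Workaround sketch: provide std::hash for shared_ptr<Colony> ourselves,
// but only on compilers assumed to lack the standard specialization.
#if defined(_MSC_VER) && _MSC_VER < 1700  // hypothetical version guard
namespace std {
    template <>
    struct hash<shared_ptr<Colony>> {
        size_t operator()(const shared_ptr<Colony>& p) const
        {
            return hash<Colony*>()(p.get());  // hash the raw pointer
        }
    };
}
#endif

// With a hash available (built in or hand-rolled), this declaration compiles:
// std::unordered_set<std::shared_ptr<Colony>> affected_colonies;
```

The set then behaves as expected: inserting a shared_ptr and looking up a copy of the same pointer finds the same element, because equal pointers hash equally.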
OK, here's what I had originally:
typedef std::shared_ptr<Colony> colony_sptr;
std::unordered_set<colony_sptr> affected_colonies; // ERROR
And here's what "fixed" it:
std::unordered_set<std::shared_ptr<Colony>> affected_colonies;
EDIT: This error disappeared after a VS restart, reappeared when I manually defined the type again and then switched back to the typedef, and has since disappeared again. I suspect this is either a very trippy bug or VS is mistaking errors in other parts of my code for some reason. I don't know enough / care enough to track this down. For now it seems to work. Thanks for the help.
I've inherited a C++ project that compiled fine in VS2005, but when I open it in VS2010 I get lots of IntelliSense errors like this:
IntelliSense: expression must have integral or enum type
Simply opening one of the .cpp files in the project seems to cause the errors to appear.
Here's an example of the type of line that causes the error.
if (pInfoset->Fields->Item["Contact"]->Size <= 0)
I recognize the code; that's ADO syntax. You are battling a non-standard language extension that made COM programming easier in the previous decade. It allows declaring properties on a C++ class using the __declspec(property) declarator. An example:
class Example {
public:
    int GetX(const char* indexer) { return 42; }
    void PutX(const char* indexer, int value) {}
    __declspec(property(get=GetX,put=PutX)) int x[];
};

int main()
{
    Example e;
    int value = e.x["foo"]; // Barf
    return 0;
}
The IntelliSense parser was completely overhauled in VS2010 and re-implemented by using the Edison Design Group front-end. It just isn't compatible enough with the language extension and trips over the index operator usage. For which they can be forgiven, I'd say.
You can complain about this at connect.microsoft.com but I wouldn't expect miracles. The problem is still present in VS2012. A workaround is to stop using the virtual property and use the getter function instead, get_Item("Contact") in your case.
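In terms of the Example class above, the workaround is just to call the accessor directly instead of going through the property. A portable sketch (the MSVC-only __declspec(property) line is dropped so the snippet compiles anywhere):

```cpp
// Mirrors the answer's Example class, minus the MSVC-only property
// declaration, to show the getter-based workaround in isolation.
class Example {
public:
    int GetX(const char* indexer) { return 42; }
    void PutX(const char* indexer, int value) {}
};

int workaround()
{
    Example e;
    // Instead of: int value = e.x["foo"];  // the property syntax IntelliSense trips on
    int value = e.GetX("foo");              // call the underlying getter explicitly
    return value;
}
```

For the ADO code in the question, the analogous change is replacing the `Item[...]` property access with the generated getter call (get_Item in this case).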
From something you said in the comments (about IntelliSense not finding a .tli file), the errors should go away once you build the solution. .tli (and .tlh) files are automatically generated by the #import directive, but obviously you need to compile the files that contain #import directives in order for those files to be generated (IntelliSense alone won't generate them).