Armadillo Matrix Dimensions Initialisation Incorrect - c++

Briefly, I try to initialise a matrix as follows:
struct MyClass {
    MyClass();
    arma::mat _mymat;
};

MyClass::MyClass() :
    _mymat(0, 0)
{
}
but in the VS2010 debugger, the properties are
{n_rows=0 n_cols=14829735428352901220 n_elem=7925840 ... }
Later I try to set the dimensions again to 3x3, but then the properties change to
{n_rows=3435973836 n_cols=3435973836 n_elem=3435973836 ... }
and when I use the _mymat member in a multiplication, the program throws an exception at runtime complaining that the matrix dimensions are not equal.
The platform is VS2010, 64-bit, with Armadillo 4.200.
I have also tried previous versions of Armadillo, to the same effect.
The error does not occur in a Win32 32-bit build.

I found the answer.
TL;DR: ARMA_64BIT_WORD was not defined for the source file I was using, but it was defined for other object files, thus creating an unstable mix of 32-bit and 64-bit word sizes in the Armadillo library.
The simple fix was to add ARMA_64BIT_WORD as a preprocessor macro in the configuration properties for the project.

Related

How to configure the nvcc compiler for using Eigen library?

I had a perfectly working project using Eigen 3.4.0. After installing CUDA 12.0, Visual Studio began reporting a compilation error inside the Eigen library itself. It happens in the file NumTraits.h, in this part:
struct default_digits10_impl<T,false,false> // Floating point
{
  EIGEN_DEVICE_FUNC EIGEN_CONSTEXPR
  static int run() {
    using std::log10;
    using std::ceil;
    typedef typename NumTraits<T>::Real Real;
    return int(ceil(-log10(NumTraits<Real>::epsilon())));
  }
};
The error is C2665: "log10": no overloaded function could convert all the argument types.
The library is external; I don't want to change its code, I want to understand how to avoid this error.
I tried pasting
#if (defined __GNUC__) && (__GNUC__>4 || __GNUC_MINOR__>=7)
#undef _GLIBCXX_ATOMIC_BUILTINS
#undef _GLIBCXX_USE_INT128
#endif
into the main file and defining EIGEN_DEFAULT_DENSE_INDEX_TYPE, as described in the notes on CUDA and Eigen compatibility. But it didn't help.
Here is an addition to the question:
the error occurs when initialising a variable of the type
using CubeXX3d = Eigen::Matrix<Eigen::Vector3d, -1, -1>;
The error occurred only because I streamed an instance of CubeXX3d for output:
std::cout << instance;
(the CubeXX3d definition is given above).
After I deleted this line, the error was gone.

Can I include a DLL generated by GCC in a MSVC project?

I have a library of code I'm working on upgrading from x86 to x64 for a Windows application.
Part of the code took advantage of MSVC inline assembly blocks. I'm not looking to go through and interpret the assembly but I am looking to keep functionality from this part of the application.
Can I compile the functions using the inline assembly using GCC to make a DLL and link that to the rest of the library?
EDIT 1 (7/7/21): The project is flexible about which compiler it uses, and I am currently looking into using Clang alongside MSVC (with the Intel C++ compiler as another possibility). As stated in the first sentence, it is a Windows application that I want to keep on Windows, and the reason for using another compiler is that 1) I don't want to rewrite the large amount of assembly, and 2) I know that MSVC does not support x64 inline assembly. So far Clang seems to be working, with a couple of issues around how it treats comments inside the assembly block and a few instructions. The function does mathematical operations on a block of data and was written to be as fast as possible when it was developed; now that it works as intended, I'm not looking to upgrade it, just to maintain its functionality. So any compiler that supports inline assembly is an option.
EDIT 2 (7/7/21): I forgot to mention in the first edit that I'm not keen on loading the 32-bit DLL into another process, because I'm worried about copying data into and out of shared memory. I've done a similar solution for another project, but this data set is around 8 MB, and I'm worried that slow copy times would cause the time constraint on the math to create issues at runtime (slow, laggy, and buffering are effects I'm trying to avoid). I'm not trying to make it any faster, but it definitely can't get any slower.
In theory, if you manage to create a plain C interface for that DLL (all exported symbols from the DLL are standard C functions) and don't use memory management functions across the "border" (no mixed memory management), then you should at least be able to dynamically load that DLL from another (MSVC) process and call its functions.
Not sure about statically linking against it... probably not, because the compiler and linker must go hand in hand (MSVC compiler + MSVC linker, or GCC compiler + GCC linker). The output of the GCC linker is probably not compatible with MSVC, at least regarding name mangling.
Here is how I would structure it (omitting small details):
Header.h (separate header to be included in both DLL and EXE)
//... remember to use your preferred calling convention, but be consistent about it
struct Interface {
    void (*func0)();
    void (*func1)(int);
    //...
};
typedef Interface* (*GetInterface)();
DLL (GCC)
#include "Header.h"
// functions implementing the specific functionality (not exported)
void f0(){/*...*/}
void f1(int){/*...*/}
//...
Interface* getInterface(){ // this must be exported from the DLL (compiler specific)
    static Interface interface;
    // initialise the function pointers in interface with the corresponding functions
    interface.func0 = &f0;
    interface.func1 = &f1;
    //...
    return &interface;
}
EXE (MSVC)
#include <windows.h>
#include "Header.h"
int main(){
    auto dll = LoadLibrary("DLL.dll");
    auto getDllInterface = (GetInterface)GetProcAddress(dll, "getInterface");
    auto* dllInterface = getDllInterface();
    dllInterface->func0();
    dllInterface->func1(123);
    //...
    return 0;
}

How does Visual Studio build the following code without any error?

I have to link the related video, but I will also add the code below.
How does this code build without any error?
#define INTEGER Cherno
INTEGER Multiply(int a, int b){
INTEGER result = a * b;
return result;
}
When he hits Ctrl+F7 and builds the code in Visual Studio, it builds without any error. What am I missing?
Thanks.
P.S.: I know this code won't (or at least should not) compile; I just wondered why it does for the video's author.
You are returning INTEGER, not result. Even in the video as linked, he is returning result.
After the edit:
In that video, he has a build system with a preprocessor step that adds additional includes to his code at build time. He goes over this a few seconds after the linked timestamp. Cherno must be defined somewhere in that included file.

Why do I get a Heap Corruption when resizing a vector from inside a dll?

I am writing a XLL (using XLW library) that calls a DLL function. This DLL function will get a vector reference, modify the vector and return it by argument.
I have a VS10 solution with several c++ projects, some DLLs and a XLL that will call DLL functions from excel. I compiled everything using VS10 compiler, with _HAS_ITERATOR_DEBUGGING=0 and _CRT_SECURE_NO_WARNINGS and used same runtime library (/MDd) for all projects.
I also had to rebuild the XLW library to comply with _HAS_ITERATOR_DEBUGGING=0 that I have to use in my projects.
When calling the xll_function I was getting Heap Corruption errors and couldn't figure out why.
After I resized my vector before calling the DLL function, the error was gone. That is, I can call the function, get the right vector returned by argument, and no heap corruption occurs.
Could someone shed some light on this?
As I am new to using DLLs I'm not sure if this should happen or if I am doing something wrong.
As you can see in the code below, the dll function will try to resize forwards and that is the point that I think is generating the heap errors.
I'm trying to understand why this happens and how this resizing and allocation works for dlls. Maybe I can't resize a vector allocated in another heap.
Code below: the first function is a static method in a class from a DLL project, and the second function is exported to the XLL.
void dll_function(double quote, const std::vector<double>& drift, const std::vector<double>& divs, std::vector<double>& forwards)
{
size_t size = drift.size();
forwards.resize(size);
for( size_t t = 0; t < size; t++)
{
forwards[t] = (quote - divs[t]) * drift[t];
}
}
MyArray xll_function(double quote, const MyArray& drift, const MyArray& divs)
{
// Resizing the vector before passing to function
std::vector<double> forwards(drift.size());
dll_function(quote, drift, divs, forwards);
return forwards;
}
To pass references to std::vector or other C++ collections across DLL boundaries, you need to do the following.
Use the same C++ compiler for both modules, and the same version of the compiler.
In the project settings, set the same value for General / Platform Toolset.
In the project settings, set C/C++ / Code Generation / Runtime Library to "Multi-threaded DLL (/MD)", or "Multi-threaded Debug DLL (/MDd)" for the debug configuration. If one of the projects has a dependency which requires a static CRT setting, sorry, you're out of luck; it won't work.
Use the same configuration on both sides: if you've built the debug version of the DLL, don't link it with the release version of the consuming EXE. Also don't change preprocessor defines like _ITERATOR_DEBUG_LEVEL or _SCL_SECURE_NO_WARNINGS, or if you did, change them to the same value for both projects.
The reason for these complications is that C++ doesn't have a standardized ABI. The memory layout of std::vector and other classes changes based on many things. Operators new and delete are also part of the C++ standard library, i.e. you can't allocate memory in one module and free it in a different one.
If you can’t satisfy these conditions, there’re several workarounds, here’s a nice summary: https://www.codeproject.com/Articles/28969/HowTo-Export-C-classes-from-a-DLL

Create a const Eigen (Eigen_Library) Matrix REVISITED

I have a problem defining some constant Eigen (eigen.tuxfamily.org) vectors in a header file, but it is apparently more of a compiler problem than an Eigen-specific one.
Defining this in a header file:
const double hardcodedData[] = {1,2,3};
const Vector3d myConstVector(hardcodedData);
works perfectly using Microsoft VC2010 via cython/distutils (which I use for testing).
Once the header file is included, I can access myConstVector from every function/method and use it for calculations.
Using the same code with
Microsoft (R) C/C++ Optimizing Compiler Version 17.00.50727.1 for x64,
which is called by ABAQUS 6.13-2, a finite-element software package,
every const vector is initialized with zeros! So far I have found no workaround except using something like this:
const Vector3d myConstVector()
{
const static Vector3d vec(hardcodedData);
return vec;
}
This workaround is OK, but not really what I intended to do, and it has some overhead.
Is there a clean solution to get the "hardcoded" option working? Thanks in advance!