I'm experimenting with building a simple application from a couple of .cu source files and a very simple C++ main that calls a function from one of the .cu files. I'm making a shared library (.so) from the compiled .cu files. Everything builds without trouble, but when I try to run the application, I get an undefined-symbol error naming the mangled .cu function that main() calls. If I build a static library instead, my application runs just fine. Here's the makefile I've set up:
.PHONY: clean
NVCCFLAGS = -std=c++11 --compiler-options '-fPIC'
CXXFLAGS = -std=c++11
HLIB = libhello.a
SHLIB = libhello.so
CUDA_OBJECTS = bridge.o add.o
all: driver
%.o :: %.cu
nvcc -o $@ $(NVCCFLAGS) -c -I. $<
%.o :: %.cpp
c++ $(CXXFLAGS) -o $@ -c -I. $<
$(HLIB): $(CUDA_OBJECTS)
ar rcs $@ $^
$(SHLIB): $(CUDA_OBJECTS)
nvcc $(NVCCFLAGS) --shared -o $@ $^
#driver : driver.o $(HLIB)
# c++ -std=c++11 -fPIC -o $@ driver.o -L. -lhello -L/usr/local/cuda-10.1/targets/x86_64-linux/lib -lcudart
driver : driver.o $(SHLIB)
c++ -std=c++11 -fPIC -o $@ driver.o -L. -lhello
clean:
-rm -f driver *.o *.so *.a
Here are the various source files that the makefile takes as fodder.
add.cu:
__global__ void add(int n, int* a, int* b, int* c) {
int index = threadIdx.x;
int stride = blockDim.x;
for (int ii = index; ii < n; ii += stride) {
c[ii] = a[ii] + b[ii];
}
}
add.h:
extern __global__ void add(int n, int* a, int* b, int* c);
bridge.cu:
#include <iostream>
#include "add.h"
void bridge() {
int N = 1 << 16;
int blockSize = 256;
int numBlocks = (N + blockSize - 1)/blockSize;
int* a;
int* b;
int* c;
cudaMallocManaged(&a, N*sizeof(int));
cudaMallocManaged(&b, N*sizeof(int));
cudaMallocManaged(&c, N*sizeof(int));
for (int ii = 0; ii < N; ii++) {
a[ii] = ii;
b[ii] = 2*ii;
}
add<<<numBlocks, blockSize>>>(N, a, b, c);
cudaDeviceSynchronize();
for (int ii = 0; ii < N; ii++) {
std::cout << a[ii] << " + " << b[ii] << " = " << c[ii] << std::endl;
}
cudaFree(a);
cudaFree(b);
cudaFree(c);
}
bridge.h:
extern void bridge();
driver.cpp:
#include "bridge.h"
int main() {
bridge();
return 0;
}
I'm very new to cuda, so I expect that's where I'm doing something wrong. I've played a bit with using extern "C" declarations, but that just seems to move the "undefined symbol" error from run time to build time.
I'm familiar with various ways that one can end up with an undefined symbol, and I've mentioned various experiments I've already performed (static linking, extern "C" declarations) that make me think that this problem isn't addressed by the proposed duplicate question.
My unresolved symbol is _Z6bridgev
It looks to me as though the linker should be able to resolve the symbol. If I run nm on driver.o, I see:
0000000000000000 T main
U _Z6bridgev
And if I run nm on libhello.so, I see:
0000000000006e56 T _Z6bridgev
When Robert Crovella was able to get my example to work on his machine, while I wasn't able to get his example to work on mine, I started realizing that my problem had nothing to do with cuda or nvcc. It was the fact that with a shared library, the loader has to resolve symbols at run time, and my shared library wasn't in a "well-known location". I built a simple test case just now, purely from C++ sources, and reproduced the failure. Once I copied libhello.so to /usr/local/lib, I was able to run driver successfully. So, I'm OK with closing my original question, if that's the will of the people.
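For anyone who lands here: instead of copying the library into /usr/local/lib, the usual alternatives are to point the loader at the build directory for a quick test (LD_LIBRARY_PATH=. ./driver) or to bake a run path into the executable. Here is a sketch of the latter against my makefile above; the $$ORIGIN form assumes libhello.so stays in the same directory as driver:
driver : driver.o $(SHLIB)
c++ -std=c++11 -o $@ driver.o -L. -lhello -Wl,-rpath,'$$ORIGIN'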
I am creating an R package 'lapacker' that provides a C interface to the internal LAPACK library shipped and used by R (double precision and double complex only), built on top of the R API header file "R_ext/Lapack.h". The source code:
https://github.com/ypan1988/lapacker/
And the project structure:
/lapacker
/inst
/include
/lapacke.h
/someother header files
/R
/zzz.R
/src
/lapacke_dgetrf.c
/lapacke_dgetrf_work.c
/loads of other utility functions provided by LAPACKE
/rcpp_hello.cpp
DESCRIPTION
NAMESPACE
Inside the project, I tried a test function in the rcpp_hello.cpp file (note this example comes from https://www.netlib.org/lapack/lapacke.html#_calling_code_dgels_code):
//' @export
// [[Rcpp::export]]
void example_lapacke_dgels()
{
double a[5][3] = {{1,1,1},{2,3,4},{3,5,2},{4,2,5},{5,4,3}};
double b[5][2] = {{-10,-3},{12,14},{14,12},{16,16},{18,16}};
lapack_int info,m,n,lda,ldb,nrhs;
int i,j;
m = 5;
n = 3;
nrhs = 2;
lda = 3;
ldb = 2;
info = LAPACKE_dgels(LAPACK_ROW_MAJOR,'N',m,n,nrhs,*a,lda,*b,ldb);
for(i=0;i<n;i++)
{
for(j=0;j<nrhs;j++)
{
printf("%lf ",b[i][j]);
}
printf("\n");
}
}
The whole package compiles properly with no errors, and in R it gives the right answer (indicating the symbol LAPACKE_dgels can be found):
> example_lapacke_dgels()
2.000000 1.000000
1.000000 1.000000
1.000000 2.000000
However, when I create a separate C++ file, say demo3.cpp, with exactly the same function,
#include <Rcpp.h>
#include <lapacke.h>
// [[Rcpp::depends(lapacker)]]
// [[Rcpp::export]]
void lapacke_dgels_test()
{
double a[5][3] = {{1,1,1},{2,3,4},{3,5,2},{4,2,5},{5,4,3}};
double b[5][2] = {{-10,-3},{12,14},{14,12},{16,16},{18,16}};
lapack_int info,m,n,lda,ldb,nrhs;
int i,j;
m = 5;
n = 3;
nrhs = 2;
lda = 3;
ldb = 2;
info = LAPACKE_dgels(LAPACK_ROW_MAJOR,'N',m,n,nrhs,*a,lda,*b,ldb);
for(i=0;i<n;i++)
{
for(j=0;j<nrhs;j++)
{
printf("%lf ",b[i][j]);
}
printf("\n");
}
}
it compiles, but it no longer loads properly (I tried under both macOS and Ubuntu, same linking problem): dyn.load reports that the symbol LAPACKE_dgels cannot be found:
> Rcpp::sourceCpp("~/Desktop/demo3.cpp", showOutput = TRUE)
/usr/lib/R/bin/R CMD SHLIB -o 'sourceCpp_6.so' 'demo3.cpp'
g++ -I/usr/share/R/include -DNDEBUG -I"/home/yipan/R/x86_64-pc-linux-gnu-library/3.4/Rcpp/include" -I"/home/yipan/R/x86_64-pc-linux-gnu-library/3.4/lapacker/include" -I"/home/yipan/Desktop" -fpic -g -O2 -fdebug-prefix-map=/build/r-base-AitvI6/r-base-3.4.4=. -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -g -c demo3.cpp -o demo3.o
g++ -shared -L/usr/lib/R/lib -Wl,-Bsymbolic-functions -Wl,-z,relro -o sourceCpp_6.so demo3.o -L/usr/lib/R/lib -lR
Error in dyn.load("/tmp/RtmpUsASwK/sourceCpp-x86_64-pc-linux-gnu-1.0.0/sourcecpp_159e6145591d/sourceCpp_6.so") :
unable to load shared object '/tmp/RtmpUsASwK/sourceCpp-x86_64-pc-linux-gnu-1.0.0/sourcecpp_159e6145591d/sourceCpp_6.so':
/tmp/RtmpUsASwK/sourceCpp-x86_64-pc-linux-gnu-1.0.0/sourcecpp_159e6145591d/sourceCpp_6.so: undefined symbol: LAPACKE_dgels
I have also checked lapacker.so under ~/R/x86_64-pc-linux-gnu-library/3.4/lapacker/libs and found:
000000000000c6b0 g DF .text 00000000000001bf Base LAPACKE_dgels
Am I missing something needed to get demo3.cpp to compile and load correctly? Thanks very much for your patience and time!
You are facing a difficult problem here. The symbol you are trying to resolve, LAPACKE_dgels, is part of lapacker.so, built during package installation. The problem is that the libraries of R packages are not meant for linking; instead, they are loaded dynamically by R at run time. Basically, I see four possibilities:
Convert lapacke into a header-only library and install that in inst/include (c.f. RcppArmadillo).
Link with a system installation of lapacke (easy on Linux ...)
Register all functions with R and use the methods provided by R to link to them (c.f. WRE and nloptr).
Compile a library meant for linking and install it with the R package. You will still need a plugin for that to work, since you have to add -L<path/to/lib> -l<libname> ... to PKG_LIBS (a sketch follows below).
I am sure there are examples on CRAN that use method 4, but none comes to mind right now. However, as a "code kata" I have converted a recent test package of mine to use this structure, c.f. https://github.com/rstub/levmaR/tree/static.
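To make option 4 concrete, here is a hypothetical sketch (the inst/lib location and library name are placeholders, not something lapacker ships today): the package would install a linkable library, say inst/lib/liblapacke.a, and client code would then need something like
PKG_LIBS = -L/path/to/lapacker/lib -llapacke
in its flags; an Rcpp plugin like the one below is the vehicle for injecting such flags when compiling via Rcpp attributes.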
(Original incomplete answer.)
In src/Makevars you have
PKG_LIBS = $(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS)
You need an analogous setting when compiling a cpp file via Rcpp attributes. The best way to achieve this is an Rcpp plugin, c.f. RcppArmadillo's solution (adjustments are untested!):
inlineCxxPlugin <- function(...) {
plugin <-
Rcpp::Rcpp.plugin.maker(
include.before = "#include <lapacke.h>",
libs = "$(LAPACK_LIBS) $(BLAS_LIBS) $(FLIBS)",
package = "lapacker"
)
settings <- plugin()
settings$env$PKG_CPPFLAGS <- "-I../inst/include"
settings
}
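With such a plugin exported by lapacker, the existing // [[Rcpp::depends(lapacker)]] attribute in demo3.cpp should be enough for Rcpp::sourceCpp to pick the settings up; no change to the C++ file itself should be needed (untested, like the adjustments above).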
BTW, why do you want to interface with LAPACK directly, when RcppArmadillo does so already?
I am compiling C++ code together with a Fortran subroutine. The C++ code looks like this:
#include "Calculate.h"
extern "C" double SolveEq_(double *Gvalue, double *GvalueU, double *GvalueV, double *Gnodex, double *Gnodey, double *GtimeInc, double *Glfs);
template <class T1, class T2>
void Calculate(vector<Element<T1, T2> > &elm, GParameter &GeqPm, GmeshInfo &Gmesh)
{
// Solving Equation using Fortran code
SolveEq_(&Gmesh.Gvalue[0], &Gmesh.GvalueU[0], &Gmesh.GvalueV[0], &Gmesh.Gnodex[0], &Gmesh.Gnodey[0], &GeqPm.GtimeInc, &GeqPm.Glfs);
return;
}
And the Fortran code is like:
!==========================================================================
Module Inputpar
Implicit None
Integer,parameter :: Numx = 200, Numy = 200
End Module
!======================================== PROGRAM =============================================
Subroutine SolveEq(Gvalue, GvalueU, GvalueV, Gnodex, Gnodey, Deltat, Lfs);
Use Inputpar
Implicit None
Real*8 Deltat, Lfs, Dt, Su
Real*8 Gvalue(1, (Numx+1)*(Numy+1)), GvalueU(1, (Numx+1)*(Numy+1)), GvalueV(1, (Numx+1)*(Numy+1))
Real*8 Gnodex(0:Numx), Gnodey(0:Numy)
Real*8 DX, DY
Real*8 X(-3:Numx+3), Y(-3:Numy+3)
Real*8 VelX(-3:Numx+3,-3:Numy+3), VelY(-3:Numx+3,-3:Numy+3)
Real*8 G(-3:Numx+3,-3:Numy+3)
Common /CommonData/ X, Y, DX, DY, VelX, VelY, G, Dt, Su
!============================= Data Transfer ============================
Dt = Deltat
Su = Lfs
X (0:Numx) = Gnodex(0:Numx)
Y (0:Numy) = Gnodey(0:Numy)
VelX(0:Numx,0:Numy) = transpose(reshape(GvalueU,(/Numy+1,Numx+1/)))
VelY(0:Numx,0:Numy) = transpose(reshape(GvalueV,(/Numy+1,Numx+1/)))
G (0:Numx,0:Numy) = transpose(reshape(Gvalue ,(/Numy+1,Numx+1/)))
!==========Some other lines neglected here=================
End
!======================================== END PROGRAM =========================================
First I compile the Fortran code with:
gfortran SolveEq.f90 -c -o SolveEq.o
and then compile the C++/Fortran code together using this makefile:
# Compiler
CC = g++
# Debug option
DEBUG = false
# Source directory of codes
SRC1 = /home
SRC2 = $(SRC1)/Resources
SRC3 = $(SRC1)/Resources/Classes
OPT=-fopenmp -O2
ifdef $(DEBUG)
PROG=test.out
else
PROG=i.out
endif
# Linker
#LNK=-I$(MPI)/include -L$(MPI)/lib -lmpich -lopa -lmpl -lpthread
OBJS = libtseutil.a Calculate.o SolveEq.o
OBJS_F=$(OBJS)
SUF_OPTS1=$(OBJS_F)
SUF_OPTS2=-I$(SRC2)/
SUF_OPTS3=-I$(SRC3)/
SUF_OPTS4=
# Details of compiling
$(PROG): $(OBJS_F)
$(CC) $(OPT) -o $@ $(SUF_OPTS1)
%.o: $(SRC1)/%.cpp
$(CC) $(OPT) -c $< $(SUF_OPTS2)
%.o: $(SRC2)/%.cpp
$(CC) $(OPT) -c $< $(SUF_OPTS3)
%.o: $(SRC3)/%.cpp
$(CC) $(OPT) -c $< $(SUF_OPTS4)
# Clean
.PHONY: clean
clean:
@rm -rf *.o *.oo *.log
However, linking fails with the following errors:
g++ -fopenmp -O2 -o libtseutil.a Calculate.o SolveEq.o
Calculate.o: In function `void Calculate<CE_Tri, SolElm2d>(std::vector<Element<CE_Tri, SolElm2d>, std::allocator<Element<CE_Tri, SolElm2d> > >&, GParameter&, GmeshInfo&)':
Calculate.cpp:(.text._Z10CalculateGI6CE_Tri8SolElm2dEvRSt6vectorI7ElementIT_T0_ESaIS6_EER10GParameterR9GmeshInfo[_Z10CalculateGI6CE_Tri8SolElm2dEvRSt6vectorI7ElementIT_T0_ESaIS6_EER10GParameterR9GmeshInfo]+0x3c): undefined reference to `SolveEq_'
SolveEq.o: In function `solveeq_':
SolveEq.f90:(.text+0x2b8e): undefined reference to `_gfortran_reshape_r8'
SolveEq.f90:(.text+0x2d2a): undefined reference to `_gfortran_reshape_r8'
SolveEq.f90:(.text+0x2ec6): undefined reference to `_gfortran_reshape_r8'
SolveEq.f90:(.text+0x31fa): undefined reference to `_gfortran_reshape_r8'
collect2: error: ld returned 1 exit status
Why does this happen?
I used a simple case to test the mixed compiling. The C++ code was:
#include <stdlib.h>
#include <stdio.h>
#include <iostream>
#include <vector>
using namespace std;
extern "C" double fortran_sum_(double *sum, double *su2m, double *vec, double* vec2, int *size);
int main(int argc, char ** argv)
{
int size;
double sum, sum2;
double aa,bb,cc,dd;
vector<double> vec;
vector<double> vec2;
size=2;
aa=1.0;
bb=2.0;
sum=0.0;
sum2=0.0;
vec.push_back(aa);
vec.push_back(bb);
vec2.push_back(aa*2.0);
vec2.push_back(bb*2.0);
fortran_sum_(&sum, &sum2, &vec[0], &vec2[0], &size);
cout << "Calling a Fortran function" << endl;
cout << "============================" << endl;
cout << "size = " << size << endl;
cout << "sum = " << sum << endl;
cout << "sum2 = " << sum2 << endl << endl;
}
And the Fortran code was:
Subroutine fortran_sum(gsum, gsum2, gvec, gvec2, gsize2)
integer gsize,gsize2
real*8 gvec(0:gsize2-1), gvec2(0:1)
real*8 gsum, gsum2
gsum = gvec(0)+gvec(1);
gsum2 = gvec2(0)+gvec2(1);
gsize = gsize*2;
end
Then used commands to compile :
gfortran fortran_sum.f90 -c -o fortran_sum.o
g++ fortran_sum.o call_fortran.cpp -o a.out
./a.out
It worked well:
Calling a Fortran function
============================
size = 2
sum = 3
sum2 = 6
The function _gfortran_reshape_r8 is part of the runtime library used by code that gfortran compiles. When you compile a single language, such libraries are linked automatically, because whichever driver you use for linking knows about them. When you link mixed code, you need to find, and explicitly put on the command line, the runtime libraries of the language that doesn't correspond to the driver you've chosen. Usually you link with the C++ driver, as you've done here, and add the Fortran compiler's libraries explicitly.
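For example, a sketch using the file names from the makefile above (library order matters with some linkers, so put -lgfortran after the objects that need it):
g++ -fopenmp -O2 -o i.out Calculate.o SolveEq.o libtseutil.a -lgfortran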
I am a little bit weak on Fortran. When I compiled your Fortran code and ran it through nm, it gave me the following. There is no symbol "SolveEq_"; there is only "solveeq_".
0000000000000020 r A.15.3480
0000000000000000 r A.3.3436
0000000000000010 r A.9.3463
U _gfortran_reshape_r8
00000000000fbe28 C commondata_
U free
U malloc
0000000000000000 T solveeq_
Edit:
It compiled for me when I used "solveeq_". Here is a simplified demo (main.cpp):
extern "C" double solveeq_(
double *Gvalue, double *GvalueU,
double *GvalueV, double *Gnodex,
double *Gnodey, double *GtimeInc, double *Glfs
);
template <typename T>
void Calculate(T *one, T *two, T *three,
T *four, T *five, T *six, T *seven) {
solveeq_(one,two,three,four,five,six,seven);
}
int main(int argc, char ** argv) {
double one,two,three,four,five,six,seven;
Calculate<double>(&one,&two,&three,&four,&five,&six,&seven);
}
I compiled it as follows (f.f90 holds the Fortran code):
gfortran -c f.f90
g++ f.o main.cpp -lgfortran
It seems that, since Fortran 2003, if you want to call a Fortran procedure from C/C++, you can use BIND to control the binding name (it may not help Fortran-to-Fortran calls without some additional effort):
Subroutine SolveEq(Gvalue, GvalueU, GvalueV, Gnodex, Gnodey, Deltat, Lfs) BIND(C,NAME="SolveMyEquation")
Real Gvalue, GvalueU, GvalueV, Gnodex, Gnodey, Deltat, Lfs
End
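On the C/C++ side the matching declaration would then be something like this sketch (mine, not from the thread); note that Fortran passes arguments by reference, and default Real maps to C float (with the Real*8 used in the actual code, the C type would be double):
extern "C" void SolveMyEquation(float *Gvalue, float *GvalueU, float *GvalueV, float *Gnodex, float *Gnodey, float *Deltat, float *Lfs);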
Thanks to everyone, especially @Brick and @blackpen.
The problem has been fixed.
1) Add -lgfortran to the link command so that the gfortran runtime library is pulled in; otherwise the linker shows: undefined reference to `_gfortran_reshape_r8'.
2) Declare the Fortran routine with its lower-case, underscore-suffixed name "solveeq_" on the C++ side (gfortran lower-cases symbol names); otherwise the linker shows: undefined reference to `SolveEq_'.
So finally my .cpp is changed to:
#include "Calculate.h"
extern "C" void solveeq_(double *Gvalue, double *GvalueU, double *GvalueV, double *Gnodex, double *Gnodey, double *GtimeInc, double *Glfs);
template <class T1, class T2>
void Calculate(vector<Element<T1, T2> > &elm, GParameter &GeqPm, GmeshInfo &Gmesh)
{
// Solving Equation using Fortran code
solveeq_(&Gmesh.Gvalue[0], &Gmesh.GvalueU[0], &Gmesh.GvalueV[0], &Gmesh.Gnodex[0], &Gmesh.Gnodey[0], &GeqPm.GtimeInc, &GeqPm.Glfs);
return;
}
The Fortran code (.f90) looks like this:
!==========================================================================
Module Inputpar
Implicit None
Integer,parameter :: Numx = 200, Numy = 200
End Module
!======================================== PROGRAM =============================================
Subroutine SolveEq(Gvalue, GvalueU, GvalueV, Gnodex, Gnodey, Deltat, Lfs);
Use Inputpar
Implicit None
Real*8 Deltat, Lfs, Dt, Su
Real*8 Gvalue(1, (Numx+1)*(Numy+1)), GvalueU(1, (Numx+1)*(Numy+1)), GvalueV(1, (Numx+1)*(Numy+1))
Real*8 Gnodex(0:Numx), Gnodey(0:Numy)
Real*8 DX, DY
Real*8 X(-3:Numx+3), Y(-3:Numy+3)
Real*8 VelX(-3:Numx+3,-3:Numy+3), VelY(-3:Numx+3,-3:Numy+3)
Real*8 G(-3:Numx+3,-3:Numy+3)
Common /CommonData/ X, Y, DX, DY, VelX, VelY, G, Dt, Su
!============================= Data Transfer ============================
Dt = Deltat
Su = Lfs
X (0:Numx) = Gnodex(0:Numx)
Y (0:Numy) = Gnodey(0:Numy)
VelX(0:Numx,0:Numy) = transpose(reshape(GvalueU,(/Numy+1,Numx+1/)))
VelY(0:Numx,0:Numy) = transpose(reshape(GvalueV,(/Numy+1,Numx+1/)))
G (0:Numx,0:Numy) = transpose(reshape(Gvalue ,(/Numy+1,Numx+1/)))
!==========Some other lines neglected here=================
End
!======================================== END PROGRAM =========================================
And the makefile is like:
# Compiler
CC = g++
# Debug option
DEBUG = false
# Source directory of codes
SRC1 = /home
SRC2 = $(SRC1)/Resources
SRC3 = $(SRC1)/Resources/Classes
OPT=-fopenmp -O2
ifdef $(DEBUG)
PROG=test.out
else
PROG=i.out
endif
# Linker
#LNK=-I$(MPI)/include -L$(MPI)/lib -lmpich -lopa -lmpl -lpthread
OBJS = libtseutil.a Calculate.o solveeq.o
OBJS_F=$(OBJS)
SUF_OPTS1=$(OBJS_F)
SUF_OPTS2=-I$(SRC2)/
SUF_OPTS3=-I$(SRC3)/
SUF_OPTS4=
# Details of compiling
$(PROG): $(OBJS_F)
$(CC) $(OPT) -o $@ $(SUF_OPTS1) -lgfortran
%.o: $(SRC1)/%.cpp
$(CC) $(OPT) -c $< $(SUF_OPTS2)
%.o: $(SRC2)/%.cpp
$(CC) $(OPT) -c $< $(SUF_OPTS3)
%.o: $(SRC3)/%.cpp
$(CC) $(OPT) -c $< $(SUF_OPTS4)
solveeq.o: $(SRC1)/solveeq.f90
gfortran -c $<
# Clean
.PHONY: clean
clean:
@rm -rf *.o *.oo *.log
I am trying to understand why using -O2 -march=native with GCC gives slower code than without them.
Note that I am using MinGW (GCC 4.7.1) under Windows 7.
Here is my code :
struct.hpp :
#ifndef STRUCT_HPP
#define STRUCT_HPP
#include <iostream>
class Figure
{
public:
Figure(char *pName);
virtual ~Figure();
char *GetName();
double GetArea_mm2(int factor);
private:
char name[64];
virtual double GetAreaEx_mm2() = 0;
};
class Disk : public Figure
{
public:
Disk(char *pName, double radius_mm);
~Disk();
private:
double radius_mm;
virtual double GetAreaEx_mm2();
};
class Square : public Figure
{
public:
Square(char *pName, double side_mm);
~Square();
private:
double side_mm;
virtual double GetAreaEx_mm2();
};
#endif
struct.cpp :
#include <cstdio>
#include "struct.hpp"
Figure::Figure(char *pName)
{
sprintf(name, pName);
}
Figure::~Figure()
{
}
char *Figure::GetName()
{
return name;
}
double Figure::GetArea_mm2(int factor)
{
return (double)factor*GetAreaEx_mm2();
}
Disk::Disk(char *pName, double radius_mm_) :
Figure(pName), radius_mm(radius_mm_)
{
}
Disk::~Disk()
{
}
double Disk::GetAreaEx_mm2()
{
return 3.1415926*radius_mm*radius_mm;
}
Square::Square(char *pName, double side_mm_) :
Figure(pName), side_mm(side_mm_)
{
}
Square::~Square()
{
}
double Square::GetAreaEx_mm2()
{
return side_mm*side_mm;
}
main.cpp
#include <iostream>
#include <cstdio>
#include "struct.hpp"
double Do(int n)
{
double sum_mm2 = 0.0;
const int figuresCount = 10000;
Figure **pFigures = new Figure*[figuresCount];
for (int i = 0; i < figuresCount; ++i)
{
if (i % 2)
pFigures[i] = new Disk((char *)"-Disque", i);
else
pFigures[i] = new Square((char *)"-Carré", i);
}
for (int a = 0; a < n; ++a)
{
for (int i = 0; i < figuresCount; ++i)
{
sum_mm2 += pFigures[i]->GetArea_mm2(i);
sum_mm2 += (double)(pFigures[i]->GetName()[0] - '-');
}
}
for (int i = 0; i < figuresCount; ++i)
delete pFigures[i];
delete[] pFigures;
return sum_mm2;
}
int main()
{
double a = 0;
StartChrono(); // home made lib, working fine
a = Do(10000);
double elapsedTime_ms = StopChrono();
std::cout << "Elapsed time : " << elapsedTime_ms << " ms" << std::endl;
return (int)a % 2; // To force the optimizer to keep the Do() call
}
I compile this code twice:
1: Without optimization
mingw32-g++.exe -Wall -fexceptions -std=c++11 -c main.cpp -o main.o
mingw32-g++.exe -Wall -fexceptions -std=c++11 -c struct.cpp -o struct.o
mingw32-g++.exe -o program.exe main.o struct.o -s
2: With -O2 optimization
mingw32-g++.exe -Wall -fexceptions -O2 -march=native -std=c++11 -c main.cpp -o main.o
mingw32-g++.exe -Wall -fexceptions -O2 -march=native -std=c++11 -c struct.cpp -o struct.o
mingw32-g++.exe -o program.exe main.o struct.o -s
1: Execution time:
1196 ms (1269 ms with Visual Studio 2013)
2: Execution time:
1569 ms (403 ms with Visual Studio 2013) !!!!!!!!!!!!!
Using -O3 instead of -O2 does not improve the results.
I was, and I still am, pretty convinced that GCC and Visual Studio are equivalent, so I don't understand this huge difference.
Plus, I don't understand why the optimized version is slower than the non-optimized version with GCC.
Am I missing something here?
(Note that I had the same problem with genuine GCC 4.8.2 on Ubuntu)
Thanks for your help
Since I can't see the assembly code, I'm going to speculate the following: the allocation loop can be optimized (by the compiler) by splitting on the if clause, producing the following:
for (int i = 0; i < 10000; i += 2)
{
    pFigures[i] = new Square(...);
}
for (int i = 1; i < 10000; i += 2)
{
    pFigures[i] = new Disk(...);
}
Considering that the end condition is a multiple of 4, it can be made even more "efficient":
for (int i = 0; i < 10000; i += 2*4)
{
    pFigures[i]   = ...
    pFigures[i+2] = ...
    pFigures[i+4] = ...
    pFigures[i+6] = ...
}
Memory-wise, this will make Disks be allocated 4 at a time and Squares 4 at a time. This means objects of the same type end up next to each other in memory.
Next, you are going to iterate the vector 10000 times in normal order (by normal I mean index after index). Think about where these shapes sit in memory: you will end up with roughly 4 times more cache misses (think of the boundary case where 4 disks and 4 squares sit on different pages; you would switch between the pages 8 times, whereas in the interleaved layout you would cross between them only once). This sort of optimization (if done by the compiler, and in your particular code) improves the time for allocation, but not the time for access (which in your example is the biggest load).
Test this by removing the i%2 and see what results you get.
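A sketch of that test, assuming the rest of Do() stays unchanged (one type only, so allocation order cannot interleave the two classes):
for (int i = 0; i < figuresCount; ++i)
    pFigures[i] = new Disk((char *)"-Disque", i); // all Disks: uniform layout in memory
If the -O2 slowdown disappears, the cache-locality speculation gains weight; if it stays, the cause lies elsewhere.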
Again this is pure speculation, and it assumes that the reason for lower performance was a loop optimization.
I suspect that you've got an issue unique to the combination of MinGW/GCC/glibc on Windows, because your code performs faster with optimizations on Linux, where GCC is altogether more 'at home'.
On a fairly pedestrian Linux VM using gcc 4.8.2:
$ g++ main.cpp struct.cpp
$ time a.out
real 0m2.981s
user 0m2.876s
sys 0m0.079s
$ g++ -O2 main.cpp struct.cpp
$ time a.out
real 0m1.629s
user 0m1.523s
sys 0m0.041s
...and if you really take the blinkers off the optimizer by deleting struct.cpp and moving the implementation all inline:
$ time a.out
real 0m0.550s
user 0m0.543s
sys 0m0.000s
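For reference, "moving the implementation all inline" means a rearrangement like this sketch of struct.hpp (my paraphrase, not the exact code used), so the definitions are visible from main.cpp's translation unit and GCC can devirtualize and inline the per-figure calls:
class Disk : public Figure
{
public:
    Disk(char *pName, double radius_mm_) : Figure(pName), radius_mm(radius_mm_) {}
private:
    double radius_mm;
    virtual double GetAreaEx_mm2() { return 3.1415926*radius_mm*radius_mm; }
};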