C++ internal compiler error

For several weeks our compilation server has been crashing randomly while compiling our C++ code.
Sometimes the compilation fails with the following error:
/usr/include/c++/7/future:429:7: internal compiler error: Segmentation fault
The error is always raised from system libraries (though not always the same one) and at different steps of the compilation process.
We have tried increasing the RAM to 10 GB and the swap to 5 GB, but the issue has not been solved. We have also tried multiple versions of the compiler, without success.
We have a set of machines, but the issue is only reproducible on our compilation server. We have to fix it, because this server is part of our continuous integration chain.
The source code is about 10,000-20,000 lines of code (not really much), but we use some templates.
Does anyone know how to solve or investigate this error?
System information:
compiler = c++
compiler version = c++ (Ubuntu 7.2.0-1ubuntu1~16.04) 7.2.0
compilation tools = cmake and make
ubuntu-xenial
RAM = 10G
Swap = 5G
NbCPU = 4
Thank you very much for your help

So you've got intermittent errors in (presumably well tested) compiler internals from (presumably also well tested) system libraries, and the problems are reproducible on multiple compiler versions, but only on this single machine. That points towards a hardware issue.
Bad RAM seems like a good candidate. A C++ compiler processing a moderately sized code base is likely to crash from e.g. random bit flips at least some of the time.
You should test your RAM (or just swap it out and see if the failures go away).

Related

What are common causes in OpenCL programs for not finding GPUs?

So I've been using this OpenCL-based program (https://github.com/johguse/profanity) for a while now and wanted to build it from its source code. The resulting executable (source code unchanged) seems to stop execution when looking for devices.
user@room:~/profanity$ ./profanity.x64 -I 900 --zeros
Mode: zeros
Target: Address
Devices:
user@room:~/profanity$
I've tried this on 3 different PCs now, all of which had no problem running the original program, so my hardware shouldn't be the problem.
Since this depends a lot on the program itself, probably nobody will know the solution right away, but I'm getting kind of desperate with this situation, so I would like to know what some common causes of GPU-detection problems in OpenCL programs are.
Thanks in advance.

Declaration of std::vector<Eigen::MatrixXd> results in a crash

I had asked this question a while back, where the following declaration results in a run-time crash on a W7 machine.
std::vector<Eigen::MatrixXd> inv_K_mat2(42, Eigen::MatrixXd::Zero(4, 5));
However, now I am seeing this on a Win10 machine as well, and I am wondering if I can get some help figuring this out. Since Win10 is a supported platform, I need to address this issue somehow. Running through the debugger, I see the application crashing exactly at the above line. Other declarations, such as
Eigen::MatrixXd BMat(3,10);
are working. I can't determine why the above declaration fails on some machine architectures. Maybe it is due to mixing STL containers with Eigen, or to missing run-time libraries? Is there another way to specify the above declaration? Any help fixing this problem is very much appreciated.

Dev C++: How to fix "Unrecognisable insn, internal compiler error" (full error in description)

I was writing some simple C++ code using DevC++ when this error came up:
I have no clue as to why I am getting this upon initialising a vector array (a graph adjacency list).
I couldn't do much to solve this problem, since I am not an expert in C++ compilers. I tried reinstalling the program, but that didn't help at all.
My compiler is TDM-GCC, and in the compiler options I added "-std=c++11", which is passed when calling the compiler.
This line
std::vector<int> adj[NK];
defines an array of 100 million std::vector objects, along with a static initializer to create all of them.
Did you mean to create a single vector of size 100M?
std::vector<int> adj(NK);
Despite the fact that I had made a mistake in the code, it compiled successfully after a very simple fix. The compiler profile used to be the 64-bit Release version; I changed the field to 32-bit Release and the problem disappeared, despite the ridiculous amount of memory my program needed.
Please note that your mileage may vary and this solution might have side effects I am not aware of. However, it worked just fine for me, and all my other C++ files seem to compile without any errors.

Fortran77 program does not execute

Working Fortran compilers sometimes generate invalid Win32 .exe files
Hello everybody,
several working Fortran compilers seem to show strange behavior in certain situations. I have tried to compile and run Prof. John Denton's programs, which can be found here:
https://www.dropbox.com/sh/8i0jyxzjb57q4j4/AABD9GQ1MUFwUm5hMWFylucva?dl=0
The different versions of the programs Meangen and Stagen could be compiled and worked fine. The last program, named Multall, also has several different versions. As before, the corresponding source codes could be compiled without any problems. But as I tried to run the resulting .exe files, I got a very strange error message saying Multall's .exe was NOT a valid Win32 executable.
I used four different Fortran compilers (g77, Cygwin, MinGW, FTN95) on Windows XP and Windows 8, always with the same result. I made several tests, and it seems to me that the reason for the strange error message is the huge amount of source code Multall consists of. There are well over 16,000 lines of code, so maybe the memory allocated by default by the compiler for the code segment is too small and an overflow occurs.
I tried several command-line options of the g77 compiler in order to increase the code segment's amount of memory, but none worked. Can anybody tell me which of g77's command-line options make the huge program Multall's .exe work? Or maybe I am wrong, and the strange error message has nothing to do with the code segment? Who can help me?
Thanks a lot, I highly appreciate your help
Indeed, the problem is not the program size but the stack size. This is due to the large common blocks. As a test, you could reduce JD in commall-open-18.3 to 1000, and you will notice that the problem is solved.
You could also check whether the arrays are oversized and adjust some parameters accordingly.
I tried reducing common blocks - without any effect - then I tried on another computer and there the compilation went fine and the code runs - I am guessing it is some sort of screw-up of the libraries - maybe because I made a messy (first) installation where I didn't really know what I wass doing - but I really don't know.

Crashing binary when building for PPC

I am developing a program for a board which uses the PowerPC architecture. I just made some changes to the repository, refactored a bit, and moved and deleted some classes.
On my development machine (a Linux x64 VM) the binaries build fine and are executable. When I build with the cross-compile toolchain, it runs through smoothly without any errors or warnings. But on the target system I cannot get the program to run; it seems not even to reach the main entry point.
So my guess is that I have somehow created a linkage problem in the project. I just don't know how to untangle that beast.
So my questions: how can I get to the bottom of errors that occur before the main entry point is reached? How can I find any circular dependencies that may exist?
And just for "fun": why in God's name would it build and run on x86 but not on PPC?
Yes, I know this is little information to really help, but I am asking for directions, sort of, since I will have to deal with these problems sometimes anyway.
Why in God's name would it build and run on x86 but not on PPC?
There are a million possible reasons, from broken cross-toolchain, to bugs in libc, to incorrect toolchain invocation.
I am asking for directions
You should start by compiling this source:
int main() { return 0; }
and verifying that it works (this verifies basic toolchain sanity). If it does, you then extend it to print something. Then compile it with exactly the flags that you use in your real project.
If all of that checks out, you can run your real project under strace and/or GDB and see if you understand where the crash is happening. If you don't, edit your question with output from the tools, and someone may be able to guess better.
Update:
It seems the PPC toolchain or rather its compiler did not know how to handle static variables being declared after usage
If that were true, you would get a compilation error. You didn't, so this is false.
On the target, run "gdb crashing/app" and then look at the frames; somewhere there is a "__static_initialization_and_destruction_0" frame, which should even point you to the file and line where the not-yet-constructed static variable is used.
Your real problem is very likely this: the order of construction of global (or class-static) variables in different translation units (i.e. different source files) is undefined (and often differs between x86_64 and PPC). If you have two such globals in different files (say A and B), and B's constructor depends on A having been constructed already, then your program will work fine on platforms where A happens to be constructed before B, but will crash on platforms where B is constructed before A.