In an application I'm developing I have a template function like this:
template<class T>
void CIO::writeln(T item)
{
    stringstream ss;
    ss << item << '\r' << endl;
    write(ss.str());
}
This function is called from several places, with T = const char* and T = std::string. With CodeSourcery Lite 2008.03-41 (GCC 4.3.2) this compiled and linked fine with the -O3 compiler flag. However, since I changed to CodeSourcery Lite 2012.03-57 (GCC 4.6.3), compiling with -O3 is OK, but linking fails with an undefined reference to void CIO::writeln<std::string>(std::string). With -O2 or lower everything is OK and linking succeeds.
I had a deeper look into this and I discovered something strange in the assembly output: when compiling with -O2, I can find two specializations of the function: one for const char* (_ZN3CIO7writelnIPKcEEvT_) and one for std::string (_ZN3CIO7writelnISsEEvT_), but when compiling with -O3, the second specialization is missing, which explains the linking error.
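To illustrate, the emitted specializations can be checked with nm; a minimal sketch, assuming the object file is called cio.o (a hypothetical name; a cross toolchain would use its own nm):
nm -C cio.o | grep writeln
# with -O2, both CIO::writeln<char const*> and CIO::writeln<std::string> show up
# with -O3, the std::string specialization is absent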
Is this a compiler bug? Is this some weird optimization turned evil?
Thanks in advance!
Edit: this function is defined in a source file. Following Mike Seymour's comment, I moved it to the header and everything's fine now. I admit I should've realized this earlier. Nevertheless, it still frightens me that whether a language rule is enforced can depend on an optimization flag.
Unlike what the other answer says, this is probably not a compiler bug.
One of the optimisations that gets enabled by -O3 is function inlining. What I think is happening is:
Source file 1 is calling CIO::writeln without having its definition available. It is compiled to object file 1.
Source file 2 is calling CIO::writeln while having its definition available. It is compiled to object file 2.
Object file 1 will only be usable if object file 2 contains the definition of CIO::writeln. If the call in source file 2 gets inlined, object file 2 won't contain a definition for it. If the call does not get inlined, a definition will be available.
The solution given in the comments, moving the definition to a header file, is correct.
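A minimal sketch of that fix, assuming a header named cio.h and the write member from the question (the exact signatures are assumptions):
// cio.h -- the template definition lives in the header, so every translation
// unit that calls writeln can instantiate it for itself
#include <sstream>
#include <string>

class CIO
{
public:
    template<class T>
    void writeln(T item)
    {
        std::stringstream ss;
        ss << item << '\r' << std::endl;
        write(ss.str());
    }

    void write(const std::string& s); // non-template: its definition can stay in cio.cpp
};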
Related
Consider this simple code:
#include <stdio.h>

extern "C"
{
    void p4nenc256v32();
    void p4ndec256v32();
}

void bigFunctionTest()
{
    p4nenc256v32();
    p4ndec256v32();
}

int main()
{
    printf("hello\n");
}
The code size of those p4nenc256v32/p4ndec256v32 functions is significant, roughly 1.5MB: the binary, compiled with the latest VS2022 with optimizations enabled, is 1.5MB, and if I comment out the unused bigFunctionTest function the resulting binary shrinks by 1.4MB. Any ideas why this clearly unused function isn't eliminated by the compiler and/or linker in release builds? By default, VS2022 in release uses /Gy and /OPT:REF.
I also tried mingw64 (gcc 12.2) with -fdata-sections -ffunction-sections -Wl,--gc-sections and the results were much worse: when compiled with that dummy function, the exe grew by 5.2MB. It seems the MSVC and GCC toolchains agree that for some reason these functions cannot be removed.
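For reference, the GCC build looked roughly like this (library and file names are placeholders):
# put each function and data item in its own section
g++ -O2 -ffunction-sections -fdata-sections -c main.cpp -o main.o
# ask the linker to discard unreferenced sections
g++ -Wl,--gc-sections main.o libp4n.a -o test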
I created a working sample project that shows the issue: https://github.com/pps83/TestLinker.git (make sure to pull submodules as well) and filed an issue on the VS issue tracker: Linker doesn't eliminate correctly dead code. However, I think I might get a better explanation from SO users as to what the reason for the problem might be.
I am moving from the Intel compiler & VC to Apple clang 12.0.
In my code there are functions that are never called in a certain project (but are needed when included in other projects). Clang insists on compiling the uncalled functions and reports errors, where Intel and VC simply skipped compiling them.
These are errors that are tricky to fix for that certain project.
Is there a Clang flag that means "Don't compile if not called"?
EDIT: example:
template <class T> class A
{
public:
    void foo() { garbage }; // <--- syntax error
};

int main() {
    A<int> my_obj;
    //my_obj.foo(); // <--- when uncommented, will fail on all compilers
}
Compiler Explorer demo: Intel vs. Clang
The Intel and VC compilers are relaxed until the call to foo() enters the scene.
Clang has a mode in which it tries to behave as if it were MSVC. This was introduced as part of clang-cl, the driver for clang that accepts many of the same arguments as MSVC. You can find some information about it in the user manual and on the MSVC compatibility page.
Long story short, there is an option -fdelayed-template-parsing in clang that mimics MSVC's lenient handling of templates. As far as I'm aware, it isn't a 100% match; however, it is good enough.
If we add this flag to Artyer's example, the code compiles; see compiler-explorer.
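For illustration, the invocations would look roughly like this (the filename main.cpp is an assumption):
# plain clang driver: opt in to MSVC-style delayed template parsing
clang++ -fdelayed-template-parsing main.cpp
# clang-cl, the MSVC-compatible driver, has historically enabled it by default
clang-cl main.cpp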
From my experience of adding clang as a second compiler next to MSVC (both still on Windows using clang-cl, so I didn't have to deal with the complexity of multiple OSes and/or STLs), I recommend treating this option as a temporary measure to get things working. Take your time removing it afterwards, as doing so will help make your code more maintainable.
EDIT: If you want to know more about why the compilation error is the right thing to do, you can look up the term "two-phase lookup". You can find the announcement of its introduction in the MSVC compiler here: https://devblogs.microsoft.com/cppblog/two-phase-name-lookup-support-comes-to-msvc/
From what I can see online, the Intel compiler isn't doing two-phase lookup either, or at least isn't reporting the errors.
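To sketch what two-phase lookup means here (all names below are made up): non-dependent constructs are checked when the template is defined (phase 1), while constructs that depend on a template parameter are only checked at instantiation (phase 2).
template <class T> struct B
{
    void f() { undeclared_name(); } // non-dependent: rejected in phase 1, even if f() is never called
    void g() { T::member_fn(); }    // dependent on T: only checked in phase 2, at instantiation
};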
In 29.5 Atomic types of the C++ Standard November 2014 working draft it states:
There is a generic class template atomic. The type of the template argument T shall be trivially copyable (3.9). [ Note: Type arguments that are not also statically initializable may be difficult to use. —end note ]
So - as far as I can tell - this:
#include <atomic>

struct Message {
    unsigned long int a;
    unsigned long int b;
};

std::atomic<Message> sharedState;

int main() {
    Message tmp{1,2};
    sharedState.store(tmp);
    Message tmp2 = sharedState.load();
}
should be perfectly valid standard C++14 (and also C++11) code. However, if I don't link libatomic manually, the command
g++ -std=c++14 <filename>
gives - at least on Fedora 22 (gcc 5.1) - the following linking error:
/tmp/ccdiWWQi.o: In function `std::atomic<Message>::store(Message, std::memory_order)':
main.cpp:(.text._ZNSt6atomicI7MessageE5storeES0_St12memory_order[_ZNSt6atomicI7MessageE5storeES0_St12memory_order]+0x3f): undefined reference to `__atomic_store_16'
/tmp/ccdiWWQi.o: In function `std::atomic<Message>::load(std::memory_order) const':
main.cpp:(.text._ZNKSt6atomicI7MessageE4loadESt12memory_order[_ZNKSt6atomicI7MessageE4loadESt12memory_order]+0x1c): undefined reference to `__atomic_load_16'
collect2: error: ld returned 1 exit status
If I write
g++ -std=c++14 -latomic <filename>
everything is fine.
I know that the standard doesn't say anything about compiler flags or libraries that have to be included, but so far I thought that any standard conformant, single file code can be compiled via the first command.
So why doesn't that apply to my example code? Is there a rationale for why -latomic is still necessary, or is it just something the compiler maintainers haven't addressed yet?
Relevant reading on the GCC homepage on how and why GCC makes library calls in certain cases regarding <atomic> in the first place.
GCC and libstdc++ are only loosely coupled. libatomic is the domain of the library, not the compiler -- and you can use GCC with a different standard library (which might provide the necessary definitions for <atomic> in its main proper, or under a different name), so GCC cannot just assume -latomic.
Also:
GCC 4.7 does not include a library implementation as the API has not been firmly established.
The same page claims that GCC 4.8 shall provide such a library implementation, but plans are the first victims of war. I'd guess the reason for -latomic still being necessary can be found in that vicinity.
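As a side note, a quick way to see why this particular type needs library support; a minimal sketch, assuming a typical x86-64 target where a 16-byte object is not handled lock-free (note that even this check may itself need -latomic to link):
#include <atomic>
#include <cstdio>

struct Message {
    unsigned long int a;
    unsigned long int b;
};

int main() {
    std::atomic<Message> m(Message{1, 2});
    // 16-byte atomics typically fall back to libatomic's __atomic_*_16 calls,
    // so this usually prints 0 on x86-64 with GCC of that era
    std::printf("lock-free: %d\n", static_cast<int>(m.is_lock_free()));
}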
Besides...
...so far I thought that any standard conformant, single file code can be compiled via the first command.
...-lm has been around for quite some time if you're using math functions.
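For comparison, a minimal sketch of that long-standing convention on traditional Unix toolchains:
cc main.c        # may fail with: undefined reference to `sin' if main.c calls sin()
cc main.c -lm    # links against the math library explicitly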
I know that the standard doesn't say anything about compiler flags or libraries that have to be included
Right.
but so far I thought that any standard conformant, single file code can be compiled via the first command.
Well, no. As you just said, there is no particular reason to assume this. Consider also that GCC extensions are enabled by default.
That being said, it seems self-evident that the intention is to make -latomic a default part of the runtime when it's settled down a bit.
g++ is a wrapper for gcc which adds the correct C++ libraries. Clearly -latomic is missing from that list. Not a core compiler problem then, simply a minor bug in the wrapper.
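A minimal sketch of what the wrapper does (the -lstdc++ line is only a rough equivalent):
gcc main.cpp            # fails to link: std:: symbols are unresolved
g++ main.cpp            # links libstdc++ (and friends) automatically
gcc main.cpp -lstdc++   # roughly what the g++ driver adds for you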
I am having a strange issue with GCC (4.6.4, Ubuntu 12.04). I am using it to compile a huge project (hundreds of files and hundreds of thousands of lines of code), and I recently spotted something: after certain compiles (it seems to happen randomly), a specific piece of code gets compiled differently and erroneously, causing undefined behavior in my code:
class someDerivedClass : public someBaseClass
{
public:
    struct anotherDerived : public anotherBaseClass
    {
        void SomeMethod()
        {
            someMember->someSetter(2);
        }
    };
};
Where "someSetter" is defined as:
void someSetter(varType varName) { someOtherMember = varName; }
Normally, SomeMethod() gets compiled to:
00000000019fd910 mov 0x20(%rdi),%rax
00000000019fd914 movl $0x2,0x278c(%rax)
00000000019fd91e retq
But sometimes it gets (wrongfully) compiled to:
000000000196e4ee mov 0x20(%rdi),%rax
000000000196e4f2 movl $0x2,0x27d4(%rax)
000000000196e4fc retq
The setter seems to get inlined, probably because of the -O2 in the compile flags:
-std=c++11 -m64 -O2 -ggdb3 -pipe -Wliteral-suffix -fpermissive -fno-fast-math -fno-strength-reduce -fno-delete-null-pointer-checks -fno-strict-aliasing
but that's not the issue. The real issue is the offset of the member someOtherMember: 0x278c is correct (first case) but 0x27d4 is incorrect (second case), and this obviously ends up modifying a totally different member of the class. Why is this happening? What am I missing? (Also, I don't know what other relevant info I can post, so ask.) Please keep in mind that this happens when compiling the project again (either a full recompile or compiling only the modified files), without modifying the affected file (or the files with the classes it uses). For example, just adding a simple printf() somewhere in a totally unrelated file might trigger this behavior, or make it go away when it happens.
Should I simply blame this on the -O2? I can't reproduce it without the optimization flag because it happens totally at random.
I am using make -j8; this happens even after cleaning the build folder, but it doesn't necessarily happen only after doing that.
As stated in the comments, you probably have something that causes the class to be defined differently in the various .cpp files (an ODR violation), for example a #pragma pack or something like that before the inclusion of your .h; when the linker has to choose between the resulting copies, it may choose non-deterministically (since it assumes all the definitions are the same).
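A minimal sketch of the kind of mismatch described above (file names and the pragma placement are hypothetical):
// a.cpp
#pragma pack(push, 1)  // packing is in effect while the header is parsed
#include "someClass.h" // members are laid out packed in this translation unit
#pragma pack(pop)

// b.cpp
#include "someClass.h" // default alignment: same class, different member offsets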
To narrow your search for the root of the problem, I would do something like this:
1. compile the whole project with debug symbols (-g);
2. use gdb to determine the offset of the "problematic" field according to each module;
3. once you find where the values differ, use gcc -E to expand all the preprocessor stuff and look for your problem.
As an aid for step 2, you can use this bash one-liner (to be run in the directory where the object files are):
for i in ./*.o; do echo -n "$i: "; gdb -batch -q "$i" -ex "print &((YourClass*)0)->yourField"; done