va_list args = 0;
I found the above code in my application, and it compiles properly with the following gcc version.
~ $ /usr/sfw/bin/gcc -v
Reading specs from /usr/sfw/lib/gcc/sparc-sun-solaris2.10/3.4.3/specs
Configured with: /sfw10/builds/build/sfw10-patch/usr/src/cmd/gcc/gcc-3.4.3/configure --prefix=/usr/sfw --with-as=/usr/ccs/bin/as --without-gnu-as --with-ld=/usr/ccs/bin/ld --without-gnu-ld --enable-languages=c,c++ --enable-shared
Thread model: posix
gcc version 3.4.3 (csl-sol210-3_4-branch+sol_rpath)
But when I compiled the same code on a new machine, it failed because va_list args is initialized with zero. I assumed va_list is a typedef of something, so I removed the zero initialization, and then it compiled fine on the new machine.
Yet both the old and the new machine have the same gcc version.
NEW MACHINE GCC VERSION:
Reading specs from /usr/sfw/lib/gcc/i386-pc-solaris2.10/3.4.3/specs
Configured with: /builds/sfw10-gate/usr/src/cmd/gcc/gcc-3.4.3/configure --prefix=/usr/sfw --with-as=/usr/sfw/bin/gas --with-gnu-as --with-ld=/usr/ccs/bin/ld --without-gnu-ld --enable-languages=c,c++ --enable-shared
Thread model: posix
gcc version 3.4.3 (csl-sol210-3_4-branch+sol_rpath)
But I noticed that the architectures of the two machines are different. Is that causing the issue?
Since stdarg is a standard library, why does it vary based on architecture?
va_list should never be initialized. It's standard in C and C++ that it is just left uninitialized until va_start() is called.
Your old code was broken. Just remove the =0 regardless of which platform you're on, and try again.
Since stdarg is a standard library, why does it vary based on architecture?
Yes, it's standard, but it can only be used in the officially supported ways, and initialisation using 0 is not one of those ways.
va_list is not special in that respect, there are plenty of types and functions that are standard, but have implementation variations in their handling of invalid uses. A trivial example is printf(0);, which may silently work and do nothing on some implementations, but crash badly at runtime on others.
Unfortunately there isn't any fool-proof checker for invalid programs that happen to be accepted on your particular platform, nor can there be.
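For reference, here is a minimal sketch of the only portable pattern (the function and its arguments are illustrative):

#include <stdarg.h>

// Sums 'count' int arguments; a minimal sketch of correct va_list usage.
int sum_ints(int count, ...)
{
    va_list args;             // declared, but NOT initialized
    va_start(args, count);    // the only valid way to initialize a va_list
    int total = 0;
    for (int i = 0; i < count; ++i)
        total += va_arg(args, int);
    va_end(args);             // must be paired with va_start
    return total;
}

Anything else, including = 0, relies on the implementation's private definition of va_list and is not portable.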
I'm just trying to understand a concept used by g++. Here is my very simple std::thread application:
#include <iostream>
#include <thread>

void func() {
    std::cout << "Running..." << std::endl;
}

int main()
{
    std::thread t(func);
    t.join();
    return 0;
}
I'm able to compile it on a macOS/Xcode 9.0 setup with the following command:
g++ main.cpp -std=c++11
But I'm unable to compile it on Linux with the same command; as far as I know, I have to pass the -pthread option too. Otherwise it gives me the following error:
gcc version 7.1.1 20170622 (Red Hat 7.1.1-3)
main.o: In function `std::thread::thread<void (&)()>(void (&)())':
/usr/include/c++/5/thread:137: undefined reference to `pthread_create'
I think this is illogical; I shouldn't even need to know that std::thread is implemented via pthread. Why do I have to pass the -pthread option and link against the pthread library? Isn't C++11 supposed to abstract platform-specific details away from me? Or do I have any alternative libraries to pthread that I can link against for my std::thread usage? Or should I report this as a bug?
Thanks.
According to GCC's concurrency page, it's necessary to provide additional options to the compiler based on the features being used. You can verify that your GCC's thread support relies on POSIX threads:
$ gcc -v 2>&1 | grep "Thread model"
Thread model: posix
See this bug report for a justification for the behavior:
The problem is that not all targets need -pthread, or some which do need it spell it differently, and not all platforms define _REENTRANT when the option is used, so there's no reliable way to do what you're asking for.
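In practice, the fix on Linux is simply to pass the flag yourself, e.g.:

g++ main.cpp -std=c++11 -pthread

Note that -pthread is preferred over a bare -lpthread, because it both links libpthread and defines whatever preprocessor macros (such as _REENTRANT) the platform requires.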
I am moving from pthread to std::thread in C++11 and have encountered the same phenomenon you describe, and I found this article -- it may be the proper answer: https://developers.redhat.com/articles/2021/12/17/why-glibc-234-removed-libpthread#
A quick conclusion: it depends on the version of glibc. glibc versions prior to 2.34 require the -lpthread flag even if the code does not use pthread explicitly.
To check the version of glibc we can use the ldd --version command. On my Ubuntu 20.04 it returns: ldd (Ubuntu GLIBC 2.31-0ubuntu9.9) 2.31, so I still have to add the -lpthread flag to use std::thread.
pthread is an industry-standard API layered over OS-specific threads, implemented using the OS-specific calls.
std::thread is a C++ abstraction that could be implemented using pthread or the OS's native threads. To make it work on as many OSes as possible quickly, a standard-library implementer can simply build it on POSIX threads and be done, since those should be available on all compliant OSes.
There are exceptions: some Windows-only standard libraries use native Windows threads instead.
When trying to compile Protobuf-2.6.1 on Solaris 10 SPARC 64, I get:
./google/protobuf/stubs/once.h: In function `void google::protobuf::GoogleOnceInit(google::protobuf::ProtobufOnceType*, void (*)())':
./google/protobuf/stubs/once.h:125: error: cannot convert `google::protobuf::ProtobufOnceType*' to `const volatile google::protobuf::internal::Atomic32*' for argument `1' to `google::protobuf::internal::Atomic32 google::protobuf::internal::Acquire_Load(const volatile google::protobuf::internal::Atomic32*)'
./google/protobuf/stubs/once.h: In function `void google::protobuf::GoogleOnceInit(google::protobuf::ProtobufOnceType*, void (*)(Arg*), Arg*)':
./google/protobuf/stubs/once.h:134: error: cannot convert `google::protobuf::ProtobufOnceType*' to `const volatile google::protobuf::internal::Atomic32*' for argument `1' to `google::protobuf::internal::Atomic32 google::protobuf::internal::Acquire_Load(const volatile google::protobuf::internal::Atomic32*)'
I followed the official README: ./configure and make.
The compiler version (GCC):
$ gcc -v
Reading specs from /usr/sfw/lib/gcc/sparc-sun-solaris2.10/3.4.3/specs
Configured with: /sfw10/builds/build/sfw10-patch/usr/src/cmd/gcc/gcc-3.4.3/configure --prefix=/usr/sfw --with-as=/usr/ccs/bin/as --without-gnu-as --with-ld=/usr/ccs/bin/ld --without-gnu-ld --enable-languages=c,c++ --enable-shared
Thread model: posix
gcc version 3.4.3 (csl-sol210-3_4-branch+sol_rpath)
I also read the question protobuf generated files does not compile on Solaris SPARC 64 and tried its suggestions, but they didn't work. That approach works on Protobuf-2.4.1, but the Protobuf-2.6.1 changelog says:
2014-10-20 version 2.6.1:
C++
* Added atomicops support for Solaris.
Is there any way to make GCC force the pointer conversion?
I solved the problem according to GitHub issue #789.
The main reason is mentioned in the 4th point of that issue: the predefined SOLARIS_64BIT_ENABLED macro takes no effect at all.
The problem can be solved simply by adding -m64 -DSOLARIS_64BIT_ENABLED to CXXFLAGS and CFLAGS, but it's better to make the whole set of modifications the issue suggests.
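As a sketch, assuming the usual autotools build from the README, the quick fix looks like:

./configure CXXFLAGS="-m64 -DSOLARIS_64BIT_ENABLED" CFLAGS="-m64 -DSOLARIS_64BIT_ENABLED"
make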
This line of code isn't compiling for me on GCC 4.2.2:
m_Pout->m_R[i][j] = MIN(MAX(unsigned short(m_Pin->m_R[i][j]), 0), ((1 << 15) - 1));
error: expected primary-expression before ‘unsigned’
However, if I add parentheses, as in (unsigned short), it works fine.
Can you please explain what type of cast (or allocation) is being done here?
Why isn't the parser/compiler able to understand this C++ code in GCC?
Can you suggest a "better" way to write this code, one that supports GCC 4.2.2 (no C++11, and cross-platform)?
(1) unsigned short(m_Pin->m_R[i][j]) is a declaration with initialisation of an anonymous temporary, and that cannot be part of an expression.
(2) (unsigned short)(m_Pin->m_R[i][j]) is a cast, and is an expression.
So (1) cannot be used as an argument for MAX, but (2) can be.
I think Bathsheba's answer is at least misleading. short(m_Pin->m_R[i][j]) is a cast, so why does the extra unsigned mess things up? It's because unsigned short is not a simple-type-specifier: the cast syntax T(E) works only if T is a single token, and unsigned short is two tokens.
Other types which are spelled with more than one token are char* and int const, and therefore these are also not valid casts: char*(0) and int const(0).
With static_cast<>, the < > are balanced, so the type can be named with a sequence of tokens, even static_cast<int const*const>(0).
You could use form (2) from Bathsheba's answer, but it is more idiomatic to use static_cast in C++:
static_cast<unsigned short>(m_Pin->m_R[i][j])
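Alternatively, since the functional-cast form only fails because unsigned short is two tokens, introducing a single-token name via a typedef also makes it legal; a sketch (ushort is an illustrative name):

typedef unsigned short ushort;  // a single-token name for the same type
m_Pout->m_R[i][j] = MIN(MAX(ushort(m_Pin->m_R[i][j]), 0), ((1 << 15) - 1));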
BTW, your error is not related to GCC. You'll get the same from Clang/LLVM or any standard-conforming (C++98 or C++11) C++ compiler.
But independently of that, you should use a much newer version of GCC. As of July 2015 the current version is GCC 5.1, and your GCC 4.2.2 is from 2007, which is ancient.
Using a more recent version of GCC is worthwhile because:
it enables you to switch to a more recent version of C++, e.g. C++11 (compile with -std=c++11 or -std=gnu++11)
recent GCC have improved their diagnostics. Compiling with -Wall -Wextra will help a lot.
recent GCC are optimizing better, and you'll get more performance from your code
recent GCC have a better and more standard conforming standard C++ library
recent GCC are better for debugging (with a recent GDB), and have sanitizer options (-fsanitize=address, -fsanitize=undefined, other -fsanitize=.... options) which help finding bugs
recent GCC are more standard conforming
recent GCC are customizable thru plugins, including MELT
GCC 4.2 is no longer supported by the FSF, and you'd need to pay big bucks to the few companies still supporting it.
You don't need root access to compile a GCC 5 compiler (or cross-compiler) from its source code. Read the installation procedures. You'll build a GCC tailored to your particular libc (and you might even use musl-libc if you wanted to ....), perhaps by compiling outside of the source tree after having configured with a command like
...your-path-to/gcc-5/configure --prefix=$HOME/soft/ --program-suffix=-mine
then make, then make install, then add $HOME/soft/bin/ to your PATH and use gcc-mine and g++-mine.
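Put together, the sequence sketched above looks like this (the configure path is as elided in the example; adjust for your setup):

mkdir build && cd build
...your-path-to/gcc-5/configure --prefix=$HOME/soft/ --program-suffix=-mine
make
make install
export PATH=$HOME/soft/bin:$PATH
g++-mine --version   # sanity check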
According to GCC 5 release changes page (https://gcc.gnu.org/gcc-5/changes.html):
A new implementation of std::string is enabled by default, using the small string optimization instead of copy-on-write reference counting
I decided to check it and wrote a simple program:
#include <cstdio>   // for printf
#include <string>

int main()
{
    std::string x{"blah"};
    std::string y = x;
    printf("0x%X\n", x.c_str());
    printf("0x%X\n", y.c_str());
    x[0] = 'c';
    printf("0x%X\n", x.c_str());
    printf("0x%X\n", y.c_str());
}
And the result is:
0x162FC38
0x162FC38
0x162FC68
0x162FC38
Notice that the x.c_str() pointer changes after x[0] = 'c'. This means the internal buffer is copied on write, so it seems that COW is still at work. Why?
I use g++ 5.1.0 on Ubuntu.
Some distributions intentionally deviate from the FSF GCC choice to default to the new ABI. Here's an explanation of why Fedora 22 deviates from upstream GCC like that. In short:
In a program, it's best not to mix the old and the new ABIs, but to pick one and stick with it. Things break if one part of the program assumes a different internal representation for a type than another part of the program.
Therefore, if any C++ library is used that uses the old C++ ABI, then the programs using that library should also use the old C++ ABI.
Therefore, if any C++ library is used that was built with GCC 4.9 or earlier, then the programs using that library should also use the old C++ ABI.
Fedora 22 still provides (or provided?) a lot of libraries built with GCC 4.9, because there wasn't enough time to rebuild them all with GCC 5.1 before the Fedora 22 release. To allow programs to use those libraries, the GCC default was switched to the old ABI.
As far as I can tell, GCC 5 isn't the default compiler in Ubuntu yet (but will soon be), so if it's provided as an extra install, those same arguments from Fedora also apply to Ubuntu.
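If you want to check which string implementation a given translation unit gets, libstdc++ in GCC 5 exposes this through the _GLIBCXX_USE_CXX11_ABI macro (1 selects the new SSO string, 0 the old COW string); a minimal sketch:

#include <cstdio>
#include <string>   // including any libstdc++ header defines the ABI macro

int main()
{
#if defined(_GLIBCXX_USE_CXX11_ABI) && _GLIBCXX_USE_CXX11_ABI
    std::puts("new ABI: SSO std::string");
#else
    std::puts("old ABI: COW std::string");
#endif
    return 0;
}

You can also force one ABI or the other per translation unit with -D_GLIBCXX_USE_CXX11_ABI=1 (or 0), but as the Fedora explanation above stresses, don't mix the two ABIs within one program.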
The following code crashes for me using GCC to build for ARM:
#include <vector>

using namespace std;

void foo(vector<bool>& bools) {
    bools.push_back(true);
}

int main(int argc, char** argv) {
    vector<bool> bools;
    bool b = false;
    bools.push_back(b);
}
My compiler is: arm_v5t_le-gcc (GCC) 3.4.3 (MontaVista 3.4.3-25.0.30.0501131 2005-07-23). The crash doesn't occur when building for debug, but occurs with optimizations set to -O2.
Yes, the foo function is necessary to reproduce the issue. This was very confusing at first, but I've discovered that the crash only happens when the push_back call isn't inlined. If GCC notices that the push_back method is called more than once, it won't inline it in each location. For example, I can also reproduce the crash by calling push_back twice inside of main. If you make foo static, then gcc can tell it is never called and will optimize it out, which causes push_back to be inlined into main, and the crash no longer occurs.
I've tried this on x86 with gcc 4.3.3, and it appears the issue is fixed for that version.
So, my questions are:
Has anyone else run into this? Perhaps there are some compiler flags I can pass in to prevent it.
Is this a bug with gcc's code generation, or is it a bug in the stl implementation (bits/stl_bvector.h)? (I plan on testing this out myself when I get the time)
If it is a problem with the compiler, is upgrading to 4.3.3 what fixes it, or is it switching to x86 from arm?
Incidentally, most other vector<bool> methods seem to work. And yes, I know that using vector<bool> isn't the best option in the world.
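For what it's worth, if the container itself is the suspect, a common workaround is a container of a real byte-sized type instead of the bit-packed vector<bool>; a sketch:

#include <vector>

int main()
{
    std::vector<char> flags;     // one byte per flag, no proxy references
    flags.push_back(1);          // true
    flags.push_back(0);          // false
    bool first = flags[0] != 0;  // read back as a bool
    return first ? 0 : 1;
}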
Can you build your own toolchain with gcc 3.4.6 and MontaVista's patches? 3.4.6 is the last release of the 3.x line.
I can append some instructions for how to build an ARM cross-compiler from GCC sources if you want. I have to do it all the time, since nobody does prebuilt toolchains for Mac OS X.
I'd be really surprised if this is broken for ARM in gcc 4.x. But the only way to test is if you or someone else can try this out on an ARM-targeting gcc 4.x.
Upgrading to GCC 4 is a safe bet. GCC 4 introduced the Tree SSA (Static Single Assignment) framework alongside the older RTL (Register Transfer Language) backend representation, a change that allowed a significant rewrite of the optimizers.