I was compiling C/C++ code that uses __transaction_atomic, but I got a compiler error:
[Error] '__transaction_atomic' without transactional memory support enabled
Code is:
#include <stdio.h>

int main()
{
    int i = 0;
    __transaction_atomic
    {
        i++;
    }
    return 0;
}
How do I fix this? My compiler is GCC 4.9.
You should compile the code with transactional memory support enabled.
From here
Compiling a TM program with GCC
To enable support for TM, the '-fgnu-tm' compiler flag has to be added to the compilation command line. Example:
gcc -Wall -fgnu-tm -O3 -o ll ll.c
Note that at optimization level 0 (-O0), some of the TM optimizations are disabled (RaR, RaW, RfW, WaR, WaW, optimized stack memory barriers).
I have a very simple C++ code (it was a large one, but I stripped it to the essentials), but it's failing to compile. I'm providing all the details below.
The code
#include <vector>

const int SIZE = 43691;
std::vector<int> v[SIZE];

int main() {
    return 0;
}
Compilation command: g++ -std=c++17 code.cpp -o code
Compilation error:
/var/folders/l5/mcv9tnkx66l65t30ypt260r00000gn/T//ccAtIuZq.s:449:29: error: unexpected token in '.section' directive
.section .data.rel.ro.local
^
GCC version: gcc version 12.1.0 (Homebrew GCC 12.1.0_1)
Operating system: MacOS Monterey, version 12.3, 64bit architecture (M1)
My findings and remarks:
The constant SIZE isn't random here. I tried many different values, and SIZE = 43691 is the first one that causes the compilation error.
My guess is that it is caused by stack overflow. So I tried to compile using the flag -Wl,-stack_size,0x20000000, and also tried ulimit -s 65520. But neither of them has any effect on the issue, the code still fails to compile once SIZE exceeds 43690.
I also calculated the amount of stack memory this code consumes when SIZE = 43690. AFAIK, each std::vector object occupies 24 B of stack memory on a 64-bit platform, so the total comes to 24 B * 43690 = 1048560 B. This number is very close to 2^20 = 1048576. In fact, SIZE = 43691 is the first value for which the consumed stack memory exceeds 2^20 B. Unless this is quite some coincidence, my stack memory is somehow limited to 2^20 B = 1 MB. If that really is the case, I still cannot find any way to increase it via compilation command arguments. All the solutions on the internet lead to the stack_size linker argument, but it doesn't seem to work on my machine. I'm wondering now if it's because of the M1 chip somehow.
I'm aware that I can change this code to use a vector of vectors so the memory comes from the heap, but I often have to deal with other people's code written in this style.
Let me know if I need to provide any more details. I've been stuck with this the whole day. Any help would be extremely appreciated. Thanks in advance!
I had the same issue, and after adding the -O2 flag to the compilation command, it started working. No idea why.
So, something like this:
g++-12 -O2 -std=c++17 code.cpp -o code
It does seem to be an M1 / M1 Pro issue. I tested your code on two separate M1 Pro machines with the same result as yours. One workaround I found is to use the x86_64 version of gcc under Rosetta, which doesn't have this problem.
Works on an M1 Max running Monterey 12.5.1 with Xcode 13.4.1, using the clang 13.1.6 compiler:
% cat code.cpp
#include <vector>

const int SIZE = 43691;
std::vector<int> v[SIZE];

int main() {
    return 0;
}
% cc -std=c++17 code.cpp -o code -lc++
% ./code
Also fails with gcc-12.2.0:
% g++-12 -std=c++17 code.cpp -o code
/var/tmp/ccEnyMCk.s:449:29: error: unexpected token in '.section' directive
.section .data.rel.ro.local
^
So it seems to be a gcc-on-M1 issue.
This is a gcc-12 bug on Darwin AArch64 targets. It should not emit sections such as .section .data.rel.ro.local: on macOS, section names must use Mach-O segment,section pairs starting with __, e.g. .section __DATA,...
See Mach-O reference.
When I compile the following piece of code with gcc, it works fine and shows the correct output as I expected, but when I move it to Windows with Visual C++, it reports errors at compile time.
#include <stdio.h>

int fun(int numAttrib)
{
    typedef struct {
        int attribList[numAttrib];
    } VADataFull;
    printf("size=%zu\n", sizeof(VADataFull));  /* sizeof yields size_t, so %zu */
    return 0;
}

int main(int i, char** args)
{
    fun(i);
    return 0;
}
Actually, I can understand why VC++ rejects it because, as we learned in school, we cannot size an array dynamically on the stack, but gcc compiles it fine, so I'm confused. Could anyone tell me more about this issue? Thanks.
GCC does not compile Standard C++ by default (for some crazy reason). It allows various non-standard extensions (like your variable length array). You have to set switches for standard version and pedantic mode to enforce it:
g++ -std=c++11 -pedantic-errors -o prog prog.cpp
It is a gcc extension to the language.
https://gcc.gnu.org/onlinedocs/gcc/Variable-Length.html
As can be seen there, gcc won't compile it either when put in standard C++ mode instead of the default gnu mode, which allows lots of non-standard extensions.
Your code is not Standard C++; that's why it compiled only in gcc.
g++ supports a C99 feature (variable-length arrays) that allows dynamically sized arrays.
I have read the gcc documentation on optimization options, but it does not include examples.
One tedious method is to use godbolt and try many options one by one to see what a specific optimization flag does.
I have written the following trivial code:
#include <cmath>

double calculate(double x)
{
    int y = x + sin(x);
    return exp(x) + exp(-x);
}

int main(int argc, char *argv[])
{
    return ceil(calculate(argc));
}
and I compiled it with
g++ -Q -v -O3 main.cpp
which prints all the enabled optimization flags, rather than only the flags actually used. I also need to know the optimization flags applied to a specific function, excluding the optimizations used for the libraries.
How can I get the list of optimization flags used when compiling the calculate function?
Modern versions of GCC have the -fverbose-asm option, which dumps the enabled optimisation options as comments in the assembly file; you can get that file by compiling with -S or -save-temps.
Given the following complete program:
#include <functional>

struct jobbie
{
    std::function<void()> a;
};

void do_jobbie(jobbie j = {})
{
    if (j.a)
        j.a();
}

int main()
{
    do_jobbie();
    return 0;
}
compiling this on gcc (Ubuntu 4.8.1-2ubuntu1~12.04) 4.8.1:
boom!
richard@DEV1:~/tmp$ g++ -std=c++11 bug.cpp
bug.cpp: In function ‘int main()’:
bug.cpp:16:13: internal compiler error: in create_tmp_var, at gimplify.c:479
do_jobbie();
^
Please submit a full bug report,
with preprocessed source if appropriate.
See <file:///usr/share/doc/gcc-4.8/README.Bugs> for instructions.
Preprocessed source stored into /tmp/ccWGpb7M.out file, please attach this to your bugreport.
However, clang [Apple LLVM version 5.1 (clang-503.0.40) (based on LLVM 3.4svn)] is happy with it.
$ clang++ -std=c++11 bug.cpp
It seems to me that clang is correctly deducing that j defaults to a default-constructed jobbie object whereas gcc is (obviously) blowing up.
replacing line 8 with void do_jobbie(jobbie j = jobbie {}) fixes the problem on gcc.
Question - which of these is true:
clang is correct, gcc is faulty (ignoring the compiler blow-up)
clang is over-reaching the standard and it should not really compile
the standard does not make it clear?
An internal compiler error is always a compiler bug. Compilers should be able to process anything without crashing like that.
clang has similar handling for when it crashes, producing data for reporting the bug and pointing the user to clang's bug reporting web page.
I don't see anything tricky about this code. It seems straightforward to me that it should compile and run.
This indicates that it should work:
The default argument has the same semantic constraints as the initializer in a declaration of a variable of the parameter type, using the copy-initialization semantics.
(8.3.6, wording from draft n3936)
Surprisingly, I found that gcc can detect this error when compiling C. I simplified the code to the minimum that still triggers the warning. I'm posting this question to understand the details of the techniques it uses. Below is my code in file a.c:
int main(){
    int a[1] = {0};
    return a[1];
}
My gcc version is gcc (Ubuntu/Linaro 4.7.3-1ubuntu1) 4.7.3. When using gcc a.c -Wall, there is no warning; when using gcc -O1 a.c -Wall, there is a warning:
warning: ‘a[1]’ is used uninitialized in this function [-Wuninitialized]
and when using gcc -O2 a.c -Wall (or -O3), there is another warning:
warning: array subscript is above array bounds [-Warray-bounds]
The most surprising thing is that when I give a[1] a value first, none of the above compilation options gives any warning. There is no warning even when I change the index to a huge number (of course the compiled program then violates memory protection and gets killed by the operating system):
int main(){
    int a[1] = {0};
    a[2147483648] = 0;
    return a[2147483648];
}
I think the above phenomenon is more of a feature than a bug. I hope someone can help me figure out what happens, and/or why the compiler is designed this way. Many thanks!
Accessing memory past the end of the array results in undefined behaviour.
gcc is nice enough to go out of its way to detect, and warn you about, some of these errors. However, it is under no obligation to do so, and certainly cannot be expected to catch all such errors.
The compiler is not required to provide a diagnostic for this kind of error, but gcc is often able to help. Notice that these warnings largely arise as a byproduct of the static analysis passes done for optimization purposes, which means that, as you noticed, such warnings often depend on the specified optimization level.
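Because these compile-time diagnostics are best-effort, a runtime checker is more dependable for catching out-of-bounds accesses. For instance, assuming a gcc (or clang) build with AddressSanitizer support:

```shell
# AddressSanitizer instruments the stack with redzones, so the
# out-of-bounds read of a[1] is reported at run time (as a
# stack-buffer-overflow) regardless of optimization level.
gcc -g -fsanitize=address a.c -o a
./a   # aborts with a diagnostic instead of silently returning garbage
```

This catches both of the examples above, including the cases the compiler stayed silent about.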