There are a lot of changes in C++11 and later. I just came across this piece of code; I thought I had created an empty array that defaults to zero, and then assigned 99 to the first element of the array. But it prints 42. I am really confused.
int a1 []{};
a1[0] = 99;
cout << "a1 is " << a1[0];
Console:
a1 is 42
This is not a standard C++ program. Zero-size arrays are not allowed in C or C++. You should use the -pedantic-errors command-line option if you are using the g++ or clang++ compilers to conform strictly to the standard and disable any compiler extensions.
See live demo here. Clang++ says
source_file.cpp:7:14: error: zero size arrays are an extension [-Werror,-Wzero-length-array]
int a1 []{};
^
1 error generated.
If you compile that code with g++ -std=c++11 -pedantic -W -Wall, you will get an error:
test.cpp:6:12: error: zero-size array ‘a1’
int a1 []{};
This code is invalid.
As an extension, some compilers offer (in a less compliant mode) zero-sized arrays. In that case, you simply read/wrote the memory adjacent to the empty array, which just so happened not to crash...
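For comparison, a minimal sketch of what the question seems to be aiming for, with an explicit non-zero size so the write to a1[0] is actually in bounds:

#include <iostream>
using std::cout;

int main()
{
    int a1[1]{};                  // one element, value-initialized to 0
    a1[0] = 99;                   // index 0 now exists
    cout << "a1 is " << a1[0];    // prints: a1 is 99
}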
After updating to GCC 12.1, I got an array subscript ‘__m256d_u[0]’ is partly outside array bounds error (or rather a warning, promoted to an error by -Werror) in my project, so I tried isolating the problem.
Here's an MWE, which I also put on godbolt (vector type is __m512d_u instead, but otherwise it's the same error):
#include <Eigen/Dense>
#include <iostream>

using Eigen::Array;

Array<double, 3, 2> foo()
{
    Array<double, 2, 2> a;
    a.setRandom();
    Array<double, 3, 2> b;
    b.col(0).tail(2) = a.col(1);
    // b.col(0).template tail<2>() = a.col(1);
    return b;
}

int main()
{
    std::cout << foo() << '\n';
    return 0;
}
Relevant compile options are -Wall -Wextra -Werror -O3 -march=native, and the error message notes note: at offset [16, 24] into object ‘a’ of size 32.
The error does not occur under the following circumstances:
on GCC 11.3 or older,
when removing -march=native,
when using -O1 or below, or
when replacing the line b.col(0).tail(2) = a.col(1); with b.col(0).template tail<2>() = a.col(1);
So it looks like GCC sees the 3x2 array and the 2x2 array, but doesn't realise that only two entries of each are accessed.
My question now is: Who should this be reported to? GCC, Eigen? Or is it a user bug?
Bonus points for telling me what the 24 in the error note (offset [16, 24]) is. The 16 is the start; is the 24 the read size?
EDIT: Example can be further simplified by using Array3d and Array2d, see here.
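For reference, a sketch of what that simplified example presumably looks like (an assumption on my part; the linked code is not reproduced here, it just swaps in the fixed-size Array3d/Array2d typedefs):

#include <Eigen/Dense>
#include <iostream>

Eigen::Array3d foo()
{
    Eigen::Array2d a;
    a.setRandom();
    Eigen::Array3d b;
    b.tail(2) = a;         // dynamic-size tail: presumably still triggers the warning
    // b.tail<2>() = a;    // fixed-size tail: presumably compiles cleanly
    return b;
}

int main()
{
    std::cout << foo() << '\n';
    return 0;
}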
int main()
{
    int a = 10;
    int b;
    a = b;
}
Valgrind does not warn me that b is uninitialized.
Your compiler should warn you about the uninitialized variable. If it doesn't, maybe the warnings are turned off?
This is the gcc (9.3.0) output (with the -Wall -Wextra options):
prog.cc: In function 'int main()':
prog.cc:3:9: warning: variable 'a' set but not used [-Wunused-but-set-variable]
3 | int a = 10;
| ^
prog.cc:5:7: warning: 'b' is used uninitialized in this function [-Wuninitialized]
5 | a = b;
| ~~^~~
and this is the clang (10.0.0) output:
prog.cc:5:9: warning: variable 'b' is uninitialized when used here [-Wuninitialized]
a = b;
^
prog.cc:4:10: note: initialize the variable 'b' to silence this warning
int b;
^
Compile it with the -Wall flag:
gcc a.c -Wall -o a
Valgrind will only output errors if there is some potential impact on the behaviour of your application. In this case it does not matter that b is uninitialized.
Valgrind is, however, tracking the state of the memory.
If you run
valgrind --vgdb-error=0 ./test_exe
then open another terminal and follow the instructions that Valgrind printed in the first terminal; you can then run commands like
mo xb {address of b} 4
See here for details.
Within the language, there is no way to check for indeterminate values.
In simple cases such as this, compilers can detect it and you can ask to be warned about them. See the manual of your compiler for available warning options.
Compilers also provide "sanitisers" which check for bugs at runtime and are not limited by the complexity of the program as much as the compiletime warnings are. For reads of indeterminate values, a memory sanitiser would be the choice. They don't catch everything though, and the ones I tested did not catch the bug in your program. They could detect it if the indeterminate value was used to control the flow of the program:
int a = 10;
int b;
if (b) // detected by memory sanitiser
    b = a;
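For reference, a run with a memory sanitiser would look roughly like this (a sketch assuming clang, whose MemorySanitizer is enabled with -fsanitize=memory; exact flags can vary by platform):

clang++ -g -O1 -fsanitize=memory test.cpp -o test_exe
./test_exe   # should report a use of an uninitialized value at the "if (b)" branch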
Visual Studio Community (free) warns:
error C4700: uninitialized local variable 'b' used
#include <string.h>

void test(char charArray [100])
{
    strncpy(charArray, "some text", sizeof(charArray));
}

int main()
{
    char charArray [100];
    test(charArray);
    // EDIT: According to comment from P0W I added this line - where is the difference?
    strncpy(charArray, "some text", sizeof(charArray)); // compiles OK
}
Compiled with gcc 4.9.2 on SLES 11 SP2 using the command line g++ gcc-warning-bug-2.cpp -Wall -Wextra -c -Werror, I get the warning below. Due to the -Werror flag I cannot compile the project:
gcc-warning-bug-2.cpp: In function ‘void test(char*)’:
gcc-warning-bug-2.cpp:5:40: error: argument to ‘sizeof’ in ‘char* strncpy(char*, const char*, size_t)’ call is the same expression as the destination; did you mean to provide an explicit length? [-Werror=sizeof-pointer-memaccess]
strncpy(charArray, "some text", sizeof(charArray));
^
cc1plus: all warnings being treated as errors
According to the current gcc 4.9.2 documentation (https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html):
-Wsizeof-pointer-memaccess
Warn for suspicious length parameters to certain string and memory built-in functions if the argument uses sizeof.
This warning warns e.g. about memset (ptr, 0, sizeof (ptr)); if ptr is not an array, but a pointer, and suggests a possible fix, or about memcpy (&foo, ptr, sizeof (&foo));. This warning is enabled by -Wall.
this should compile fine because charArray is an array!
Bug? Should I report it to the GNU gcc developer team?
You fell straight into the trap.
In C, C++, Objective-C, Objective-C++, a parameter with a declaration that looks like "array of T" actually has type T*.
Your parameter charArray has a declaration that looks like "array of 100 chars", but the declaration is in fact "pointer to char".
Therefore, the third parameter to your strncpy call has a value of (most likely) 4 or 8, and not the 100 that you seem to expect.
BTW, strncpy is highly dangerous the way you use it.
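A sketch of two common fixes (not from the original answer): pass the size explicitly, or, in C++, take the array by reference so its size stays part of the type:

#include <cstring>
#include <cstddef>

// Fix 1: pass the buffer size alongside the pointer.
void test(char *charArray, std::size_t size)
{
    std::strncpy(charArray, "some text", size - 1);
    charArray[size - 1] = '\0';   // strncpy does not always null-terminate
}

// Fix 2 (C++ only): take the array by reference; N is deduced at the call site.
template <std::size_t N>
void test_ref(char (&charArray)[N])
{
    std::strncpy(charArray, "some text", N - 1);
    charArray[N - 1] = '\0';
}

int main()
{
    char charArray[100];
    test(charArray, sizeof(charArray));
    test_ref(charArray);
}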
Is it possible to see what is going on behind the gcc and g++ compilation processes?
I have the following program:
#include <stdio.h>
#include <unistd.h>

size_t sym1 = 100;
size_t *addr = &sym1;

size_t *arr = (size_t*)((size_t)&arr + (size_t)&addr);

int main (int argc, char **argv)
{
    (void) argc;
    (void) argv;

    printf("libtest: addr of main(): %p\n", &main);
    printf("libtest: addr of arr: %p\n", &arr);

    while(1);

    return 0;
}
Why is it possible to produce the binary without error with g++, while there is an error when using gcc?
I'm looking for a method to trace what makes them behave differently.
# gcc test.c -o test_app
test.c:7:1: error: initializer element is not constant
# g++ test.c -o test_app
I think the reason may be that gcc uses cc1 as its compiler and g++ uses cc1plus.
Is there a way to get more precise output of what has actually been done?
I've tried the -v flag, but the output is quite similar. Are different flags passed to the linker?
What is the easiest way to compare the two compilation procedures and find the difference between them?
In this case, gcc produces nothing because your program is not valid C. As the compiler explains, the initializer element (expression used to initialize the global variable arr) is not constant.
C requires initialization expressions to be compile-time constants, so that the contents of global variables can be placed in the data segment of the executable. This cannot be done for arr because the addresses of the variables involved are not known until link time, and their sum cannot be trivially filled in by the dynamic linker, as it can be for addr. C++ allows this, so g++ generates initialization code that evaluates the non-constant expressions and stores the results in the global variables. This code is executed before main() is invoked.
Executables cc1 and cc1plus are internal details of the implementation of the compiler, and as such irrelevant to the observed behavior. The relevant fact is that gcc expects valid C code as its input, and g++ expects valid C++ code. The code you provided is valid C++, but not valid C, which is why g++ compiles it and gcc doesn't.
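For illustration (a sketch, not part of the original answer; it compiles as both C and C++), one way to make the program valid C is to keep only the address-constant initializers at file scope and compute the non-constant sum at run time:

#include <stdio.h>

size_t sym1 = 100;
size_t *addr = &sym1;   /* fine in C: the address of an object is an address constant */
size_t *arr;            /* zero-initialized here, filled in at run time below */

int main(void)
{
    /* The sum of two addresses is not a constant expression in C,
       so compute it at run time instead of in a file-scope initializer. */
    arr = (size_t *)((size_t)&arr + (size_t)&addr);

    printf("libtest: addr of arr: %p\n", (void *)arr);
    return 0;
}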
There is a slightly more interesting question lurking here. Consider the following test cases:
#include <stdint.h>
#if TEST==1
void *p=(void *)(unsigned short)&p;
#elif TEST==2
void *p=(void *)(uintptr_t)&p;
#elif TEST==3
void *p=(void *)(1*(uintptr_t)&p);
#elif TEST==4
void *p=(void *)(2*(uintptr_t)&p);
#endif
gcc (even with the very conservative flags -ansi -pedantic-errors) rejects test 1 but accepts test 2, and accepts test 3 but rejects test 4.
From this I conclude that some operations that are easily optimized away (like casting to a type of the same size, or multiplying by 1) get eliminated before the check for whether the initializer is a constant expression.
So gcc might be accepting a few things that it should reject according to the C standard. But when you make them slightly more complicated (like adding the result of a cast to the result of another cast - what useful value can possibly result from adding two addresses anyway?) it notices the problem and rejects the expression.
Edited to remove the first warning
The following code works as expected in g++ 4.4.0 under mingw32:
#include <cstdio>
int main()
{
    long long x = 0xdeadbeefc0defaceLL;
    printf("%llx\n", x);
}
But if I enable all warnings with -Wall, it says:
f.cpp: In function 'int main()':
f.cpp:5: warning: unknown conversion type character 'l' in format
f.cpp:5: warning: too many arguments for format
It's the same with %lld. Is this fixed in newer versions?
Edited again to add:
The warning doesn't go away if I specify -std=c++0x, even though (i) long long is a standard type, and (ii) %lld and %llx seem to be officially supported. For instance, from 21.5 Numeric conversions para 7:
Each function returns a string object holding the character representation of the value of its argument that would be generated by calling sprintf(buf, fmt, val) with a format specifier of "%d", "%u", "%ld", "%lu", "%lld", "%llu", "%f", "%f", or "%Lf", respectively, where buf designates an internal character buffer of sufficient size.
So this is a bug, surely?
long long x = 0xdeadbeefc0defaceLL; // note the LL at the end
And there is no ll length specifier for printf. The best you can get is:
printf("%lx\n", x); // l is for long int
I've tested your sample on my g++; it compiles without errors even without the -std=c++0x flag:
~$ g++ -Wall test.cpp
~$ g++ --version
g++ (Ubuntu 4.4.3-4ubuntu5) 4.4.3
So, yes, this is fixed in newer versions.
For the first warning, I can say that you must use 0xdeadbeefc0defaceLL instead of 0xdeadbeefc0deface. After that, the other warnings may go away as well.
I get the same warning compiling C using windows/mingw32.
warning: unknown conversion type character 'l' in format
So yes, probably a compiler/platform-specific bug.
It's a MinGW-specific issue, because it calls the native Windows runtime for certain things, including this. See this answer.
%I64d works for me. The answer linked above also gives a more portable, albeit less readable, solution.
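For reference, a sketch of both spellings (the portable variant is my guess at the solution the linked answer describes, via the <cinttypes> macros):

#include <cstdio>
#include <cstdint>
#include <cinttypes>

int main()
{
    long long x = 0xdeadbeefc0defaceLL;

    printf("%I64x\n", x);                               // Microsoft-runtime length modifier (MinGW/MSVC only)
    printf("%" PRIx64 "\n", static_cast<uint64_t>(x));  // portable across runtimes
}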