When I use the time() function (i.e., just to seed rand()) without including the header file time.h, the code still compiles as C. For example:
#include <stdio.h>
#include <stdlib.h>
int main()
{
    int i;
    srand(time(NULL));
    for (i = 0; i < 10; i++) {
        printf("\t%d", rand() % 10);
    }
    printf("\n");
    return 0;
}
When I try to compile the code above, g++ cannot compile it since time.h isn't included. But gcc can.
$gcc ra.c
$./a.out
4 5 2 4 8 7 3 8 9 3
$g++ ra.c
ra.c: In function ‘int main()’:
ra.c:8:20: error: ‘time’ was not declared in this scope
srand(time(NULL));
^
Is it related with version of gcc or just a difference between C/C++ ?
You should include <time.h> for time(2) and turn on the warnings. In C, a function called without a visible prototype is assumed to return int (an "implicit declaration"; the rule was removed in C99). C++ has no such rule, which is why gcc appears to accept the code while g++ rejects it.
Compile with:
gcc -Wall -Wextra -std=c99 -pedantic-errors file.c
and you'll see gcc also complains about it.
C89/C90 (commonly, but incorrectly, referred to as "ANSI C") had an "implicit int" rule. If you called a function with no visible declaration, the compiler would effectively create an implicit declaration assuming that the function takes arguments of the types that appear in the call and returns int.
The time function takes an argument of type time_t* and returns a value of type time_t. So given a call
time(NULL)
with no visible declaration, the compiler will generate code as if it took an argument of the type of NULL (which is likely to be int) and returns an int result. Given
srand(time(NULL))
the value returned by time(NULL) will then be implicitly converted from int to unsigned int, the parameter type of srand.
If int, time_t, and time_t* all happen to be, say, 32 bits, the call is likely to work. If they're of different sizes, the behavior is undefined: the argument or the return value may be passed in the wrong register or truncated, and the program can misbehave in arbitrary ways.
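The fix is simply to include the header that declares time, so that both compilers see the correct prototype. A corrected version of the program (the cast to unsigned is optional, but documents the conversion that srand performs):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>   /* declares time() with the correct prototype */

int main(void)
{
    int i;
    srand((unsigned)time(NULL));   /* seed the generator once */
    for (i = 0; i < 10; i++) {
        printf("\t%d", rand() % 10);
    }
    printf("\n");
    return 0;
}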
I am struggling with a port of an open-source tool to Solaris. Most things work, cmake/pkg-config/etc. are there, dependencies are found, gmake works, compiler and linker calls all look fine and then, boom:
Undefined first referenced
symbol in file
std::qsort(void*, unsigned int, unsigned int, int (*)(const void*, const void*)) ...
This part I don't understand. At first glance, std::qsort does not make sense; it is supposed to be part of the C library, not the STL. So I looked into stdlib.h and found a list of things like using std::qsort; and dozens of other standard functions like free, malloc, atoi, etc., redirected in the C++ case.
What is Oracle doing there and why? Which library am I supposed to link with if they redirect it like this? Or why does the CC command not pull that in automatically, like GCC does?
I tried adding -lstdc++ but no luck.
Alternatively, the plain libc versions seem to be defined in <iso/stdlib_c99.h> (or <iso/stdlib_iso.h>). Is it legal to include one of those headers directly or will this wreak other random havoc at link time?
Edit:
Since there are multiple suspicions of build-system weirdness, here is basically the linking call from the gmake execution:
/opt/developerstudio12.6/bin/CC -std=c++11 -xO3 -DNDEBUG <i.e. bunch of object files> -o ../systest -L/opt/csw/lib/64 -lintl
I cannot see anything special there and I expect CC to figure out what to link to get the obligatory functionality.
The rule is that #include <xxx.h> puts names into the global namespace and is allowed to also put them in std. Conversely, #include <cxxx> puts names into std and is allowed to also put them into the global namespace.

In practice, this means there are two approaches to implementing the functions from the standard C library in C++: declare the standard C library names in the <xxx.h> headers and hoist those declarations into std in the <cxxx> headers, or declare the names in std in the <cxxx> headers and hoist those declarations into the global namespace in the <xxx.h> headers. With the former approach, the name of the function will be qsort; with the latter, it will be std::qsort.

Either way, that error message usually indicates a setup problem with the compiler: the linker isn't finding the standard library.
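As a minimal illustration of that rule (a sketch, not tied to Oracle's headers), match the header you include to the qualification you use, and the code is portable either way:

#include <cstdlib>    // guarantees std::qsort; may also declare ::qsort
#include <stdlib.h>   // guarantees ::qsort; may also declare std::qsort

extern "C" int compare(const void *a, const void *b)
{
    return *(const int *)a - *(const int *)b;   // ascending order of ints
}

int main()
{
    int v[] = { 3, 1, 2 };
    std::qsort(v, 3, sizeof v[0], compare);   // qualified name, from <cstdlib>
    qsort(v, 3, sizeof v[0], compare);        // unqualified name, from <stdlib.h>
}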
This compile command
/opt/developerstudio12.6/bin/CC -std=c++11 -xO3 -DNDEBUG ...
will produce a 32-bit executable. Per the Oracle CC man page:
On Oracle Solaris, -m32 is the default. On Linux systems supporting 64-bit programs, -m64 -xarch=sse2 is the default.
But this library option
-L/opt/csw/lib/64
is searching a directory full of 64-bit libraries.
Either add -m64 to the compile command or use the 32-bit library path.
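For example (a sketch; everything except the added -m64 is the original command):

/opt/developerstudio12.6/bin/CC -m64 -std=c++11 -xO3 -DNDEBUG <object files> -o ../systest -L/opt/csw/lib/64 -lintl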
Update
The question would almost certainly be answerable had it included the full error message, which probably looks something like this:
CC -g qsort.cc -o qsort
"qsort.cc", line 15: Error: Could not find a match for std::qsort(int[4], unsigned, unsigned, int(void*,void*)) needed in main(int, char**).
"/usr/include/iso/stdlib_iso.h", line 184: Note: Candidate 'std::qsort(void*, unsigned, unsigned, extern "C" int(*)(const void*,const void*))' is not viable: argument '4' can't be converted from 'int(void*,void*)' to 'extern "C" int(*)(const void*,const void*)'.
"/usr/include/iso/stdlib_iso.h", line 187: Note: Candidate 'std::qsort(void*, unsigned, unsigned, int(*)(const void*,const void*))' is not viable: argument '4' can't be converted from 'int(void*,void*)' to 'int(*)(const void*,const void*)'.
This code works just fine when compiled with Oracle Developer Studio 12.6 on Solaris 11.4:
#include <stdlib.h>

int compare( const void *p1, const void *p2 )
{
    int i1 = *( ( int * ) p1 );
    int i2 = *( ( int * ) p2 );
    return( i1 - i2 );
}

int main( int argc, char **argv )
{
    int array[ 4 ] = { 5, 8, 12, 4 };

    qsort( array, sizeof( array ) / sizeof( array[ 0 ] ),
        sizeof( array[ 0 ] ), &compare );
}
I have the following code
#include <iostream>
using namespace std;

int dmult(int a, int b)
{
    return 2 * a * b;
}

int main(void)
{
    double a = 3.3;
    double b = 2;
    int c = dmult(a, b);
    cout << c << endl;
    return 0;
}
It compiles with MinGW without problems. The result is wrong (as I expected). Is it a problem with the compiler that there is no warning when a function expecting ints is fed with doubles, even though the input types are wrong? Does it mean that C++ ignores the input types of a function? Shouldn't it realize that the function arguments have the wrong type?
doubles are implicitly convertible to int (and truncated), and the compiler is not required by the standard to emit a warning (it tries its best to perform the conversion whenever possible). Compile with -Wconversion
g++ -Wconversion program.cpp
and you'll get your warning:
warning: conversion to 'int' from 'double' may alter its value [-Wfloat-conversion]
The typical warning flags -Wall -Wextra don't catch it, since it is often the programmer's intention to truncate doubles to ints.
C++ automatically converts floats and doubles to integers by truncating them, so 3.3 becomes 3 when you call dmult(3.3, 2).
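If you want such calls rejected at compile time rather than silently truncated, one standard C++ technique (a sketch, not part of the original answers) is to delete an overload taking doubles:

#include <iostream>

int dmult(int a, int b)
{
    return 2 * a * b;
}

int dmult(double, double) = delete;  // calls with double arguments no longer compile

int main()
{
    std::cout << dmult(3, 2) << std::endl;  // OK: prints 12
    // dmult(3.3, 2.0);  // error: use of deleted function
    // dmult(3.3, 2);    // error: ambiguous overload
    return 0;
}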
 1  #include <stdio.h>
 2  #include <stdlib.h>
 3
 4  int main(int argc, char* argv[])
 5  {
 6      int bret = 1;
 7      bret - 2;
 8
 9      printf("bret=%d", bret);
10      return 0;
11  }
In line 7, there is no left-hand operand to receive the value, yet the compiler generated no warning, both GCC and g++. Is there any intended purpose behind this?
[ADDED/EDIT]
As per the comments, I get the warning only after using the following flags: -Wall -Wextra
[debd#test]$gcc -Wall -Wextra test2.c
test2.c: In function 'main':
test2.c:7: warning: statement with no effect
test2.c:4: warning: unused parameter 'argc'
test2.c:4: warning: unused parameter 'argv'
[debd#test]$
As far as the language is concerned, there's no error - a statement is not required to have a side effect.
However, since a statement that does nothing is almost certainly a mistake, most compilers will warn about it. Yours will, but only if you enable that warning via the command line arguments.
You can enable just that warning with -Wunused-value, but I suggest you enable a decent set of warnings (including this one) with -Wall -Wextra.
As you found, this will also give a warning that the function parameters are unused. Since this is main, you can easily fix it by changing the signature to not have any parameters:
int main()
More generally, to avoid the warning if you need to ignore parameters, C++ allows you not to name them:
int main(int, char**)
and both languages allow you to explicitly use but ignore the values:
(void)argc;
(void)argv;
bret - 2;
is an expression statement, and it has no side effect.
C does not require any warning for this statement. The compiler is free to emit (or not) an informative warning saying the statement has no effect, and it may optimize the statement out entirely.
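For comparison, here is a guess at what the statement was probably meant to be; a compound assignment has a side effect, so no warning is produced:

#include <stdio.h>

int main(void)
{
    int bret = 1;

    bret -= 2;   /* compound assignment: updates bret to -1 */
    printf("bret=%d\n", bret);
    return 0;
}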
Is it possible to see what is going on behind the gcc and g++ compilation processes?
I have the following program:
#include <stdio.h>
#include <unistd.h>
size_t sym1 = 100;
size_t *addr = &sym1;
size_t *arr = (size_t*)((size_t)&arr + (size_t)&addr);
int main (int argc, char **argv)
{
    (void) argc;
    (void) argv;
    printf("libtest: addr of main(): %p\n", &main);
    printf("libtest: addr of arr: %p\n", &arr);
    while(1);
    return 0;
}
Why is it possible to produce the binary without error with g++ while there is an error using gcc?
I'm looking for a method to trace what makes them behave differently.
# gcc test.c -o test_app
test.c:7:1: error: initializer element is not constant
# g++ test.c -o test_app
I think the reason may be that gcc uses cc1 as its compiler while g++ uses cc1plus.
Is there a way to make more precise output of what actually has been done?
I've tried to use -v flag but the output is quite similar. Are there different flags passed to linker?
What is the easiest way to compare two compilation procedures and find the difference in them?
In this case, gcc produces nothing because your program is not valid C. As the compiler explains, the initializer element (expression used to initialize the global variable arr) is not constant.
C requires initialization expressions for objects with static storage duration to be compile-time constants, so that their initial contents can be placed in the data segment of the executable. This cannot be done for arr because the addresses of the variables involved are not known until link time and their sum cannot be trivially filled in by the dynamic linker, as it can for addr. C++ allows this, so g++ generates initialization code that evaluates the non-constant expressions and stores the results in the global variables. This code is executed before main() is invoked.
Executables cc1 and cc1plus are internal details of the implementation of the compiler, and as such irrelevant to the observed behavior. The relevant fact is that gcc expects valid C code as its input, and g++ expects valid C++ code. The code you provided is valid C++, but not valid C, which is why g++ compiles it and gcc doesn't.
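If the code also has to build as C, a conventional workaround (a sketch, assuming run-time initialization is acceptable) is to keep the non-constant part out of the initializer and assign it in main instead:

#include <stdio.h>

size_t sym1 = 100;
size_t *addr = &sym1;   /* a plain address is a link-time constant: fine in C */
size_t *arr;            /* assigned at run time instead of being initialized */

int main(void)
{
    arr = (size_t *)((size_t)&arr + (size_t)&addr);  /* evaluated at run time */
    printf("addr of arr: %p\n", (void *)&arr);
    return 0;
}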
There is a slightly more interesting question lurking here. Consider the following test cases:
#include <stdint.h>
#if TEST==1
void *p=(void *)(unsigned short)&p;
#elif TEST==2
void *p=(void *)(uintptr_t)&p;
#elif TEST==3
void *p=(void *)(1*(uintptr_t)&p);
#elif TEST==4
void *p=(void *)(2*(uintptr_t)&p);
#endif
gcc (even with the very conservative flags -ansi -pedantic-errors) rejects test 1 but accepts test 2, and accepts test 3 but rejects test 4.
From this I conclude that some operations that are easily optimized away (like casting to an object of the same size, or multiplying by 1) get eliminated before the check for whether the initializer is a constant expression.
So gcc might be accepting a few things that it should reject according to the C standard. But when you make them slightly more complicated (like adding the result of a cast to the result of another cast - what useful value can possibly result from adding two addresses anyway?) it notices the problem and rejects the expression.
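For reference, each case can be reproduced from the command line like this (assuming the snippet is saved as t.c):

gcc -ansi -pedantic-errors -DTEST=1 -c t.c   # rejected: initializer element is not constant
gcc -ansi -pedantic-errors -DTEST=2 -c t.c   # accepted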
I understand that VLAs are not part of C++11, and I have seen this slip by GCC; it is part of the reason I switched to Clang. But now I am seeing it in Clang too. I am using clang 3.2 (one behind the latest) and I am compiling with
-pedantic and -std=c++11
I expect my test to NOT compile yet it compiles and runs.
int myArray[ func_returning_random_int_definitely_not_constexpr( ) ];
Is this a compiler bug, or am I missing something?
In response to the comment, here is random_int_function():
#include <random>
int random_int_function(int i)
{
    std::default_random_engine generator;
    std::uniform_int_distribution<int> distribution(1, 100);
    int random_int = distribution(generator);
    return i + random_int;
}
Yes, variable length arrays are supported in clang 3.2/3.3 contrary to
the C++11 Standard (§ 8.3.4/1).
So as you say, a program such as:
#include <random>
int random_int_function(int i)
{
    std::default_random_engine generator;
    std::uniform_int_distribution<int> distribution(1, 100);
    int random_int = distribution(generator);
    return i + random_int;
}

int main() {
    int myArray[ random_int_function( 0 ) ];
    (void)myArray;
    return 0;
}
compiles and runs. However, with the options -pedantic and -std=c++11 that
you say you passed, clang 3.2/3.3 diagnoses:
warning: variable length arrays are a C99 feature [-Wvla]
The behaviour matches that of gcc (4.7.2/4.8.1), which warns more emphatically:
warning: ISO C++ forbids variable length array ‘myArray’ [-Wvla]
To make the diagnostic be an error, for either compiler, pass -Werror=vla.
Simply plugging the snippets you posted into IDEone, without putting the array declaration into a function, I get
prog.cpp:12:39: error: array bound is not an integer constant before ‘]’ token
Adding a main() function around it results in success, as you observed.
Since standard C++ has no array declarations that are legal inside main but not at namespace scope, and that is precisely a property of VLAs, it is reasonable to conclude that a VLA is what you are seeing.
Update: courtesy of Coliru.org, the message from Clang is
main.cpp:12:9: error: variable length array declaration not allowed at file scope
So that's fairly definite.
Use these options:
-Wvla to warn about VLA uses
-Werror=vla to treat VLA uses as errors.
Both work in the clang and gcc compilers.
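For example, either of these turns a VLA declaration into a hard error:

g++ -std=c++11 -Werror=vla prog.cpp
clang++ -std=c++11 -Werror=vla prog.cpp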