Why does converting double to int in C++ give a warning? - c++

I'm looking for the exact reason why this conversion gives a warning in C++ but not in C. Is it connected to the fact that C++ is strongly typed and C is weakly typed?
Is it because the type in C can be determined while running, so the compiler does not point out a warning?
Thank you.

The presence or absence of a warning on a conversion from double to int has nothing to do with any difference between C and C++.
A warning (and you didn't tell us what the warning looks like; please update the question with that information) is probably valid. If the truncated double value is outside the representable range of int, the behavior is undefined. If it's within the range, but not mathematically equal to an integer, then the conversion will lose information.
Some compilers will warn about things like this, others won't -- and a given compiler may or may not issue a warning depending on what options you specify.
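As an illustration, here is a minimal sketch of both cases mentioned above (the -Wconversion/-Wfloat-conversion flag names assume GCC or Clang; other compilers use different options):
#include <cstdio>
int main() {
    double d = 3.9;
    int i = d;                // in range but not an integer: truncated to 3, information is lost
    std::printf("%d\n", i);   // prints 3
    // double huge = 1.0e30;
    // int j = huge;          // truncated value does not fit in int: undefined behavior
}
Compiling with -Wconversion (or the narrower -Wfloat-conversion) flags the first conversion at compile time; the second can only be caught at run time, if at all.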

Related

Error: Old-style type declaration REAL*16 not supported

I was given some legacy code to compile. Unfortunately I only have access to an f95 compiler and have zero knowledge of Fortran. Some modules compiled, but with others I was getting this error:
Error: Old-style type declaration REAL*16 not supported at (1)
My plan is to at least try to fix this error and see what else happens. So here are my 2 questions.
How likely will it be that my code written for Fortran 75 is compatible in the Fortran 95 compiler? (In /usr/bin my compiler is f95 - so I assume it is Fortran 95)
How do I fix this error that I am getting? I tried googling it but cannot seem to find a clear, crisp answer.
The error you are seeing is due to an old declaration style that was frequent before Fortran 90, but never became standard. Thus, the compiler does not accept the (formally incorrect) code.
In the olden days before Fortran 90, you had only two types of real numbers: REAL and DOUBLE PRECISION. These types were platform dependent, but most compilers nowadays map them to the IEEE754 formats binary32 and binary64.
However, some machines offered different formats, often with extra precision. In order to make them accessible to Fortran code, the type REAL*n was invented, with n an integer literal from a set of compiler-dependent values. This syntax was never standard, so you cannot be sure of what it will mean to a given compiler without reading its documentation.
In the real world, most compilers that have not been asked to be strictly standards-compliant (with some option like -std=f95) will recognize at least REAL*4 and REAL*8, mapping them to the binary32/64 formats mentioned before, but everything else is completely platform dependent. Your compiler may have a REAL*10 type for the 80-bit arithmetic used by the x86 387 FPU, or a REAL*16 type for some 128-bit floating point math. However, it is important to stress that since the syntax is not standard, the meaning of that type could change from compiler to compiler.
Finally, in Fortran 90 a way to refer to different kinds of real and integer types was made standard. The new syntax is REAL(n) or REAL(kind=n) and is supported by all standard-compliant compilers. The values of n are still compiler-dependent, but the standard provides three ways to obtain specific, repeatable results:
The SELECTED_REAL_KIND function, which allows you to query the system for the value of n to use if you want a real type with certain precision and range requirements. Usually, what you do is ask for it once and store the result in an INTEGER, PARAMETER variable that you use when declaring the real variables in question. For example, you would declare a type with at least 15 decimal digits of precision and an exponent range of at least 100 like this:
INTEGER, PARAMETER :: rk = SELECTED_REAL_KIND(15, 100)
REAL(rk) :: v
In Fortran 2003 and onwards, the ISO_C_BINDING module contains a series of constants that are meant to give you types guaranteed to be equivalent to the C types of the same compiler family (e.g. gcc for gfortran, icc for ifort, etc.). They are called C_FLOAT, C_DOUBLE and C_LONG_DOUBLE. Thus, you could declare a variable equivalent to a C double as REAL(C_DOUBLE) :: d.
In Fortran 2008 and onwards, the ISO_FORTRAN_ENV module contains a different series of constants REAL32, REAL64 and REAL128 that will give you a floating point type of the appropriate width - if some platform does not support one of those types, the constant will be a negative number. Thus, you can declare a 128-bit float as REAL(real128) :: q.
Javier has given an excellent answer to your immediate problem. However I'd just briefly like to address "How likely will it be that my code written for Fortran 77 is compatible in the Fortran 95 compiler?", with the obvious typo corrected.
Fortran, if the programmer adheres to the standard, is amazingly backward compatible. With very, very few exceptions, standard-conforming Fortran 77 is also Fortran 2008 conforming. The problem is that, in your case, the original programmer appears not to have adhered to the international standard: real*8 and similar forms are not, and have never been, part of any such standard, and the problems you are seeing are precisely why such forms should never be, and should never have been, used. That said, if the original programmer only made this one mistake, it may well be that the rest of the code will be OK; however, without the details it is impossible to tell.
TL;DR: International standards are important, stick to them!
Since we are guessing instead of requesting proper code and full details, I will venture to say that the other two answers are not correct.
The error message
Error: Old-style type declaration REAL*16 not supported at (1)
DOES NOT mean that the REAL*n syntax is not supported.
The error message is misleading. It actually means that 16-byte reals are not supported. Had the OP requested the same real by the kind notation (in any of the many ways which return gfortran's kind 16), the error message would be:
Error: Kind 16 not supported for type REAL at (1)
That can happen in some gfortran versions. Especially in MS Windows.
This explanation can be found with just a very quick google search for the error message: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=56850#c1
It does not complain that the old syntax is the problem, it just mentions it, because mentioning kinds might be even more confusing (especially for COMPLEX*32, which is kind 16).
The main message is: we really should be closing these kinds of questions and waiting for proper details instead of guessing and upvoting a question where the OP can't even copy the full message the compiler has spat out.
If you are using GNU gfortran, try real(kind=10), which will give you 10-byte (80-bit) extended precision. I'm converting some older F77 code to run floating point math tests - it has a quad-precision type (the real*16 you mention) defined but not used, and as the other answers correctly point out, extended precision formats tend to be machine/compiler specific. The gfortran 5.2 I am using does not support real(kind=16), surprisingly. But gfortran (available for 64-bit MacOS and 64-bit Linux machines) does have real(kind=10), which will give you precision beyond the typical real*8, or "double precision" as it was called in some Fortran compilers.
Be careful if your old Fortran code is calling C programs and/or making assumptions about how precision is handled and floating point numbers are represented. You may have to get deep into exactly what is happening, and check the code carefully, to ensure things are working as expected, especially if Fortran and C routines are calling each other. Here is the URL for the GNU gfortran info on quad precision: https://people.sc.fsu.edu/~jburkardt/f77_src/gfortran_quadmath/gfortran_quadmath.html
Hope this helps.

C++: need warning for: unsigned int i = -1;

We had a bug in our code coming from the line
unsigned int i = -1;
When the code was originally written, it was i = 0 and thus correct.
Using -Wall -Wextra, I was a bit surprised that gcc didn't warn me here because -1 does not fit in an unsigned int.
Only when turning on -Wsign-conversion does this line become a warning - but with it come many, many false positives. I am using a third party library which does array-like operations with signed ints (although they cannot be < 0), so whenever I mix that with e.g. vector, I get warnings - and I don't see the point in adding millions of casts (and even the 3rd party headers produce a lot of warnings). So it is too many warnings for me. All these warnings say that the conversion "may change the sign". That's fine, because I know it doesn't in almost all of the cases.
But with the assignment mentioned above, I get the same "may change" warning. Shouldn't this be "Will definitely change sign!" instead of "may change"? Is there any way to emit warnings only for these "will change" cases, not for the maybe cases?
Initialize it with curly braces:
unsigned int i{-1};
GCC outputs:
main.cpp:3:22: error: narrowing conversion of '-1' from 'int' to 'unsigned int' inside { } [-Wnarrowing]
unsigned int i{-1};
Note that it does not always cause an error, it might be a warning or disabled altogether. You should try it with your actual toolchain.
But with the assignment mentioned above, I get the same "may change" warning. Shouldn't this be "Will definitely change sign!" instead of "may change"?
That's odd. I tested a few versions of gcc in the range 4.6 to 5.2, and they did give a different warning for unsigned int i = -1;
warning: negative integer implicitly converted to unsigned type [-Wsign-conversion]
That said, they are indeed controlled by the same option as the may change sign warnings, so...
Is there any way to emit warnings only for these "will change" cases, not for the maybe cases?
As far as I know, that's not possible. I'm sure it would be possible to implement in the compiler, so if you want a separate option to enable the warning for assigning a negative number - known at compile time - to an unsigned variable, then you can submit a feature request. However, because assigning -1 to an unsigned variable is such a common and usually perfectly valid thing to do, I doubt such feature would be considered very important.
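To make the two behaviours discussed above concrete, here is a minimal sketch (compiler behaviour assumes a reasonably recent GCC or Clang; exact messages vary by version):
#include <cstdio>
#include <vector>
int main() {
    unsigned int a = -1;    // accepted silently by default; -Wsign-conversion warns about the implicit conversion
    // unsigned int b{-1};  // narrowing conversion inside { }: diagnosed via -Wnarrowing, no extra flags needed
    // Typical source of the "may change sign" noise mentioned in the question:
    // signed loop indices compared against unsigned container sizes.
    std::vector<int> v(10);
    for (int i = 0; i < static_cast<int>(v.size()); ++i)
        v[i] = i;
    std::printf("%u\n", a);  // prints UINT_MAX: the -1 wrapped around
    return 0;
}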

Why are arguments which do not match the conversion specifier in printf undefined behavior?

In both C (n1570 7.21.6.1/10) and C++ (by inclusion of the C standard library) it is undefined behavior to provide an argument to printf whose type does not match its conversion specification. A simple example:
printf("%d", 1.9)
The format string specifies an int, while the argument is a floating point type.
This question is inspired by the question of a user who encountered legacy code with an abundance of conversion mismatches which apparently did no harm, cf. undefined behaviour in theory and in practice.
Declaring a mere format mismatch UB seems drastic at first. It is clear that the output can be wrong, depending on things like the exact mismatch, argument types, endianness, possibly stack layout and other issues. This extends, as one commentator there pointed out, also to subsequent (or even previous?) arguments. But that is far from general UB. Personally, I never encountered anything else but the expected wrong output.
To venture a guess, I would exclude alignment issues. What I can imagine is that providing a format string which makes printf expect large data together with small actual arguments possibly lets printf read beyond the stack, but I lack deeper insight into the varargs mechanism and specific printf implementation details to verify that.
I had a quick look at the printf sources, but they are pretty opaque to the casual reader.
Therefore my question: What are the specific dangers of mis-matching conversion specifiers and arguments in printf which make it UB?
printf only works as described by the standard if you use it correctly. If you use it incorrectly, the behaviour is undefined. Why should the standard define what happens when you use it wrong?
Concretely, on some architectures floating point arguments are passed in different registers to integer arguments, so inside printf when it tries to find an int matching the format specifier it will find garbage in the corresponding register. Since those details are outside the scope of the standard there is no way to deal with that kind of misbehaviour except to say it's undefined.
For an example of how badly it could go wrong, using a format specifier of "%p" but passing a floating point type could mean that printf tries to read a pointer from a register or stack location which hasn't been set to a valid value and could contain a trap representation, which would cause the program to abort.
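Note that many compilers can catch this class of mistake at compile time when the format string is a literal; with GCC or Clang, for example, -Wall enables -Wformat, which diagnoses the mismatch. A minimal sketch:
#include <cstdio>
int main() {
    // std::printf("%d\n", 1.9);  // -Wformat warns that '%d' expects an int but the argument is a double; calling it is UB
    std::printf("%f\n", 1.9);     // matching conversion specifier: well-defined, prints 1.900000
    return 0;
}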
Some compilers may implement variable-format arguments in a way that allows the types of arguments to be validated; since having a program trap on incorrect usage may be better than possibly having it output seemingly-valid-but-wrong information, some platforms may choose to do that.
Because the behavior of traps is outside the realm of the C Standard, any action which might plausibly trap is classified as invoking Undefined Behavior.
Note that the possibility of implementations trapping based on incorrect formatting means that behavior is considered undefined even in cases where the expected type and the actual passed type have the same representation, except that signed and unsigned numbers of the same rank are interchangeable if the values they hold are within the range which is common to both [i.e. if a "long" holds 23, it may be output with "%lX" but not with "%X" even if "int" and "long" are the same size].
Note also that the C89 committee introduced a rule by fiat, which remains to this day, which states that even if "int" and "long" have the same format, the code:
long foo=23;
int *u = (int *)&foo;
(*u)++;
invokes Undefined Behavior since it causes information which was written as type "long" to be read as type "int" (behavior would also be Undefined if it were read as type "unsigned int"). Since a "%X" format specifier would cause data to be read as type "unsigned int", passing the data as type "long" would almost certainly cause the data to be stored somewhere as "long" but subsequently read as type "unsigned int"; such behavior would almost certainly violate the aforementioned rule.
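As a minimal sketch of the matching and mismatching cases described above (only the well-defined calls are left active; the mismatch is commented out):
#include <cstdio>
int main() {
    long big = 23L;
    unsigned int small = 23u;
    std::printf("%lX\n", big);    // long argument with %lX: matches (23 is representable as unsigned long too)
    std::printf("%X\n", small);   // unsigned int argument with %X: matches
    // std::printf("%X\n", big);  // mismatch even if int and long happen to have the same size: undefined behavior
    return 0;
}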
Just to take your example: suppose that your architecture's procedure call standard says that floating-point arguments are passed in floating-point registers. But printf thinks you are passing an integer, because of the %d format specifier. So it expects an argument on the call stack, which isn't there. Now anything can happen.
Any printf format/argument mismatch will cause erroneous output, so you cannot rely on anything once you do that. It is hard to tell which mismatches will have dire consequences beyond garbage output, because it depends completely on the specifics of the platform you are compiling for and the actual details of the printf implementation.
Passing invalid arguments to a printf instance that has a %s format can cause invalid pointers to be dereferenced. But invalid arguments for simpler types such as int or double can cause alignment errors with similar consequences.
I'll start by pointing out, in case you aren't already aware, that long is 64-bit on 64-bit versions of OS X, Linux, the BSD clones, and various Unix flavors. 64-bit Windows, however, kept long as 32-bit.
What does this have to do with printf() and UB with respect to its conversion specifications?
Internally, printf() will use the va_arg() macro. If you use %ld on 64-bit Linux and only pass an int, the other 32 bits will be retrieved from adjacent memory. If you use %d and pass a long on 64-bit Linux, the other 32 bits will still be on the argument stack. In other words, the conversion specification indicates the type (int, long, whatever) to va_arg(), and the size of the corresponding type determines the number of bytes by which va_arg() adjusts its argument pointer. Whereas it will just work on Windows since sizeof(int)==sizeof(long), porting it to another 64-bit platform can cause trouble, especially when you have an int *nptr; and try to use %ld with *nptr. If you don't have access to the adjacent memory, you'll likely get a segfault. So the possible concrete cases are (a sketch of the va_arg() mechanism follows the list below):
adjacent memory is read, and output is messed up from that point on
adjacent memory is attempted to be read, and there's a segfault due to a protection mechanism
the size of long and int are the same, so it just works
the value fetched is truncated, and output is messed up from that point on
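To make the va_arg() mechanism concrete, here is a hedged sketch of a toy printf-like reader (toy_print and its one-letter format codes are invented for illustration; real printf implementations are far more involved):
#include <cstdarg>
#include <cstdio>
// The caller-supplied format decides which type va_arg() reads next;
// nothing checks that it matches what was actually passed.
void toy_print(const char *fmt, ...) {
    va_list ap;
    va_start(ap, fmt);
    for (; *fmt; ++fmt) {
        if (*fmt == 'd') {
            int v = va_arg(ap, int);      // consumes an int-sized argument slot
            std::printf("%d ", v);
        } else if (*fmt == 'l') {
            long v = va_arg(ap, long);    // consumes a long-sized argument slot
            std::printf("%ld ", v);
        }
    }
    va_end(ap);
    std::printf("\n");
}
int main() {
    toy_print("dl", 1, 2L);    // types match the format: well-defined
    // toy_print("ld", 1, 2L); // mismatch: va_arg() reads with the wrong type, undefined behavior
    return 0;
}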
I'm not sure if alignment is an issue on some platforms, but if it is, it would depend upon the implementation of passing function parameters. Some "intelligent" compiler-specific printf() with a short argument list might bypass va_arg() altogether and represent the passed data as a string of bytes rather than working with a stack. If that happened, printf("%x %lx\n", LONG_MAX, INT_MIN); has three possibilities:
the size of long and int are the same, so it just works
ffffffff ffffffff80000000 is printed
the program crashes due to an alignment fault
As for why the C standard says that it causes undefined behavior, it doesn't specify exactly how va_arg() works, how function parameters are passed and represented in memory, or the explicit sizes of int, long, or other primitive data types because it doesn't unnecessarily constrain implementations. As a result, whatever happens is something the C standard cannot predict. Just looking at the examples above should be an indication of that fact, and I can't imagine what else other implementations exist that might behave differently altogether.

Comparison between signed and unsigned integer expressions

I get this warning when compiling a source file in C++ with gcc on FreeBSD.
Could someone explain and help me solve the issue I'm having?
Following is a link to the entire source code; it was placed on pastebin because it contains 7000 lines of code. Source: char.cpp
Here is the warning message:
In member function 'void CHARACTER::PointChange(BYTE, int, bool, bool)':
Expanding on KaliG's answer:
On line 3065 you are declaring:
DWORD exp = GetExp();
Now what is a DWORD? Well, it stands for "double word", and a "word" is 16 bits on this platform (Win32), so a DWORD is 32 bits. It is a typedef for an unsigned integer.
http://en.wikipedia.org/wiki/Word_%28computer_architecture%29
The other variable, amount is an argument defined as the type (signed) int.
So you are comparing a signed and an unsigned integer - which causes the warning.
You can solve this by simply casting -amount to an unsigned int (or "DWORD"), since the amount < 0 check guarantees that -amount is in fact positive.
So change the line to:
if (amount < 0 && exp < (DWORD) -amount)
This should work - but I have no idea how your method works other than that.
Sidenote: Hungarian notation is really ghastly stuff - so you should really dig into what the different type names they use actually are. http://en.wikipedia.org/wiki/Hungarian_notation
Sidenote 2: Don't use ALLCAPS class names... developers are used to thinking that such identifiers are constants, so you confuse other people who might read your code.
Sidenote 3: Read up on two's complement to understand what the ALU (http://en.wikipedia.org/wiki/Arithmetic_logic_unit) inside the CPU is actually doing: http://en.wikipedia.org/wiki/Two%27s_complement
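For completeness, a small self-contained sketch of the comparison and the suggested cast (exp and amount here are stand-ins for the question's variables; DWORD is just an unsigned 32-bit typedef):
#include <cstdio>
int main() {
    unsigned int exp = 100;   // stands in for DWORD exp = GetExp();
    int amount = -150;
    // if (exp < -amount)     // would warn: comparison between signed and unsigned integer expressions
    if (amount < 0 && exp < static_cast<unsigned int>(-amount))
        std::printf("not enough exp to subtract\n");
    return 0;
}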
From the warning thrown, I would say it is because 'exp' is either an unsigned or a signed variable while 'amount' is the opposite, hence the reason you get the comparison warning.
Please post the lines of code where you declare these variables. :)
(Verify if you declared either of these 2 variables as a signed/unsigned by mistake.)

How does casting of the return value of the main() function work?

I am using Visual studio 2008.
For the code below
double main()
{
}
I get an error:
error C3874: return type of 'main' should be 'int' instead of 'double'
But if I use the code below
char main()
{
}
No errors.
After running and exiting, the output window displays:
The program '[5856] test2.exe: Native' has exited with code -858993664 (0xcccccc00).
Question: Is the compiler doing an implicit cast from the default return value of zero (an integer) to char?
How did the code 0xcccccc00 get generated?
It looks like the last byte in that code is the actual returned value. Why does 0xcccccc appear?
The correct way to do it, per the C++ standard is:
int main()
{
...
}
Don't change the return type to anything else, or your code will not be C++ and you're just playing with compiler-specific functionality. Those 0xCC bytes in your example are just uninitialized memory (which the MSVC debug runtime fills with the 0xCC pattern) being returned.
The value returned from the main function becomes the exit status of the process, though the C standard only ascribes specific meaning to two values: EXIT_SUCCESS (traditionally zero) and EXIT_FAILURE. The meaning of other possible return values is implementation-defined; there is no standard for how non-zero codes are interpreted.
You may refer to a useful post:
What should main() return in C and C++?
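As a minimal sketch of the conforming form (nothing here is specific to MSVC):
#include <cstdlib>
// The conforming signature: the return value becomes the process exit status.
// Falling off the end of main without a return statement is equivalent to "return 0;".
int main() {
    return EXIT_SUCCESS;   // EXIT_SUCCESS (traditionally 0) and EXIT_FAILURE are the portable values
}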
Yet another MSVC extension/bug!
The answer to your first question is sort of yes. A char is essentially a very small integral type, so the compiler is being (extremely) lenient. A double isn't acceptable because it's not an integral type. The 0xCCCCCC is memory that never got initialized (except with the debug fill pattern). Since a char is only 8 bits (two hex digits), the conversion set only the last 8 bits to 0 and failed to touch the first 24 bits at all. What an odd and undesirable compiler trick.
About the main function, $3.6.1/2 - "It shall have a return type of type int, but otherwise its type is implementation-defined."
As I understand it, anything for which the standard document says 'shall' AND which is not adhered to by the code is a condition required to be diagnosed by the compiler, unless the Standard specifically says that such a diagnosis is not required.
So I guess VS has a bug if it allows such code.
The main function is supposed to return an int. Not doing that means you're out in undefined territory. Don't forget to wave at the standard on your way past. Your char return probably works because a char can be easily converted to an int. A double certainly cannot: not only is it longer (double the length), but it's floating point, which means you'll have 1s in wonky places.
Short answer: don't do that.
It is probably because a char will implicitly convert to an int, whereas a double won't, as there would be data loss.
(See here: http://msdn.microsoft.com/en-us/library/y5b434w4%28v=VS.71%29.aspx for more info)
However, you don't see the conversion problem because the compiler catches the worst sin (as stated in the other answers) of using a non-standard return type.