C++ warning [-Wunused-value]

I have a problem with my C++ code: there are no errors, only a warning, but the code does not work as it should. I would like to multiply the screen size by a percentage and then print it.
This is in my .h file:
SmartWatch* multiply(SmartWatch* second, double percentage);
And this is in my .cpp file:
SmartWatch* SmartWatch::multiply(SmartWatch* second, double percentage){
    second->getScreen_size() * percentage;
    return second;
}
And this is in main:
SmartWatch *multiplied = &watch[0];
multiplied = multiplied->multiply(&watch[1], 0.23);
multiplied->print();
I get this warning:
smartwatch.cpp:69:31: warning: expression result unused [-Wunused-value]
second->getScreen_size() * percentage;
I am new at this, so I don't know what I am doing wrong.

You are computing the product of second->getScreen_size() and percentage.
The compiler is telling you that the result is not being used.
Computing a product is only useful if you do something with the result, like storing it in a variable. If you do not do anything with it, the compiler will just remove it to improve the speed of your program.
Because you wrote something that will never actually be carried out, the compiler is telling you that you may have made a mistake there. Since this is not a technical error, it is only reported as a warning.

You don't actually store the value of the multiplication in the multiply method anywhere. The compiler is warning you because the line of code second->getScreen_size() * percentage; doesn't store a result or change a value. The result of the multiplication will be discarded.
To fix the warning, you should store the result somewhere, for example back in the object that second points to. I'm not sure what your class design looks like, but you could do something like:
second->setScreen_size(second->getScreen_size() * percentage);
to remove the warning and then actually accomplish something with the method you've written.
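Putting that together, a minimal sketch of the corrected method could look like this (assuming your class has a setScreen_size setter to match getScreen_size; adjust to your actual interface):

SmartWatch* SmartWatch::multiply(SmartWatch* second, double percentage) {
    // Store the scaled value back into the object instead of discarding it.
    second->setScreen_size(second->getScreen_size() * percentage);
    return second;
}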

Related

memory steps while defining variable - are they true?

Let's say we are defining a variable:
float myFloat{3};
I assume these steps happen in memory when defining a variable, but I am not entirely sure.
Initial assumption: memory consists of addresses and corresponding values, and values are kept as binary codes.
1. Create the binary code (value) for the literal 3 at address1.
2. Convert this integer binary code of 3 to the float binary code of 3 at address2 (type conversion).
3. Copy this binary code (value) from address2 to the memory allocated for myFloat.
Are these steps accurate? I would like to hear from you. Thanks.
Conceptually that’s accurate, but with any optimization, the compiler will probably generate the 3.0f value at compile time, making it just a load of that constant to the right stack address. Furthermore, the optimizer may well optimize it out entirely. If the next line says myFloat *= 0.0f; return myFloat;, the compiler will turn the whole function into essentially return 0.0f; although it may spell it in a funny way. Check out Compiler Explorer to get a sense of it.
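As a hedged illustration you could paste into Compiler Explorer (the function name and the follow-up line are made up for the example):

// With optimization enabled (e.g. -O2), the whole body typically folds to
// returning the constant 0.0f; no run-time conversion of 3 takes place.
float example() {
    float myFloat{3};  // conceptually: int 3 converted to 3.0f and stored
    myFloat *= 0.0f;   // the optimizer can evaluate this at compile time
    return myFloat;    // effectively: return 0.0f;
}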

Initializing floats with integer immediate values & warnings

In my code, I initialize a lot of floats with 0, 1 and 2 values (or other small ints). While GCC produces no warnings for this, MSVC does. So I replaced all 0's by 0.f, 1's by 1.f, etc... Also initializing a float with 0.5 issues a warning, and I replaced it by 0.5f.
While I fully understand that doing float f=someInt or float f=someDouble should produce a warning as in some cases precision is lost, the compiler should be smart enough to know that 0, 1, 2 and 0.5 are exact float values. And my code is much less readable like that...
Is MSVC not following some standard? Should I let it complain or make my code less readable?
Thanks!
[...] the compiler should be smart enough to know that 0, 1, 2 and 0.5 are exact float values.
That may be the case, but do you really want the compiler to use that knowledge to suppress warnings? Consider the following code snippet:
double fun()
{
    float calculated = UNIVERSAL_BASE_VALUE;
    // Do some calculations.
    return calculated;
}
Suppose UNIVERSAL_BASE_VALUE is a constant defined in a header file somewhere. And maybe its type is double. But maybe its value is 0.5, which is an exact float value, so the compiler could use its knowledge to suppress a warning in this case.
Now fast-forward a few years. This fun function has not been touched in the interim, but businesses change, and someone wants to try changing the definition of UNIVERSAL_BASE_VALUE from 0.5 to 0.51. Suddenly there is a compiler warning for that function that has been stable for years. Why is this? There was no logical change, just a small data change. Yet that data change gave UNIVERSAL_BASE_VALUE a value that cannot be exactly represented in a float. The compiler no longer stays quiet about the conversion. After investigating, it is discovered that the type of calculated had been wrong for all those years, causing fun() to return imprecise results. Time to blame the compiler for being too smart? :)
Note that you get a similar situation if you replace UNIVERSAL_BASE_VALUE with a literal 0.5. It just makes the ending less dramatic while the overall point still holds: being smart could let a bug slip through.
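Sketching that scenario (the constant name comes from the story above; the values are illustrative, not taken from any real code base):

constexpr double UNIVERSAL_BASE_VALUE = 0.5;    // exactly representable as a float
// constexpr double UNIVERSAL_BASE_VALUE = 0.51; // later change: not exactly representable

double fun()
{
    // Silent today under the "smart" rule, but would warn after the change,
    // revealing that 'calculated' should have been a double all along.
    float calculated = UNIVERSAL_BASE_VALUE;
    // Do some calculations.
    return calculated;
}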
Compiler warnings are intended to alert you to potential bugs. This is not an exact science, as programmer intent can be hard to deduce, especially when coding styles vary. There is no comprehensive standard covering all warnings a compiler may choose to emit. When false positives arise, it is up to the programmers to make a judgement call for their specific case. I can think of four basic approaches to choose between.
Turn off a warning (because it doesn't really help your code).
Accept that warnings are generated when compiling (not a good choice).
Change the coding style to accommodate the warnings (time consuming).
Whine to the compiler's developers (err… ask nicely for changes; don't whine).
Do not choose option 2. This would mean accumulating warnings that humans learn to ignore. Once you accumulate enough "accepted" warnings, it becomes difficult to spot other warnings that pop up. They get lost in the crowd, hence fail to achieve their intended purpose.
Note that compilers tend to support suppressing certain warnings for just certain lines of certain files. This gives a compromise between options 1 and 2 that might be acceptable.
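For example, with MSVC a single initialization can be exempted with a suppress pragma; the warning number shown here is an assumption (use the number MSVC actually reports for your line):

#pragma warning(suppress : 4305)  // assumed: "truncation from 'double' to 'float'"
float base = 0.5;                 // the pragma applies only to this next line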
In your case, you would need to evaluate this warning for your code base. If it provides value by spotting bugs, you should go with what you call "less readable". (It's not really less readable, but it does take time to get used to it.) If the warning does not provide enough value to warrant a style change, turn the warning off.
I suggest you write the code to be warning free.
If that makes it hard to read, this points to a problem somewhere else, like an unhealthy mix of float and double, which easily leads to a loss of precision or arithmetically unstable results.
Once upon a time I had a program which crashed with a coredump when it finished. The compiler gave some warnings because I had an unhealthy mix of char* and char[].
When I fixed these warnings, the program was suddenly stable (no more memory corruption).
So, turn on all warnings and change the code to compile warning free.
The compiler just wants to help you!

Why does this code compile without warnings?

I have no idea why this code compiles:
int array[100];
array[-50] = 100; // Crash!!
...the compiler still compiles it, without any errors or warnings.
So why does it compile at all?
array[-50] = 100;
Actually means here:
*(array - 50) = 100;
Consider this code:
int array[100];
int *b = &(array[50]);
b[-20] = 5;
This code is valid and won't crash. The compiler has no way of knowing whether the code will crash or what the programmer wanted to do with the array, so it does not complain.
Finally, keep in mind that you should not rely on compiler warnings to find bugs in your code. Compilers will not find most of your bugs; they merely offer hints to ease the bug-fixing process (and sometimes they are mistaken and flag valid code as buggy). Also, the standard never actually requires the compiler to emit warnings, so these are only an act of good will by compiler implementers.
It compiles because the expression array[-50] is transformed to the equivalent
*(&array[0] + (-50))
which is another way of saying "take the memory address &array[0] and add to it -50 times sizeof(array[0]), then interpret the contents of the resulting memory address and those following it as an int", as per the usual pointer arithmetic rules. This is a perfectly valid expression where -50 might really be any integer (and of course it doesn't need to be a compile-time constant).
Now it's definitely true that since here -50 is a compile-time constant, and since accessing the minus 50th element of an array is almost always an error, the compiler could (and perhaps should) produce a warning for this.
However, we should also consider that this specific condition (statically indexing into an array with an apparently invalid index) is something you don't expect to see in real code, so detecting it is not a priority. Therefore the compiler team's resources will probably be put to better use doing something else.
Contrast this with other constructs like if (answer = 42) which you do expect to see in real code (if only because it's so easy to make that typo) and which are hard to debug (the eye can easily read = as ==, whereas that -50 immediately sticks out). In these cases a compiler warning is much more productive.
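For instance, GCC and Clang do warn about that pattern (via -Wparentheses, which -Wall enables); a hypothetical snippet:

int answer = 0;
if (answer = 42) {  // warning: suggest parentheses around assignment used as truth value
    // always taken: the assignment expression yields 42, which is non-zero
}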
The compiler is not required to catch all potential problems at compile time. The C standard allows for undefined behavior at run time (which is what happens when this program is executed). You may treat it as a legal excuse not to catch this kind of bugs.
There are compilers and static program analyzers that can catch trivial bugs like this, though.
Real compilers do warn (note: you need to switch the compiler to clang 3.2; gcc is not as user-friendly here):
Compilation finished with warnings:
source.cpp:3:4: warning: array index -50 is before the beginning of the array [-Warray-bounds]
array[-50] = 100;
^ ~~~
source.cpp:2:4: note: array 'array' declared here
int array[100];
^
1 warning generated.
If you have a lesser (*) compiler, you may have to setup the warning manually though.
(*) ie, less user-friendly
The number inside the brackets is just an index. It tells you how many steps in memory to take to find the number you're requesting. array[2] means start at the beginning of array, and jump forwards two times.
You just told it to jump backwards 50 times, which is a valid statement. However, I can't imagine there being a good reason for doing this...

Getting rid of warning messages related to int32_t conversion from float C++

I have gotten hold of some C++ game sources. I am pretty new to C++; I have compiled the sources successfully and everything appears to work fine, but there are some annoying warnings I just can't solve.
My C++ programming skills are very basic, and I have big problems with these type thingies, especially this int32_t, which appears to be used pretty much everywhere in my sources.
The documentation I read on int32_t is not exactly noob friendly; it is very formal and really hard to understand for those who do not know how to use it. (And, umm, I feel I might actually be looking in the wrong places.)
To the point:
Here is the function I am having problems with:
int32_t Weapons::getMaxMeleeWeaponDamage(int32_t attackSkill, int32_t attackValue, float attackFactor)
{
    return ((int32_t)std::ceil(((attackValue * 0.05) * attackSkill) + (attackValue)) / attackFactor);
}
Warnings given:
170 C:\compiling\GameSources\weapons.cpp [Warning] converting to `int32_t' from `float'
The thing is, I do want the calculation done using float, and I want it to return an int value.
(So the damage calculations are accurate, but the damage displayed on the game client is the integer representation of the damage.)
So as far as I know, this warning is merely telling me what I want to hear. But how can I get rid of it? (Besides telling my compiler to ignore warnings; I don't want that.)
You are converting the return value of ceil to an int, and then you divide that result by a float, so the value you actually return is a float. The compiler then has to convert that back to an int, and it is this latter, implicit conversion it complains about.
I would say you have misplaced the cast.
As a style issue, I would suggest that you use C++ casting instead of the old C-style casting.
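A sketch of what that might look like, using the signature from the question (cast the final result once, with a C++-style cast):

int32_t Weapons::getMaxMeleeWeaponDamage(int32_t attackSkill, int32_t attackValue, float attackFactor)
{
    // Do all the arithmetic in floating point, then convert once at the end.
    // std::ceil requires <cmath>.
    return static_cast<int32_t>(std::ceil((attackValue * 0.05) * attackSkill + attackValue) / attackFactor);
}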

Can cout alter variables somehow?

So I have a function that looks something like this:
float function(){
    float x = SomeValue;
    return x / SomeOtherValue;
}
At some point, this function overflows and returns a really large negative value. To try and track down exactly where this was happening, I added a cout statement so that the function looked like this:
float function(){
    float x = SomeValue;
    cout << x;
    return x / SomeOtherValue;
}
and it worked! Of course, I solved the problem altogether by using a double. But I'm curious as to why the function worked properly when I couted it. Is this typical, or could there be a bug somewhere else that I'm missing?
(If it's any help, the value stored in the float is just an integer value, and not a particularly big one. I just put it in a float to avoid casting.)
Welcome to the wonderful world of floating point. The answer you get will likely depend on the floating point model you compiled the code with.
This happens because of the difference between the IEEE spec and the hardware the code is running on. Your CPU likely has 80-bit floating point registers that get used to hold the 32-bit float value. This means there is far more precision while the value stays in a register than when it is forced out to a memory address (also known as 'homing' the register).
When you passed the value to cout, the compiler had to write the floating point value to memory, and this results in a loss of precision and interesting behaviour with respect to overflow cases.
See the MSDN documentation on VC++ floating point switches. You could try compiling with /fp:strict and seeing what happens.
Printing a value to cout should not change the value of the parameter in any way at all.
However, I have seen similar behaviour, where adding debugging statements causes a change in a value. In those cases, and probably this one as well, my guess was that the additional statements cause the compiler's optimizer to behave differently and so generate different code for your function.
Adding the cout statement means that the value of x is used directly. Without it the optimizer can remove the variable, changing the order of the calculation and therefore changing the answer.
As an aside, it's always a good idea to declare immutable variables using const:
float function(){
    const float x = SomeValue;
    cout << x;
    return x / SomeOtherValue;
}
Among other things this will prevent you from unintentionally passing your variables to functions that may modify them via non-const references.
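A small made-up example of what that protects against (the helper name is hypothetical):

void rescale(float& v);       // hypothetical helper that modifies its argument

void caller() {
    const float x = 3.0f;
    rescale(x);               // error: cannot bind a non-const reference to the const x
}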
cout takes a reference to the variable, which will often force the compiler to spill it to the stack.
Because it is a float, this likely causes its value to be truncated from the double or long double representation it would normally have.
Calling any function (non-inlined) that takes a pointer or reference to x should end up causing the same behavior, but if the compiler later gets smarter and learns to inline it, you'll be equally screwed :)
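As a sketch of that last point (observe is a hypothetical function, assumed to be defined in another translation unit so it cannot be inlined):

void observe(const float& v);     // hypothetical, non-inlined

float function() {
    float x = SomeValue;
    observe(x);                   // taking a reference forces x out of the register,
                                  // much like passing it to cout did
    return x / SomeOtherValue;
}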
I don't think the cout has any effect on the variable; the problem would have to be somewhere else.