cout skipping the numbers after the decimal point - C++

I have assigned an integer to a double variable, but cout prints the double variable as an int, not as a double. If I introduce cout << showpoint; in the code, then I am able to see the decimal values in the output. Why is this so in the first case? Here is the code.
#include <iostream>
using namespace std;

template <class T>
T sum(T a, T b)
{
    T retval;
    retval = a + b;
    return retval;
}

int main()
{
    double a, x;
    float y, v = 4.66;
    int z(3);
    x = z;
    y = (double)z;
    a = sum(5, 6);
    //cout << showpoint;
    cout << "The value of a is : " << a << endl;
    cout << "The value of x is : " << x << endl;
    cout << "The value of y is : " << y << endl;
}
The output in the first case is:
The value of a is : 11
The value of x is : 3
The value of y is : 3
The output after enabling cout << showpoint in the second case is:
The value of a is : 11.0000
The value of x is : 3.00000
The value of y is : 3.00000

By default, floating point types are only displayed with a decimal point if they need one. If they have an integer value, they are displayed without one.
As you found, you can change this behaviour with showpoint (and change it back with noshowpoint), if you want.

The answer seems to be in the same link you posted: the C++ standard streams simply have the printing of trailing zeros disabled by default.

The fundamental reason is only because that's what the standard
says. For historical reasons, C++ output formatting is defined
in terms of C and printf formatting. By default, floating
point is output using the %g format, which is an adaptive
format, which changes according to the values involved, as well
as according to various formatting flags. For similar
historical reasons: the default format will suppress trailing
zeros after the point, and if there are no digits after the
point, it will suppress the point as well. If you specify
showpoint, the results are the equivalent of %#g, which not
only causes the point to be displayed, regardless, but also
causes trailing zeros to be displayed.
In practice, this default format is almost never what you want
for real program output; its only real use is for debugging, and
various "informational" output. If you want fixed point, with
a fixed number of decimals after the point, you have to specify
it:
std::cout.setf( std::ios_base::fixed, std::ios_base::floatfield );
Normally, this will be done in some sort of hand written
manipulator, so that the format for a specific semantic value is
specified once (in the manipulator), and then you use the
manipulator to specify the signification of the value to be
output, something like:
std::cout << weight << someWeight;
For quick, throw away code, it's often convenient to have some
sort of generic specifiers as well; I've got something like:
class FFmt
{
    int myWidth;
    int myPrecision;
public:
    FFmt( int width, int precision = 6 )
        : myWidth( width )
        , myPrecision( precision )
    {
    }
    friend std::ostream& operator<<( std::ostream& dest, FFmt const& fmt )
    {
        dest.setf( std::ios_base::fixed, std::ios_base::floatfield );
        dest.precision( fmt.myPrecision );
        dest.width( fmt.myWidth );
        return dest;
    }
};
(Actually, my version is more complex, because it derives from
a base class which saves the current formatting options in the
<< operator, and restores them in the destructor; since such
classes are used almost exclusively as temporaries, this means
at the end of the full expression.)
This supports writing things like:
std::cout << FFmt( 9, 6 ) << x;
Not something you'd want in production code (since you don't want to
specify the format at the point you're outputting the data), but
quite useful for quick, one time programs.

Related

How do I perform a narrowing conversion from double to float safely?

I am getting some -Wnarrowing conversion errors when doubles are narrowed to floats. How can I do this in a well-defined way? Preferably with an option in a template I can toggle to switch behavior between throwing exceptions, clamping to the nearest value, or simple truncation. I was looking at gsl::narrow, but it seems that it just performs a static_cast under the hood with a comparison follow-up: Understanding gsl::narrow implementation. I would like something more robust, as according to What are all the common undefined behaviours that a C++ programmer should know about?, static_cast<> is UB if the value is unrepresentable in the target type. I also really liked this implementation, but it also relies on a static_cast<>: Can a static_cast<float> from double, assigned to double be optimized away? I do not want to use boost for this. Are there any other options? It's best if this works in C++03, but C++0x (experimental C++11) is also acceptable... or 11 if really needed...
Because someone asked, here's a simple toy example:
#include <iostream>

float doubleToFloat(double num) {
    return static_cast<float>(num);
}

int main( int, char**) {
    double source = 1; // assume 1 could be any valid double value
    try {
        float dest = doubleToFloat(source);
        std::cout << "Source: (" << source << ") Dest: (" << dest << ")" << std::endl;
    }
    catch( std::exception& e )
    {
        std::cout << "Got exception error: " << e.what() << std::endl;
    }
}
My primary interest is in adding error handling and safety to doubleToFloat(...), with various custom exceptions if needed.
As long as your floating-point types can store infinities (which is extremely likely), there is no possible undefined behavior. You can test std::numeric_limits<float>::has_infinity if you really want to be sure.
Use static_cast to silence the warning, and if you want to check for an overflow, you can do something like this:
#include <limits>

template <typename T>
bool isInfinity(T f) {
    return f == std::numeric_limits<T>::infinity()
        || f == -std::numeric_limits<T>::infinity();
}

float doubleToFloat(double num) {
    float result = static_cast<float>(num);
    if (isInfinity(result) && !isInfinity(num)) {
        // overflow happened
    }
    return result;
}
Any double value that doesn't overflow will be converted either exactly or to one of the two nearest float values (probably the nearest). You can explicitly set the rounding direction with std::fesetround.
It depends on what you mean by "safely". There will most likely be a drop in precision in most cases. Do you want to detect when this happens? Assert, or just know about it and notify the user?
A possible solution would be to static_cast the double to a float, then back to a double, and compare the values before and after. Equality is unlikely, but you could assert that the loss of precision is within your tolerance.
#include <cmath>

float doubleToFloat(double a_in, bool& ar_withinSpec, double a_tolerance)
{
    auto reducedPrecision = static_cast<float>(a_in);
    auto roundTrip = static_cast<double>(reducedPrecision);
    ar_withinSpec = (std::abs(a_in - roundTrip) < a_tolerance);  // compare the loss, not the raw value
    return reducedPrecision;
}

Strange behavior when initializing template structure

I have a structure:
template < class L, class R > struct X {
    X()
    { }
    friend std::ostream& operator<<(std::ostream& str, X& __x)
    {
        return str << '(' << __x.__val1 << ", " << __x.__val2 << ')';
    }
private:
    L __val1;
    R __val2;
};
and create it without initializing anything:
X<std::size_t, std::string> x;
std::cout << x << std::endl;
It always gives output: (2, )
But when I do:
X<std::string, std::size_t> x;
std::cout << x << std::endl;
I have "right" behaviour with uninitialized variable: (, 94690864442656).
Why?
There is no "right" value of an uninitialized variable.
The value is said to be "indeterminate". Using an indeterminate value leads to undefined behavior. Your program could output anything or nothing.
Assuming (, 94690864442656) to be "right", because it looks like some "uninitialized value" while (2, ) looks like something was initialized, is wrong.
2 is just as wrong as 94690864442656. When the behavior of your code is undefined, then it is undefined.
If it helps, think of it like this: You are supposed to calculate the result of 2*3. Instead of actually carrying out the calculation you call the number that comes to your mind in that moment. Most of the time you will say the wrong result. Once in a while you will answer with a result that looks meaningful, because you correctly guessed 6, or you said 5 or 7 which is just off by one. However, getting the expected result sometimes, does not imply that your way of getting the result is correct.
Or consider this (but be careful with the analogy: uninitialized values are not random!): suppose that instead of calculating the result of 2*3 you use the wrong method of rolling a die. Now assume you roll a 6. Would you be surprised to get the "correct" result, even though your method is wrong?
If you really care why you get 2 in one case and 94690864442656 in the other, you need to study the assembly generated by the compiler, because C++ does not specify what is the outcome of compiling code with undefined behavior. It just says: It is undefined.
Note also that using identifiers that contain a double underscore is not allowed, as such names are reserved for the implementation (https://en.cppreference.com/w/cpp/language/identifiers).

How to display the enum value for something within a class?

I am currently building a poor version of the game "Battleship" and have to use an array of Enums to display the board. For my header I have created:
enum class PlayerPiece {
AIRCRAFT,
BATTLESHIP,
CRUISER,
SUBMARINE,
PATROL,
EMPTY,
};
class Board {
public:
PlayerPiece playerBoard[100];
PlayerPiece enemyBoard[100];
void reset();
void display() const;
};
When I get to my source code, I try displaying the board as numbers. Right now the board is EMPTY after I run my reset command. But when I try to display the array, I get an error saying "no operator << matches these operands ....". I understand that means I need to overload the << operator to display properly, but why doesn't it just display the '5' that was assigned? Isn't that the point of enums? What I have so far is:
void Board::reset() {
    for (int i = 0; i < 100; ++i) {
        playerBoard[i] = PlayerPiece::EMPTY;
        enemyBoard[i] = PlayerPiece::EMPTY;
    }
}

void Board::display() const {
    for (int i = 0; i < 100; ++i) {
        cout << playerBoard[i] << endl; // error: no operator<< for PlayerPiece
    }
}
I have written other code where I didn't have to overload the << operator to display the number attached to the enum. Am I missing something? Any help would be appreciated.
If you want to see the number associated with the scoped enum type, use a static_cast like this:
cout << static_cast<int>(playerBoard[i]) << endl;
Normal (unscoped) enums don't need this cast, as they implicitly convert to int, or whatever underlying type you specified. That's probably why this hasn't happened to you before.
If you remove the class and declare an unscoped enumeration instead:
enum PlayerPiece {
    AIRCRAFT,
    BATTLESHIP,
    CRUISER,
    SUBMARINE,
    PATROL,
    EMPTY,
};
You can print the number you wanted.
The difference between scoped and unscoped enums (from cplusplus.com):
Before C++11, all enums were just, basically, integers. And you could use them like that. It made it too easy to give bad values to functions expecting a restricted set of values. For example:
enum round_mode { round_half_up, round_half_down, round_bankers };

double round( double x, round_mode = round_half_up )
{
    ...
};

int main()
{
    double x = round( 2.5, 42 );
}
It compiles, but it isn't pretty.
With C++11, the compiler now knows all kinds of things about your enums, and doesn't let you blithely mix them with incorrect values.
Essentially, it promotes an enum to a first-class object -- it isn't just an integer.
The other issue is that the name for each enumeration bleeds into the containing scope. So the following would be a name conflict:
enum color_masks { red = 0xFF0000, green = 0x00FF00, blue = 0x0000FF };
int red = 0xFF0000;
You can't have both the identifier 'red' as an enum value and as an integer variable name in the same scope.
While the example here is contrived, it isn't too far off from things that happen all the time -- and that programmers have to take pains to avoid.
(Some identifier names are common. For example, 'max'. If you #include <windows.h>, there's a 'max' macro in there, which plays havoc if you also #include <algorithm> and try to use the 'max' function, or #include <limits> and try to find numeric_limits<T>::max(). I know that's a macro problem, but it's the first name conflict I could come up with...)

C++ type converting issue

Consider following code:
#include <iostream>
using namespace std;

int aaa(int a) {
    cout << a * 0.3 << endl;
    return a * 0.3;
}

int main()
{
    cout << aaa(35000);
}
It prints out:
10500
10499
Why does the output differ?
I have a workaround to use "return a * 3 / 10;" but I don't like it.
Edit:
Found that doing "return float(a * 0.3);" gives the expected value.
The result of 0.3*35000 is a floating point number, just slightly less than 10500. When printed it is rounded to 10500, but when coerced into an int the fractional digits are discarded, resulting in 10499.
An int * double expression yields a double; that's what the first line prints.
Then you convert to int, chopping off the fractional part (even if it's almost there, sitting just below 10500), and pass that back. The second line prints that value.
Float-to-int conversions should be made with care, and preferably not at all.
a * 0.3 has type double. The call inside aaa calls
ostream& operator<< (double val);
whereas the one outside calls
ostream& operator<< (int val);
You'd get a warning (if you turn warnings on - I suggest you do) that the implicit conversion from double to int isn't recommended.

printing negative value in C++ base 8 or 16

How is C++ supposed to print negative values in base 8 or 16? I know I can try what my current compiler/library does (it prints the bit pattern, without a minus in front), but I want to know what it should do, preferably with a reference.
It seems that none of the standard output facilities support signed formatting for non-decimals. So, try the following workaround:
struct signprint
{
    int n;
    signprint(int m) : n(m) { }
};

std::ostream & operator<<(std::ostream & o, const signprint & s)
{
    if (s.n < 0) return o << "-" << -s.n;
    return o << s.n;
}
std::cout << std::hex << signprint(-50) << std::endl;
You can insert an 0x in the appropriate location if you like.
From §22.2.2.2.2 (yes, really) of n1905, using ios_base::hex is equivalent to the stdio format specifier %x or %X.
From §7.21.6.1 of n1570, the %x specifier interprets its argument as an unsigned integer.
(Yes, I realize that those are wacky choices for standards documents. I'm sure you can find the text in your favorite copy if you look hard enough.)