Why does the compiler warn about implicit conversion in setprecision? - c++

When I compile the following code, the compiler gives me this warning:
"Implicit conversion loses integer precision: 'std::streamsize' (aka 'long') to 'int'".
I'm a little confused by this warning, since I'm just trying to save the current precision so I can restore it to the original value later.
#include <iomanip>
#include <iostream>

int main() {
    std::streamsize prec = std::cout.precision();
    std::cout << std::setprecision(prec);
}
What is the right way to save the precision value and set it back later in this case?

It looks like it's just an oversight in the standard specification.
ios_base::precision has two overloads, one that gets and one that sets the precision:
// returns the current precision
streamsize precision() const;
// sets the precision and returns the old value
streamsize precision(streamsize prec);
So this code will not give you warnings:
#include <iostream>

int main() {
    std::streamsize prec = std::cout.precision(); // gets
    std::cout.precision(prec);                    // sets
}
However, the setprecision() function simply takes a plain old int:
unspecified-type setprecision(int n);
and returns an unspecified functor, which when consumed by a stream str has the effect of:
str.precision(n);
In your case, streamsize is not an int (and does not have to be), hence the warning. The standard should probably be changed so that setprecision's parameter is not int, but streamsize.
You can either just call precision() yourself, as above, or assume int is sufficient and cast:
#include <iomanip>
#include <iostream>

int main() {
    std::streamsize prec = std::cout.precision();
    std::cout << std::setprecision(static_cast<int>(prec));
}
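If the save-and-restore pattern comes up often, a small RAII guard sidesteps the narrowing cast entirely. This is just a sketch (PrecisionGuard is a made-up helper, not a standard utility):
#include <iostream>

// Hypothetical helper: saves the stream's precision on construction
// and restores it on destruction, using only the streamsize overloads.
class PrecisionGuard {
    std::ostream& os_;
    std::streamsize saved_;
public:
    explicit PrecisionGuard(std::ostream& os)
        : os_(os), saved_(os.precision()) {}
    ~PrecisionGuard() { os_.precision(saved_); }
};

int main() {
    {
        PrecisionGuard guard(std::cout);
        std::cout.precision(12); // streamsize overload, no warning
        std::cout << 3.14159265358979 << '\n';
    } // precision restored here
    std::cout << 3.14159265358979 << '\n'; // back to the default (6)
}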
Edit: Apparently this was submitted to be fixed and reached no consensus (closed as not-a-defect).

Related

Convert "void*" to int without warning

I need to convert "void*" to int, but the compiler keeps giving me a warning.
I wonder if there is a way to change the code so that the compiler will not complain. This occurs a lot in the code base, especially when passing an argument when starting a new thread.
$ g++ -fpermissive te1.cc
te1.cc: In function ‘void dummy(void*)’:
te1.cc:4:15: warning: cast from ‘void*’ to ‘int’ loses precision [-fpermissive]
int x = (int)p;
^
Here is the simple code "te1.cc":
#include <stdio.h>

extern void someFunc(int);

void dummy(int type, void *p) {
    if (type == 0) {
        int x = (int)p;
        someFunc(x);
    } else if (type == 1) {
        printf("%s\n", (char*)p);
    }
}

int main(int argc, char *argv[]) {
    void *p = (void*)5;
    dummy(0, p);
    return 0;
}
UPDATE 1
I understand that I will lose precision; sometimes that's intended. What I need is a way to remove the warning in places where I know for sure it's safe. Sorry for not making that clear earlier.
UPDATE 2
Updated the code snippet to be a little less trivial, to illustrate the point. The parameter needs to pass different types of values. I need a way to cast without generating a warning.
I need to convert "void*" to int
No, you don't.
I really do...
No, you need to represent a pointer as some kind of integer type which is guaranteed not to lose information.
#include <cstdio>
#include <cstdint>
#include <iostream>
#include <cstring>
#include <utility>
#include <cinttypes>

void dummy(void *p) {
    std::intptr_t x = reinterpret_cast<std::intptr_t>(p);
    printf("x = %" PRIiPTR "\n", x);
    // ^^ see here: http://en.cppreference.com/w/cpp/types/integer
}

int main(int argc, char *argv[]) {
    void *p = (void*)5;
    dummy(p);
    return 0;
}
OK, what I really want to do is work with 32-bit values in a standards-compliant way.
This is what std::uint32_t is for:
#include <cstdint>
#include <iostream>

void dummy(std::uint32_t x) {
    std::cout << x << '\n';
}

int main(int argc, char *argv[]) {
    auto x = std::uint32_t(5);
    dummy(x);
    return 0;
}
std::uint32_t - guaranteed to be unsigned 32 bits
std::int32_t - guaranteed to be signed 32 bits
You are probably looking for something along the lines of
int x = static_cast<int>(reinterpret_cast<std::uintptr_t>(p));
This is not strictly guaranteed to work: perhaps surprisingly, the standard guarantees that a pointer converted to a large enough integer and back to a pointer results in the same value; but doesn't provide a similar guarantee for when an integer is converted to a pointer and back to the integer. All it says about the latter case is
[expr.reinterpret.cast]/4 A pointer can be explicitly converted to any integral type large enough to hold it. The mapping function is implementation-defined. [ Note: It is intended to be unsurprising to those who know the addressing structure of the underlying machine. —end note ]
Hopefully, you know the addressing structure of your machine, and won't be surprised.
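Tying this back to the thread-argument use case in the question, here is a minimal sketch of the full round trip (the callback and the constant are made up for illustration): the value travels through void* via std::intptr_t in both directions, so nothing is lost as long as it fits.
#include <cstdint>
#include <iostream>

// Illustrative only: an int is smuggled through a void* parameter,
// as with a thread-start argument. intptr_t handles both directions;
// the final static_cast to int is safe only when the value is known
// to fit in an int.
void callback(void *arg) {
    int x = static_cast<int>(reinterpret_cast<std::intptr_t>(arg));
    std::cout << "got " << x << '\n';
}

int main() {
    int value = 5;
    void *arg = reinterpret_cast<void*>(static_cast<std::intptr_t>(value));
    callback(arg); // a real program would hand this to pthread_create etc.
    return 0;
}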

C++ Function supposed to return Long, returning Integer like value instead

While working on a fairly large project, I happened to notice that one of my functions that is supposed to return a Long value is returning an Integer-like value instead. I reproduced the error in a very small environment, thinking that would make the problem clear to me, but I'm still not seeing the issue. The input is 1.123, and the return value is 1. If I input any Long, for example 123.456, it will only return 123. What am I not seeing?
Source1.cpp
#ifndef HEADER_H
#define HEADER_H
using namespace std;

class testClass
{
private:
    long m_testLong = 0.0;
public:
    long getTestLong();
    void setTestLong(long sn);
};
#endif
Header.h
#include "Source1.cpp"
#include <string.h>
void testClass::setTestLong(long sn)
{
m_testLong = sn;
}
long testClass::getTestLong()
{
return m_testLong;
}
Source.cpp
#include <iostream>
#include "Source1.cpp"
#include "Header.h"
using namespace std;

int main(void)
{
    testClass *myClass = new testClass;
    cout << "Setting test long value using setTestLong function -- 1.123" << endl;
    myClass->setTestLong(1.123);
    long tempTestLong = 0.0;
    tempTestLong = myClass->getTestLong();
    cout << tempTestLong << endl;
    system("Pause");
    return 0;
}
OK, so the answer was painfully simple. I hadn't worked with longs before, but I thought I knew what they were. I didn't.
Longs and integers are both whole-number types; having the type listed as long made me assume an integer wouldn't work, so I tested the function with a double because of my misunderstanding. Thanks for the help!
The long and int types are integral types; they can only hold whole numbers like 7 or 42.
You should be using float or double as a type, preferably the latter for increased range and precision. That will allow you to hold real numbers such as 3.141592653589 or 2.718281828459.
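For instance, a minimal sketch of the difference:
#include <iostream>

int main() {
    long asLong = 1.123;     // fractional part silently discarded
    double asDouble = 1.123; // value preserved
    std::cout << asLong << '\n';   // prints 1
    std::cout << asDouble << '\n'; // prints 1.123
}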
Long is an integer type. Assigning a floating-point value to an integer truncates it; the fractional part is discarded, not rounded.
You want double or float.

C++ type conversion issue

Consider following code:
#include <iostream>
using namespace std;

int aaa(int a) {
    cout << a * 0.3 << endl;
    return a * 0.3;
}

int main()
{
    cout << aaa(35000);
}
It prints out:
10500
10499
Why does the output differ?
I have a workaround, "return a * 3 / 10;", but I don't like it.
Edit:
Found that doing "return float(a * 0.3);" gives the expected value (float has fewer mantissa bits, so 10499.999... rounds to exactly 10500.0f before the conversion to int).
The result of 0.3*35000 is a floating point number, just slightly less than 10500. When printed it is rounded to 10500, but when coerced into an int the fractional digits are discarded, resulting in 10499.
An int * double expression yields a double; that's what the first cout prints.
Then you convert to int, chopping off the fractional part (even though it's almost there, sitting just below 10500), and pass that back. The second cout prints that value.
Floating-point to integer conversions should be made with care, or better, not at all.
a * 0.3 has type double. The call inside aaa calls
ostream& operator<< (double val);
whereas the one outside calls
ostream& operator<< (int val);
You'd get a warning (if you turn warnings on - I suggest you do) that the implicit conversion from double to int loses precision.
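If the intent in aaa() is round-to-nearest rather than truncation, one option (just a sketch, not what the original code does) is std::lround from <cmath>:
#include <cmath>
#include <iostream>

// Sketch: round to nearest instead of truncating, so 35000 * 0.3
// (a double just below 10500) comes back as 10500.
int aaa(int a) {
    return static_cast<int>(std::lround(a * 0.3));
}

int main() {
    std::cout << aaa(35000) << '\n'; // prints 10500
}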

Issue with vector<bool> and printf

#include <vector>
#include <iostream>
#include <stdio.h>
using namespace std;

int main(int argc, const char *argv[])
{
    vector<bool> a;
    a.push_back(false);
    int t = a[0];
    printf("%d %d\n", a[0], t);
    return 0;
}
This code gives the output "5511088 1". I thought it would be "0 0".
Does anyone know why?
The %d format specifier is for arguments the size of an int, so printf is expecting two int-sized arguments. However, you're providing one argument that isn't an int, but rather a special proxy object returned by vector<bool> that is convertible to bool.
This basically causes printf to treat random bytes from the stack as part of the values, when in fact they aren't.
The solution is to cast the first argument to an int:
printf("%d %d\n", static_cast<int>(a[0]), t);
An even better solution would be to prefer streams over printf if at all possible, because unlike printf they are type-safe which makes it impossible for this kind of situation to happen:
cout << a[0] << " " << t << endl;
And if you're looking for a type-safe alternative for printf-like formatting, consider using the Boost Format library.
The %d format specifier is for the int type. So try:
cout << a[0] << "\t" << t << endl;
The key to the answer is that vector<bool> isn't really a vector of bools. It's really a vector of proxy objects, which are convertible to ints and bools. This allows each bool to be stored as a single bit, for greater space efficiency (at the cost of speed efficiency), but causes a number of problems like the one seen here. This requirement was voted into the C++ Standard in a rash moment, and I believe most committee members now believe it was a mistake, but it's in the Standard and we're kind of stuck with it.
The problem is triggered by the specialization of vector for bool.
The Standard Library defines a specialization of the vector template for bool. The description of this specialization indicates that the implementation should pack the elements so that every bool only uses one bit of memory. This is widely considered a mistake.
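You can see the proxy for yourself with a short sketch (assumes C++11):
#include <vector>
#include <type_traits>

int main() {
    std::vector<bool> a{false};
    // For any other element type this would be a plain reference;
    // for bool it's a proxy class, which is why printf mishandles it.
    static_assert(!std::is_same<decltype(a[0]), bool&>::value,
                  "vector<bool>::operator[] returns a proxy, not bool&");
}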
Basically each element of a std::vector<bool> uses 1 bit instead of 1 byte, so passing one directly to printf gives you undefined behavior.
If you really want to use printf, you can work around this by defining bool as char, so each element is stored as a whole byte and prints as an integer with %d (implicit conversion: 1 for true and 0 for false).
#include <vector>
#include <iostream>
#include <stdio.h>

#define bool char // solved

using namespace std;

int main(int argc, const char *argv[])
{
    vector<bool> a;
    a.push_back(false);
    int t = a[0];
    printf("%d %d\n", a[0], t);
    return 0;
}

stringstream unsigned conversion broken?

Consider this program:
#include <iostream>
#include <string>
#include <sstream>
#include <cassert>

int main()
{
    std::istringstream stream( "-1" );
    unsigned short n = 0;
    stream >> n;
    assert( stream.fail() && n == 0 );
    std::cout << "can't convert -1 to unsigned short" << std::endl;
    return 0;
}
I tried this on gcc (version 4.0.1 Apple Inc. build 5490) on OS X 10.5.6 and the assertion is true; it fails to convert -1 to an unsigned short.
In Visual Studio 2005 (and 2008), however, the assertion fails and the resulting value of n is what you would expect from a compiler-generated implicit conversion - i.e. "-1" is 65535, "-2" is 65534, etc. But then it gets weird at "-32769", which converts to 32767.
Who's right and who's wrong here? (And what the hell's going on with -32769??)
The behaviour claimed by GCC in Max Lybbert's post is based on the tables in the C++ Standard that map iostream behaviour onto printf/scanf converters (or at least that's my reading). However, the scanf behaviour of g++ seems to differ from the istream behaviour:
#include <iostream>
#include <cstdio>
using namespace std;

int main()
{
    unsigned short n = 0;
    if ( ! sscanf( "-1", "%hu", &n ) ) {
        cout << "conversion failed\n";
    }
    else {
        cout << n << endl;
    }
}
actually prints 65535.
First, reading the string "-1" as a negative number is locale dependent (it would be possible for a locale to identify negative numbers by enclosing them in parentheses). By default, your stream uses the "classic" C locale:
By far the dominant use of locales is implicitly, in stream I/O. Each istream and ostream has its own locale. The locale of a stream is by default the global locale at the time of the stream’s creation (page 6). ...
Initially, the global locale is the standard C locale, locale::classic() (page 11).
According to the GCC guys, numeric overflow is allowed to fail the stream input operation (talking about negative numbers that overflowed a signed int):
[T]he behaviour of libstdc++-v3 is strictly standard conforming. ... When the read is attempted it does not fit in a signed int i, and it fails.
Thanks to another answer, a bug was filed and this behavior changed:
Oops, apparently we never parsed correctly negative values for unsigned. The fix is simple. ...
Fixed in mainline, will be fixed in 4.4.1 too.
Second, although unsigned overflow wraps predictably, signed integer overflow is officially undefined behavior, so while I can't say why "-32769" converts to 32767, I think it's allowed.
Try this code:
#include <iostream>
#include <string>
#include <sstream>
#include <cassert>

int main()
{
    std::istringstream stream( "-1" );
    std::cout << "flags: " << (unsigned long)stream.flags() << std::endl;
    return 0;
}
I tried this on my VS2005:
flags: 513
and on codepad.org (which I think uses g++) this gives:
flags: 4098
This tells me that gcc uses different default fmtflags. Since fmtflags control what conversions are possible, you are getting different results.
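One caveat: the numeric values of the fmtflags bits are implementation-defined, so 513 and 4098 can't be compared bit-for-bit across compilers. A small sketch that checks the named flags portably instead:
#include <iostream>
#include <sstream>

int main()
{
    std::istringstream stream( "-1" );
    std::ios_base::fmtflags f = stream.flags();
    // fmtflags values are implementation-defined bitmasks, so test
    // against the named constants rather than raw numbers.
    std::cout << "dec:    " << ((f & std::ios_base::dec) != 0) << '\n';
    std::cout << "skipws: " << ((f & std::ios_base::skipws) != 0) << '\n';
    return 0;
}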