Why is this double value printed as "-0"? (C++)

double a = 0;
double b = -42;
double result = a * b;
cout << result;
The result of a * b is -0, but I expected 0. Where did I go wrong?

The bit representations of -0.0 and 0.0 are different, but they compare equal, so -0.0 == 0.0 returns true. In your case, result is -0.0 because one of the operands is negative.
See this demo:
#include <iostream>
#include <iomanip>
void print_bytes(char const *name, double d)
{
    unsigned char *pd = reinterpret_cast<unsigned char*>(&d);
    std::cout << name << " = " << std::setw(2) << d << " => ";
    for (std::size_t i = 0; i < sizeof(d); ++i)
        std::cout << (unsigned)pd[i] << " ";
    std::cout << std::endl;
}
#define print_bytes_of(a) print_bytes(#a, a)
int main()
{
    double a = 0.0;
    double b = -0.0;

    std::cout << "Value comparison" << std::endl;
    std::cout << "(a==b) => " << (a==b) << std::endl;
    std::cout << "(a!=b) => " << (a!=b) << std::endl;

    std::cout << "\nValue representation" << std::endl;
    print_bytes_of(a);
    print_bytes_of(b);
}
Output (demo on ideone):
Value comparison
(a==b) => 1
(a!=b) => 0
Value representation
a = 0 => 0 0 0 0 0 0 0 0
b = -0 => 0 0 0 0 0 0 0 128
As you can see, the last byte of -0.0 differs from the last byte of 0.0: that byte holds the sign bit (it is the most significant byte of the double, stored last on this little-endian machine).
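If you want to keep the sign from showing up in output, here is a small sketch of two common options (assuming IEEE 754 doubles): std::signbit reports whether the sign bit is set, and adding +0.0 turns -0.0 into +0.0 under the default rounding mode:

#include <cmath>     // std::signbit
#include <iostream>

int main()
{
    double result = 0.0 * -42.0;                 // -0.0
    std::cout << std::signbit(result) << "\n";   // 1: the sign bit is set
    std::cout << (result + 0.0) << "\n";         // 0: -0.0 + 0.0 is +0.0
}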
Hope that helps.

Related

Inserting elements into a vector in C++

For every element of the vector, I need to insert its negation right after it.
#include <iostream>
#include <vector>
int main() {
    std::vector<int> vek {1, 2, 3};
    std::cout << vek[0] << " " << vek[1] << " " << vek[2] << std::endl;
    for (int i = 0; i < 3; i++) {
        std::cout << i << " " << vek[i] << std::endl;
        vek.insert(vek.begin() + i + 1, -vek[i]);
    }
    std::cout << std::endl;
    for (int i : vek) std::cout << i << " ";
    return 0;
}
OUTPUT:
1 2 3
0 1 // it should be 0 1 (because vek[0]=1)
1 -1 // it should be 1 2 (because vek[1]=2)
2 1 // it should be 2 3 (because vek[2]=3)
1 -1 1 -1 2 3 // it should be 1 -1 2 -2 3 -3
Could you explain why insert doesn't insert the correct values? What is happening here?
Note: Auxiliary vectors (and other data types) are not allowed
During the for loop you are modifying the vector: after the first iteration, which inserts -1, the vector becomes [1, -1, 2, 3]. Therefore vek[1] is now -1 rather than 2, the index of 2 becomes 2, and after inserting -2 the index of the original value 3 becomes 4.
In the for loop you need to advance the index i by 2 instead of 1 (and double the bound to match):
#include <iostream>
#include <vector>
int main() {
    std::vector<int> vek {1, 2, 3};
    std::cout << vek[0] << " " << vek[1] << " " << vek[2] << std::endl;
    for (int i = 0; i < 3 * 2; i += 2) {
        std::cout << i << " " << vek[i] << std::endl;
        vek.insert(vek.begin() + i + 1, -vek[i]);
    }
    std::cout << std::endl;
    for (int i : vek) std::cout << i << " ";
    return 0;
}
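An alternative sketch that avoids the index bookkeeping altogether: insert returns an iterator to the newly inserted element, so you can continue from just past it (which also stays correct across reallocations):

#include <iostream>
#include <vector>

int main() {
    std::vector<int> vek {1, 2, 3};
    for (auto it = vek.begin(); it != vek.end(); ++it) {
        // insert the negation right after the current element and
        // resume from the inserted element, which ++it then skips
        it = vek.insert(it + 1, -*it);
    }
    for (int i : vek) std::cout << i << " ";   // 1 -1 2 -2 3 -3
    return 0;
}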

Integer overflow and std::stoi

If x > INT_MAX or x < INT_MIN, the function should return 0... or that's what I'm trying to do :)
In my test case I pass in a value that is INT_MAX + 1... 2147483648 ... to introduce integer overflow and see how the program handles it.
I step through... my IDE's debugger says that the value immediately goes to -2147483648 upon overflow, and for some reason the program executes beyond both of these statements:
if (x > INT_MAX)
if (x < INT_MIN)
and crashes at int revInt = std::stoi(strNum);
saying out of range.
Must be something simple, but it's got me stumped. Why isn't the program returning before it ever gets to that std::stoi(), given x > INT_MAX? Any help appreciated. Thanks! Full listing of the function and test bed below (sorry, having trouble with the code insertion formatting):
#include <iostream>
#include <algorithm>
#include <climits>   // INT_MAX, INT_MIN
#include <string> //using namespace std;
class Solution {
public:
    int reverse(int x)
    {
        // check special cases for int and set flags:
        // is x > max int, need to return 0 now
        if (x > INT_MAX)
            return 0;
        // is x < min int, need to return 0 now
        if (x < INT_MIN)
            return 0;
        // is x < 0, need negative sign handled at end
        // does x end with 0, need to not start the new int with 0 if it's
        // multi-digit; the functions used handle that for us
        // do conversion, reversal, output:
        // convert int to string
        std::string strNum = std::to_string(x);
        // reverse string
        std::reverse(strNum.begin(), strNum.end());
        // convert reversed string to int
        int revInt = std::stoi(strNum);
        // multiply by -1 if x was negative
        if (x < 0)
            revInt = revInt * -1;
        // output reversed integer
        return revInt;
    }
};
Main:
#include <iostream>
int main(int argc, const char * argv[]) {
    // test cases
    // instantiate Solution and call its method
    Solution sol;
    int answer = sol.reverse(0); // 0
    std::cout << "in " << 0 << ", out " << answer << "\n";
    answer = sol.reverse(-1); // -1
    std::cout << "in " << -1 << ", out " << answer << "\n";
    answer = sol.reverse(10); // 1
    std::cout << "in " << 10 << ", out " << answer << "\n";
    answer = sol.reverse(12); // 21
    std::cout << "in " << 12 << ", out " << answer << "\n";
    answer = sol.reverse(100); // 1
    std::cout << "in " << 100 << ", out " << answer << "\n";
    answer = sol.reverse(123); // 321
    std::cout << "in " << 123 << ", out " << answer << "\n";
    answer = sol.reverse(-123); // -321
    std::cout << "in " << -123 << ", out " << answer << "\n";
    answer = sol.reverse(1024); // 4201
    std::cout << "in " << 1024 << ", out " << answer << "\n";
    answer = sol.reverse(-1024); // -4201
    std::cout << "in " << -1024 << ", out " << answer << "\n";
    answer = sol.reverse(2147483648); // 0
    std::cout << "in " << 2147483648 << ", out " << answer << "\n";
    answer = sol.reverse(-2147483648); // 0
    std::cout << "in " << -2147483648 << ", out " << answer << "\n";
    return 0;
}
Any test like (x > INT_MAX) with x being of type int can never evaluate to true, since the value of an int cannot exceed INT_MAX. The literal 2147483648 is converted to int at the call site, before your function ever runs (the conversion is implementation-defined and typically wraps to -2147483648, which is exactly what your debugger shows).
Anyway, even though 2147483647 is a valid int, its reverse 7463847412 is not.
So I think it's better to let stoi "try" to convert the values and "catch" any out_of_range exception. The following code illustrates this approach:
#include <iostream>
#include <stdexcept>
#include <string>

int convert() {
    const char* num = "12345678890123424542";
    try {
        int x = std::stoi(num);
        return x;
    } catch (const std::out_of_range& e) {
        std::cout << "invalid." << std::endl;
        return 0;
    }
}
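If you'd rather avoid exceptions, here is a sketch of the same idea applied to the reverse function (the name reverse_checked is mine): parse the reversed digits into a wider type with std::stoll and only narrow to int after checking the range:

#include <algorithm>
#include <climits>
#include <string>

int reverse_checked(int x)
{
    // widen first so negating INT_MIN doesn't overflow
    long long v = x;
    std::string s = std::to_string(v < 0 ? -v : v);
    std::reverse(s.begin(), s.end());
    long long r = std::stoll(s);        // at most 10 digits, fits a long long
    if (v < 0)
        r = -r;
    if (r > INT_MAX || r < INT_MIN)     // reversed value doesn't fit in an int
        return 0;
    return static_cast<int>(r);
}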

C++ how to print out array values next to each other rather than underneath

I have made this section of code and would like it to print out like so:
Element Value
0 0
1 0
2 0
3 0
4 0
5 0
6 0
7 0
8 0
9 0
instead of the two lists being printed one underneath the other. Could anyone point me in the right direction?
#include <iostream>
int main(){
    int n[10] = {1,2,3,4,5,6,7,8,9,10};
    int x[10] = {0,0,0,0,0,0,0,0,0,0};
    std::cout << "Element" << std::endl;
    for (int i = 0; i < 10; i++)
    {
        std::cout << n[i] << std::endl;
    }
    std::cout << "Value" << std::endl;
    for (int y = 0; y < 10; y++)
    {
        std::cout << x[y] << std::endl;
    }
    std::cout << "size of array: " << sizeof(n) << std::endl;
}
Print the values next to each other in one loop instead of using two loops (I changed the contents of x to make sure we see what's going on):
#include <iostream>
int main(){
    int x[10] = {5,8,1,2,4,6,7,1,0,9};
    std::cout << "Element Value" << std::endl;
    for (int i = 0; i < 10; i++) {
        std::cout << " " << i << " " << x[i] << std::endl;
    }
    std::cout << "size of array: " << sizeof(x) << std::endl;
    return 0;
}
https://ideone.com/vAaLyl
Output:
Element Value
0 5
1 8
2 1
3 2
4 4
5 6
6 7
7 1
8 0
9 9
size of array: 40
Other than that, there's no need for the array n; the loop index i already gives you the element number.
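If you also want the two columns to line up regardless of how wide the values get, here is a small variant using std::setw from <iomanip> (a sketch; the field widths are arbitrary):

#include <iomanip>
#include <iostream>

int main() {
    int x[10] = {5, 8, 1, 2, 4, 6, 7, 1, 0, 9};
    std::cout << std::setw(7) << "Element" << std::setw(7) << "Value" << "\n";
    for (int i = 0; i < 10; i++) {
        // setw pads each field on the left to a fixed width
        std::cout << std::setw(7) << i << std::setw(7) << x[i] << "\n";
    }
    return 0;
}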

copy member variable into byte vector

I want to copy a 64-bit member variable into a vector byte by byte.
Please don't tell me to use bit operations to extract each byte and push them into the vector.
I want to do this in one line.
I tried memcpy and std::copy, but both of them failed.
Here is the sample code:
#include <iostream>
#include <vector>
#include <cstdint>
#include <cstring>
using namespace std;
class A {
public:
    A()
        : eight_bytes_data(0x1234567812345678) {
    }
    void Test() {
        vector<uint8_t> data_out;
        data_out.reserve(8);
        memcpy(data_out.data(),
               &eight_bytes_data,
               8);
        cerr << "[Test]" << data_out.size() << endl;
    }
    void Test2() {
        vector<uint8_t> data_out;
        data_out.reserve(8);
        copy(&eight_bytes_data,
             (&eight_bytes_data) + 8,
             back_inserter(data_out));
        cerr << "[Test2]" << data_out.size() << endl;
        for (auto value : data_out) {
            cerr << hex << value << endl;
        }
    }
private:
    uint64_t eight_bytes_data;
};
int main() {
    A a;
    a.Test();
    a.Test2();
    return 0;
}
As others have already shown where you went wrong, here is a one-line solution that is dangerous.
First you need to make sure that your vector actually has enough size to receive 8 bytes, something like this:
data_out.resize(8);
Then you can use a reinterpret_cast to make the compiler treat those 8 bytes of the vector's storage as a single 8-byte value, and do the copy:
*(reinterpret_cast<uint64_t*>(data_out.data())) = eight_bytes_data;
I can't list all the ways this could go wrong (it violates strict aliasing, for one), so use it at your own risk.
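For reference, a sketch of what the two original attempts were missing (added to the same class A; the names TestFixed and Test2Fixed are mine): in Test, reserve only sets capacity, so size() stays 0 and memcpy writes past what the vector considers its contents; in Test2, (&eight_bytes_data) + 8 advances by eight uint64_t objects (64 bytes), not 8 bytes, so the range has to be built from byte pointers instead.

void TestFixed() {
    std::vector<uint8_t> data_out;
    data_out.resize(sizeof(eight_bytes_data));          // size() is now 8
    std::memcpy(data_out.data(), &eight_bytes_data, sizeof(eight_bytes_data));
    std::cerr << "[TestFixed]" << data_out.size() << std::endl;
}

void Test2Fixed() {
    const auto* first = reinterpret_cast<const uint8_t*>(&eight_bytes_data);
    // construct straight from the byte range [first, first + 8)
    std::vector<uint8_t> data_out(first, first + sizeof(eight_bytes_data));
    std::cerr << "[Test2Fixed]" << data_out.size() << std::endl;
}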
If you want to work with the bytes of some other type, you can use a char* to access each byte:
void Test3()
{
    vector<uint8_t> data_out;
    char* pbyte = (char*)&eight_bytes_data;
    for (size_t i = 0; i < sizeof(eight_bytes_data); ++i)
    {
        data_out.push_back(pbyte[i]);
    }
    cerr << "[Test3]" << data_out.size() << endl;
}
Unfortunately, you requested a one-line solution, which I don't think is viable here.
If you are interested in a more generic version:
// includes needed to compile this snippet
#include <bitset>
#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

namespace detail
{
    template<typename Byte, typename T>
    struct Impl
    {
        static std::vector<Byte> impl(const T& data)
        {
            std::vector<Byte> bytes;
            bytes.resize(sizeof(T) / sizeof(Byte));
            *(T*)bytes.data() = data;
            return bytes;
        }
    };

    template<typename T>
    struct Impl<bool, T>
    {
        static std::vector<bool> impl(const T& data)
        {
            std::bitset<sizeof(T) * 8> bits(data);
            std::string string = bits.to_string();
            std::vector<bool> vector;
            for (const auto& x : string)
                vector.push_back(x - '0');
            return vector;
        }
    };
}

template<typename Byte = uint8_t, typename T>
std::vector<Byte> data_to_bytes(const T& data)
{
    return detail::Impl<Byte, T>::impl(data);
}
int main()
{
    uint64_t test = 0x1111222233334444ull;

    for (auto x : data_to_bytes<bool>(test))
        std::cout << std::hex << uintmax_t(x) << " ";
    std::cout << std::endl << std::endl;

    for (auto x : data_to_bytes(test))
        std::cout << std::hex << uintmax_t(x) << " ";
    std::cout << std::endl << std::endl;

    for (auto x : data_to_bytes<uint16_t>(test))
        std::cout << std::hex << uintmax_t(x) << " ";
    std::cout << std::endl << std::endl;

    for (auto x : data_to_bytes<uint32_t>(test))
        std::cout << std::hex << uintmax_t(x) << " ";
    std::cout << std::endl << std::endl;

    for (auto x : data_to_bytes<uint64_t>(test))
        std::cout << std::hex << uintmax_t(x) << " ";
    std::cout << std::endl << std::endl;
}
Output:
0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0 1 0 0 0 1 0 0 0 1 0 0 0 1 0 0
44 44 33 33 22 22 11 11
4444 3333 2222 1111
33334444 11112222
1111222233334444

Significant figures in C++

I've written a program that calculates values in a series, and all of the values are particularly lengthy doubles. I want to print each of these values with 15 significant figures. Here's some code that illustrates the issue I'm having:
#include <iostream>
#include <iomanip>
using namespace std;
int main()
{
    double x = 0.12345678901234567890;
    double y = 1.12345678901234567890;
    cout << setprecision(15) << fixed << x << "\t" << y << "\n";
    return 0;
}
With just setprecision, trailing zeros are not shown, so I added fixed, as I have seen in other answers on this site. However, now I just seem to get 15 decimal places, and for values that aren't 0.something this is not what I want. You can see this from the output of the above:
0.123456789012346 1.123456789012346
The first number has 15 sig figs but the second has 16. What can I do to resolve this?
EDIT: I have been specifically asked to use setprecision, so I am unable to try cout.precision.
You can simply use scientific (note the 14 instead of 15: scientific notation always puts exactly one digit before the decimal point, so 14 digits after it give 15 significant figures):
std::cout << std::scientific << std::setprecision(14) << -0.123456789012345678 << std::endl;
std::cout << std::scientific << std::setprecision(14) << -1.234567890123456789 << std::endl;
-1.23456789012346e-01
-1.23456789012346e+00
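Another simple option, if you'd rather not force scientific notation everywhere: keep the default (general) floating-point format, where setprecision counts significant digits, and add std::showpoint so trailing zeros are not dropped (a sketch; large or tiny values will still switch to scientific form, as with printf's %#g):

#include <iomanip>
#include <iostream>

int main()
{
    double x = 0.12345678901234567890;
    double y = 1.12345678901234567890;
    // defaultfloat: precision = significant digits; showpoint keeps trailing zeros
    std::cout << std::setprecision(15) << std::showpoint
              << x << "\t" << y << "\t" << 0.5 << "\n";
    // prints: 0.123456789012346   1.12345678901235   0.500000000000000
}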
or you can use a function:
#include <iostream>
#include <vector>
#include <iomanip>
#include <string>
#include <sstream>
enum vis_opt { scientific, decimal, decimal_relaxed };
std::string figures(double x, int nfig, vis_opt vo = decimal) {
    std::stringstream str;
    str << std::setprecision(nfig - 1) << std::scientific << x;
    std::string s = str.str();
    if ( vo == scientific )
        return s;
    else {
        std::stringstream out;
        std::size_t pos;
        int ileft = std::stoi(s, &pos);
        std::string dec = s.substr(pos + 1, nfig - 1);
        int e = std::stoi(s.substr(pos + nfig + 1));
        if ( e < 0 ) {
            std::string zeroes(-1 - e, '0');
            if ( ileft < 0 )
                out << "-0." << zeroes << -ileft << dec;
            else
                out << "0." << zeroes << ileft << dec;
        } else if ( e == 0 ) {
            out << ileft << '.' << dec;
        } else if ( e < ( nfig - 1 ) ) {
            out << ileft << dec.substr(0, e) << '.' << dec.substr(e);
        } else if ( e == ( nfig - 1 ) ) {
            out << ileft << dec;
        } else {
            if ( vo == decimal_relaxed ) {
                out << s;
            } else {
                out << ileft << dec << std::string(e - nfig + 1, '0');
            }
        }
        return out.str();
    }
}
int main() {
    std::vector<double> test_cases = {
        -123456789012345,
        -12.34567890123456789,
        -0.1234567890123456789,
        -0.0001234,
        0,
        0.0001234,
        0.1234567890123456789,
        12.34567890123456789,
        1.234567890123456789,
        12345678901234,
        123456789012345,
        1234567890123456789.0,
    };
    for ( auto i : test_cases ) {
        std::cout << std::setw(22) << std::right << figures(i, 15, scientific);
        std::cout << std::setw(22) << std::right << figures(i, 15) << std::endl;
    }
    return 0;
}
My output is:
-1.23456789012345e+14 -123456789012345
-1.23456789012346e+01 -12.3456789012346
-1.23456789012346e-01 -0.123456789012346
-1.23400000000000e-04 -0.000123400000000000
0.00000000000000e+00 0.00000000000000
1.23400000000000e-04 0.000123400000000000
1.23456789012346e-01 0.123456789012346
1.23456789012346e+01 12.3456789012346
1.23456789012346e+00 1.23456789012346
1.23456789012340e+13 12345678901234.0
1.23456789012345e+14 123456789012345
1.23456789012346e+18 1234567890123460000
I've found some success in just computing the integer significant figures and then setting the floating-point precision to be X minus the integer significant figures:
Edit
To address Bob's comments, I'll account for more edge cases. I've refactored the code somewhat to adjust the field precision based on leading and trailing zeros. I believe there would still be an edge case for very small values (like std::numeric_limits<double>::epsilon):
int AdjustPrecision(int desiredPrecision, double _in)
{
    // case of all zeros
    if (_in == 0.0)
        return desiredPrecision;

    // handle leading zeros before decimal place
    size_t truncated = static_cast<size_t>(_in);
    while (truncated != 0)
    {
        truncated /= 10;
        --desiredPrecision;
    }

    // handle trailing zeros after decimal place
    _in *= 10;
    while (static_cast<size_t>(_in) == 0)
    {
        _in *= 10;
        ++desiredPrecision;
    }
    return desiredPrecision;
}
With more tests:
double a = 0.000123456789012345;
double b = 123456789012345;
double x = 0.12345678901234567890;
double y = 1.12345678901234567890;
double z = 11.12345678901234567890;
std::cout.setf(std::ios::fixed, std::ios::floatfield);
std::cout << "a: " << std::setprecision(AdjustPrecision(15, a)) << a << std::endl;
std::cout << "b: " << std::setprecision(AdjustPrecision(15, b)) << b << std::endl;
std::cout << "x " << std::setprecision(AdjustPrecision(15, x)) << x << std::endl;
std::cout << "y " << std::setprecision(AdjustPrecision(15, y)) << y << std::endl;
std::cout << "z: " << std::setprecision(AdjustPrecision(15, z)) << z << std::endl;
Output:
a: 0.000123456789012345
b: 123456789012345
x 0.123456789012346
y 1.12345678901235
z: 11.1234567890123
int GetIntegerSigFigs(double _in)
{
    int toReturn = 0;
    int truncated = static_cast<int>(_in);
    while (truncated != 0)
    {
        truncated /= 10;
        ++toReturn;
    }
    return toReturn;
}
(I'm sure there are some edge cases I'm missing)
And then using it:
double x = 0.12345678901234567890;
double y = 1.12345678901234567890;
std::cout << std::setprecision(15 - GetIntegerSigFigs(x)) << x
          << "\t" << std::setprecision(15 - GetIntegerSigFigs(y)) << y << "\n";
Prints:
0.123456789012346 1.12345678901235