Only 2 digits in exponent in scientific ofstream - c++

So according to cplusplus.com, when you set the format flag of an output stream to scientific notation via
of.setf(ios::scientific)
you should see three digits plus a sign in the exponent. However, I only seem to get two in my output. Any ideas? Compiled on Mac OS using GCC 4.0.1.
Here's the actual code I am using:
of.setf(ios::scientific);
of.precision(6);
for (int i = 0; i < dims[0]; ++i) {
    for (int j = 0; j < dims[1]; ++j) {
        of << setw(15) << data[i * dims[1] + j];
    }
    of << endl;
}
and an example line of output:
1.015037e+00 1.015037e+00 1.395640e-06 -1.119544e-06 -8.333264e-07
Thanks

I believe cplusplus.com is incorrect, or at least is documenting a particular implementation. I can't find any other online docs that specifically state the number of exponent digits displayed, nor anything about it in the C++ specification.
Edit:
The C++ Standard Library: A Tutorial and Reference doesn't explicitly state the number of exponent digits either, but all of its examples display two.

It's implementation specific.

AFAIK this is a bug in Microsoft's implementation:
http://groups.google.com/group/comp.lang.c++/browse_thread/thread/624b679a4faf03d
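For completeness: older Microsoft CRTs (before the 2015 Universal CRT, which made two digits the default) printed three exponent digits and exposed a switch to change that. A sketch, assuming one of those pre-2015 CRTs:

#include <stdio.h>

int main(void)
{
    /* MSVC-specific; pre-2015 CRTs only. Switches printf-family
       output from three exponent digits to two. */
    _set_output_format(_TWO_DIGIT_EXPONENT);
    printf("%e\n", 1.5);  /* 1.500000e+00 */
    return 0;
}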

I'm getting three exponent digits in MSVC++08 and g++ 4.4.0 with this code:
#include <algorithm>
#include <cstdlib>
#include <iomanip>
#include <iostream>
#include <iterator>
#include <vector>

typedef float NumberType;

NumberType generate_number()
{
    return static_cast<NumberType>(std::rand()) / RAND_MAX;
}

void print_number(NumberType d)
{
    std::cout << std::setw(15) << d << std::endl;
}

int main()
{
    std::vector<NumberType> data;
    std::generate_n(std::back_inserter(data), 10, generate_number);

    // print in scientific notation with six digits of precision
    std::cout.setf(std::ios::scientific);
    std::cout.precision(6);
    std::for_each(data.begin(), data.end(), print_number);
}
You can easily change the number type it uses. It gives me three exponent digits with both float and double, and the standard says nothing about the actual formatting, so I'd go with mgb's answer.

I have just had a thought: since I am printing floats, why would it display 3 exponent digits when the float max/min exponent is only ~38? I bet if the data array were of type double there would be 3.
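On implementations that print only as many exponent digits as needed, the width actually tracks the magnitude of the value rather than the stored type (operator<< formats a float as a double anyway). A quick sketch to see both widths, assuming such an implementation:

#include <iostream>

int main()
{
    std::cout.setf(std::ios::scientific);
    std::cout.precision(6);
    std::cout << 1.5e-6f << std::endl;   // fits a float: 1.500000e-06
    std::cout << 1.5e-300 << std::endl;  // double-only magnitude: 1.500000e-300
}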

Related

Concatenate float with string and round to 2 decimal places

So I have a function below, declared as the polymorphic function void display(string& outStr). The output from this function should basically be formatted into one large string, which is saved to the outStr parameter and returned to the calling function.
I have successfully formatted my large string into multiple lines, but I would like to round my float value to 2 decimal places and can't figure out how with the way I'm currently appending my strings. I tried the round() and ceil() functions as some posts online suggested, but six digits still appear after the decimal point. I would appreciate some help with this, as I've been looking for solutions for a while and none of them have worked.
Additionally, I was wondering whether the to_string() function I used to convert my float to a string would compile and execute correctly in C++98? I'm using C++11, but my teacher is using C++98 and I'm extremely worried that it won't compile on her end.
If not, can anyone suggest how else I could turn a float into a string while still formatting multiple lines into the outStr parameter and returning it to the caller? I am not allowed to change the function's parameters; it must stay display(string& outStr).
My output is a lot longer and more complex, but I simplified the example for the sake of a short and easy solution.
Again, I would appreciate any help!
#include <iostream>
#include <string>
#include <sstream>
#include <cmath>
#include "Math.h"
using namespace std;

void Math::display(string& outStr) {
    float numOne = 35;
    float numTwo = 33;

    string hello = "Hello, your percent is: \n";
    outStr.append(hello);

    string percent = "Percent: \n";
    outStr.append(percent);

    float numPercent = ceil(((numOne / numTwo) * 100) * 100.0) / 100.0;
    outStr.append(to_string(numPercent));
    outStr.append("\n");
}
Output should look like:
Hello, your percent is:
Number:
106.06%
There is no need to do any crazy conversions. Since the function is called display, my guess is that it's actually supposed to display the value instead of just saving it to a string.
The following code demonstrates how that can be accomplished just by formatting your printing.
#include <cstdio>
#include <iomanip>
#include <iostream>

int main() {
    double percentage = 83.1415926;
    std::cout << "Raw: " << percentage << "%\n";
    std::cout << "cout: " << std::fixed << std::setprecision(2) << percentage << "%\n";
    printf("printf: %.2f%%\n", percentage);  // double up % to print the actual symbol
}
Output is:
Raw: 83.1416%
cout: 83.14%
printf: 83.14%
If the function is as backwards as you describe, there are two possibilities: you don't understand what's actually required and are giving us a bad explanation (my guess, given that function signature), or the assignment itself is pure garbage. As much as SO likes to rag on professors, I find it difficult to believe that what you've described and written is what the professor wants. It makes no sense.
A couple of notes: there is nothing polymorphic about the code you've shown, and to_string() exists only as of C++11, which is easily seen by looking up the function. There is also a discrepancy between what your code attempts to print and what your output shows, and that's before we even get to the number formatting portion: "Percent" or "Number"?
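On the C++98 question: to_string() is indeed C++11-only, but the same result is easy to get from a stringstream, which also gives you the two decimal places. A minimal sketch, with a made-up helper name floatToString:

#include <iomanip>
#include <iostream>
#include <sstream>
#include <string>

// C++98-friendly replacement for to_string(float), with a fixed
// number of digits after the decimal point.
std::string floatToString(float value, int places)
{
    std::ostringstream oss;
    oss << std::fixed << std::setprecision(places) << value;
    return oss.str();
}

int main()
{
    std::string outStr;
    outStr.append("Percent: \n");
    outStr.append(floatToString(106.0606f, 2));  // appends "106.06"
    outStr.append("%\n");
    std::cout << outStr;
}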

C++ Convert string to float

I am trying to convert a string-based number to float. Unfortunately, I am getting either a rounded or a truncated value. How can I fix this?
std::string text = "199102.92";
float v = std::stof(text);
std::cout<<v<<std::endl;
This results in 199103.
Even if I use setprecision and fixed, it only affects the output stream; the value in the float variable still appears to be 199103. How can I resolve this problem?
I have also used stringstream in C++, but the result seems to be the same; it just displays differently.
I need to preserve the decimal up to 2 places.
I have used stof and stod; they all do the same thing.
You may assume that I am working with currencies.
I assume that you are using std::setprecision and std::fixed incorrectly.
The following works for me:
#include <iostream>
#include <iomanip>
#include <string>

int main() {
    std::string text = "199102.92";
    float v = std::stof(text);
    std::cout << std::setprecision(2) << std::fixed << v << std::endl;
}
The result is 199102.92
Compiler info: g++ 5.4.0, --std=c++11.
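To underline the point: the float never held 199103; the default six-significant-digit display merely rounded it. A small sketch showing the value actually stored (the trailing digits depend on IEEE single precision):

#include <iostream>
#include <iomanip>
#include <string>

int main() {
    float v = std::stof("199102.92");
    std::cout << v << std::endl;                           // 199103 (default precision 6)
    std::cout << std::setprecision(10) << v << std::endl;  // 199102.9219, the nearest float
}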

G++ floating point precision

I have these lines in a C++ program:
auto f = log (FLT_MAX / 4);
printf("%e", f);
cout << f;
The printf result is 8.733654e+01, but cout gives me 87.3365. I checked the 32-bit hex values; they're respectively 0x3f5f94e0 and 0x3f5f94d9, meaning there seems to be enough precision to represent the value exactly.
Do you know why cout is truncating that floating point value?
Do you know why cout is truncating that floating point value?
Because the default precision of C++ streams is 6.
You can change the precision with std::setprecision.
This has nothing to do with g++.
What you should do is this:
#include <limits>
#include <iomanip>
#include <iostream>

std::cout << std::setprecision(std::numeric_limits<double>::digits10 + 1) << f;
You can also use long double instead of double to get the maximum precision available.
Documentation
std::setprecision
std::numeric_limits
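Putting the question and the fix together, a runnable sketch (note that std::log has a float overload, so f below is a float):

#include <cfloat>
#include <cmath>
#include <iomanip>
#include <iostream>
#include <limits>

int main()
{
    auto f = std::log(FLT_MAX / 4);  // a float, via the float overload of std::log
    std::cout << f << std::endl;     // 87.3365 (default precision 6)
    std::cout << std::setprecision(std::numeric_limits<double>::digits10 + 1)
              << f << std::endl;     // all stored digits, roughly 87.33654...
}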

Converting two's complement output to signed decimal

I apologize if this question has already been answered, but I have not been able to find what I am looking for.
I am working in C++ with an SPI device. The SPI device outputs data in 16-bit words in two's complement form. I am trying to convert this data to signed decimal for use with a filter.
I've attached some sample code that asks the user to input a number in two's complement and then outputs the signed decimal version.
#include <iostream>
#include <stdlib.h>
#include <cstdint>
#include <cmath>
#include <bitset>

using std::cout;
using std::endl;
using std::cin;
using std::hex;
using std::dec;
using std::bitset;

int main() {
    uint16_t x2 = 0;
    cout << "Please enter the number you would like to convert from 2's complement. " << endl;
    cin >> x2;
    int diff = 0x0000 - x2;
    cout << "The number you have entered is: " << dec << diff << endl;
    return 0;
}
When I run this program and input something like 0x3B4A, it always outputs 0. I'm not entirely sure what is going on, and I'm very new to C++, so please excuse me if this is a stupid question. Also, please ignore anything extra in the headers; this is part of a large project and I couldn't remember which parts of the header go with this specific section of code.
Thanks!
Edit: This is mostly for Ben. After reading your most recent comment I made the following changes, but I am still simply getting the decimal equivalent of the hexadecimal number I entered:
#include <iostream>
#include <stdlib.h>
#include <cstdint>
#include <cmath>
#include <bitset>

using std::cout;
using std::endl;
using std::cin;
using std::hex;
using std::dec;
using std::bitset;

int main() {
    int16_t x2 = 0;
    cout << "Please enter the number you would like to convert from 2's complement. " << endl;
    cin >> hex >> x2;
    int flags = (x2 >> 14) & 3;
    int16_t value = (x2 << 2) >> 2;
    cout << "The number you have entered is: " << dec << value << endl;
    return 0;
}
I'm not sure it is necessary for the OP's question, but for anybody who is just looking for the formula for converting a 16-bit two's complement unsigned integer to a signed integer, a variant of it looks like this (for input val):
(0x8000&val ? (int)(0x7FFF&val)-0x8000 : val)
This amounts to:
- if the first bit is 1, it is a negative number, with all other bits in two's complement
- extract the negative part by subtracting off the 0x8000
- otherwise the lower bits are just the positive integer value
Probably a good idea to wrap this in a function and do some basic error checking (you can also enforce that the input is actually an unsigned 16-bit integer); see the sketch below.
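A minimal version of such a wrapper might look like this (the name to_signed16 is made up, and error checking is omitted):

#include <cstdint>
#include <iostream>

// Reinterpret a raw 16-bit word as a two's complement signed value.
int to_signed16(std::uint16_t val)
{
    return (0x8000 & val) ? (int)(0x7FFF & val) - 0x8000 : val;
}

int main()
{
    std::cout << to_signed16(0x3B4A) << std::endl;  // 15178
    std::cout << to_signed16(0xFFFF) << std::endl;  // -1
}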
You asked cin to read as decimal (by making no format changes) so as soon as it reads the x, which is not a 0-9 digit, it stops, leaving you with zero.
Just add hex to your cin line: cin >> hex >> x2;
The standard library function strtol converts string input to a number, and supports the 0x prefix as long as you pass 0 as the radix argument.
Since int16_t is almost certainly 16-bit two's complement signed, you can just use that.
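Combining those two suggestions into a sketch (converting an out-of-range value to int16_t is implementation-defined before C++20, but it wraps as expected on mainstream compilers):

#include <cstdint>
#include <cstdlib>
#include <iostream>

int main()
{
    const char* input = "0xFB4A";
    // Radix 0 lets strtol accept the 0x prefix (and plain decimal too).
    long raw = std::strtol(input, 0, 0);
    // Keep the low 16 bits and reinterpret them as two's complement.
    std::int16_t value = static_cast<std::int16_t>(raw);
    std::cout << value << std::endl;  // -1206
}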

Precise floating-point<->string conversion

I am looking for a library function to convert floating point numbers to strings, and back again, in C++. The properties I want are that str2num(num2str(x)) == x and that num2str(str2num(x)) == x (as far as possible). The general property is that num2str should represent the simplest rational number that, when rounded to the nearest representable floating point number, gives you back the original number.
So far I've tried boost::lexical_cast:
double d = 1.34;
string_t s = boost::lexical_cast<string_t>(d);
printf("%s\n", s.c_str());
// outputs 1.3400000000000001
And I've tried std::ostringstream, which seems to work for most values if I do stream.precision(16). However, at precision 15 or 17 it either truncates or gives ugly output for things like 1.34. I don't think precision 16 is guaranteed to have any of the properties I require, and I suspect it breaks down for many numbers.
Is there a C++ library that has such a conversion? Or is such a conversion function already buried somewhere in the standard libraries or Boost?
The reason for wanting these functions is to save floating point values to CSV files, and then read them correctly. In addition, I'd like the CSV files to contain simple numbers as far as possible so they can be consumed by humans.
I know that the Haskell read/show functions already have the properties I am after, as do the BSD C libraries. The standard references for string<->double conversions are a pair of papers from PLDI 1990:
How to Read Floating Point Numbers Accurately, William Clinger
How to Print Floating-Point Numbers Accurately, Guy Steele et al
Any C++ library/function based on these would be suitable.
EDIT: I am fully aware that floating point numbers are inexact representations of decimal numbers, and that 1.34 == 1.3400000000000001. However, as the papers referenced above point out, that's no excuse for choosing to display it as "1.3400000000000001".
EDIT2: This paper explains exactly what I'm looking for: http://drj11.wordpress.com/2007/07/03/python-poor-printing-of-floating-point/
I am still unable to find a library that supplies the necessary code, but I did find some code that does work:
http://svn.python.org/view/python/branches/py3k/Python/dtoa.c?view=markup
By supplying a fairly small number of defines, it's easy to abstract away the Python integration. This code does indeed meet all the properties I outlined.
I think this does what you want, in combination with the standard library's strtod():
#include <stdio.h>
#include <stdlib.h>

int dtostr(char* buf, size_t size, double n)
{
    int prec = 15;
    while(1)
    {
        int ret = snprintf(buf, size, "%.*g", prec, n);
        if(prec++ == 18 || n == strtod(buf, 0)) return ret;
    }
}
A simple demo, which doesn't bother to check input words for trailing garbage:
int main(int argc, char** argv)
{
    int i;
    for(i = 1; i < argc; i++)
    {
        char buf[32];
        dtostr(buf, sizeof(buf), strtod(argv[i], 0));
        printf("%s\n", buf);
    }
    return 0;
}
Some example inputs:
% ./a.out 0.1 1234567890.1234567890 17 1e99 1.34 0.000001 0 -0 +INF NaN
0.1
1234567890.1234567
17
1e+99
1.34
1e-06
0
-0
inf
nan
I imagine your C library needs to conform to some sufficiently recent version of the standard in order to guarantee correct rounding.
I'm not sure I chose the ideal bounds on prec, but I imagine they must be close. Maybe they could be tighter? Similarly, I think 32 characters for buf is always sufficient but never necessary. Obviously this all assumes 64-bit IEEE doubles; it might be worth checking that assumption with some kind of clever preprocessor directive, and sizeof(double) == 8 would be a good start.
The exponent is a bit messy, but it wouldn't be difficult to fix after breaking out of the loop but before returning, perhaps using memmove() or suchlike to shift things leftwards, as in the sketch below. I'm pretty sure there's guaranteed to be at most one + and at most one leading 0, and I don't think they can even both occur at the same time for prec >= 10 or so.
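For instance, a sketch of that clean-up, assuming the usual C-library "e±0NN" shape (the name tidy_exponent is made up):

#include <cstdio>
#include <cstring>

// Shift the exponent leftwards in place: "1e+06" -> "1e6", "1e-06" -> "1e-6".
void tidy_exponent(char* buf)
{
    char* e = std::strchr(buf, 'e');
    if(!e) return;
    char* p = e + 1;
    if(*p == '+')
        std::memmove(p, p + 1, std::strlen(p));   // drop the '+' (copies the NUL too)
    char* digits = (*p == '-') ? p + 1 : p;
    while(digits[0] == '0' && digits[1] != '\0')  // drop leading zeros, keep one digit
        std::memmove(digits, digits + 1, std::strlen(digits));
}

int main()
{
    char a[] = "1e+99", b[] = "1e-06";
    tidy_exponent(a);
    tidy_exponent(b);
    printf("%s %s\n", a, b);  // 1e99 1e-6
}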
Likewise if you'd rather ignore signed zero, as Javascript does, you can easily handle it up front, e.g.:
if(n == 0) return snprintf(buf, size, "0");
I'd be curious to see a detailed comparison with that 3000-line monstrosity you dug up in the Python codebase. Presumably the short version is slower, or less correct, or something? It would be disappointing if it were neither....
The reason for wanting these functions is to save floating point values to CSV files, and then read them correctly. In addition, I'd like the CSV files to contain simple numbers as far as possible so they can be consumed by humans.
You cannot have an exact double → string → double round trip and at the same time have the string be human readable.
You need to choose between an exact conversion and a human-readable string. This is the difference between max_digits10 and digits10:
difference explained by stackoverflow
digits10
max_digits10
Here is an implementation of num2str and str2num for two different scenarios: from_double (conversion double → string → double) and from_string (conversion string → double → string):
#include <iostream>
#include <limits>
#include <iomanip>
#include <sstream>
#include <string>

namespace from_double
{
    std::string num2str(double d)
    {
        std::stringstream ss;
        ss << std::setprecision(std::numeric_limits<double>::max_digits10) << d;
        return ss.str();
    }

    double str2num(const std::string& s)
    {
        double d;
        std::stringstream ss(s);
        ss >> std::setprecision(std::numeric_limits<double>::max_digits10) >> d;
        return d;
    }
}

namespace from_string
{
    std::string num2str(double d)
    {
        std::stringstream ss;
        ss << std::setprecision(std::numeric_limits<double>::digits10) << d;
        return ss.str();
    }

    double str2num(const std::string& s)
    {
        double d;
        std::stringstream ss(s);
        ss >> std::setprecision(std::numeric_limits<double>::digits10) >> d;
        return d;
    }
}

int main()
{
    double d = 1.34;
    if (from_double::str2num(from_double::num2str(d)) == d)
        std::cout << "Good for double -> string -> double" << std::endl;
    else
        std::cout << "Bad for double -> string -> double" << std::endl;

    std::string s = "1.34";
    if (from_string::num2str(from_string::str2num(s)) == s)
        std::cout << "Good for string -> double -> string" << std::endl;
    else
        std::cout << "Bad for string -> double -> string" << std::endl;

    return 0;
}
Actually I think you'll find that 1.34 IS 1.3400000000000001. Floating point numbers are not precise; you can't get around this. 1.34f is 1.3400000333786011, for example.
As stated by others, floating-point numbers are not that accurate; it's an artifact of how they store the value.
What you are really looking for is a decimal number representation.
Basically, this uses an integer to store the number with a specific accuracy after the decimal point.
A quick Google got this:
http://www.codeproject.com/KB/mcpp/decimalclass.aspx
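As a toy illustration of the idea (nothing like a full decimal class, and ignoring negative values):

#include <cstdint>
#include <iostream>

int main()
{
    // Store 1.34 exactly as 134 hundredths; only the display needs a decimal point.
    std::int64_t hundredths = 134;
    std::cout << hundredths / 100 << '.'
              << (hundredths % 100) / 10  // tens digit of the fraction
              << hundredths % 10          // ones digit of the fraction
              << std::endl;               // prints 1.34
}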