Removing trailing zeroes from a float value in C++

I am trying to set up a NodeMCU module to collect data from a temperature sensor and send it to my MQTT broker using the MQTT PubSubClient, but that is not the problem.
I am trying to send the temperature in a format that has only one decimal place, and at this point I've successfully made it round up or down, but the format is not right. As of now it rounds the temp to 24.50, 27.80, 23.10 etc. I want to remove the trailing zeros, so it becomes 24.5, 27.8, 23.1 etc.
I have this code set up so far:
#include <math.h>
#include <PubSubClient.h>
#include <ESP8266WiFi.h>
float temp = 0;
void loop() {
float newTemp = sensors.getTempCByIndex(0);
temp = roundf(newTemp * 10) / 10;
Serial.println(String(temp).c_str());
client.publish("/test/temperature", String(temp).c_str(), true);
}
I'm fairly new to C++, so any help would be appreciated.

It's unclear what your API is. Seems like you want to pass in the C string. In that case just use sprintf:
#include <stdio.h>
float temp = sensors.getTempCByIndex(0);
char s[30];
sprintf(s, "%.1f", temp);
client.publish("/test/temperature", s, true);

Regardless of what you do to them, floating-point values always have the same precision. To control the number of digits in a text string, change the way you convert the value to text. In normal C++ (i.e., where there is no String type <g>), you do that with a stream:
#include <sstream>
#include <iomanip>
std::ostringstream out;
out << std::fixed << std::setprecision(3) << value;
std::string text = out.str();
In the environment you're using, you'll have to either use standard streams or figure out what that environment provides for controlling floating-point to text conversions.

The library you are using is not part of standard C++. The String you are using is non-standard.
As Pete Becker noted in his answer, you won't be able to control the trailing zeros by changing the value of temp. You need to either control the precision when converting it to String, or do the conversion and then tweak the resultant string.
If you read the documentation for the String type you are using, there may be options to do one or both of:
control the precision when writing a float to a string; or
examine characters in a String and manually remove trailing zeros.
Or you could use a std::ostringstream to produce the value in a std::string, and work with that instead.
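If the String here is the Arduino String class (as the question suggests), its constructor takes an optional number of decimal places, which already covers the first option; a minimal sketch, assuming the Arduino String(float, decimalPlaces) constructor and the sensors/client objects from the question:
// Minimal sketch, assuming the Arduino String(float, decimalPlaces) constructor
// and the sensors/client objects from the question.
float newTemp = sensors.getTempCByIndex(0);
String msg = String(newTemp, 1);   // one digit after the decimal point, e.g. "24.5"
client.publish("/test/temperature", msg.c_str(), true);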

Related

Concatenate float with string and round to 2 decimal places

So I have a function below formatted as polymorphic void display(string& outStr). The output from this function should basically be formatted into one large string, which will be saved to the outStr parameter and returned to the calling function.
I have successfully formatted my large string into multiple lines, but I would like to round my float value to 2 decimal places and I can't figure out how with the way I'm currently appending my strings. I tried using the round() and ceil() functions as some posts online have suggested, but 6 zeros still appear after each decimal place. I would appreciate some help with this, as I've been looking for solutions for a while but none of them have worked.
Additionally, I was wondering whether the to_string() function I used to convert my float to a string would compile and execute correctly in C++98. I'm using C++11, but my teacher is using C++98, and I'm worried that it won't compile on her end.
If not, can anyone suggest how else I could achieve the same result of turning a float into a string while still formatting multiple lines into the outStr string parameter and returning it to the function? I am not allowed to change the function's parameters; it must stay as display(string& outStr).
My output is a lot longer and complex but I simplified the example for the sake of getting a short and easy solution.
Again, I would appreciate any help!
#include <iostream>
using namespace std;
#include <string>
#include <sstream>
#include <cmath>
#include "Math.h"
void Math::display(string& outStr){
float numOne = 35;
float numTwo = 33;
string hello = "Hello, your percent is: \n";
outStr.append(hello);
string percent = "Percent: \n";
outStr.append(percent);
float numPercent = ceil(((numOne / numTwo) * 100) * 100.0) / 100.0;
outStr.append(to_string(numPercent));
outStr.append("\n");
}
Output should look like:
Hello, your percent is:
Number:
106.06%
There is no need to do any crazy conversions. Since the function is called display, my guess is that it's actually supposed to display the value instead of just save it to a string.
The following code demonstrates how that can be accomplished by just formatting your printing.
#include <cstdio>
#include <iomanip>
#include <iostream>
int main() {
double percentage = 83.1415926;
std::cout << "Raw: " << percentage << "%\n";
std::cout << "cout: " << std::fixed << std::setprecision(2) << percentage << "%\n";
printf("printf: %.2f\%%\n", percentage); // double up % to print the actual symbol
}
Output is:
Raw: 83.1416%
cout: 83.14%
printf: 83.14%
If the function is as backwards as you describe it, there are two possibilities. You don't understand what's actually required and are giving us a bad explanation (my guess given that function signature), or the assignment itself is pure garbage. As much as SO likes to rag on professors, I find it difficult to believe that what you've described and written is what the professor wants. It makes no sense.
A couple of notes: there is nothing polymorphic about the code you've shown. to_string() exists as of C++11, which is easily seen by looking up the function (Link). There is also a discrepancy between what your code attempts to print versus what your output is, and that's before we even get to the number formatting portion. "Percent" or "Number"?
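If the string really must be built into outStr, one C++98-friendly route is to format through a std::ostringstream instead of to_string(); a rough sketch, assuming the Math::display(string&) signature and Math.h header from the question:
#include <sstream>
#include <iomanip>
#include <string>
#include "Math.h"   // the asker's own header
using namespace std;

// Rough sketch: ostringstream and setprecision exist in C++98, unlike to_string().
void Math::display(string& outStr) {
    float numOne = 35;
    float numTwo = 33;
    outStr.append("Hello, your percent is: \n");
    outStr.append("Percent: \n");

    ostringstream oss;
    oss << fixed << setprecision(2) << (numOne / numTwo) * 100;  // "106.06"
    outStr.append(oss.str());
    outStr.append("%\n");
}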

Issues saving double as binary in c++

In my simulation code for a particle system, I have a class defined for particles, and each particle has a pos property containing its position, which is a double pos[3]; since there are 3 coordinate components per particle. With the particle objects defined by particles = new Particle[npart]; (as we have npart particles), the y-component of the 2nd particle, for example, would be accessed with double dummycomp = particles[1].pos[1];
To save the particles to file before using binary I would use (saved as txt, with float precision of 10 and one particle per line):
#include <iostream>
#include <fstream>
ofstream outfile("testConfig.txt", ios::out);
outfile.precision(10);
for (int i=0; i<npart; i++){
outfile << particles[i].pos[0] << " " << particles[i].pos[1] << " " << particles[i].pos[2] << endl;
}
outfile.close();
But now, to save space, I am trying to save the configuration as a binary file, and my attempt, inspired from here, has been as follows:
ofstream outfile("test.bin", ios::binary | ios::out);
for (int i=0; i<npart; i++){
outfile.write(reinterpret_cast<const char*>(particles[i].pos), streamsize(3*sizeof(double)));
}
outfile.close();
but I am facing a segmentation fault when trying to run it. My questions are:
Am I doing something wrong with reinterpret_cast or rather in the argument of streamsize()?
Ideally, it would be great if the saved binary format could also be read within Python; does my approach (once fixed) allow for that?
working example for the old saving approach (non-binary):
#include <iostream>
#include <fstream>
using namespace std;
class Particle {
public:
double pos[3];
};
int main() {
const int npart = 2; // constant size so the array below is standard C++
Particle particles[npart];
// initializing the positions:
particles[0].pos[0] = -74.04119568;
particles[0].pos[1] = -44.33692582;
particles[0].pos[2] = 17.36278231;
particles[1].pos[0] = 48.16310086;
particles[1].pos[1] = -65.02325252;
particles[1].pos[2] = -37.2053818;
ofstream outfile("testConfig.txt", ios::out);
outfile.precision(10);
for (int i=0; i<npart; i++){
outfile << particles[i].pos[0] << " " << particles[i].pos[1] << " " << particles[i].pos[2] << endl;
}
outfile.close();
return 0;
}
And in order to save the particle positions as binary, substitute the saving portion of the above sample with
ofstream outfile("test.bin", ios::binary | ios::out);
for (int i=0; i<npart; i++){
outfile.write(reinterpret_cast<const char*>(particles[i].pos),streamsize(3*sizeof(double)));
}
outfile.close();
2nd addendum: reading the binary in Python
I managed to read the saved binary in python as follows using numpy:
data = np.fromfile('test.bin', dtype=np.float64)
data
array([-74.04119568, -44.33692582, 17.36278231, 48.16310086,
-65.02325252, -37.2053818 ])
But given the doubts cast in the comments regarding the non-portability of the binary format, I am not confident this type of reading in Python will always work! It would be really neat if someone could elucidate on the reliability of such an approach.
The trouble is that a base-10 (ASCII) representation of a double is generally not exact and is not guaranteed to give you back the same value, especially if you only use 10 digits. Printing std::numeric_limits<double>::max_digits10 digits guarantees a lossless round trip, but anything less can lose information, and the shortened decimal text is usually only an approximation of the stored binary value.
The other issue you have is that the binary representation of a double is not standardized, so relying on it is very fragile and can lead to code breaking very easily. Simply changing the compiler or compiler settings can result in a different double format, and when changing architectures you have absolutely no guarantees.
You can serialize it to text in a non-lossy representation by using the hex format for doubles.
// Setting both the fixed and scientific flags selects hexadecimal float output
stream.setf(std::ios_base::fixed | std::ios_base::scientific, std::ios_base::floatfield);
stream << particles[i].pos[0];
// Since C++11 this is simplified to
stream << std::hexfloat << particles[i].pos[0];
This has the same effect as "%a" in printf() in C, which prints the value as "hexadecimal floating point, lowercase". The mantissa is written in hex and the exponent as a power of two, in a very specific format. Since the underlying representation is binary, these values can be represented exactly in hex, which provides a non-lossy way of transferring data between systems. It also trims leading and trailing zeros, so for a lot of numbers it is relatively compact.
On the Python side this format is also supported. You should be able to read the value as a string and then convert it to a float using float.fromhex()
see: https://docs.python.org/3/library/stdtypes.html#float.fromhex
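A rough sketch of that approach, assuming the particles array and npart from the question; each component is written as hexfloat text (e.g. 0x1.8p+4), and on the Python side each token can then be converted back exactly with float.fromhex():
#include <fstream>
#include <iostream>

// Rough sketch, assuming the particles array and npart from the question.
// Each position component is written as a lossless hexfloat, one particle per line.
std::ofstream outfile("testConfig.hex.txt", std::ios::out);
outfile << std::hexfloat;
for (int i = 0; i < npart; i++) {
    outfile << particles[i].pos[0] << " "
            << particles[i].pos[1] << " "
            << particles[i].pos[2] << "\n";
}
outfile.close();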
But your goal is to save space:
But now, to save space, I am trying to save the configuration as a binary file.
I would ask: do you really need to save space? Are you running in a low-powered, low-resource environment? If so, space saving can definitely be a thing, but that is rare nowadays (though such environments do exist).
But it seems like you are running some form of particle simulation, which does not scream low-resource use case. Even if you have terabytes of data I would still go with a portable, easy-to-read format over binary, preferably one that is not lossy. Storage space is cheap.
I suggest using a library instead of writing a serialization/deserialization routine from scratch. I find cereal really easy to use, maybe even easier than boost::serialization. It reduces the opportunity for bugs in your own code.
In your case I'd go about serializing doubles like this using cereal:
#include <cereal/archives/binary.hpp>
#include <fstream>
int main() {
std::ofstream outfile("test.bin", std::ios::binary);
cereal::BinaryOutputArchive out(outfile);
double x, y, z;
x = y = z = 42.0;
out(x, y, z);
}
To deserialize them you'd use:
#include <cereal/archives/binary.hpp>
#include <fstream>
int main() {
std::ifstream infile("test.bin", std::ios::binary);
cereal::BinaryInputArchive in(infile);
double x,y,z;
in(x, y, z);
}
You can also serialize/deserialize whole std::vector<double>s in the same fashion. Just add #include <cereal/types/vector.hpp> and use in / out like in the given example on a single std::vector<double> instead of multiple doubles.
Ain't that swell.
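For instance, a small sketch of the vector variant, under the same assumptions (cereal available and its binary archive headers on the include path):
#include <cereal/archives/binary.hpp>
#include <cereal/types/vector.hpp>
#include <fstream>
#include <vector>

int main() {
    std::vector<double> positions(6, 42.0);  // e.g. all coordinates of two particles

    {   // serialize the whole vector in one call
        std::ofstream outfile("test.bin", std::ios::binary);
        cereal::BinaryOutputArchive out(outfile);
        out(positions);
    }

    {   // deserialize it back into a fresh vector
        std::vector<double> restored;
        std::ifstream infile("test.bin", std::ios::binary);
        cereal::BinaryInputArchive in(infile);
        in(restored);
    }
}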
Edit
In a comment you asked, whether it'd be possible to read a created binary file like that with Python.
Answer:
Serialized binary files aren't really meant to be very portable (things like endianness could play a role here). You could easily adapt the example code I gave you to write a JSON file (another advantage of using a library) and read that format in Python.
Oh and cereal::JSONOutputArchive has an option for setting precision.
Just curious if you ever investigated the idea of converting your data to vectored coordinates instead of Cartesian X,Y,Z? It would seem that this would potentially reduce the size of your data by about 30%: Two coordinates instead of three, but perhaps needing slightly higher precision in order to convert back to your X,Y,Z.
The vectored coordinates could still be further optimized by using the various compression techniques above (text compression or binary conversion).

Cast a string from Glib::ustring to double - gtkmm 2

I am developing a C++ app in gtkmm 2.
I have a problem casting the string from an entry field to a double (or int).
I get the following compilation error
cannot convert from Glib::ustring to double
The entryfield
interestrate.set_max_length(50);
interestrate.set_text(interestrate.get_text() );
interestrate.select_region(0, interestrate.get_text_length());
m_box1.pack_start(interestrate);
interestrate.show();
the button
m_button3.signal_clicked().connect(sigc::bind<-1, Glib::ustring>(
sigc::mem_fun(*this, &HelloWorld::on_button_clicked), "OK"));
m_box1.pack_start(m_button3);
m_button3.show();
and the eventhandler
void HelloWorld::on_button_clicked(Glib::ustring data)
{
std::cout << "interestrate: " << interestrate.get_text() << std::endl;
}
So I want to get a double from the return value of
interestrate.get_text()
I didn't believe it could be so easy:
std::string s = interestrate.get_text();
double d = atof(s.c_str());
Your suggestion would work for valid C locale input.
If you want to deal with bad number formats and locale considerations you have to do a little bit more; atof returns 0 on error, but 0 may be a valid input, and here in Germany users would perhaps enter a comma as the decimal point.
I would think (from reading the glib docs and this answer: How can I convert string to double in C++?) that you should get the proper localized std::string first via Glib::locale_from_utf8(), then create a stringstream from that and read your double out of it. The stream gives you error information, and the conversion/operator>>() will deal with locale issues if you have "imbued" a locale.
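A rough sketch of that idea (assuming glibmm's Glib::locale_from_utf8() and the interestrate entry from the question; error handling kept minimal):
#include <glibmm.h>
#include <sstream>
#include <locale>
#include <iostream>

// Rough sketch: convert the entry text to a locale-encoded std::string,
// then parse it with a stream imbued with the user's locale.
std::string s = Glib::locale_from_utf8(interestrate.get_text());
std::istringstream in(s);
in.imbue(std::locale(""));   // use the environment's locale (e.g. comma decimal point)
double d = 0.0;
if (in >> d) {
    std::cout << "interest rate: " << d << std::endl;
} else {
    std::cout << "not a valid number" << std::endl;
}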

How to work with large numbers when writing and reading a file?

I have written code to copy my data from one input file to another output file. To read all lines of my input file I used
while (!inputfile.eof())
but in my output file the last line is missing. So I would like to know how to prevent this error.
My second question is: for writing data into file, I used
Outputfile.write((char*)&a,sizeof(double));
Outputfile.write((char*)&b,sizeof(double));
Here a = 289814.150 and b = 4320978.613, but in the output file it shows up as
289814 4.32098e+006
(the value of a is rounded and b is shown in scientific notation). What is the reason for this, and how do I fix it?
Here I tried using cout.setf(ios::fixed);, but while that works for data written to the screen, I don't know how to apply it when writing double data into my file.
I want to write real values with only 3 decimals in my output file. Any help would be appreciated, thanks.
Okay, based on comments, the intent here has (at least I hope) become reasonably clear: to convert pairs of numbers in text format to binary format, and be able to verify that the converted numbers accurately represent the originals.
There are a number of ways to do that, but the first thing to keep in mind is that no matter what else you do, converting floating point numbers to/from text (decimal) format can and normally will lead to some degree of inaccuracy. The problem is fairly simple: floating point is (normally) done in binary. This means it can only represent fractions whose denominator is a power of 2. Decimal, obviously enough, uses base 10, so the denominators of its fractions are products of powers of 2 and powers of 5. Any of those that involves a factor of 5 (e.g., 0.2 = 1/5) can only be approximated in binary -- pretty much like trying to represent 1/3rd in decimal.
This means your only reasonable choice is to allow some discrepancy between the decimal and binary versions. The best you can hope for is to keep the errors to a minimum. To test for that, what you probably need/want to do is convert the binary floating point back to decimal in the original format, and check whether it's close to the original (e.g., ignore errors in the final digit, at least errors of +/- 1).
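To see the effect directly, here is a small illustration (not from the original answer) printing a value that has no exact binary representation at full round-trip precision:
#include <iostream>
#include <iomanip>
#include <limits>

int main() {
    double d = 0.2;  // 1/5 has no finite base-2 representation
    // max_digits10 digits are enough to round-trip the stored double exactly
    std::cout << std::setprecision(std::numeric_limits<double>::max_digits10)
              << d << '\n';   // prints 0.20000000000000001 (the nearest double)
    return 0;
}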
The conversion itself should be pretty trivial:
#include <fstream>
int main(int argc, char **argv) {
// checking argc omitted for clarity.
std::ifstream infile(argv[1]);
std::ofstream outfile(argv[2], std::ios::binary);
double a, b;
while (infile >> a && infile >> b) {
outfile.write((char const *)&a, sizeof(a));
outfile.write((char const *)&b, sizeof(b));
}
return 0;
}
Verifying the data isn't nearly so easy. One possibility would be something like this (starting from the two files, one binary and one text):
#include <iostream>
#include <fstream>
#include <sstream>
#include <iomanip>
int main(int argc, char **argv) {
std::string text;
std::ostringstream converter;
std::ifstream text_file(argv[1]);
std::ifstream bin_file(argv[2], std::ios::binary);
double bin_value;
while (text_file >> text) {
bin_file.read((char *)&bin_value, sizeof(bin_value));
converter.str(""); // clear the previous iteration's text
// the manipulators will probably need tweaking to match the original format.
converter << std::fixed << std::setprecision(3) << bin_value;
if (converter.str() == text)
;// they're identical
else if (converter.str().substr(0,3) == text.substr(0,3))
;// the first three digits are equal
else
;// bigger error
}
return 0;
}
That's much more likely to need some tweaking to work the way you want, but the general idea should be in the ballpark as long as you're sure the original numbers are all formatted consistently.

String manipulation using Arduino and C++

I am trying to manipulate a string in C++. I am working with an Arduino board so I am limited on what I can use. I am also still learning C++ (Sorry for any stupid questions)
Here is what I need to do:
I need to send miles per hour to a 7 segment display. So if I have a number such as 17.812345, I need to display 17.8 on the 7 segment display. What seems to be the most efficient way is to first multiply by 10 (to shift the decimal point right one place), then cast 178.12345 to an int (to chop the decimal part off). The part I am stuck on is how to break apart 178. In Python I could slice the string, but I can't find anything on how to do this in C++ (or at least, I can't find the right terms to search for).
There are four 7 segment displays and a 7 segment display controller. It will measure up to tenths of a mile per hour. Thank you very much for any assistance and information you can provide.
It would probably be easiest to not convert it to a string, but just use arithmetic to separate the digits, i.e.
float speed = 17.812345;
int display_speed = speed * 10 + 0.5; // round to nearest 0.1 == 178
int digits[4];
digits[3] = display_speed % 10; // == 8
digits[2] = (display_speed / 10) % 10; // == 7
digits[1] = (display_speed / 100) % 10; // == 1
digits[0] = (display_speed / 1000) % 10; // == 0
and, as pointed out in the comments, if you need the ASCII value for each digit:
char ascii_digits[4];
ascii_digits[0] = digits[0] + '0';
ascii_digits[1] = digits[1] + '0';
ascii_digits[2] = digits[2] + '0';
ascii_digits[3] = digits[3] + '0';
This is a way you can do it in C++ without modulus math (either way seems fine to me):
#include <math.h>
#include <stdio.h>
#include <iostream>
int main( ) {
float value = 3.1415;
char buf[16];
value = floor( value * 10.0f ) / 10.0f;
sprintf( buf, "%0.1f", value );
std::cout << "Value: " << value << std::endl;
return 0;
}
If you actually want to be processing this stuff as strings, I would recommend looking into stringstream. It can be used much the same as any other stream, such as cin and cout, except instead of sending all output to the console you get an actual string out of the deal.
This will work with standard C++. Don't know much about Arduino, but some quick googling suggests it won't support stringstreams.
A quick example:
#include <sstream> // include this for stringstreams
#include <iostream>
#include <string>
using namespace std; // stringstream, like almost everything, is in std
string stringifyFloat(float f) {
stringstream ss;
ss.precision(1); // set decimal precision to one digit.
ss << fixed; // use fixed rather than scientific notation.
ss << f; // write the value of f into the stream
return ss.str(); // return the string associated with the stream.
}
int main() {
cout << stringifyFloat(17.812345) << endl; // 17.8
return 0;
}
You can use a function such as this toString and work your way up from there, like you would in Python, or just use modulo 10, 100, 1000, etc. to get it as numbers. I think manipulating it as a string might be easier for you, but it's up to you.
You could also use boost::lexical_cast, but it will probably be hard to get boost working in an embedded system like yours.
A good idea would be to implement a stream for the display. That way the C++ stream syntax could be used and the rest of the application would remain generic. Although this may be overkill for an embedded system.
If you still want to use std::string you may want to use a reverse iterator. This way you can start at the right most digit (in the string) and work towards the left, one character at a time.
If you have access to the run-time library code, you could set up C language I/O for the display. This is easier to implement than a C++ stream. You could then use fprintf and fputs to write to the display. I implemented a debug port this way, and it was easier for the rest of the developers to use.