Cast a string from Glib::ustring to double - gtkmm 2 - C++

I am developing a C++ app in gtkmm 2.
I have a problem casting the string from an entry field to a double (or int).
I get the following compilation error:
cannot convert from Glib::ustring to double
The entry field:
interestrate.set_max_length(50);
interestrate.set_text(interestrate.get_text() );
interestrate.select_region(0, interestrate.get_text_length());
m_box1.pack_start(interestrate);
interestrate.show();
The button:
m_button3.signal_clicked().connect(sigc::bind<-1, Glib::ustring>(
sigc::mem_fun(*this, &HelloWorld::on_button_clicked), "OK"));
m_box1.pack_start(m_button3);
m_button3.show();
and the event handler:
void HelloWorld::on_button_clicked(Glib::ustring data)
{
std::cout << "interestrate: " << interestrate.get_text() << std::endl;
}
So I want to get a double from the return value of
interestrate.get_text()

I didn't believe it could be so easy:
std::string s = interestrate.get_text();
double d = atof(s.c_str());

Your suggestion works for valid C-locale input.
If you want to deal with bad number formats and locale considerations, you have to do a little more: atof returns 0 on error, but 0 may be valid input, and here in Germany users would perhaps enter a comma as the decimal point.
I would think (from reading the glib docs and this answer: How can I convert string to double in C++?) that you should first get the properly localized std::string via Glib::locale_from_utf8(), then create a stringstream from that and read your double out of it. The stream gives you error information, and operator>>() will deal with locale issues if you have "imbued" a locale.
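A minimal sketch of that stream-based approach (parse_rate is a hypothetical helper name; it uses std::locale::classic() so the example is deterministic, whereas a real app would imbue the user's locale):

```cpp
#include <locale>
#include <sstream>
#include <string>

// Hypothetical helper: parse the entry's text into a double, reporting
// failure instead of silently returning 0 the way atof does.
bool parse_rate(const std::string& text, double& out) {
    std::istringstream in(text);
    in.imbue(std::locale::classic()); // a real app would imbue the user's locale
    return static_cast<bool>(in >> out);
}
```

The caller can then distinguish "user typed 0" from "user typed garbage", which atof cannot.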

Related

removing trailing zeroes for a float value c++

I am trying to set up a NodeMCU module to collect data from a temperature sensor and send it with the MQTT PubSubClient to my MQTT broker, but that is not the problem.
I am trying to send the temperature in a format with only one decimal, and at this point I've successfully made it round up or down, but the format is not right. As of now it rounds the temp to 24.50, 27.80, 23.10 etc. I want to remove the trailing zeroes, so it becomes 24.5, 27.8, 23.1 etc.
I have this code set up so far:
#include <math.h>
#include <PubSubClient.h>
#include <ESP8266WiFi.h>
float temp = 0;
void loop() {
float newTemp = sensors.getTempCByIndex(0);
temp = roundf(newTemp * 10) / 10;
Serial.println(String(temp).c_str());
client.publish("/test/temperature", String(temp).c_str(), true);
}
I'm fairly new to C++, so any help would be appreciated.
It's unclear what your API is. Seems like you want to pass in the C string. In that case just use sprintf:
#include <stdio.h>
float temp = sensors.getTempCByIndex(0);
char s[30];
sprintf(s, "%.1f", temp);
client.publish("/test/temperature", s, true);
Regardless of what you do to them, floating-point values always have the same precision. To control the number of digits in a text string, change the way you convert the value to text. In normal C++ (i.e., where there is no String type <g>), you do that with a stream:
std::ostringstream out; // requires <sstream> and <iomanip>
out << std::fixed << std::setprecision(3) << value;
std::string text = out.str();
In the environment you're using, you'll have to either use standard streams or figure out what that environment provides for controlling floating-point to text conversions.
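For the one-decimal case in the question, a self-contained sketch of that stream approach might look like this (one_decimal is a hypothetical helper name):

```cpp
#include <iomanip>
#include <sstream>
#include <string>

// Format a float with exactly one digit after the decimal point,
// so 24.50f comes out as "24.5" rather than "24.50".
std::string one_decimal(float value) {
    std::ostringstream out;
    out << std::fixed << std::setprecision(1) << value;
    return out.str();
}
```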
The library you are using is not part of standard C++. The String you are using is non-standard.
As Pete Becker noted in his answer, you won't be able to control the trailing zeros by changing the value of temp. You need to either control the precision when converting it to String, or do the conversion and then tweak the resultant string.
If you read the documentation for the String type you are using, there may be options to do one or both of:
control the precision when writing a float to a string; or
examine characters in a String and manually remove trailing zeros.
Or you could use a std::ostringstream to produce the value in a std::string, and work with that instead.
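As a sketch of the "tweak the resultant string" option (using plain std::string here rather than the environment's String type):

```cpp
#include <string>

// Remove trailing zeros (and a dangling decimal point) from a
// fixed-format number, e.g. "24.50" -> "24.5", "24.00" -> "24".
std::string strip_trailing_zeros(std::string s) {
    if (s.find('.') == std::string::npos)
        return s; // no fractional part, nothing to strip
    std::size_t last = s.find_last_not_of('0');
    if (s[last] == '.')
        --last; // drop the point too if every decimal was a zero
    return s.substr(0, last + 1);
}
```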

Convert string with thousands (and decimal) separator into double

A user can enter a double into a textbox. The number might contain thousands separators. I want to validate user input before inserting the entered number into the database.
Is there a C++ function that can convert such input (1,555.99) into double? If there is, can it signal error if input is invalid (I do not want to end up with function similar to atof)?
Something like strtod, but must accept input with thousands separators.
Convert the input to double using a stream that's been imbued with a locale that accepts thousand separators.
#include <locale>
#include <iostream>
int main() {
double d;
std::cin.imbue(std::locale(""));
std::cin >> d;
std::cout << d;
}
Here I've used the un-named locale, which retrieves locale information from the environment (e.g., from the OS) and sets an appropriate locale. In my case it's set to something like en_US, which supports commas as thousands separators, so:
Input: 1,234.5
Output: 1234.5
Of course, I could also imbue std::cout with some locale that (for example) uses a different thousands separator, and get output tailored for that locale (but in this case I've used the default "C" locale, which doesn't use thousands separators, just to make the numeric nature of the value obvious).
When you need to do this with something that's already "in" your program as a string, you can use an std::stringstream to do the conversion:
std::string input = "1,234,567.89";
std::istringstream buffer(input);
buffer.imbue(std::locale(""));
double d;
buffer >> d;
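To get the validation the question asks for without depending on which locales are installed, one sketch is a custom numpunct facet plus a check that the whole input was consumed (comma_sep and parse_grouped are hypothetical names):

```cpp
#include <locale>
#include <sstream>
#include <string>

// A numpunct facet with en_US-style grouping, so the example does not
// depend on which locales happen to be installed on the system.
struct comma_sep : std::numpunct<char> {
    char do_thousands_sep() const override { return ','; }
    std::string do_grouping() const override { return "\3"; }
};

// Hypothetical helper: true only if the whole input is a valid number.
bool parse_grouped(const std::string& input, double& out) {
    std::istringstream buffer(input);
    buffer.imbue(std::locale(buffer.getloc(), new comma_sep));
    char leftover;
    // Succeed only if a number was read and nothing remains afterwards.
    return static_cast<bool>(buffer >> out) && !(buffer >> leftover);
}
```

Unlike atof, this reports failure for trailing garbage instead of silently returning a partial value.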
A C solution would be to use setlocale & sscanf (note that the ' grouping flag is a glibc extension, not standard C):
#include <locale.h>
#include <math.h>
#include <stdio.h>
const char *oldL = setlocale(LC_NUMERIC, "de_DE.UTF-8");
double d1 = 1000.43, d2 = 0;
sscanf("1.000,43", "%'lf", &d2);
if (fabs(d1 - d2) < 0.01)
printf("OK\n");
else
printf("something is wrong!\n");
setlocale(LC_NUMERIC, oldL);

C++ converting string to double using atof

I cannot get the atof() function to work. I only want the user to input values (in the form of decimal numbers) until they enter '|', and then to break out of the loop. I want the values to be read in as strings and then converted to doubles, because I found in the past that with this input method, if you input the number 124 it breaks out of the loop, since 124 is the character code for '|'.
I looked around and found out about the atof() function which apparently converts strings to doubles, however when I try to convert I get the message
"no suitable conversion function from std::string to const char exists".
And I cannot seem to figure why this is.
void distance_vector(){
double total = 0.0;
double mean = 0.0;
string input = " ";
double conversion = 0.0;
vector <double> a;
while (cin >> input && input.compare("|") != 0 ){
conversion = atof(input);
a.push_back(conversion);
}
keep_window_open();
}
You need
atof(input.c_str());
That would be the "suitable conversion function" in question.
std::string::c_str Documentation:
const char* c_str() const;
Get C string equivalent
Returns a pointer to an array that contains a null-terminated sequence of characters (i.e., a C-string) representing the current value of the string object.
You can also use the strtod function to convert a string to a double:
std::string param; // gets a value from somewhere
double num = strtod(param.c_str(), NULL);
You can look up the documentation for strtod (e.g. man strtod if you're using Linux / Unix) to see more details about this function.
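A sketch of how strtod's end pointer gives you the error signalling that atof lacks (to_double is a hypothetical helper name):

```cpp
#include <cstdlib>
#include <string>

// Use strtod's end pointer to detect bad input, something atof
// cannot do (it just returns 0.0 on error).
bool to_double(const std::string& s, double& out) {
    const char* begin = s.c_str();
    char* end = nullptr;
    out = std::strtod(begin, &end);
    // Fail if nothing was parsed or if trailing characters remain.
    return end != begin && *end == '\0';
}
```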

C++ stringstreams with std::hex

I am looking into some code at work and have the following snippet. What is the meaning of the last statement?
bOptMask = true;
std::string strMask;
strMask.append(optarg);
std::stringstream(strMask) >> std::hex >> iMask >> std::dec;
In addition to the above question: I have string input and I need to know how to convert it to an integer using C++ streams as above instead of atoi().
The problem I am facing is with this code:
strOutput.append(optarg);
cout << "Received option for optarg is " << optarg << endl;
std::stringstream(strOutput) >> m_ivalue ;
cout << "Received option for value is " << m_ivalue << endl;
For the above code, if I run it with argument "a", the first line of output shows "a" but the second shows 0. I am not sure why; can anyone explain?
The last statement creates a temporary stringstream and then uses it to parse the string as hexadecimal format into iMask.
There are flaws with it though: there is no way to check that the streaming succeeded, and the final >> std::dec achieves nothing because you are dealing with a temporary.
Better would be to create the stringstream as a non-temporary, ideally using istringstream as you are only using it to parse string to int, and then checking whether the conversion succeeds.
std::istringstream iss( strMask );
iss >> std::hex;
if(!( iss >> iMask ))
{
// handle the error
}
You only need to set the mode back to decimal if your stringstream is now about to parse a decimal integer. If it is going to parse more hex ones you can just read those in too, eg if you have a bunch of them from a file.
How you handle errors is up to you.
std::hex and std::dec are stream manipulators (declared in <ios>, which <iostream> pulls in) that indicate the way text should be formatted. hex means "hexadecimal" and dec means "decimal". The default is to use decimal for integers and hexadecimal for pointers. Classic streams had no hex representation for printing float or double, i.e. no "hexadecimal point", although C99 sort-of supports it (and C++11 later added std::hexfloat).
The code takes the string optarg and, treating it as hex, converts it to an integer and stores it in iMask.
If you remove the std::hex modifier you can parse the input as decimal. However, I usually use boost's lexical_cast for this. For example:
int iMask = boost::lexical_cast< int >( strMask );
This code uses manipulators to set the stream to expect integers to be read in base 16 (hexadecimal, using the digits 0123456789ABCDEF), then extracts a hexadecimal number from the string, storing it in iMask, and uses another manipulator to set the string stream back to the default of expecting integers to be written in decimal form.
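The questioner's "a" case shows the difference between the two modes. A quick sketch (parse_hex and parse_dec are hypothetical helper names):

```cpp
#include <sstream>
#include <string>

// Parse a string as a hexadecimal integer; return -1 on failure.
int parse_hex(const std::string& s) {
    int value = 0;
    std::istringstream in(s);
    return (in >> std::hex >> value) ? value : -1;
}

// Parse a string as a decimal integer; return -1 on failure.
int parse_dec(const std::string& s) {
    int value = 0;
    std::istringstream in(s);
    return (in >> value) ? value : -1;
}
```

parse_dec("a") fails, which is why the question's code printed 0 (m_ivalue simply kept its value), while parse_hex("a") yields 10.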

Convert string to int and get the number of characters consumed in C++ with stringstream

I am new to C++ (coming from a C# background) and am trying to learn how to convert a string to an int.
I got it working by using a stringstream and outputting it into a double, like so:
const char* inputIndex = "5+2";
double number = 0;
stringstream ss(inputIndex);
ss >> number;
// number = 5
This works great. The problem I'm having is that the strings I'm parsing start with a number, but may have other, not digit characters after the digits (e.g. "5+2", "9-(3+2)", etc). The stringstream parses the digits at the beginning and stops when it encounters a non-digit, like I need it to.
The problem comes when I want to know how many characters were used to parse into the number. For example, if I parse 25+2, I want to know that two characters were used to parse 25, so that I can advance the string pointer.
So far, I got it working by clearing the stringstream, inputting the parsed number back into it, and reading the length of the resulting string:
ss.str("");
ss << number;
inputIndex += ss.str().length();
While this does work, it seems really hacky to me (though that might just be because I'm coming from something like C#), and I have a feeling that might cause a memory leak because the str() creates a copy of the string.
Is there any other way to do this, or should I stick with what I have?
Thanks.
You can use std::stringstream::tellg() to find out the current get position in the input stream. Store this value in a variable before you extract from the stream. Then get the position again after you extract from the stream. The difference between these two values is the number of characters extracted.
double x = 3435;
std::stringstream ss;
ss << x;
double y;
std::streampos pos = ss.tellg();
ss >> y;
std::cout << (ss.tellg() - pos) << " characters extracted" << std::endl;
The solution above using tellg() will fail on modern compilers (such as gcc-4.6).
The reason for this is that tellg() really shows the position of the cursor, which is now past the end of the input. See e.g. "file stream tellg/tellp and gcc-4.6 is this a bug?"
Therefore you need to also test for eof() (meaning the entire input was consumed).
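Putting the two answers together, a sketch that counts the characters operator>> consumed while handling the eof() case (chars_consumed is a hypothetical helper name):

```cpp
#include <sstream>
#include <string>

// Count how many characters operator>> consumed when parsing a double.
// If extraction hit end-of-stream, eofbit is set and tellg() returns -1,
// so report the full input length instead.
std::size_t chars_consumed(const std::string& input, double& out) {
    std::istringstream ss(input);
    ss >> out;
    if (ss.eof())
        return input.size();   // everything was consumed
    if (ss.fail())
        return 0;              // nothing usable was parsed
    std::streamoff off = ss.tellg();
    return static_cast<std::size_t>(off);
}
```

For "25+2" this reports 2 characters consumed, which is exactly the amount to advance the string pointer by.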