User-defined literal definitions - C++

I was taking a look at the cppreference page for user-defined literals, and I think I understand everything except a few examples:
template <char...> double operator "" _π(); // OK
How does this operator work? How can you call it?
double operator"" _Z(long double); // error: all names that begin with underscore
// followed by uppercase letter are reserved
double operator""_Z(long double); // OK: even though _Z is reserved ""_Z is allowed
What is the difference between the above two functions? What would be the difference in calling the first function as opposed to the second if the first were not an error?
Thanks!

template <char...> double operator "" _π(); // OK
How does this operator work? How can you call it?
1.234_π will call operator "" _π<'1', '.', '2', '3', '4'>(). This form allows you to detect differences in spelling that would ordinarily be undetectable (1.2 vs 1.20, for example), and allows you to avoid rounding issues due to 1.2 not being exactly representable in even long double.
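For illustration, here is a minimal sketch of such a literal operator template (not from the original post; the ASCII suffix _pi stands in for _π to sidestep portability questions about non-ASCII identifiers):
#include <cstdio>

// The literal's characters arrive as template arguments, so the exact
// spelling, including trailing zeros, is visible at compile time.
template <char... Cs>
double operator""_pi()
{
    constexpr char spelling[] = {Cs..., '\0'};  // e.g. "1.20"
    std::printf("spelled as %s\n", spelling);
    return 3.141592653589793;                   // placeholder return value
}

int main()
{
    double a = 1.2_pi;   // calls operator""_pi<'1', '.', '2'>()
    double b = 1.20_pi;  // calls operator""_pi<'1', '.', '2', '0'>(), a distinct spelling
    (void)a; (void)b;
}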
double operator"" _Z(long double); // error: all names that begin with underscore
// followed by uppercase letter are reserved
double operator""_Z(long double); // OK: even though _Z is reserved ""_Z is allowed
What is the difference between the above two functions?
The C++ standard defines the grammar in terms of tokens, which you can sort of interpret as words. "" _Z is two tokens, "" and _Z. ""_Z is a single token.
This distinction matters: given #define S " world!", and then "Hello" S, the whitespace is what makes S a standalone token, preventing it from being seen as a user-defined literal suffix.
For easier coding, both "" _Z and ""_Z syntaxes are generally allowed when defining these functions, but the "" _Z syntax requires _Z to be seen as an identifier. This can cause problems when an implementation predefines _Z as a macro, or declares it as a custom keyword.
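As a small sketch of that point (using a hypothetical suffix _q rather than the reserved _Z), both spellings declare the same operator; only the tokenization differs:
double operator"" _q(long double v);   // two tokens: "" followed by the identifier _q
double operator""_q(long double v);    // one token: ""_q (redeclares the same operator)

// At the call site the suffix is always attached to the literal, with no space:
// long double halfturn = 180.0_q;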

As far as I understand, there is no difference between the two signatures.
The issue is that the identifier _Z is technically reserved by the standard. The main difference is that there is a space:
double operator""/*space*/_Z(long double);
double operator""_Z(long double);
Removing the space is basically a workaround that in theory would suppress the error (or more likely a warning).
As far as how you use them, did you look at the examples from the link you listed?
#include <iostream>

// used as conversion
constexpr long double operator"" _deg ( long double deg )
{
    return deg * 3.141592 / 180;
}

// used with custom type
struct mytype
{
    mytype ( unsigned long long m ) : m(m) {}
    unsigned long long m;
};

mytype operator"" _mytype ( unsigned long long n )
{
    return mytype(n);
}

// used for side-effects
void operator"" _print ( const char* str )
{
    std::cout << str;
}

int main()
{
    double x = 90.0_deg;
    std::cout << std::fixed << x << '\n';
    mytype y = 123_mytype;
    std::cout << y.m << '\n';
    0x123ABC_print;
}
The idea behind user-defined literals is to let you define an operator that, applied as a suffix to a built-in literal, converts that literal to another type.
EDIT:
To call one of these operators you just append its suffix to a literal. So given:
// used as conversion
constexpr long double operator"" _deg ( long double deg )
{
    return deg * 3.141592 / 180;
}
The calling code could be for example:
long double d = 45.0_deg;
As for using template <char...> double operator "" _π(), maybe take a look at this.

Related

Appending a long double literal suffix to user input in C++

I have a class that has a long double vector:
class MyClass {
    std::vector<long double> myvec;
public:
    MyClass() { /* Constructor */ }
    // Some member functions that operate on the vector
};
I have overloaded the input operator and I'm taking input from the user, which is then pushed into the vector. The problem I'm having is that if the user inputs a number that is out of range for double, the code should append the long double suffix to the input without the user having to. This is what I have tried so far:
long double input;
...
input = (long double)(input + "L");
myvec.push_back(input);
I thought of using scanf, but I'm not sure how safe that is to use when overloading the input operator.
Use std::stold to convert input text to long double. There is no need for a suffix; stold will do it right. The suffix is needed in source code to tell the compiler what type the text represents. When you're reading from an external source the compiler isn't involved, so you have to sort out the type yourself.
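A minimal sketch of that suggestion (the names myvec and line are only placeholders, not taken from the original class):
#include <exception>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    std::vector<long double> myvec;
    std::string line;
    while (std::cin >> line)
    {
        try {
            myvec.push_back(std::stold(line));  // parses directly to long double, no suffix needed
        } catch (const std::exception&) {
            std::cerr << "not a number: " << line << '\n';
        }
    }
    std::cout << "read " << myvec.size() << " values\n";
}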
Suffixes are only for literal values, e.g. auto x = 12345.0L. You use them to prevent implicit conversions since the default type of a floating point literal is double.
You can't use them on a named variable.
The real question is: how do you get your input?
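A minimal illustration of that point (the variable names are only examples):
int main()
{
    long double a = 12345.0L;        // OK: the L suffix attaches to the literal
    double d = 12345.0;
    long double b = (long double)d;  // a named variable is widened with a cast, not a suffix
    (void)a; (void)b;
}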

Converting string to integer, double, float without having to catch exceptions

I have a string which can be either a double, float or int. I would like to convert the string to the data type by making function calls. I am currently using functions such as stof and stoi, which throw exceptions when the input is not a float or int. Is there another way to convert the strings without having to catch exceptions? Perhaps some function that takes a pointer to a float as an argument and just returns a boolean representing the success of the call. I would like to avoid using any try/catch statements in any of my code.
Use a std::stringstream and capture the result of operator>>().
For example:
#include <string>
#include <iostream>
#include <sstream>
int main(int, char*[])
{
    std::stringstream sstr1("12345");
    std::stringstream sstr2("foo");
    int i1(0);
    int i2(0);

    // C++98
    bool success1 = sstr1 >> i1;
    // C++11 (previous is forbidden in c++11)
    success1 = sstr1.good();

    // C++98
    bool success2 = sstr2 >> i2;
    // C++11 (previous is forbidden in c++11)
    success2 = sstr2.good();

    std::cout << "i1=" << i1 << " success=" << success1 << std::endl;
    std::cout << "i2=" << i2 << " success=" << success2 << std::endl;
    return 0;
}
Prints:
i1=12345 success=1
i2=0 success=0
Note, this is basically what boost::lexical_cast does, except that boost::lexical_cast throws a boost::bad_lexical_cast exception on failure instead of using a return code.
See: http://www.boost.org/doc/libs/1_55_0/doc/html/boost_lexical_cast.html
For std::stringstream::good, see: http://www.cplusplus.com/reference/ios/ios/good/
To avoid exceptions, go back to a time when exceptions didn't exist. These functions were carried over from C but they're still useful today: strtod and strtol. (There's also a strtof but doubles will auto-convert to float anyway). You check for errors by seeing if the decoding reached the end of the string, as indicated by a zero character value.
char * pEnd = NULL;
double d = strtod(str.c_str(), &pEnd);
if (*pEnd) // error was detected
Mark Ransom, you hit the nail on the head. I understand RagHaven, because in certain situations exceptions are a nuisance, and converting alphanumeric strings to doubles should be something light and fast, not subject to the exception-handling mechanism. I found that sorting five-character alphanumeric strings took more than 3 seconds because exceptions were being thrown in the process and something elsewhere in the software was reacting to them.
In my search for conversion functions that don't throw exceptions I found this (I work with C++ Builder):
double StrToFloatDef(string, double def);
That function tries to parse a floating-point value and, if it does not succeed, instead of throwing an exception it returns the value passed as the 2nd argument (which could, for example, be set to std::numeric_limits<double>::max()). By checking whether the return value matches def you can handle the result without exceptions.
Mark's proposal, std::strtod, is just as good but much faster, standard and safe. A function like the one RagHaven asks for could look like this:
bool AsDouble(const char* s, double& v) noexcept
{
    char* pEnd = nullptr;
    v = std::strtod(s, &pEnd);
    return *pEnd == 0;
}
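For completeness, a self-contained usage sketch of that helper (the test strings are arbitrary):
#include <cstdlib>
#include <iostream>

bool AsDouble(const char* s, double& v) noexcept
{
    char* pEnd = nullptr;
    v = std::strtod(s, &pEnd);
    return *pEnd == 0;   // true only if the whole string was consumed
}

int main()
{
    double v = 0.0;
    std::cout << (AsDouble("3.25", v)  ? "ok "  : "bad ") << v << '\n';  // ok 3.25
    std::cout << (AsDouble("3.25x", v) ? "ok "  : "bad ") << v << '\n';  // bad, trailing junk
}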

Macro string concatenation

I use macros to concatenate strings, such as:
#define STR1 "first"
#define STR2 "second"
#define STRCAT(A, B) A B
so that STRCAT(STR1, STR2) produces "firstsecond".
Somewhere else I have strings associated to enums in this way:
enum class MyEnum
{
    Value1,
    Value2
};

const char* MyEnumString[] =
{
    "Value1String",
    "Value2String"
};
Now the following does not work:
STRCAT(STR1, MyEnumString[(int)MyEnum::Value1])
I was just wondering whether it is possible to build a macro that concatenates a #defined string literal with a const char*. Otherwise, I guess I'll do without the macro, e.g. in this way (but maybe you have a better way):
std::string s = std::string(STR1) + MyEnumString[(int)MyEnum::Value1];
The macro works only on string literals, i.e. sequences of characters enclosed in double quotes. The reason the macro works is that the C++ standard treats adjacent string literals as a single string literal. In other words, there is no difference to the compiler if you write
"Quick" "Brown" "Fox"
or
"QuickBrownFox"
The concatenation is performed at compile time, before your program starts running.
Concatenation of const char* variables needs to happen at runtime, because character pointers (or any other pointers, for that matter) do not exist until runtime. That is why you cannot do it with your STRCAT macro. You can use std::string for concatenation, though - it is one of the easiest solutions to this problem.
Only string literals can be concatenated in this way:
"A" "B"
This will not work for the pointer expression you have in your sample, which expands to something like
"first" MyEnumString[(int)MyEnum::Value1];
As for your edit:
Yes I would definitely go for your proposal
std::string s = std::string(STR1) + MyEnumString[(int)MyEnum::Value1];
Your macro is pretty unnecessary, as it can only work with string literals of the same type. Functionally it does nothing at all.
std::string s = STRCAT("a", "b");
Is exactly the same as:
std::string s = "a" "b";
So I feel that it's best to just not use the macro at all. If you want a runtime string concatenating function, a more C++-canonical version is:
inline std::string string_concat(const std::string& a, const std::string& b)
{
    return a + b;
}
But again, it seems almost pointless to have this function when you can just do:
std::string a = "a string";
std::string ab = a + "b string";
I can see limited use for a function like string_concat. Maybe you want to work on arbitrary string types or automatic conversion between UTF-8 and UTF-16...
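If you do want such a generalized helper, a hypothetical C++11 sketch that accepts any mix of std::string and string literals could look like this:
#include <iostream>
#include <string>

inline std::string string_concat() { return std::string(); }

template <typename T, typename... Rest>
std::string string_concat(const T& first, const Rest&... rest)
{
    // Convert each piece to std::string and fold the pack recursively.
    return std::string(first) + string_concat(rest...);
}

int main()
{
    std::cout << string_concat("a ", std::string("quick "), "brown fox") << '\n';
}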

Reverse preprocessor stringizing operator

There are a lot of wide-string numeric constants defined in an include file in an SDK which I cannot modify, but which gets updated and changed often. So I cannot define the numbers myself, because they are completely different every few days, and I don't want (and am not allowed) to apply any scripting to keep them updated.
If it were the other way round and the constants were defined as numbers, I could simply make the strings with the # preprocessor operator.
I don't want to use atoi and I don't want to make any variables; I just need the constants in numeric form, ideally via the preprocessor.
I know that there is no reverse stringizing operator, but isn't there any way to convert a string to a token (number) with the preprocessor?
There is no way to "unstringify" a string in the preprocessor. However, you can get, at least, constant expressions out of the string literals using user-defined literals. Below is an example initializing an enum value with the value taken from a string literal to demonstrate that the decoding happens at compile time, although not during preprocessing:
#include <cstddef>
#include <iostream>

constexpr int make_value(int base, wchar_t const* val, std::size_t n)
{
    return n ? make_value(base * 10 + val[0] - L'0', val + 1, n - 1) : base;
}

constexpr int operator"" _decode(wchar_t const* val, std::size_t n)
{
    return make_value(0, val, n);
}

#define VALUE L"123"
#define CONCAT(v,s) v ## s
#define DECODE(d) CONCAT(d,_decode)

int main()
{
    enum { value = DECODE(VALUE) };
    std::cout << "value=" << value << "\n";
}

How to change return type based on inner code? (string to number conversion)

For example, I have this code which converts from string to number:
#include <sstream>
#include <string>
#include <cctype>

template <typename T>
T string_to_num( const std::string &Text, T defValue = T() )
{
    std::stringstream ss;
    for ( std::string::const_iterator i = Text.begin(); i != Text.end(); ++i )
        if ( isdigit(*i) || *i == 'e' || *i == '-' || *i == '+' || *i == '.' )
            ss << *i;
    T result;
    return ss >> result ? result : defValue;
}
Problem is it requires two arguments, the second which gives it a clue as to what type of number I am returning (an int or a float etc.).
How can I make it so that if the string contains a decimal point '.' it returns a floating-point type (e.g. float), and otherwise a whole-number type (e.g. int)?
Unless someone has a better code they can share to do this..?
The question is why you need this. I think you want it just to avoid having to spell out the type of the variable that receives the result of string_to_num:
????? number = string_to_num<double>("123.21");
^^^^^
do_something(number);
But you are already indicating the type with <double>. A simple piece of syntax sugar, auto, is what you want (it is resolved at compile time).
Otherwise, you would need a variant type, which is far removed from your string_to_num definition and carries a lot of overhead.
Your code is already OK, and the return type is based on T. So, in real programs you have no problem.
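For example (a minimal sketch assuming the string_to_num template from the question):
auto number = string_to_num<double>("123.21");  // number is deduced as double
auto whole  = string_to_num<int>("123");        // whole is deduced as int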