Enum co-variance/polymorphism? - C++

Imagine I have an enum defining a common option like:
enum valueState
{
    uninitialized,
    min,
    max
};
Now imagine I have a more specific value state for specific value types, let's say:
enum floatValueState
{
    nan
};
Would there be any way to pass the valueState options as floatValueStates? Or is there some other way to achieve this kind of abstraction of choices at compile time without too much template noise?
EDIT:
Of course they can implicitly convert to the underlying integer type, but how would you combine the two enums without overlapping values (like 0 = uninitialized, 4 = nan) without specifying how many enums I will combine?

Because old style enums (non-class) are just integers (implicit conversion to int), you can pass them around without regard to which one you're actually using. This is dangerous, but allows you to do what you want easily. You would need to micro-manage the values, though, to make sure nan and uninitialized are not the same thing:
enum floatValueState {
    nan = 3
};
Anything accepting a plain int will implicitly accept either enum with no extra work - which is what you want, but again, dangerous: a function meant for floatValueState values will happily take a valueState too, and that could break stuff. (Note the conversion is one-way: an unscoped enum converts to int, but one enum type does not implicitly convert to another, so a parameter declared as floatValueState would still need a cast.)
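A minimal sketch of that approach, assuming the enums above (describe is an illustrative function, not from the question):

enum valueState { uninitialized, min, max };
enum floatValueState { nan = 3 };   // values start past max to avoid overlap

void describe(int state)            // a plain int accepts either enum implicitly
{
    switch (state) {
    case uninitialized: /* common handling */     break;
    case nan:           /* float-only handling */ break;
    default:                                      break;
    }
}

int main()
{
    describe(min);   // valueState converts to int
    describe(nan);   // floatValueState converts to int
}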

Whilst it is technically possible to solve this (see the other answer), there is a theoretical problem: it is probably a violation of the Liskov Substitution Principle, the principle that guides us in how to use inheritance in sound OO design.
It states that wherever the "base type" is used, you must be able to substitute the "derived type".
Given that context, how would you "embed" the additional nan information here?
In other words (pseudo code):
valueState value = ...
if (value == min || value == max) {
    ...
} else {
    // value must be uninitialized
}
But what if value were now a floatValueState?!

Related

C++ std Chrono - How did they manage to let us declare values as 1s, 1000ms, etc? [duplicate]

C++11 introduces user-defined literals which will allow the introduction of new literal syntax based on existing literals (int, hex, string, float), so that any type will be able to have a literal representation.
Examples:
// imaginary numbers
std::complex<long double> operator "" _i(long double d) // cooked form
{
    return std::complex<long double>(0, d);
}
auto val = 3.14_i; // val = complex<long double>(0, 3.14)

// binary values
int operator "" _B(const char*); // raw form
int answer = 101010_B; // answer = 42

// std::string
std::string operator "" _s(const char* str, size_t /*length*/)
{
    return std::string(str);
}
auto hi = "hello"_s + " world"; // + works, "hello"_s is a string not a pointer

// units
assert(1_kg == 2.2_lb); // give or take 0.00462262 pounds
At first glance this looks very cool, but I'm wondering how applicable it really is. When I tried to think of having the suffixes _AD and _BC create dates, I found that it's problematic due to operator order: 1974/01/06_AD would first evaluate 1974/01 (as plain ints) and only later the 06_AD (to say nothing of August and September having to be written without the leading 0, for octal reasons). This can be worked around by making the syntax 1974-1/6_AD so that the operator evaluation order works, but it's clunky.
So what my question boils down to is this, do you feel this feature will justify itself? What other literals would you like to define that will make your C++ code more readable?
Updated the syntax to fit the final draft (June 2011).
At first sight, it seems to be simple syntactic sugar.
But when looking deeper, we see it's more than syntactic sugar, as it extends the C++ user's options to create user-defined types that behave exactly like distinct built-in types. As such, this little "bonus" is a very interesting addition to C++11.
Do we really need it in C++?
I see few uses in the code I wrote in the past years, but just because I didn't use it in C++ doesn't mean it's not interesting for another C++ developer.
In C++ (and in C, I guess) we already had compiler-defined literal suffixes: to type integer numbers as short or long, real numbers as float or double (or even long double), and character strings as narrow or wide chars.
In C++, we had the possibility to create our own types (i.e. classes), with potentially no overhead (inlining, etc.). We had the possibility to add operators to our types, to have them behave like similar built-in types, which enables C++ developers to use matrices and complex numbers as naturally as they would have had these been added to the language itself. We can even add cast operators (which is usually a bad idea, but sometimes it's just the right solution).
We still missed one thing to have user-types behave as built-in types: user-defined literals.
So, I guess it's a natural evolution for the language, making it as complete as possible: "If you want to create a type, and you want it to behave as much as possible like a built-in type, here are the tools..."
I'd guess it's very similar to .NET's decision to make every primitive a struct, including booleans, integers, etc., and have all structs derive from Object. This decision alone puts .NET far beyond Java's reach when working with primitives, no matter how much boxing/unboxing hacks Java will add to its specification.
Do YOU really need it in C++?
This question is for YOU to answer. Not Bjarne Stroustrup. Not Herb Sutter. Not any member of the C++ standards committee. This is why you have the choice in C++, and why they won't restrict a useful notation to built-in types alone.
If you need it, then it is a welcome addition. If you don't, well... Don't use it. It will cost you nothing.
Welcome to C++, the language where features are optional.
Bloated??? Show me your complexes!!!
There is a difference between bloated and complex (pun intended).
As shown by Niels at What new capabilities do user-defined literals add to C++?, being able to write a complex number is one of the two features added "recently" to C and C++:
// C89:
MyComplex z1 = { 1, 2 } ;
// C99: You'll note I is a macro, which can lead
// to very interesting situations...
double complex z1 = 1 + 2*I;
// C++:
std::complex<double> z1(1, 2) ;
// C++11: You'll note that "i" won't ever bother
// you elsewhere
std::complex<double> z1 = 1 + 2_i ;
Now, both the C99 double complex type and the C++ std::complex type can be multiplied, added, subtracted, etc., using operator overloading.
But in C99, they just added another built-in type, with built-in operator overloading support, and another built-in literal feature.
In C++, they just used existing features of the language, saw that the literal feature was a natural evolution of the language, and thus added it.
In C, if you need the same notation enhancement for another type, you're out of luck until your lobbying to add your quantum wave functions (or 3D points, or whatever basic type you're using in your field of work) to the C standard as a built-in type succeeds.
In C++11, you can just do it yourself:
Point p = 25_x + 13_y + 3_z ; // 3D point
Is it bloated? No, the need is there, as shown by how both C and C++ complexes need a way to represent their literal complex values.
Is it wrongly designed? No, it's designed like every other C++ feature, with extensibility in mind.
Is it for notation purposes only? No, as it can even add type safety to your code.
For example, let's imagine a CSS oriented code:
css::Font::Size p0 = 12_pt ; // Ok
css::Font::Size p1 = 50_percent ; // Ok
css::Font::Size p2 = 15_px ; // Ok
css::Font::Size p3 = 10_em ; // Ok
css::Font::Size p4 = 15 ; // ERROR : Won't compile !
It is then very easy to enforce strong typing on the assignment of values.
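As a sketch of how such strong typing could be wired up (css::Font::Size and the suffixes are the example's hypothetical API; this implementation is my assumption):

namespace css { namespace Font {
    struct Size {
        enum Unit { pt, percent, px, em };
        double value;
        Unit unit;
    };
} }

css::Font::Size operator "" _pt(unsigned long long v)
{
    return { static_cast<double>(v), css::Font::Size::pt };
}

css::Font::Size operator "" _px(unsigned long long v)
{
    return { static_cast<double>(v), css::Font::Size::px };
}

// _percent and _em follow the same pattern.
// css::Font::Size p4 = 15; still fails: no conversion from int exists.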
Is it dangerous?
Good question. Can these functions be namespaced? If yes, then Jackpot!
Anyway, like everything, you can kill yourself if a tool is used improperly. C is powerful, and you can shoot your head off if you misuse the C gun. C++ has the C gun, but also the scalpel, the taser, and whatever other tool you'll find in the toolkit. You can misuse the scalpel and bleed yourself to death. Or you can build very elegant and robust code.
So, like every C++ feature, do you really need it? It is the question you must answer before using it in C++. If you don't, it will cost you nothing. But if you do really need it, at least, the language won't let you down.
The date example?
Your error, it seems to me, is that you are mixing operators:
1974/01/06AD
    ^  ^  ^
This can't be avoided: / is an operator, so the compiler must interpret it as one. And, AFAIK, that is a good thing.
To find a solution for your problem, I would write the literal in some other way. For example:
"1974-01-06"_AD ; // ISO-like notation
"06/01/1974"_AD ; // french-date-like notation
"jan 06 1974"_AD ; // US-date-like notation
19740106_AD ; // integer-date-like notation
Personally, I would choose the integer and the ISO dates, but it depends on YOUR needs. Which is the whole point of letting users define their own literal names.
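For illustration, the string-based version could be implemented roughly like this (a sketch; Date and the parsing are my assumptions, not code from the answer):

#include <cstddef>
#include <cstdlib>

struct Date { int year, month, day; };

Date operator "" _AD(const char* str, std::size_t /*len*/)
{
    // Assumes the ISO-like "YYYY-MM-DD" notation from above;
    // atoi stops at the first non-digit, so offsets 0, 5, 8 pick the fields.
    return { std::atoi(str), std::atoi(str + 5), std::atoi(str + 8) };
}

// Date d = "1974-01-06"_AD; // year 1974, month 1, day 6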
Here's a case where there is an advantage to using user-defined literals instead of a constructor call:
#include <bitset>
#include <iostream>

template<char... Bits>
struct checkbits
{
    static const bool valid = false;
};

template<char High, char... Bits>
struct checkbits<High, Bits...>
{
    static const bool valid = (High == '0' || High == '1')
                              && checkbits<Bits...>::valid;
};

template<char High>
struct checkbits<High>
{
    static const bool valid = (High == '0' || High == '1');
};

template<char... Bits>
inline constexpr std::bitset<sizeof...(Bits)>
operator"" _bits() noexcept
{
    static_assert(checkbits<Bits...>::valid, "invalid digit in binary string");
    // Note: the compound literal (char[]){...} is a GNU extension in C++.
    return std::bitset<sizeof...(Bits)>((char []){Bits..., '\0'});
}

int main()
{
    auto bits = 0101010101010101010101010101010101010101010101010101010101010101_bits;
    std::cout << bits << std::endl;
    std::cout << "size = " << bits.size() << std::endl;
    std::cout << "count = " << bits.count() << std::endl;
    std::cout << "value = " << bits.to_ullong() << std::endl;
    // This triggers the static_assert at compile time.
    auto badbits = 2101010101010101010101010101010101010101010101010101010101010101_bits;
    // This throws at run time.
    std::bitset<64> badbits2("2101010101010101010101010101010101010101010101010101010101010101_bits");
}
The advantage is that a run-time exception is converted to a compile-time error.
You couldn't add the static assert to the bitset ctor taking a string (at least not without string template arguments).
It's very nice for mathematical code. Off the top of my head, I can see uses for the following operators:
deg for degrees. That makes writing absolute angles much more intuitive.
double operator ""_deg(long double d)
{
    // returns radians (M_PI comes from <cmath>; it is POSIX, not standard C++)
    return d * M_PI / 180;
}
It can also be used for various fixed point representations (which are still in use in the field of DSP and graphics).
int operator ""_fix(long double d)
{
    // returns d as a 1.15.16 fixed point number
    return (int)(d * 65536.0f);
}
These look like nice examples of how to use it. They help make constants in code more readable. It's another tool for making code unreadable as well, but we already have so many abusable tools that one more does not hurt much.
UDLs are namespaced (and can be imported by using declarations/directives, but you cannot explicitly namespace a literal like 3.14std::i), which means there (hopefully) won't be a ton of clashes.
The fact that they can actually be templated (and constexpr'd) means that you can do some pretty powerful stuff with UDLs. Bigint authors will be really happy, as they can finally have arbitrarily large constants, calculated at compile time (via constexpr or templates).
I'm just sad that we won't see a couple useful literals in the standard (from the looks of it), like s for std::string and i for the imaginary unit.
The amount of coding time that will be saved by UDLs is actually not that high, but the readability will be vastly increased and more and more calculations can be shifted to compile-time for faster execution.
Bjarne Stroustrup talks about UDLs in this C++11 talk, in the first section on type-rich interfaces, around the 20 minute mark.
His basic argument for UDLs takes the form of a syllogism:
"Trivial" types, i.e., built-in primitive types, can only catch trivial type errors. Interfaces with richer types allow the type system to catch more kinds of errors.
The kinds of type errors that richly typed code can catch have impact on real code. (He gives the example of the Mars Climate Orbiter, which infamously failed due to a dimensions error in an important constant).
In real code, units are rarely used. People don't use them, because incurring runtime compute or memory overhead to create rich types is too costly, and using pre-existing C++ templated unit code is so notationally ugly that no one uses it. (Empirically, no one uses it, even though the libraries have been around for a decade).
Therefore, in order to get engineers to use units in real code, we needed a device that (1) incurs no runtime overhead and (2) is notationally acceptable.
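As a minimal sketch of such a device - a unit wrapper whose literal is constexpr (no runtime overhead) and notationally light; the names are illustrative, not from the talk:

struct Metres { double value; };

constexpr Metres operator "" _m(long double v)
{
    return Metres{ static_cast<double>(v) };
}

constexpr Metres operator+(Metres a, Metres b)
{
    return Metres{ a.value + b.value };
}

// Compiles down to a bare double, but 2.0_m + 3.0 is now a type error.
constexpr Metres d = 2.0_m + 3.0_m;
static_assert(d.value == 5.0, "units carried through at compile time");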
Let me add a little bit of context. For our work, user-defined literals are much needed. We work on MDE (Model-Driven Engineering). We want to define models and metamodels in C++. We actually implemented a mapping from Ecore to C++ (EMF4CPP).
The problem comes when defining model elements as classes in C++. We are taking the approach of transforming the metamodel (Ecore) into templates with arguments. The template arguments are the structural characteristics of types and classes. For example, a class with two int attributes would be something like:
typedef ::ecore::Class< Attribute<int>, Attribute<int> > MyClass;
However, it turns out that every element in a model or metamodel usually has a name. We would like to write:
typedef ::ecore::Class< "MyClass", Attribute< "x", int>, Attribute<"y", int> > MyClass;
BUT neither C++ nor C++0x allows this, as string literals are prohibited as template arguments. You can write the name char by char, but this is admittedly a mess. With proper user-defined literals, we could write something similar. Say we use "_n" to identify model element names (I'm not using the exact syntax, just giving the idea):
typedef ::ecore::Class< MyClass_n, Attribute< x_n, int>, Attribute<y_n, int> > MyClass;
Finally, having those definitions as templates helps us a lot to design algorithms for traversing the model elements, model transformations, etc. that are really efficient, because type information, identification, transformations, etc. are determined by the compiler at compile time.
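For illustration, the char-by-char workaround mentioned above could look roughly like this; Name and Attribute here are illustrative stand-ins, not the actual EMF4CPP types:

template<char... Cs>
struct Name {};

template<typename N, typename T>
struct Attribute {};

// "MyClass" with attributes "x" and "y", spelled out one character at a time:
using MyClassName = Name<'M', 'y', 'C', 'l', 'a', 's', 's'>;
using XAttr = Attribute<Name<'x'>, int>;
using YAttr = Attribute<Name<'y'>, int>;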
Supporting compile-time dimension checking is the only justification required.
auto force = 2_N;
auto dx = 2_m;
auto energy = force * dx;
assert(energy == 4_J);
See for example PhysUnits-CT-Cpp11, a small C++11/C++14 header-only library for compile-time dimensional analysis and unit/quantity manipulation and conversion. It is simpler than Boost.Units, supports unit symbol literals such as m, g, s and metric prefixes such as m, k, M, depends only on the standard C++ library, is SI-only, and handles integral powers of dimensions.
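For a feel of the mechanism, here is a minimal sketch of compile-time dimension checking; this Quantity template is an illustration I'm assuming for the example, far simpler than the real library:

template<int M, int Kg, int S>   // exponents of metre, kilogram, second
struct Quantity {
    double value;
};

template<int M1, int Kg1, int S1, int M2, int Kg2, int S2>
constexpr Quantity<M1 + M2, Kg1 + Kg2, S1 + S2>
operator*(Quantity<M1, Kg1, S1> a, Quantity<M2, Kg2, S2> b)
{
    return { a.value * b.value };
}

constexpr Quantity<1, 1, -2> operator "" _N(long double v) { return { static_cast<double>(v) }; }
constexpr Quantity<1, 0, 0>  operator "" _m(long double v) { return { static_cast<double>(v) }; }

// force * distance yields Quantity<2, 1, -2>, i.e. joules; comparing or adding
// quantities with different exponents simply does not compile.
constexpr auto energy = 2.0_N * 2.0_m;
static_assert(energy.value == 4.0, "2 N * 2 m = 4 J");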
Hmm... I have not thought about this feature yet. Your sample was well thought out and is certainly interesting. C++ is very powerful as it is now, but unfortunately the syntax in pieces of code you read is at times overly complex. Readability is, if not everything, then at least a lot, and such a feature would be geared towards more readability. If I take your last example
assert(1_kg == 2.2_lb); // give or take 0.00462262 pounds
... I wonder how you'd express that today. You'd have a KG and an LB class and you'd compare implicitly created objects:
assert(KG(1.0f) == LB(2.2f));
And that would do as well. With types that have longer names, or types for which you have no hope of such a nice constructor without writing an adapter, it might be a nice addition for on-the-fly object creation and initialization. On the other hand, you can already create and initialize objects using methods, too.
But I agree with Nils on mathematics. C and C++ trigonometry functions for example require input in radians. I think in degrees though, so a very short implicit conversion like Nils posted is very nice.
Ultimately it's going to be syntactic sugar, but it will have a slight effect on readability. And it will probably be easier to write some expressions too (sin(180.0_deg) is easier to write than sin(deg(180.0))). And then there will be people who abuse the concept. But then, language-abusive people should use very restrictive languages rather than something as expressive as C++.
Ah, my post says basically nothing except: it's going to be okay, the impact won't be too big. Let's not worry. :-)
I have never needed or wanted this feature (but this could be the Blub effect). My knee jerk reaction is that it's lame, and likely to appeal to the same people who think that it's cool to overload operator+ for any operation which could remotely be construed as adding.
C++ is usually very strict about the syntax used - barring the preprocessor, there is not much you can use to define a custom syntax/grammar. E.g. we can overload existing operators, but we cannot define new ones - IMO this is very much in tune with the spirit of C++.
I don't mind some ways of customizing source code more - but the point chosen seems very isolated to me, which is what confuses me most.
Even intended use may make source code much harder to read: a single letter may have far-reaching side effects that can in no way be identified from the context. With symmetry to u, l and f, most developers will choose single letters.
This may also turn scoping into a problem: using single letters in the global namespace will probably be considered bad practice, and the tools that are supposed to make mixing libraries easier (namespaces and descriptive identifiers) would defeat the feature's purpose.
I see some merit in combination with auto, also in combination with a unit library like Boost.Units, but not enough to merit this addition.
I wonder, however, what clever ideas we come up with.
I have used user-defined literals for binary strings, like this:
"asd\0\0\0\1"_b
using the std::string(str, n) constructor so that \0 wouldn't cut the string in half. (The project does a lot of work with various file formats.)
This was helpful also when I ditched std::string in favor of a wrapper for std::vector.
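The _b literal described there could be implemented roughly like this (a sketch; the poster's actual code is not shown):

#include <cstddef>
#include <string>

std::string operator "" _b(const char* str, std::size_t len)
{
    return std::string(str, len);   // keeps embedded '\0' bytes
}

// "asd\0\0\0\1"_b has size 7; a plain const char* would stop at the first '\0'.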
Line noise in that thing is huge. Also it's horrible to read.
Let me ask: did they justify this new syntax addition with any kind of examples? For instance, do they have a couple of programs that already use C++0x?
For me, this part:
auto val = 3.14_i;
Does not justify this part:
std::complex<double> operator ""_i(long double d) // cooked form
{
    return std::complex<double>(0, d);
}
Not even if you'd use the i-syntax in 1000 other lines as well. If you write that much, you probably write 10000 lines of something else alongside it. Especially since you will still probably write this almost everywhere:
std::complex<double> val = 3.14_i;
The auto keyword may be justified, though only perhaps. But let's take just C++, because it's better than C++0x in this respect.
std::complex<double> val = std::complex<double>(0, 3.14);
It's like... that simple. Even though all the std:: and angle brackets are lame if you use them almost everywhere. I won't start guessing what syntax C++0x has for shortening std::complex<double> to complex.
complex = std::complex<double>;
That's perhaps something straightforward, but I don't believe it's that simple in C++0x.
typedef std::complex<double> complex;
complex val = complex(0, 3.14);
Perhaps? >:)
Anyway, the point is: writing 3.14_i instead of std::complex<double>(0, 3.14) does not save you much time overall, except in a few very special cases.

Can we define an enum variable like an array? [duplicate]

This question already has answers here:
How to cast int to enum in C++?
(6 answers)
Closed last year.
I have the below enum:
enum words= {dog, cat, horse, hen, goat, pig, sheep};
Can I assign a new enum constant as below:
words word = words[4];
What I want to do is get a random number between 0 and words.size(), and pick that word from the enum. Is it possible?
int wordno = (rand()%8);
words word = (words)wordno;
Your enum declaration is a bit wrong. It should be
enum words {dog, cat, horse, hen, goat, pig, sheep};
Since C++11 it is usually also better to use an enum class instead, see this question for details:
enum class words {dog, cat, horse, hen, goat, pig, sheep};
Note that enum enumerators already have a numeric value. If you don't specify these numeric values manually, they are assigned left-to-right, starting from zero and increasing by one. So e.g. horse has the numeric value 2 in your enumeration.
However, converting the numeric value to the enumeration implicitly is not allowed. If you need to do that, you must use static_cast:
auto word = static_cast<words>(wordno);
When doing this, you need to be careful however (which is also why the explicit cast is required), because you are not allowed to cast an integer value that is outside the range of numeric values of the enumeration in this way. In your code you do rand()%8 which can return values up to 7, but the largest numeric value in your enumeration is 6 for sheep. You would have undefined behavior with that.
auto can be replaced by words. It just lets the compiler figure out the correct type, so I don't have to write it twice. (It is already in the static_cast.)
As a sidenote, rand is not a good random number generator for many reasons (see e.g. this question). Since C++11 you can use the more modern <random> library, see this question.
Also as a sidenote I suggest avoiding C-style casts such as (words)wordno. Instead use static_cast as I showed above. You are much less likely to make serious mistakes this way, since the C-style casts are usually too permissive in what casts they allow.
As for words word = words[4];: this syntax does not work. But it is not really needed, since, as shown above, you can initialize word directly from the value 4 with an explicit cast.
auto word = static_cast<words>(4);
But if you are intending to assign a constant value, you should just use the enumerator directly. That is what they are for:
auto word = words::goat;
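Putting the advice together, a minimal sketch of the random pick using enum class, the <random> library, and static_cast (my assumption of how to combine them, not code from the answer):

#include <random>

enum class words { dog, cat, horse, hen, goat, pig, sheep };

int main()
{
    std::mt19937 gen{ std::random_device{}() };
    // 7 enumerators with numeric values 0..6 - note the upper bound is 6, not 7.
    std::uniform_int_distribution<int> dist(0, 6);

    auto word = static_cast<words>(dist(gen));
    (void)word; // use word...
}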

Is using enum types for user interface functions a good idea?

Let's say I'm building a simple class that prints some text on the screen, and it has the possibility to change the colors of the text.
myclass a("this is the text");
a.setColor("green");
I'm still learning C++ and I recently got introduced to enums and thought I'd give them a try. I was wondering if it is good practice to use enum types in interface functions like the above setColor?
What are the advantages or disadvantages of using enum classes in interface functions? Are there cases where they are more applicable than strings, and are there cases where they are bad to use?
What if I want to combine properties? E.g.
a.setAttribute("bold reverse");
I don't know if interface is the correct term for what I want to describe: the functions that a user of my class would end up using.
In your case, there are (at least) two advantages:
No need to parse the string at run-time, leading to higher efficiency. You can use an enum variable directly in a switch statement.
An enum is (to some extent) self-documenting, the user of your code has to work hard to provide an invalid value.
One potential "disadvantage" is the case where the colour string comes from e.g. run-time user input (typed into a textbox or something). You will need to parse this string and convert it into an enum. But it's not really a disadvantage, because you'll need to do this anyway. Best practice is for the user-interface logic to validate the string and convert it to the enum at the earliest opportunity.
What if I want to combine properties?
I can think of at least three options:
Use multiple calls to setAttribute.
Pass an array of attributes.
Define each enum value to be a power-of-two, and then you can combine enums with |.
Yes, using an enum in this case seems better than an actual string.
One clear advantage - strong typing.
If setColor accepts a char*, like in your case, you could do:
a.setColor("horse");
which you can only detect as an error at runtime.
If setColor takes an eColors as parameter:
a.setColor(eGreen);
a.setColor(eRed);
would compile, but
a.setColor(eHorse);
would not.
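A compact sketch of that strongly typed interface, using the illustrative eColors names from above:

enum eColors { eGreen, eRed };

class myclass {
public:
    explicit myclass(const char* text);
    void setColor(eColors color);   // a.setColor("horse") no longer compiles
};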
Enums are definitely more explicit than strings for this case. As for combining values, you can use some bit fiddling to make this work: set the values of your enum to increasing powers of two, and then you can OR them together.
enum TextAttributes {
    Bold = 1,
    Italic = 2,
    Reverse = 4,
    StrikeThrough = 8,
    Underline = 16
};

// The OR of two enumerators yields an int, so storing it back in the
// enum type needs a cast:
TextAttributes attr = static_cast<TextAttributes>(Bold | Reverse);
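For completeness, testing whether a flag is set afterwards uses bitwise AND (a small sketch continuing the same example):

bool isBold = (attr & Bold) != 0;       // true: Bold was set
bool isItalic = (attr & Italic) != 0;   // false: Italic was not set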
