As of the time of writing, cppreference gives a reasonably simple definition of the std::in_place_t family:
struct in_place_t {
explicit in_place_t() = default;
};
inline constexpr std::in_place_t in_place{};
template <class T>
struct in_place_type_t {
explicit in_place_type_t() = default;
};
template <class T>
inline constexpr std::in_place_type_t<T> in_place_type{};
template <size_t I> struct in_place_index_t {
explicit in_place_index_t() = default;
};
template <size_t I>
inline constexpr in_place_index_t<I> in_place_index{};
However, the latest draft of the C++17 standard linked from isocpp.org has a rather more complicated definition (section 20.2.7, page 536):
struct in_place_tag {
in_place_tag() = delete;
};
using in_place_t = in_place_tag(&)(unspecified);
template <class T>
using in_place_type_t = in_place_tag(&)(unspecified<T>);
template <size_t I>
using in_place_index_t = in_place_tag(&)(unspecified<I>);
in_place_tag in_place(unspecified);
template <class T>
in_place_tag in_place(unspecified<T>);
template <size_t I>
in_place_tag in_place(unspecified<I>);
The first version is simple and easy to understand, but the second version is quite opaque to me. So, questions:
Which version is correct, post-Issaquah (November 2016)? (Presumably the second, but it's possible that N4606 hasn't yet been updated after the latest meeting and cppreference has.)
Clearly this has changed at some point in time; does anyone have a link to a paper mentioning the change?
Most importantly, can anyone explain how the second version is intended to work? What would a sample implementation look like?
The first version is the right one, currently, and will in all likelihood be the one ending up in C++17.
The second version was an attempt to allow one to write in_place everywhere, with nothing, with a type, or with an index:
std::optional<int> o(std::in_place, 1);
std::any a(std::in_place<int>, 1);
std::variant<int, int> v(std::in_place<0>, 1);
The only way to make this syntax work is to make in_place an overloaded function, and that also requires making in_place*_t aliases for references to functions. There's no real implementation difference otherwise - the in_place functions aren't meant to be called, they exist only so that a reference to them can be passed around as a tag and match the corresponding _t types.
Nonetheless it was too clever and caused its own problems. For instance, unlike plain tag types, the function references don't respond well to being decay'd, and plain std::in_place, being an overloaded function name, misbehaves with perfect forwarders: std::optional<std::optional<int>> o(std::in_place, std::in_place); doesn't work because the compiler can't resolve the second std::in_place. So it got backed out in Issaquah, and now you have to write
std::optional<int> o(std::in_place, 1);
std::any a(std::in_place_type<int>, 1);
std::variant<int, int> v(std::in_place_index<0>, 1);
A little less pretty, but more sane.
Related
The syntax that works for classes does not work for concepts:
template <class Type>
concept C = requires(Type t) {
// ...
};
template <class Type>
concept C<Type*> = requires(Type t) {
// ...
};
MSVC says for the line of the "specialization": error C7606: 'C': concept cannot be explicitly instantiated, explicitly specialized or partially specialized.
Why cannot concepts be specialized? Is there a theoretical reason?
Because it would ruin constraint normalization and subsumption rules.
As it stands now, every concept has exactly and only one definition. As such, the relationships between concepts are known and fixed. Consider the following:
template<typename T>
concept A = atomic_constraint_a<T>;
template<typename T>
concept B = atomic_constraint_a<T> && atomic_constraint_b<T>;
By C++20's current rules, B subsumes A. This is because, after constraint normalization, B includes all of the atomic constraints of A.
If we allow specialization of concepts, then the relationship between B and A now depends on the arguments supplied to those concepts. B<T> might subsume A<T> for some Ts but not other Ts.
But that's not how we use concepts. If I'm trying to write a template that is "more constrained" than another template, the only way to do that is to use a known, well-defined set of concepts. And those definitions cannot depend on the parameters to those concepts.
The compiler ought to be able to compute whether one constrained template is more constrained than another without having any template arguments at all. This is important, as having one template be "more constrained" than another is a key feature of using concepts and constraints.
Ironically, allowing specialization for concepts would break (constrained) specialization for other templates. Or at the very least, it'd make it really hard to implement.
In addition to the great answer from Nicol Bolas:
Concepts are a bit special, because they don't behave like other templated things:
13.7.9 Concept definitions
(5) A concept is not instantiated ([temp.spec]).
[Note 1: A concept-id ([temp.names]) is evaluated as an expression. A concept cannot be explicitly instantiated ([temp.explicit]), explicitly specialized ([temp.expl.spec]), or partially specialized ([temp.spec.partial]). — end note]
Due to concepts not being able to be instantiated they also can't be specialized.
I'm not sure why the standard decided not to make them specializable, given that it's easy to emulate specializations.
While you can't specialize concepts directly, there are quite a few ways you can work around the problem.
You can use any kind of constant expression in a concept - so you could use a variable template (which can be specialized) and just wrap it up into a concept. The standard does this for quite a few of its own concepts as well, e.g. std::integral is defined in terms of std::is_integral:
template<class T> struct is_integral;
// is_integral is specialized for integral types to have value == true
// and all others are value == false
template<class T>
inline constexpr bool is_integral_v = is_integral<T>::value;
template<class T>
concept integral = is_integral_v<T>;
So you could easily write a concept that has specializations like this: godbolt example
struct Foo{};
struct Bar{};
template<class T>
constexpr inline bool is_addable_v = requires(T t) {
{ t + t } -> std::same_as<T>;
};
// Specializations (could also use other requires clauses here)
template<>
constexpr inline bool is_addable_v<Foo> = true;
template<class T>
constexpr inline bool is_addable_v<T&&> = true;
template<class T>
concept is_addable = is_addable_v<T>;
int main() {
static_assert(is_addable<int>);
static_assert(is_addable<Foo>);
static_assert(!is_addable<Bar>);
static_assert(is_addable<Bar&&>);
}
Or by using a class:
template<class T>
struct is_addable_v : std::true_type {};
template<>
struct is_addable_v<struct FooBar> : std::false_type {};
template<class T>
concept is_addable = is_addable_v<T>::value;
Or even a constexpr lambda: godbolt example
// pointers must add to int
// everything else must add to double
template<class T>
concept is_special_addable = ([](){
if constexpr(std::is_pointer_v<T>)
return requires(std::remove_pointer_t<T> t) {
{ t + t } -> std::same_as<int>;
};
else
return requires(T t) {
{ t + t } -> std::same_as<double>;
};
})();
int main() {
static_assert(is_special_addable<double>);
static_assert(is_special_addable<int*>);
static_assert(!is_special_addable<double*>);
static_assert(!is_special_addable<int>);
}
So while concepts can't be specialized on their own, it's easy to achieve the same effect with existing language features.
Specialization in this sort of situation opens up a bag of worms. We opened this bag up once with template specialization. Template specialization is a major part of what makes the template language in general Turing complete. Yes, you can program in templates. You shouldn't, but you can. Boost has a library called Boost.MPL that's chock full of clever things, like an "unordered map" that operates at compile time, rather than run time.
So we would have to restrict it carefully. Simple cases may work, but complex cases would have to be forbidden. Certainly anything that is remotely capable of creating a recursive constraint would have to be watched carefully. Indeed, consider a concept:
template <typename T>
concept hailstone = false;
template <int i>
concept hailstone<std::integral_constant<int, i>> =
hailstone<2 * i> || (i % 2 == 1 && hailstone<3*i - 1>);
template <>
concept hailstone<std::integral_constant<int, 0>> = true;
so, is std::integral_constant<int, 27> a hailstone? It could take a while. My chosen example is based on hailstone numbers from the Collatz Conjecture. Determining whether any given number is a hailstone or not is painfully difficult (even though, as best as we can tell, every number is a hailstone number).
Now replace integral_constant with a clever structure which can do arbitrary precision. Now we're in trouble!
Now we can carefully slice off elements of this problem and mark them as doable. The spec community is not in that business. The Concepts we know in C++20 has been nicknamed concepts-lite because it's actually a drastically simplified version of a concepts library that never made it into C++11. That library effectively implemented a Description Logic, a class of logic that is known to be decidable. This was important because the computer had to run through all of the necessary calculations, and we didn't want them to take an infinite amount of time. Concepts is derived from this, so it follows the same rules. And, if you look in Description Logics, the way you prove many statements involves first enumerating the list of all named concepts. Once you had enumerated that, it was trivial to show that you could resolve any concept requirement in finite time.
As Nicol Bolas points out in his answer, the purpose of concepts was not to be some clever Turing complete system. It was to provide better error messages. Thus, while one might be able to cleverly slide in some specialization within carefully selected paths, there's no incentive to.
Consider any of the common type-level algorithms provided by libraries such as Boost.MP11, Brigand, etc...
For instance:
template<typename... Args>
struct TypeList;
using my_types = TypeList<int, float, char, float, double>;
constexpr int count = boost::mp11::mp_count_if<my_types, std::is_floating_point>::value;
// this holds:
static_assert(count == 3);
Notice that std::is_floating_point could be defined as:
template<typename T>
struct is_floating_point { static constexpr bool value = __compiler_magic(T); };
And likewise, we have the std::floating_point concept
template<typename T>
concept floating_point = requires (T t) { __other_compiler_magic(T); };
Sadly, despite the similarity, there does not seem to be an easy way to write something like this without introducing a manually-named wrapper for the concept:
constexpr int count = boost::mp11::mp_count_if<my_types, std::floating_point>::value;
My question is: why cannot concepts be passed in place of types at this point ? Is it a lack of standardization, or is it something that these libraries can solve by providing more overloads ?
It looks like every concept has to be wrapped in a templated type which will just call the concept on its template argument.
From the outside, concepts just look like meta-functions whose domain is {set of types} -> bool. Compilers are able to delay passing parameters to "traditional" type-based metafunctions such as std::is_floating_point, why can't the same seem to happen with concepts ?
The literal answer is that we have template template parameters but not concept template parameters, so you can't pass a concept as a template argument.
The other literal answer is that it was never part of the original concepts proposal and nobody has put in the effort to suggest it as an extension (although I've been collecting use-cases).
One thing that would have to be answered is how dependent concepts affect subsumption - since currently use of concepts is never dependent and so figuring out subsumption is straightforward (actually, it's still not straightforward at all, but at least all the things you need are right there). But in a scenario like:
template <template <typename> concept C, typename T>
requires C<T>
void foo(T); // #1
template <typename T>
void foo(T); // #2
Probably if #1 is viable, you want to say it's a better candidate than #2 since it's still constrained while the other is not. Maybe that's trivial. But then:
template <template <typename> concept C, typename T>
requires C<T>
void bar(T); // #3
template <OtherConcept T>
void bar(T); // #4
Let's say #3 and #4 are both viable, is it possible to say which is better? We generally say a whole overload is always better than a different one - but that might not be the case here. Maybe this is just ambiguous?
That seems to me like the main question that would need to be answered in order to get concept template parameters.
The other question might be, can I write foo<convertible_to<int>>(42). convertible_to<int> isn't really a unary concept, but it is a type-constraint that is treated as one in certain contexts, so I would still expect that to work.
Once we have such a thing, I'm sure Boost.Mp11 will quickly acquire something like:
template <template <typename...> concept C>
struct mp_quote_c {
template <typename... T>
using fn = mp_bool<C<T...>>;
};
So that you can write:
constexpr int count = mp_count_if_q<my_types, mp_quote_c<std::floating_point>>::value;
° Preamble
This question is particularly related to the helper variable templates defined by/in the STL for all the types deriving from std::integral_constant.
° Context
I am in the process of writing a compile-time-only library which aims to provide as many features of the STL as possible (up to C++17 for now), using as few post-C++11 language features as possible.
That is, everything that can be done using only C++11 features is implemented in C++11. For things that cannot be implemented that way, the library will have to provide other options...
Side Note
The purpose is to minimize the modifications needed to code produced using the library when that code has to be compiled with compilers having a reduced set of language features. I.e., compilers of the 'embedded world' often do not provide everything one would want them to.
° Chronology
The C++11 standard library came up with std::integral_constant.
This 'helper class' already defined the conversion operator to value_type.
C++14 added the call operator (operator()) to it, and introduced the 'variable template' language feature.
C++17 added std::bool_constant, though std::true_type and std::false_type had already been defined since C++11 as std::integral_constant<bool, true> and std::integral_constant<bool, false> respectively.
C++17 also added inline variables... and there, suddenly, all the types deriving from std::integral_constant were defining a 'helper' variable template.
Note
I perfectly understand what is the purpose of an inline variable template.
The question here is about the usefulness of the 'helpers' defined for the types deriving from std::integral_constant.
° A Bit of Food for Thought
Now, consider the following code examples:
/* Test template using std::integral_constant<bool, false>
*/
template<typename...>
using AlwaysFalse = std::false_type;
/* Example #1
*/
template<typename T>
struct AssertAlwaysFalse {
static_assert(
AlwaysFalse<T>{},
"Instantiation and bool cast operator replaces variable template."
);
using Type = T;
};
using AlwaysFalseType = typename AssertAlwaysFalse<int>::Type;
/* Example #2
*/
constexpr auto alwaysFalseAuto = AlwaysFalse<int>{};
constexpr bool alwaysFalseBool = AlwaysFalse<int>{};
/* Example #3
*/
template<bool AlwaysF>
struct AlwaysFalseArg { static constexpr bool Result = AlwaysF; };
constexpr bool alwaysFalseArg = AlwaysFalseArg<AlwaysFalse<int>{}>::Result;
The above examples show that instantiating an std::integral_constant, where a value is expected, has the exact same effect one would obtain by using a 'helper' variable template.
This is perfectly natural: std::integral_constant defines a conversion operator to value_type. This behavior is pure C++11, and was available well before inline variable templates.
° Still Stands The Question
Is there even one good reason for having defined these 'helper' variable templates for all the types deriving from std::integral_constant?
° In Other Words
After the comment of #NicolBolas about:
"Why instantiating some object only to convert it into a compile-time value ?"
I realized that the main point behind the question was maybe not clear enough. So I will put it like that:
If you only had at disposal the features provided with C++11, How would you implement 'something' to provide this compile-time value ?
The main benefits are compilation speed, consistency, and convenience, primarily. I'm going to take a look at a few things here. I'll try to address both what the features are used for, and how one would implement them with only C++11 features. If you only care about the implementation ideas, skip to the bottom.
integral_constant itself:
First, we have the central object here, std::integral_constant. It defines a compile-time static constant, accessed as std::integral_constant<T, V>::value, and looks something like this (taken from cppreference):
template<class T, T v>
struct integral_constant {
static constexpr T value = v;
using value_type = T;
using type = integral_constant; // using injected-class-name
constexpr operator value_type() const noexcept { return value; }
constexpr value_type operator()() const noexcept { return value; } // since c++14
};
Now, the first thing to note is that integral_constant stores the constant's value within itself, as a compile-time constant. You can access it without instantiating an instance; furthermore, instantiating the integral_constant will typically just result in an object being created and immediately converted to the constant, doing extra work for zero benefit; it's usually better to just use integral_constant::value directly instead.
constexpr bool TraitResult = integral_constant<bool, SomeConstexprTest(param)>::value;
SFINAE, Traits, and bool_constant:
The most common use case for integral_constant, by far, is as a compile-time boolean trait, likely used for SFINAE or introspection. The vast majority of <type_traits> consists of integral_constant<bool>s, with values determined according to the trait's logic, for use as yes-or-no tests.
template<typename T>
typename std::enable_if< std::is_same<SomeType, T>::value>::type
someFunc(T&& t);
template<typename T>
typename std::enable_if< !std::is_same<SomeType, T>::value>::type
someFunc(T&& t);
C++17 supplies bool_constant with this usage in mind, as a cleaner way to create boolean constants. It allows for cleaner code by simplifying the creation of custom traits:
namespace detail {
// These lines are clean.
template<typename...> struct are_unique_helper;
template<typename T> struct are_unique_helper<T> : std::true_type {};
// This... less so.
template<typename T, typename U, typename... Ts>
struct are_unique_helper<T, U, Ts...> : std::integral_constant<
bool,
!std::is_same<T, U>::value &&
are_unique_helper<T, Ts...>::value
> {};
}
// With integral_constant<bool>.
template<typename T, typename... Ts>
struct are_unique : std::integral_constant<bool, detail::are_unique_helper<T, Ts...>::value && are_unique<Ts...>::value> {};
// With bool_constant.
template<typename T, typename... Ts>
struct are_unique : std::bool_constant<detail::are_unique_helper<T, Ts...>::value && are_unique<Ts...>::value> {};
The name bool_constant conveys the same information as integral_constant<bool> with less wasted space or one less line, depending on coding style, and has the additional advantage of clearer conveyance thanks to emphasising the bool part. It's not strictly necessary, and can be easily supplied manually if your compiler doesn't support it, but it does provide a few benefits.
true_type and false_type:
These two provide specific constants for true and false; this is definitely useful, but many traits determine their value with boolean logic. (See, e.g., std::is_same or are_unique above.) They're neither a be-all nor an end-all, though they can serve useful purposes such as base values for traits (as above, or as in std::is_same), or matching traits for overloading or SFINAE.
constexpr std::string_view isIntInner(std::true_type)  { return "yes"; }
constexpr std::string_view isIntInner(std::false_type) { return " no"; }
template<typename T>
constexpr std::string_view isInt(T&&) {
    // decay T: with a forwarding reference, lvalue arguments deduce T as int&,
    // which would otherwise compare unequal to int
    return isIntInner(std::is_same<int, std::decay_t<T>>{});
}
Helpers: Type aliases & variable templates:
To explain the reason for the variable templates, we also want to look at them alongside the helper aliases defined in C++14.
template< bool B, class T = void >
using enable_if_t = typename enable_if<B,T>::type;
template< class T, class U >
inline constexpr bool is_same_v = is_same<T, U>::value;
These mainly exist as a form of convenience, really; they're a bit faster to type, a bit cleaner to read, and require a bit less finger gymnastics. The type aliases were provided first, and the helper variables are mainly there for consistency with them.
How to implement these:
You mentioned that you're aiming to implement everything using C++11 features primarily. This will allow you to provide most of the above:
integral_constant: Requires only C++11 or earlier features. (constexpr and noexcept.)
bool_constant: Introduced in C++17, but requires only C++11 features. (Alias template.)
true_type and false_type: Same as integral_constant.
Introspective logic type traits: Same as integral_constant.
Helper _t aliases: Introduced in C++14, but requires only C++11 features. (Alias template.)
Helper _v variables: Requires C++14 features. (Variable template.)
It wouldn't actually be too hard to provide helper aliases and bool_constant for compilers which don't support them, as long as alias templates are supported. Possibly by, e.g., providing them within your library's namespace, in a header which is only loaded on implementations that don't include the aliases, and/or which the library's consumer can enable or disable as necessary during compilation.
Variable templates themselves cannot be emulated without compiler support, but you do have another, C++11-compliant option: helper functions.
template<typename T, typename U>
inline constexpr bool is_same_v() noexcept {
return std::is_same<T, U>::value;
}
Providing functions of this sort will result in code nearly as clean as the helper variable templates, which can be cleanly switched over to the official variable templates for compilers which provide them. There are a few slight differences between helper functions and helper variables, though I'm not sure if there are any use cases that would actually care about them, and the library would ideally only provide helper functions for compilers which don't themselves provide the _v variables.
#include <type_traits>
template <typename T>
struct C;
template<typename T1, typename T2>
using first = T1;
template <typename T>
struct C<first<T, std::enable_if_t<std::is_same<T, int>::value>>>
{
};
int main ()
{
}
Results of compilation by different compilers:
MSVC:
error C2753: 'C': partial specialization cannot match argument list for primary template
gcc-4.9:
error: partial specialization 'C' does not specialize any template arguments
clang all versions:
error: class template partial specialization does not specialize any template argument; to define the primary template, remove the template argument list
gcc-5+:
successfully compiles
Additionally, I want to point out that a trivial specialization like:
template<typename T>
struct C<T>
{
};
fails to compile under GCC. So it seems GCC figures out that the specialization in my original example is non-trivial. So my question is: is a pattern like this explicitly forbidden by the C++ standard or not?
The crucial paragraph is [temp.class.spec]/(8.2), which requires the partial specialization to be more specialized than the primary template. What Clang actually complains about is the argument list being identical to the primary template's: this has been removed from [temp.class.spec]/(8.3) by issue 2033 (which stated that the requirement was redundant) fairly recently, so hasn't been implemented in Clang yet. However, it apparently has been implemented in GCC, given that it accepts your snippet; it even compiles the following, perhaps for the same reason it compiles your code (it also only works from version 5 onwards):
template <typename T>
void f( C<T> ) {}
template <typename T>
void f( C<first<T, std::enable_if_t<std::is_same<T, int>::value>>> ) {}
I.e. it acknowledges that the declarations are distinct, so must have implemented some resolution of issue 1980. It does not find that the second overload is more specialized (see the Wandbox link), however, which is inconsistent, because it should've diagnosed your code according to the aforementioned constraint in (8.2).
Arguably, the current wording makes your example's partial ordering work as desired†: [temp.deduct.type]/1 mentions that in deduction from types,
Template arguments can be deduced in several different contexts, but in each case a type that is specified in terms of template parameters (call it P) is compared with an actual type (call it A), and an attempt is made to find template argument values […] that will make P, after substitution of the deduced values (call it the deduced A), compatible with A.
Now via [temp.alias]/3, this would mean that during the partial ordering step in which the partial specialization's function template is the parameter template, the substitution into is_same yields false (since common library implementations just use a partial specialization that must fail), and enable_if fails.‡ But this semantics is not satisfying in the general case, because we could construct a condition that generally succeeds, so a unique synthesized type meets it, and deduction succeeds both ways.
Presumably, the simplest and most robust solution is to ignore discarded arguments during partial ordering (making your example ill-formed). One can also orientate oneself towards implementations' behaviors in this case (analogous to issue 1157):
template <typename...> struct C {};
template <typename T>
void f( C<T, int> ) = delete;
template <typename T>
void f( C<T, std::enable_if_t<sizeof(T) == sizeof(int), int>> ) {}
int main() {f<int>({});}
Both Clang and GCC diagnose this as calling the deleted function, i.e. agree that the first overload is more specialized than the other. The critical property of #2 seems to be that the second template argument is dependent yet T appears solely in non-deduced contexts (if we change int to T in #1, nothing changes). So we could use the existence of discarded (and dependent?) template arguments as tie-breakers: this way we don't have to reason about the nature of synthesized values, which is the status quo, and also get reasonable behavior in your case, which would be well-formed.
† @T.C. mentioned that the templates generated through [temp.class.order] would currently be interpreted as one multiply declared entity—again, see issue 1980. That's not directly relevant to the standardese in this case, because the wording never mentions that these function templates are declared, let alone in the same program; it just specifies them and then falls back to the procedure for function templates.
‡ It isn't entirely clear with what depth implementations are required to perform this analysis. Issue 1157 demonstrates what level of detail is required to "correctly" determine whether a template's domain is a proper subset of the other's. It's neither practical nor reasonable to implement partial ordering to be this sophisticated. However, the footnoted section just goes to show that this topic isn't necessarily underspecified, but defective.
I think you could simplify your code - this has nothing to do with type_traits. You'll get the same results with the following one:
template <typename T>
struct C;
template<typename T>
using first = T;
template <typename T>
struct C<first<T>> // OK only in 5.1
{
};
int main ()
{
}
Check in online compiler (compiles under 5.1 but not with 5.2 or 4.9 so it's probably a bug) - https://godbolt.org/g/iVCbdm
I think that in GCC 5 they reworked the template functionality, and it's even possible to create two specializations of the same type. It will compile until you try to use it.
template <typename T>
struct C;
template<typename T1, typename T2>
using first = T1;
template<typename T1, typename T2>
using second = T2;
template <typename T>
struct C<first<T, T>> // OK on 5.1+
{
};
template <typename T>
struct C<second<T, T>> // OK on 5.1+
{
};
int main ()
{
C<first<int, int>> dummy; // error: ambiguous template instantiation for 'struct C<int>'
}
https://godbolt.org/g/6oNGDP
It might be somehow related to added support for C++14 variable templates. https://isocpp.org/files/papers/N3651.pdf
The C++11 standard specifies a type trait std::alignment_of<T> which simply returns the value of alignof(T).
Is there a similar trait for the sizeof operator? Am I just missing it, or was it just missed in the standard, or is there some obscure technical reason why it wasn't specified?
Obviously it is trivial to create such a trait, but I can't imagine it wouldn't have been considered when introducing std::alignment_of.
For context, I have a custom type trait that I use to get the maximum value of a single trait when applied to a list of types.
template <template<class> class Trait, typename F, typename... T>
struct trait_max
: std::integral_constant<decltype(Trait<F>::value),
(Trait<F>::value > trait_max<Trait, T...>::value) ? Trait<F>::value : trait_max<Trait, T...>::value>
{ };
template <template<class> class Trait, typename F>
struct trait_max<Trait, F>
: std::integral_constant<decltype(Trait<F>::value), Trait<F>::value>
{ };
This trait is really handy for when you need to know the maximum of a single trait applied to a set of types, like so:
auto max_align = trait_max<std::alignment_of, int, float, std::string>::value;
auto max_size = trait_max<std::size_of, int, float, std::string>::value; // doesn't exist
std::alignment_of isn't new in C++11. It was added (along with the rest of <type_traits>) as part of TR1 in 2007. TR1's <type_traits> was copied wholesale from Boost TypeTraits, which provided alignment_of only because there was no standard way to get at that value in 2005.
Of course in 2005 there was a way to get the size of a type T; it has been spelled sizeof(T) since time immemorial. That's why size_of<T> wasn't in Boost TypeTraits, and that's why it wasn't copied into TR1 in 2007, and that's why it wasn't grandfathered into C++11.
As of 2011, there is also a standard way to get the alignment of a type T; it's spelled alignof(T). The pre-2011 construct std::alignment_of<T>::value is needlessly verbose, and you almost certainly shouldn't be using it anymore unless you're concerned about portability to pre-2011 implementations.
I believe the most idiomatic way of writing your sample code is
size_t max_align = std::max({alignof(int), alignof(float), alignof(std::string)});
size_t max_size = std::max({sizeof(int), sizeof(float), sizeof(std::string)});
Once C++14 rolls around, std::max will become constexpr, so this will be computed at compile-time and be usable in template metaprogramming. But the suckiness of C++11's std::max is a totally separate issue, unrelated to your question. :)
EDIT: Here's a constexpr_max that works in today's C++11. Unfortunately C++11's std::initializer_list can't be used in a constexpr context; C++14 is fixing that too.
template<typename T> constexpr T constexpr_max(T t, T u) {
return t > u ? t : u;
}
template<typename T, typename... TT> constexpr T constexpr_max(T t, TT... ts) {
return constexpr_max(t, constexpr_max(ts...));
}