Mock the std::move() function to assess its performance impact

I would like to "mock" the std::move() function to assess its (positive) performance impact on a C++ library I have written.
I have used std::move() extensively and I would like to avoid grepping everywhere to remove it. What is the best way to replace it with an identity function? I'm compiling with gcc.

This should "work", but it's actually undefined behaviour (both redefining move as a macro and adding declarations to namespace std):
// Standard library includes must be above, so they still see the real std::move.
#include <type_traits>
#define move not_a_move
namespace std {
template<typename T>
typename std::remove_reference<T>::type const&
not_a_move(T&& x)
{
    return x; // a const lvalue reference comes back, so callers copy instead of move
}
}
This won't capture implicit moves or moves done inside the standard library itself. I would recommend just removing all your uses of std::move; it's cleaner and actually allowed. :P
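If you can afford a one-time rename of your own call sites, there is a well-defined alternative: route them through a project-local wrapper and toggle it with a build flag. A minimal sketch (the name lib::maybe_move and the NO_MOVES flag are illustrative, not anything standard):
#include <utility>
#include <type_traits>

namespace lib {
#ifdef NO_MOVES
    // Identity version: returns a const lvalue reference, so callers
    // copy instead of moving. This stays within defined behaviour:
    // nothing is added to namespace std, no standard name is redefined.
    template <typename T>
    const T& maybe_move(const T& x) noexcept { return x; }
#else
    // Normal version: defers to the real std::move.
    template <typename T>
    typename std::remove_reference<T>::type&& maybe_move(T&& x) noexcept
    {
        return std::move(x);
    }
#endif
}
Build once with -DNO_MOVES and once without, then compare the two binaries. Like the macro trick, this still won't catch implicit moves or moves inside the standard library.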

I'm fairly certain there is no way to achieve this short of massacring your own code to remove all the move constructors on the large objects you're wanting to profile. You may also find you would then have problems passing non-rvalue references to functions expecting rvalue references.
So I would suggest not bothering: accept that it's highly likely your code is faster, and be satisfied with that.
I actually wanted to try this a year or so ago, and never found a way.

Why is std::string's member operator= not lvalue ref-qualified

I recently learned that member functions can be ref-qualified, which allows me to write
struct S {
    S& operator=(S const&) & // can only be used if the implicit object is an lvalue
    {
        return *this;
    }
};
S operator+(S const&, S const&) {
    return {};
}
thereby preventing users from doing things like
S s{};
s + s = S{}; // error
However, I see that std::string's member operator= does not do this. So the following code compiles with no warnings
std::string s;
s + s = s;
Is there a reason for allowing this?
If not, would it be possible to add the ref-qualifier in the future, or would that break existing code somehow?
Likely, the timing plays a role in this decision. Ref-qualified member functions were added to the language with C++11, while std::string has been around since C++98. Changing the definition of something in the standard library is not something to be undertaken lightly, as it could needlessly break existing code. This is not a situation where one should exclusively look at why this weird assignment should be allowed (i.e. look for a use-case). Rather, one should also look at why this weird assignment should be disallowed (i.e. look at the benefits, and weigh them against the potential pain when otherwise working code breaks). How often would this change make a difference in realistic coding scenarios?
Looking at the comments, a counter to one proposed use-case was "They could just [another approach]." Yes, they could, but what if they didn't? Proposing alternatives is productive when initially designing the structure (std::string). However, once the structure is in wide use, you have to account for existing code that currently does not use the alternative. Is there enough benefit for the pain you could cause? Would this change actually catch mistakes often enough? How common are such mistakes in contexts where another warning would not be triggered? (As one example, using assignment instead of equality in the conditional of an if statement is likely to already generate a warning.) These are some of the questions with which the language designers grapple.
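For instance, a minimal illustration (gcc and clang both warn here under -Wall; the exact wording varies by compiler):
int x = 0, y = 1;
if (x = y) { // warning: suggest parentheses around assignment used as truth value
}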
Please understand that I am not saying that the change cannot be done, only that it would need to be carefully considered.
We cannot be certain why the standard does not prohibit the behaviour you presented, but there are a few possible explanations:
It is simply an oversight in C++11. Before C++11 there were no ref-qualified member functions, so nobody thought to change the behaviour in the standard.
It is kept for backward compatibility with people who were writing 'dirty' code like:
std::string("a") = "gds";
for some strange reason.
As for adding this in the future: it would be possible, but the old operator= would first have to be deprecated and later removed, because the change would make code like the above stop compiling. And even then, some compilers would probably keep supporting it for backward compatibility.
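To make the trade-off concrete, here is a small sketch (the class name is illustrative) of what an lvalue ref-qualified assignment accepts and rejects:
struct RefQualified {
    RefQualified() = default;
    RefQualified(const char*) {}
    RefQualified& operator=(const char*) & { return *this; } // lvalue objects only
};

int main()
{
    RefQualified s;
    s = "ok";                     // fine: s is an lvalue
    // RefQualified("a") = "gds"; // would no longer compile: rvalue implicit object
}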

Modifying scoped enum by reference

I am increasingly finding scoped enums unwieldy to use. I am trying to write a set of function overloads including a template for scoped enums that sets/initializes a value by reference--something like this:
void set_value(int& val);
void set_value(double& val);
template <typename ENUM> void set_value(ENUM& val);
However, I don't quite see how to write the templated version of set_value without introducing multiple temporary values:
template <typename ENUM>
void set_value(ENUM& val)
{
    std::underlying_type_t<ENUM> raw_val;
    set_value(raw_val); // Calls the appropriate "primitive" overload
    val = static_cast<ENUM>(raw_val);
}
I believe the static_cast introduces a second temporary value in addition to raw_val. I suppose it's possible that one or both of these could be optimized away by the compiler, and in any case it shouldn't really make much difference in terms of performance since the set_value call will also generate temporary values (assuming it's not inlined), but this still seems inelegant. What I would like to do would be something like this:
template <typename ENUM>
void set_value(ENUM& val)
{
    set_value(static_cast<std::underlying_type_t<ENUM>&>(val));
}
... but this isn't valid (nor is the corresponding code using pointers directly instead of references) because scoped enums aren't related to their underlying primitives via inheritance.
I could use reinterpret_cast, which, from some preliminary testing, appears to work (and I can't think of any reason why it wouldn't work), but that seems to be frowned upon in C++.
Is there a "standard" way to do this?
I could use reinterpret_cast, which, from some preliminary testing, appears to work (and I can't think of any reason why it wouldn't work), but that seems to be frowned upon in C++.
Indeed, that reinterpret_cast is undefined behavior by violation of the strict aliasing rule.
Eliminating a single mov instruction (or otherwise, more or less, copying a register's worth of data) is premature micro-optimization. The compiler is likely to be able to take care of it.
If performance is really important, then follow the optimization process: profile, disassemble, understand the compiler's interpretation, and work together with it within the defined rules.
At a glance, you (and the compiler) might have an easier time with functions like T get_value() instead of void set_value(T). The flow of data and initialization make more sense, although type deduction is lost. You can regain the deduction through tag types, if that's really important.
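For illustration, a sketch of that get_value() direction using tag types (the tag template and the primitive overloads are hypothetical stand-ins for the original overload set):
#include <type_traits>

template <typename T> struct tag {};

// Primitive getters, standing in for the original set_value overloads.
int get_value(tag<int>) { return 42; }
double get_value(tag<double>) { return 3.14; }

// Enum version: read the underlying value, then convert once.
// The static_cast is a value conversion, so no aliasing rule is involved.
template <typename Enum, typename = std::enable_if_t<std::is_enum<Enum>::value>>
Enum get_value(tag<Enum>)
{
    return static_cast<Enum>(get_value(tag<std::underlying_type_t<Enum>>{}));
}

enum class Mode : int { A, B };

int main()
{
    Mode m = get_value(tag<Mode>{}); // type "deduction" regained via the tag argument
    (void)m;
}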

Benefits of using reference_wrapper instead of raw pointer in containers?

What benefits has using std::reference_wrapper as template parameter of containers instead of raw pointers? That is std::vector<std::reference_wrapper<MyClass> > vs. std::vector<MyClass*>
I like forgetting about nulls and not having to use pointer syntax, but the verbosity of the types (i.e. vector<reference_wrapper<MyClass> >) plus having the call site use std::ref to wrap the actual reference makes me think it is not worth it.
I am referring to cases in which using std::shared_ptr or any other smart pointer is not an option.
Are there other benefits of using reference_wrapper or any other factors I am currently not taking into account? (I think my question applies to both C++11's reference_wrapper and boost's)
I don't think there is any technical difference. Reference wrapper provides basic pointer functionality, including the ability to change the target dynamically.
One benefit is that it demonstrates intent. It tells people who read the code that whoever has the variable isn't actually controlling its lifespan. The user hasn't forgotten to delete or new anything, which some people may start to look for when they see pointer semantics.
C++ references are really problematic when working with templates. If you are "lucky" enough to compile code with a reference as a template parameter, you might be surprised by code like the following:
template<class T> void f(T x) { g(x); }
template<class T> void g(T x) { x++; }
Then even if you call f<int&>(x), it will call g<int>, because template argument deduction in g(x) sees an lvalue int, not an int&. But reference_wrapper works fine with templates.
As also mentioned earlier, you will have problems compiling things like vector<int&>, but vector<reference_wrapper<int>> works fine.
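A minimal sketch of the points above (re-seating, .get(), and the vector<int&> problem):
#include <functional>
#include <iostream>
#include <vector>

int main()
{
    int a = 1, b = 2;

    // std::vector<int&> does not compile, but this does:
    std::vector<std::reference_wrapper<int>> v{std::ref(a), std::ref(b)};

    v[0].get() = 10;    // writes through to a; .get() yields an int&
    v[1] = std::ref(a); // a wrapper can be re-seated, much like a pointer

    std::cout << a << ' ' << b << '\n'; // prints "10 2"
}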

Function return type style

I'm learning C++0x, at least the parts supported by the Visual C++ Express 2010 Beta.
This is a question about style rather than how it works. Perhaps it's too early for style and good practice to have evolved for a standard that hasn't even been released yet...
In c++0x you can define the return type of a method using -> type at the end of the function instead of putting the type at the start. I believe this change in syntax is required due to lambdas and some use cases of the new decltype keyword, but you can use it anywhere as far as I know.
// Old style
int add1(int a, int b)
{
    return a + b;
}
// New style return type
auto add2(int a, int b) -> int
{
    return a + b;
}
My question really then, is given that some functions will need to be defined in the new way is it considered good style to define all functions in this way for consistency? Or should I stick to only using it when necessary?
Do not be style-consistent just for the sake of consistency. Code should be readable, i.e. understandable; that's the only real measure. Adding clutter to 95% of the methods to be consistent with the other 5% just does not sound right to me.
There is a huge codebase that uses the 'old'/current rules. I would bet that will be so for a long time. The problem of consistency is two-fold: who are you going to be consistent with, the little code that will require the new syntax or all the existing code?
I will keep to the old syntax where the new one is not required, for a while at least; only time will tell what becomes common usage.
Also note that the new syntax is still a little weird: you declare the return type as auto and then define what auto means at the end of the signature declaration... It does not feel natural, even setting aside old habits.
Personally, I would use it when it is necessary. Just like this-> is only necessary when accessing members of a base class template (or when they are otherwise hidden), so auto fn() -> type is only necessary when the return type can't be determined before the rest of the function signature is visible.
Using this rule of thumb will probably help the majority of code readers, who might think "why did the author think we need to write the declaration this way?" otherwise.
I don't think it is necessary to use it for regular functions. It has special uses, allowing you to do easily what might have been quite awkward before. For example:
template <class Container, class T>
auto find(Container& c, const T& t) -> decltype(c.begin());
Here we don't know whether Container is const or not, hence whether the return type should be Container::iterator or Container::const_iterator; decltype(c.begin()) yields whichever one begin() actually returns.
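For completeness, a sketch of that signature with a body and both deduction outcomes (the body via std::find is my assumption of the intended implementation):
#include <algorithm>
#include <vector>

template <class Container, class T>
auto find(Container& c, const T& t) -> decltype(c.begin())
{
    return std::find(c.begin(), c.end(), t);
}

int main()
{
    std::vector<int> v{1, 2, 3};
    auto it = find(v, 2);   // Container = vector<int>: returns iterator

    const std::vector<int>& cv = v;
    auto cit = find(cv, 2); // Container = const vector<int>: returns const_iterator
    (void)it; (void)cit;
}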
Seems to me like it would be changing the habit of a lifetime for a lot of C++ (and other C like) programmers.
If you used that style for every single function then you might be the only one doing it :-)
I am going to guess that the current standard will win out, as it has so far with every other proposed change to the definition. It has been extended, for sure, but the essential semantics of C++ are so ingrained that I don't think they are worth changing. They have influenced so many languages and style guides, it's ridiculous.
As to your question, I would try and separate the code into modules to make it clear where you are using old style vs new style. Where the two mix I would make sure and delineate it as much as possible. Group them together, etc.
[personal opinion] I find it really jarring to surf through files and watch the style morph back and forth, or change radically. It just makes me wonder what else is lurking in there. [/personal opinion]
Good style changes -- if you don't believe me, look at what was considered good style in '98 and what is now -- and it is difficult to know what will be considered good style and why. IMHO, currently everything related to C++0x is experimental and the qualification of good or bad style just doesn't apply yet.