Pass pointer to temporary in C++11?

I have an existing function:
void foo(const Key* key = nullptr)
{
// uses the key
}
I want to pass it a pointer to a temporary Key object (i.e. an rvalue), like:
foo(&Key());
This causes a compilation error, but is there a way in C++11/14 to do this? Of course I could do:
Key key;
foo(&key);
But I don't need the Key object anywhere else; I only need it inside foo().
Or I could do:
foo(new Key());
But then the object will not be deleted.

I don't think this is a good idea, but if you really really want a temporary and cannot change foo, you can cast the temporary to a const&:
int main()
{
foo(&static_cast<const Key&>(Key{}));
}
Alternatively, you could "hide" the creation of the object behind a convenient function:
#include <utility> // std::forward

template <typename T, typename F, typename... Ts>
void invoke_with_temporary(F&& f, Ts&&... xs)
{
    T obj{std::forward<Ts>(xs)...}; // the object lives only for the duration of the call
    f(&obj);
}
int main()
{
    invoke_with_temporary<Key>(&foo);
}
Another alternative: provide an overload of foo that takes an rvalue reference:
void foo(Key&& key)
{
foo(&key);
}

Just control the scope of the throw-away variable yourself:
{
Key key;
foo(&key);
} // <-- 'key' is destroyed here

This is a utility function. It is basically the inverse of std::move¹:
template<class T>
T& as_lvalue( T&& t ) { return t; }
If used incorrectly, it can lead to dangling references.
Your code then becomes:
foo(&as_lvalue(Key()));
The reason for the "cannot take the address of a temporary" rule is that you can otherwise get extremely unexpected behaviour due to things like implicit temporary creation.
In this case, we are explicitly taking the address of a temporary.
It is no more dangerous than creating a named value, calling the function, and then discarding the named value immediately.
¹ std::move takes an lvalue or rvalue and returns an rvalue reference to it, indicating that the consumer should treat it as a temporary whose existence will shortly end. as_lvalue takes an lvalue or rvalue reference and returns an lvalue reference to it, indicating that the consumer should treat it as a non-temporary whose existence will persist.
They are both valid operations, but std::move is more crucial. (std::move could be called as_rvalue really). I'd advise against clever names like unmove.
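For contrast, here is a minimal sketch of the two directions; takes_lvalue and takes_rvalue are made-up functions used only for illustration:
#include <string>
#include <utility>

template<class T>
T& as_lvalue( T&& t ) { return t; }

void takes_lvalue(std::string& s) { s += "!"; }                          // wants a non-const lvalue reference
void takes_rvalue(std::string&& s) { std::string stolen = std::move(s); (void)stolen; } // wants an rvalue reference

int main()
{
    std::string named = "hello";
    takes_rvalue(std::move(named));          // lvalue treated as a temporary
    takes_lvalue(as_lvalue(std::string{}));  // temporary treated as a non-temporary for this call
}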

The only reason to take a pointer rather than a reference is if the pointer can be null. And if that's the case, then your foo function will look something like this:
void foo(const Key *key)
{
if(key)
//Do stuff with `key`
else
//Alternate code
}
Given that, what you want is a second overload, one that refactors all of the "do stuff with key" into its own function that takes a reference. So do that:
void foo(const Key &key)
{
//Do stuff with `key`
}
void foo(const Key *key)
{
if(key)
foo(*key);
else
//Alternate code.
}
If you have common code that gets executed in both cases, refactor that out into its own function too.
Yes, it's possible that "do stuff with key" is complicated and is split up into several lines. But that suggests a very strange design here.
You could also refactor it the other way:
void foo(const Key &key) {foo(&key);}

I use something like this:
template <typename T>
const T* temp_ptr(const T&& x) { return &x; }
Used like this:
foo(temp_ptr(Key{}));
This is very useful when dealing with certain legacy APIs. DirectX 11 in particular frequently takes parameter aggregating structs by const T* and it's convenient to create and pass them inline. I don't think there's anything wrong with this idiom unlike some of the commenters here, although I'd prefer if those APIs just took a const reference and handled optional arguments differently.
Here's an example D3D11 API call where this is very useful:
vector<Vec3f> verts;
...
ID3D11BufferPtr vbuf;
d3dDevice->CreateBuffer(
temp_ptr(CD3D11_BUFFER_DESC{byteSize(verts), D3D11_BIND_VERTEX_BUFFER}),
temp_ptr(D3D11_SUBRESOURCE_DATA{data(verts)}), &vbuf);
For calling ID3D11Device::CreateBuffer() to create a vertex buffer.
On larger projects I might write wrappers for many of the D3D API calls which make them more convenient to call in a modern C++ style but for small standalone sample projects that I want to have minimum extra code or dependencies I find this very useful.
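Purely as an illustration of the kind of wrapper meant here, a sketch assuming the Windows SDK headers; createBuffer is a made-up name, not a D3D11 API:
#include <d3d11.h>
#include <wrl/client.h>   // Microsoft::WRL::ComPtr
#include <stdexcept>

// Hypothetical wrapper: takes the descriptor structs by const reference and
// returns the buffer, instead of taking pointers and an output parameter.
Microsoft::WRL::ComPtr<ID3D11Buffer> createBuffer(
    ID3D11Device& device,
    const D3D11_BUFFER_DESC& desc,
    const D3D11_SUBRESOURCE_DATA* initialData = nullptr)
{
    Microsoft::WRL::ComPtr<ID3D11Buffer> buffer;
    HRESULT hr = device.CreateBuffer(&desc, initialData, buffer.GetAddressOf());
    if (FAILED(hr))
        throw std::runtime_error("CreateBuffer failed");
    return buffer;
}
With something like that, the descriptor structs can be passed as plain temporaries by const reference and the temp_ptr helper is no longer needed at the call site.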
Another trick I've used in the past that works is:
foo(std::data({Key{}}));
But I don't particularly recommend this as I think the intent is unclear and relies on a bit too much knowledge of how initializer lists work. A variation is useful if you need to pass a temporary 'array' though:
d3dDeviceContext->ClearRenderTargetView(rendertarget, data({0.0f, 0.0f, 0.0f, 0.0f}));
For calling an API like ID3D11DeviceContext::ClearRenderTargetView().

If what you are concerned about is variable scope (as you mention in a comment), you can use:
{ Key key; foo(&key); }

Or I could do:
foo(new Key());
But then the object will not be deleted.
You could create a smart pointer, but I'd rather not. It still is a possibility though.
#include <memory>
foo(std::make_unique<const Key>().get()); // the temporary unique_ptr lives until the end of the full expression, so the pointer stays valid for the call

Related

Is using std::move to dump a member acceptable design?

Say I have a class C whose only purpose is to fill up a container con_m of some type that is a member of C. So C possesses methods to fill up con_m in a specific manner. After C has filled up con_m I no longer need C, so I want to dump con_m into another variable. But C is actually a functor in the sense that con_m can only be filled up incrementally, so it cannot be an object local to the filling function. Also, the implementation should be hidden from the user, so he does not need to call std::move on the function returning con_m.
#include <utility>
#include <vector>

template <class container_type> class C {
public:
    template <class T> void fill_some_more(const T &t) {
        // do stuff with t, filling con_m by another increment
    }
    container_type dump_container() { return std::move(con_m); }
    container_type con_m;
};
int main() {
    C<std::vector<int>> c;
    while (some_condition) {
        c.fill_some_more(some_int);
    }
    auto con = c.dump_container();
}
Is this an appropriate use of std::move?
Also the implementation should be hidden from the user
Moving the contents of an object is not an implementation detail; it's part of what the function is doing. By moving the contents of the object, you make it so that the object has lost its contents, and therefore code that gets called later needs to respect that fact.
The reason why the C++ standards committee made it so that you had to use std::move a lot explicitly was so that people reading your code would be able to know what's going on. If a person sees c.dump_container(), they will probably assume that the dumping happened via copy. If they see std::move(c).dump_container(), and if they see that they cannot call it on an lvalue reference, then it's very clear to everyone involved what the state of c will be afterwards.
Movement should be explicit.
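A minimal sketch of what that looks like on the question's class, using a && ref-qualifier so dump_container() can only be called on an rvalue (making con_m private and using push_back here are illustrative assumptions, not part of the question):
#include <utility>
#include <vector>

template <class container_type> class C {
public:
    template <class T> void fill_some_more(const T &t) { con_m.push_back(t); }
    // The && ref-qualifier: only callable on rvalues, so the move is visible at the call site.
    container_type dump_container() && { return std::move(con_m); }
private:
    container_type con_m;
};

int main() {
    C<std::vector<int>> c;
    c.fill_some_more(1);
    // auto v = c.dump_container();           // does not compile: c is an lvalue
    auto v = std::move(c).dump_container();   // the move is explicit
}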
Of course, as Igor pointed out in the comments, this whole thing could be avoided if these were free functions that acted on a user-provided container, rather than having to have the container be a member of some type.

How to check if an object == this in C++

I have spent a great deal of time programming in Java and a decent amount of time writing C++, but I have run into an issue I haven't been able to solve. In a Java class I can simply write the following:
public void doOperation(object a)
{
if(a != this)
{
set(a); // just some method that sets this.a = object.a
}
doOperation();
}
public void doOperation()
{
this.a = pow(this.a,3);
}
The part I am having trouble implementing in C++ is the if statement where I check if the argument object is equal to this. I have tried this in C++:
void object::doOperation(object a)
{
    if(a != this)
    {
        set(a);
    }
    doOperation();
}
void object::doOperation()
{
    this->a = pow(this->a,3);
}
The error I get reads, "no match for ‘operator!=’ (operand types are ‘object’ and ‘object* const’)". Thanks in advance to anybody who can help!
You can simply pass "a" by reference, take a pointer to "a" and compare it with "this", like so:
void object::doOperation(object & a)
{
    if(&a != this)
    {
        set(a);
    }
    doOperation();
}
void object::doOperation()
{
    this->a = pow(this->a,3);
}
This is a standard way that people would e.g. implement copy assignment operators in C++. It's not always done this way, but often the implementation of that will take a const reference to an object, and use a check against "this" to prevent self assignment.
Edit: Let me try to take a broader view, which might be more useful to you.
In Java, objects are implicitly passed around by reference and not by value, and they are also garbage collected, i.e. automatically destroyed when no one needs them anymore.
The closest way to get that kind of semantics in C++ is to pass around std::shared_ptr<A> when in java you would have passed A. Then, when you need to compare against this, you can use the get method to get a raw pointer from the shared pointer and compare it literally against this. OR, if you use the std::enable_shared_from_this template when you define your class, you can use shared_from_this to get a shared_ptr<A> to this at any point in your member functions, and compare the shared_ptr's directly.
I'm assuming you are using C++11, otherwise you would use boost headers for that stuff.
Note also the stuff about "weak_ptr" which you might need to use if you have cyclic references.
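A minimal sketch of the shared_ptr approach, assuming the objects are always managed by shared_ptr (the class name A is made up for illustration):
#include <memory>

class A : public std::enable_shared_from_this<A> {
public:
    void doOperation(const std::shared_ptr<A>& other) {
        // Compare identities: is 'other' the same object as *this?
        if (other != shared_from_this()) {
            // ... copy state from *other ...
        }
        // ... continue the operation ...
    }
};

int main() {
    auto a = std::make_shared<A>();
    auto b = std::make_shared<A>();
    a->doOperation(b);  // different objects
    a->doOperation(a);  // same object, the copy step is skipped
}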
That's because this is a pointer in C++. If your function signature used a pointer as well, it would work:
void object::doOperation(object* a)
{
    if(a != this)
    {
        set(*a);
    }
    doOperation();
}
In Java most objects are passed around as references. To avoid aliasing problems you may then need to check for reference equality: are these two apparently distinct objects, really distinct, or do the references refer to the same object?
In C++ objects are often passed as values, copying their values. And for values it doesn't make sense to check for object identity. E.g. a function argument passed by value, as in your object::doOperation(object a) example, will always have an address different from everything else at that point in the program execution (it's freshly allocated).
Still there are some cases where objects are passed by reference (or pointer), and where self-check is appropriate.
For example, a copy assignment operator might go like this:
auto My_class::operator=( My_class const& other )
-> My_class&
{
if( &other != this )
{
values_ = other.values_; // Avoid this work for self-assign.
}
return *this;
}
The self-check can also be crucial for correctness, although with use of standard library containers and smart pointers correctness can usually be ensured without any self-check.
If an object has been passed by value as in
void object::doSomething(object x)
{
// whatever
}
then it is not necessary to compare with this. Even if the caller does
some_object.doSomething(some_object);
the x is a temporary copy, so a different object is guaranteed.
If the argument is passed by pointer or by reference, then remember that this is a pointer and not a reference (unlike Java, in which those concepts are entwined), for example:
void object::doSomething(object *x)
{
if (this != x)
{
}
}
and
void object::doSomething(object &x)
{
if (this != &x)
{
}
}
The latter assumes that object does not have an interfering operator&(). If that assumption is invalid, then in C++11 use std::addressof(x) (where addressof() is declared in <memory>). Before C++11, the tricks to get the address of x are a little more indirect (e.g. a sequence of casts).
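A minimal sketch of that variant, assuming C++11 and the same hypothetical object class as above:
#include <memory>  // std::addressof

void object::doSomething(object &x)
{
    // robust even if object overloads operator&()
    if (this != std::addressof(x))
    {
        // ...
    }
}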
Personally, I don't do such tests at all. Instead, I simply do
void object::doSomething(object &x)
{
object temp(x);
// do things with temp and *this
std::swap(x, temp);
}
which relies on working copy semantics, but also gives more exception safety. If a class makes the above prohibitive, then that is more usually a problem with class design (better to find another way to avoid the need to compare with this).

Why do std::function instances have a default constructor?

This is probably a philosophical question, but I ran into the following problem:
If you define an std::function, and you don't initialize it correctly, your application will crash, like this:
typedef std::function<void(void)> MyFunctionType;
MyFunctionType myFunction;
myFunction();
If the function is passed as an argument, like this:
void DoSomething (MyFunctionType myFunction)
{
myFunction();
}
Then, of course, it also crashes. This means that I am forced to add checking code like this:
void DoSomething (MyFunctionType myFunction)
{
if (!myFunction) return;
myFunction();
}
Requiring these checks gives me a flash-back to the old C days, where you also had to check all pointer arguments explicitly:
void DoSomething (Car *car, Person *person)
{
if (!car) return; // In real applications, this would be an assert of course
if (!person) return; // In real applications, this would be an assert of course
...
}
Luckily, we can use references in C++, which prevents me from writing these checks (assuming that the caller didn't pass a dereferenced nullptr to the function):
void DoSomething (Car &car, Person &person)
{
// I can assume that car and person are valid
}
So, why do std::function instances have a default constructor? Without a default constructor you wouldn't have to add checks, just like for other, normal arguments of a function.
And in those 'rare' cases where you want to pass an 'optional' std::function, you can still pass a pointer to it (or use boost::optional).
True, but this is also true for other types. E.g. if I want my class to have an optional Person, then I make my data member a Person-pointer. Why not do the same for std::functions? What is so special about std::function that it can have an 'invalid' state?
It does not have an "invalid" state. It is no more invalid than this:
std::vector<int> aVector;
aVector[0] = 5;
What you have is an empty function, just like aVector is an empty vector. The object is in a very well-defined state: the state of not having data.
Now, let's consider your "pointer to function" suggestion:
void CallbackRegistrar(..., std::function<void()> *pFunc);
How do you have to call that? Well, here's one thing you cannot do:
void CallbackFunc();
CallbackRegistrar(..., CallbackFunc);
That's not allowed because CallbackFunc is a function, while the parameter type is a std::function<void()>*. Those two are not convertible, so the compiler will complain. So in order to do the call, you have to do this:
void CallbackFunc();
CallbackRegistrar(..., new std::function<void()>(CallbackFunc));
You have just introduced new into the picture. You have allocated a resource; who is going to be responsible for it? CallbackRegistrar? Obviously, you might want to use some kind of smart pointer, so you clutter the interface even more with:
void CallbackRegistrar(..., std::shared_ptr<std::function<void()>> pFunc);
That's a lot of API annoyance and cruft, just to pass a function around. The simplest way to avoid this is to allow std::function to be empty. Just like we allow std::vector to be empty. Just like we allow std::string to be empty. Just like we allow std::shared_ptr to be empty. And so on.
To put it simply: std::function contains a function. It is a holder for a callable type. Therefore, there is the possibility that it contains no callable type.
Actually, your application should not crash.
§ 20.8.11.1 Class bad_function_call [func.wrap.badcall]
1/ An exception of type bad_function_call is thrown by function::operator() (20.8.11.2.4) when the function wrapper object has no target.
The behavior is perfectly specified.
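A minimal sketch of that specified behaviour, assuming a default-constructed (empty) wrapper:
#include <functional>
#include <iostream>

int main()
{
    std::function<void()> f;   // empty: no target
    try {
        f();                   // throws std::bad_function_call rather than crashing
    } catch (const std::bad_function_call& e) {
        std::cout << "caught: " << e.what() << '\n';
    }
}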
One of the most common use cases for std::function is to register callbacks, to be called when certain conditions are met. Allowing for uninitialized instances makes it possible to register callbacks only when needed; otherwise you would be forced to always pass at least some sort of no-op function.
The answer is probably historical: std::function is meant as a replacement for function pointers, and function pointers had the capability to be NULL. So, when you want to offer easy compatibility to function pointers, you need to offer an invalid state.
The identifiable invalid state is not really necessary since, as you mentioned, boost::optional does that job just fine. So I'd say that std::function's empty state is just there for the sake of history.
There are cases where you cannot initialize everything at construction (for example, when a parameter depends on the effect of another construction that in turn depends on the effect of the first ...).
In these cases, you necessarily have to break the loop, admitting an identifiable invalid state to be corrected later.
So you construct the first as "null", construct the second element, and then reassign the first.
You can actually avoid checks if, wherever the function is used, you guarantee that the constructor of the object that embeds it only returns after a valid reassignment.
In the same way that you can add a null state to a functor type that doesn't have one, you can wrap a functor with a class that does not admit a null state. The former requires adding state; the latter does not require new state (only a restriction). Thus, while I don't know the rationale of the std::function design, it supports the most lean and mean usage, no matter what you want.
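A minimal sketch of that second idea, wrapping a functor in a class that never admits a null state; non_null_function is a made-up name, not a standard component:
#include <functional>
#include <stdexcept>
#include <utility>

// Hypothetical wrapper: it can only be constructed from a non-empty callable,
// so the call site never needs an emptiness check.
template <typename R, typename... Args>
class non_null_function {
public:
    explicit non_null_function(std::function<R(Args...)> f)
        : f_(std::move(f))
    {
        if (!f_) throw std::invalid_argument("callable must not be empty");
    }

    R operator()(Args... args) const
    {
        return f_(std::forward<Args>(args)...);
    }

private:
    std::function<R(Args...)> f_;
};

int main()
{
    non_null_function<void, int> cb([](int x) { (void)x; /* handle x */ });
    cb(42);  // no emptiness check needed here
}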
If you just use std::function for callbacks, you can use a simple template helper function that forwards its arguments to the handler if it is not empty:
template <typename Callback, typename... Ts>
void SendNotification(const Callback & callback, Ts&&... vs)
{
if (callback)
{
callback(std::forward<Ts>(vs)...);
}
}
And use it in the following way:
std::function<void(int, double)> myHandler;
...
SendNotification(myHandler, 42, 3.15);

What is the use of passing const references to primitive types?

In a project I maintain, I see a lot of code like this for simple get/set methods
const int & MyClass::getFoo() { return m_foo; }
void MyClass::setFoo(const int & foo) { m_foo = foo; }
What is the point in doing that instead of the following?
int MyClass::getFoo() { return m_foo; } // Removed 'const' and '&'
void MyClass::setFoo(const int foo) { m_foo = foo; } // Removed '&'
Passing a reference to a primitive type should require the same (or more) effort as passing the type's value itself, right?
It's just a number after all...
Is this just some attempted micro-optimization or is there a true benefit?
The difference is that if you store the result in a reference yourself, you can track changes to the integer member variable through your own variable name without calling the function again.
const int &x = myObject.getFoo();
cout<<x<<endl;
//...
cout<<x<<endl;//x might have changed
It's probably not the best design choice, and it is dangerous to return a reference (const or not) in case the referred-to variable goes out of scope. So if you return a reference, be careful to make sure it is not a reference to a variable that goes out of scope.
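For illustration, a small made-up example of that danger (badGetFoo is not from the question):
const int& badGetFoo()
{
    int local = 42;
    return local;   // dangling: 'local' is destroyed when the function returns
}
// Any use of badGetFoo()'s result is undefined behaviour.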
There is a slight difference for the modifier too, but again probably not something that is worth doing or that was intended.
void test1(int x)
{
cout<<x<<endl;//prints 1
}
void test2(const int &x)
{
cout<<x<<endl;//prints 1 or something else possibly, another thread could have changed x
}
int main(int argc, char**argv)
{
int x = 1;
test1(x);
//...
test2(x);
return 0;
}
So the end result is that, with a reference, you can observe changes made even after the parameter was passed.
To me, passing a const reference for primitives is a mistake. Either you need to modify the value, and in that case you pass a non-const reference, or you just need to access the value and in that case you pass a const.
Const references should only be used for complex classes, when copying objects could be a performance problem. In the case of primitives, unless you need to modify the value of the variable you shouldn't pass a reference. The reason is that references take more computation time than non-references, since with references, the program needs to look up in a table to find the address of the object. When this look-up time is shorter than the copying time, references are an improvement.
Generally, ints and addresses have the same byte length in low-level implementations. So the time of copying an int as a return value for a function is equivalent to the time of copying an address. But in the case where an int is returned, no look up is performed, therefore performance is increased.
The main difference between returning a value and returning a const reference is that you then can const_cast that reference and alter the value.
It's an example of bad design and an attempt to create a smart design where easy and concise design would be more than enough. Instead of just returning a value the author makes readers of code think what intention he might have had.
There is not much benefit. I have seen this in framework or macro generated getters and setters before. The macro code did not distinguish between primitive and non-POD types and just used const type& across the board for setters. I doubt that it is an efficiency issue or a genuine misunderstanding; chances are this is a consistency issue.
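For illustration, a hypothetical sketch of such macro-generated accessors, which use const T& across the board regardless of whether T is a primitive type:
#include <string>

// Hypothetical macro, not from any particular framework.
#define DEFINE_PROPERTY(Type, Name)                                   \
private:                                                              \
    Type m_##Name;                                                    \
public:                                                               \
    const Type& get##Name() const { return m_##Name; }                \
    void set##Name(const Type& value) { m_##Name = value; }

class Widget {
    DEFINE_PROPERTY(int, Foo)          // generates const int& getFoo() and setFoo(const int&)
    DEFINE_PROPERTY(std::string, Name) // same shape for a non-POD type
};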
I think this type of code is written by people who have misunderstood the concept of references and use them for everything, including primitive data types. I've also seen code like this and can't see any benefit in doing it.
There is no point or benefit, except that with
void MyClass::setFoo(const int foo)
void MyClass::setFoo(const int& foo)
you won't be able to reuse (reassign) the foo variable inside the setFoo implementation. And I believe that the 'int&' is just because the author got used to passing everything by const reference, and there is nothing really wrong with that.

Why is it preferable to write func( const Class &value )?

Why would one use func( const Class &value ) rather than just func( Class value )? Surely modern compilers will do the most efficient thing using either syntax. Is this still necessary or just a hold over from the days of non-optimizing compilers?
Just to add, gcc will produce similar assembler code output for either syntax. Perhaps other compilers do not?
Apparently, this is just not the case. I had the impression from some code long ago that gcc did this, but experimentation proves this wrong. Credit is due to Michael Burr, whose answer to a similar question would be the one nominated had it been given here.
There are two large semantic differences between the two signatures.
The first is the use of & in the type name. This signals that the value is passed by reference. Removing this causes the object to be passed by value, which essentially passes a copy of the object into the function (via the copy constructor). For operations which simply need to read data (typical for a const &), doing a full copy of the object creates unnecessary overhead. For classes which are not small or are collections, this overhead is not trivial.
The second is the use of const. This prevents the function from accidentally modifying the contents of value via the value reference. It allows the caller some measure of assurance that the value will not be mutated by the function. Yes, passing a copy gives the caller a much deeper assurance of this in many cases.
The first form doesn't create a copy of the object; it just passes a reference (pointer) to the existing object. The second form creates a copy, which can be expensive. This isn't something that is optimized away: there are semantic differences between having a copy of an object vs. having the original, and copying requires a call to the class's copy constructor.
For very small classes (like <16 bytes) with no copy constructor it is probably more efficient to use the value syntax rather than pass references. This is why you see void foo(double bar) and not void foo(const double &bar). But in the interests of not micro-optimizing code that doesn't matter, as a general rule you should pass all real-deal objects by reference and only pass built-in types like int and void * by value.
There is a huge difference which nobody has mentioned yet: object slicing. In some cases, you may need const& (or &) to get correct behavior.
Consider another class Derived which inherits from Class. In client code, you create an instance of Derived which you pass to func(). If you have func(const Class&), that same instance will get passed. As others have said, func(Class) will make a copy; you will have a new (temporary) instance of Class (not Derived) in func.
This difference in behavior (not performance) can be important if func in turn does a downcast. Compare the results of running the following code:
#include <cstdio>
#include <typeinfo>

struct Class
{
    virtual void Foo() {}
};
class Derived : public Class {};

void f(const Class& value)
{
    printf("f()\n");
    try
    {
        const Derived& d = dynamic_cast<const Derived&>(value);
        (void)d;
        printf("dynamic_cast<>\n");
    }
    catch (const std::bad_cast&)
    {
        fprintf(stderr, "bad_cast\n");
    }
}
void g(Class value)
{
    printf("g()\n");
    try
    {
        const Derived& d = dynamic_cast<const Derived&>(value);
        (void)d;
        printf("dynamic_cast<>\n");
    }
    catch (const std::bad_cast&)
    {
        fprintf(stderr, "bad_cast\n");
    }
}
int main()
{
    Derived d;
    f(d);   // dynamic_cast succeeds: the Derived object itself was passed by reference
    g(d);   // bad_cast: the by-value parameter is a sliced copy of type Class
    return 0;
}
Surely modern compilers will do the most efficient thing using either syntax
The compiler doesn't compile what you "mean", it compiles what you tell it to. Compilers are only smart for lower-level optimizations and problems the programmer overlooks (such as loop-invariant computation inside a for loop, dead code, etc.).
What you tell the compiler to do in the second example is to make a copy of the class - which it will do without thinking - even if you didn't use it; that's what you asked the compiler to do.
The first example explicitly asks the compiler to use the same variable - conserving space and precious cycles (no copy is needed). The const is there to guard against mistakes - since Class &value can be written to (sometimes that's desired).
Here are the differences between some parameter declarations:
                            copied   out   modifiable
func(Class value)             Y       N        Y
func(const Class value)       Y       N        N
func(Class &value)            N       Y        Y
func(const Class &value)      N       N        N
where:
copied: a copy of the input parameter is made when the function is called
out: value is an "out" parameter, which means modifications made within func() will be visible outside the function after it returns
modifiable: value can be modified within func()
So the differences between func(Class value) and func(const Class &value) are:
The first one makes a copy of the input parameter (by calling the Class copy constructor), and allows code inside func() to modify value
The second one does not make a copy, and does not allow code inside func() to modify value
If you use the latter, and then try to change value by accident, the compiler will give you an error.
If you use the former, and then try to change value, it won't.
Thus the latter makes it easier to catch mistakes.
The first example is pass by reference. Rather than pass a copy of the object, C++ will pass a reference to the existing object (generally, references are implemented with pointers, so what actually gets passed is likely the size of a pointer, typically 4 or 8 bytes). In the second example, the object is passed by value... if it is a big, complex object then it's likely a fairly heavyweight operation, as it involves copy construction of a new Class.
The reason that an optimizing compiler can't handle this for you is the issue of separate compilation. In C++, when the compiler is generating code for a caller, it may not have access to the code of the function itself. The most common calling convention that I know of has the caller invoke the copy constructor, which means it's not possible, when compiling the function itself, to avoid the copy construction even when it isn't necessary.
The only time that passing a parameter by value is preferable is when you are going to copy the parameter anyway.
std::string toUpper( const std::string &value ) {
    std::string retVal(value);
    std::transform(retVal.begin(), retVal.end(), retVal.begin(), charToUpper());
    return retVal;
}
Or
std::string toUpper( std::string value ) {
    std::transform(value.begin(), value.end(), value.begin(), charToUpper());
    return value;
}
In this case the second version is the same speed as the first if the argument is an lvalue, but faster if the argument is an rvalue.
Although most compilers will do this optimisation already, I don't expect to rely on this feature till C++0x, especially since I expect it could confuse most programmers, who would probably change it back.
See "Want Speed? Pass by Value." for a better explanation than I could give.