Is it in any way possible (without using Boost) to have a function in C++ as follows:
void foo(int x, classX &obj = *(new classX()))
The classX is used multiple times in my codebase and there are many such functions which have a similar signature (i.e. use this class object as a default argument). Is it possible to achieve this without an overloaded call?
The code that you provided certainly compiles and "works", but I strongly advise against such a thing.
The function returns void, so neither the referenced nor the allocated object (nor any owner of it) leaves the function. If the object was allocated, it must therefore be destroyed inside the function (otherwise that becomes someone else's problem, outside the function).
However, that isn't even possible: nobody owns the object, and nobody even has a pointer to it! So not only do you have a possible memory leak there, you have a guaranteed memory leak (whenever no object is passed), unless you add yet another ugly hack that derives a pointer from the reference only to destroy the object. That's very unpleasant.
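To make that concrete, here is a sketch (with a stand-in definition of classX, which is really the asker's class) of why the cleanup is so awkward: inside foo there is no way to tell whether obj refers to the default-constructed object or to something the caller owns.
struct classX {}; // stand-in for the asker's class

void foo(int x, classX &obj = *(new classX()))
{
    // ... use obj ...

    // The "ugly hack" would be something like the following, but it is only
    // correct when the default argument was actually used, and disastrous
    // when the caller passed its own (possibly stack-allocated) object:
    //delete &obj;
}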
Plus, even if you get this done (in a no-leak way), you have a useless object allocation and destruction for every call that uses the default argument. One shouldn't optimize prematurely, but one also shouldn't pessimize prematurely by adding regular allocations and deallocations that are not just unnecessary but actually decrease code quality.
Something that's better would be:
#include <memory>

//namespace whatever {
classX dummy;
//}

void foo(int x, classX &obj = dummy)
{
    if(std::addressof(obj) != std::addressof(dummy))
    { /* do something using object */ }
    else
    { /* no object supplied */ }
}
Yep, that's a global used for a good cause. You can make the global a singleton if that makes you feel better, or a static class member, all the same. Either way, you have exactly one object, no allocations, no leaks, and you can still pass an object to the function if you wish. And, you can distinguish these two cases.
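For completeness, the "static class member" variant mentioned above could look roughly like this (a sketch only; foo_defaults is a hypothetical holder class, and classX again stands in for the asker's class):
struct classX {}; // stand-in for the asker's class

struct foo_defaults
{
    static classX dummy;
};
classX foo_defaults::dummy; // define the member in exactly one .cpp file

void foo(int x, classX &obj = foo_defaults::dummy);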
Using a global shared instance as the default argument is possible. With a C++17 inline variable
inline classX globalClassX;
void foo(int x, classX &obj = globalClassX);
it can be accomplished.
However, I'm quite uncertain about the static initialization order fiasco which may interfere.
This can be solved by using the Meyers singleton approach instead:
classX& getGlobalClassX()
{
    static classX instance;
    return instance;
}
void foo(int x, classX &obj = getGlobalClassX());
An MCVE for demonstration:
#include <iostream>
#include <string>
struct Object {
    inline static unsigned idGen = 1;

    unsigned id;
    const std::string name;

    explicit Object(const std::string &name = std::string()): id(idGen++), name(name) { }
};

Object& getGlobalObj()
{
    static Object objGlobal("global");
    return objGlobal;
}
void doSomething(int x, Object &obj = getGlobalObj());
#define PRINT_AND_DO(...) std::cout << #__VA_ARGS__ << ";\n"; __VA_ARGS__
int main()
{
    PRINT_AND_DO(doSomething(0));
    PRINT_AND_DO(Object obj("local"));
    PRINT_AND_DO(doSomething(1, obj));
    PRINT_AND_DO(doSomething(2));
}

void doSomething(int x, Object &obj)
{
    std::cout << "doSomething(x: " << x << ", obj: Object("
              << obj.id << ", '" << obj.name << "'))\n";
}
Output:
doSomething(0);
doSomething(x: 0, obj: Object(1, 'global'))
Object obj("local");
doSomething(1, obj);
doSomething(x: 1, obj: Object(2, 'local'))
doSomething(2);
doSomething(x: 2, obj: Object(1, 'global'))
Another Q/A I consulted while writing:
SO: Are static data members safe as C++ default arguments?
I have the following code, which always seems to work (MSVC, GCC and Clang).
But I'm not sure if it is really legal. In my framework my classes may have "two constructors" - one normal C++ constructor which does simple member initialization, and an additional member function "Ctor" which executes additional initialization code. It is used to allow, for example, calls to virtual functions. These calls are handled by a generic allocation/construction function - something like "make_shared".
The code:
#include <iostream>
class Foo
{
public:
    constexpr Foo() : someConstField(){}

public:
    inline void Ctor(int i)
    {
        //use Ctor as real constructor to allow for example calls to virtual functions
        const_cast<int&>(this->someConstField) = i;
    }

public:
    const int someConstField;
};

int main()
{
    //done by a generic allocation function
    Foo f;
    f.Ctor(12); //after this call someConstField is really const!
    //
    std::cout << f.someConstField;
}
Modifying const memory is undefined behaviour. Here that int has already been allocated in const memory by the default constructor.
Honestly I am not sure why you want to do this in the first place. If you want to be able to initialise Foo with an int, just create an overloaded constructor:
...
constexpr Foo(int i) : someConstField{i} {}
This is completely legal: you are initialising the const memory when it is created, and all is good.
If for some reason you want to have your object initialised in two stages (which without a factory function is not a good idea) then you cannot, and should not, use a const member variable. After all, if it could change after the object was created then it would no longer be const.
As a general rule of thumb you shouldn't have const member variables since it causes lots of problems with, for example, moving an object.
When I say "const memory" here, what I mean is memory that is const-qualified by the rules of the language. So while the memory itself may or may not be writable at the machine level, it really doesn't matter: the compiler will do whatever it likes (generally it just ignores any writes to that memory, but since this is UB it could do literally anything).
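To make the rule of thumb above concrete, here is a minimal sketch (with a hypothetical type name) showing that a const member deletes the implicitly declared copy and move assignment operators:
#include <utility> // std::move (used only in the commented-out line)

struct WithConstMember
{
    const int value;
};

int main()
{
    WithConstMember a{1};
    WithConstMember b{2};
    //a = b;            // error: copy assignment operator is implicitly deleted
    //a = std::move(b); // error: move assignment operator is implicitly deleted
    return a.value + b.value;
}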
No.
It is undefined behaviour to modify a const value. The const_cast itself is fine, it's the modification that's the problem.
According to 7.1.6.1 in the C++17 standard:
Except that any class member declared mutable (7.1.1) can be modified, any attempt to modify a const
object during its lifetime (3.8) results in undefined behavior.
And there is an example (similar to yours, except not for class member):
const int* ciq = new const int (3); // initialized as required
int* iq = const_cast<int*>(ciq); // cast required
*iq = 4; // undefined: modifies a const object
If your allocation function allocates raw memory, you can use placement new to construct an object at that memory location. With this you must remember to call the destructor of the object before freeing the allocation.
Small example using malloc:
#include <cstdlib> // std::malloc, std::free
#include <new>     // placement new

class Foo
{
public:
    constexpr Foo(int i) : someConstField(i){}
public:
    const int someConstField;
};

int main()
{
    void *raw_memory = std::malloc(sizeof(Foo));
    Foo *foo = new (raw_memory) Foo{3}; // foo->someConstField == 3
    // ...
    foo->~Foo();
    std::free(foo);
}
I suggest that you use the constructor to avoid the const_cast. You commented that after your call of Ctor the value of someConstField will remain const. Just set it in the constructor and you will have no problems, and your code becomes more readable.
#include <iostream>

class Foo
{
public:
    Foo(int i) : someConstField(Ctor(i)) {}

    int Ctor(int i); // to be defined in the implementation

    const int someConstField;
};

int main()
{
    Foo f(12);
    std::cout << f.someConstField;
}
I have defined a class that holds a reference to a list of static functions which are defined
outside of the class. These functions each take a pointer to that particular instance of the class as an argument.
I'm passing the this pointer to the functions. But that doesn't seem right to me.
Is there a better way?
The code below is a simplified version:
#include <iostream>
#include <map>

class A
{
public:
    typedef void (*action_func)(A*);
    typedef std::map<int, action_func> func_map;

    A(func_map the_map)
        :m_map(the_map)
    {}

    void execute_action(int action_id)
    {
        auto it = m_map.find(action_id);
        if(it != m_map.end())
        {
            auto cmd = it->second;
            cmd(this);
        }
    }

private:
    func_map& m_map;
};

static void function_1(A* ptrToA)
{
    std::cout << "This is function_1\n";
}

static void function_2(A* ptrToA)
{
    std::cout << "This is function_2\n";
}

static A::func_map functions =
{
    {1, function_1},
    {2, function_2}
};

int main()
{
    A obj(functions);
    obj.execute_action(1);
    obj.execute_action(2);
    return 0;
}
The output of the above is this:
This is function_1
This is function_2
When storing references and pointers to things outside the class it is important to reason about their lifetimes. Generally, the thing being referenced should outlive the thing that references it.
In your particular case, static A::func_map functions has static storage duration; that is, it is created before main() starts and is destroyed after main() ends.
So you can safely use it inside A obj, which lives within the scope of main():
int main()
{
A obj(functions);
. . .
}
However, the constructor of A isn't storing a reference to that global map - it takes the map by value and stores a reference to its own temporary copy instead:
A(func_map the_map)
:m_map(the_map)
{}
func_map& m_map;
What's worse, the temporary copy lives only until the end of the full-expression, i.e. until the end of A obj(functions);. So if you use it after that you will be accessing a dangling reference (undefined behavior).
To fix that, change it to a pass by-reference:
A(func_map& the_map)
:m_map(the_map)
{}
func_map& m_map;
Now there's no issue.
I'm passing the this pointer to the functions. But that doesn't seem right to me.
The same lifetime reasoning applies - if the thing where you pass this into doesn't use it for longer than this is alive, then technically there is no issue. In your case the function calls are synchronous, so by definition this is alive during each function call.
Whether or not it's "right" from a design perspective is impossible to say from the provided example. There could be better solutions, but there are also design patterns (e.g. Strategy Pattern) that are based on passing a reference to self around. So in the end it's a design choice.
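For illustration, one alternative shape (a sketch only, not necessarily better) keeps the actions as member functions, so nothing needs to take an A* at all; the map then stores pointers to member functions and execute_action invokes them on this:
#include <iostream>
#include <map>

class A
{
public:
    typedef void (A::*action_func)();
    typedef std::map<int, action_func> func_map;

    explicit A(const func_map& the_map) : m_map(the_map) {}

    void execute_action(int action_id)
    {
        auto it = m_map.find(action_id);
        if (it != m_map.end())
            (this->*(it->second))(); // call the member function on *this
    }

    void action_1() { std::cout << "This is action_1\n"; }
    void action_2() { std::cout << "This is action_2\n"; }

private:
    const func_map& m_map;
};

static const A::func_map actions =
{
    {1, &A::action_1},
    {2, &A::action_2}
};

int main()
{
    A obj(actions);
    obj.execute_action(1);
    obj.execute_action(2);
}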
I have written the following sample code:
#include <iostream>
class B
{
    int Value;
public:
    B(int V) : Value(V) {}
    int GetValue(void) const { return Value; }
};

class A
{
    const B& b;
public:
    A(const B &ObjectB) : b(ObjectB) {}
    int GetValue(void) { return b.GetValue(); }
};

B b(5);
A a1(B(5));
A a2(b);
A a3(B(3));

int main(void)
{
    std::cout << a1.GetValue() << std::endl;
    std::cout << a2.GetValue() << std::endl;
    std::cout << a3.GetValue() << std::endl;
    return 0;
}
Compiled with mingw-g++ and executed on Windows 7, I get
6829289
5
1875385008
So, what I get from the output is that the two anonymous objects are destroyed as soon as the initialization has completed, even though they are used in a global context.
From this comes my question: is there a way to be sure that a const reference stored in a class will always refer to a valid object?
One thing you can do in class A:
A(B&&) = delete;
That way, the two lines that try to construct an A from a B temporary will fail to compile.
That obviously won't prevent you from providing some other B object with a lifetime shorter than the A object referencing it, but it's a step in the right direction and may catch some accidental/careless abuses. (Other answers already discuss design scenarios and alternatives, so I won't cover that same ground; whether references can be made safe(r) in the illustrated scenario is an important question in its own right.)
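In the context of the question's classes, a sketch of where the deleted overload goes; only the A(B&&) = delete; line is new relative to the question's code:
class B
{
    int Value;
public:
    B(int V) : Value(V) {}
    int GetValue(void) const { return Value; }
};

class A
{
    const B& b;
public:
    A(const B &ObjectB) : b(ObjectB) {}
    A(B&&) = delete; // reject temporaries outright
    int GetValue(void) { return b.GetValue(); }
};

B b(5);
A a2(b);       // still fine: b is an lvalue that outlives a2
//A a1(B(5));  // now a compile error instead of a dangling reference
//A a3(B(3));  // likewise

int main()
{
    return a2.GetValue() == 5 ? 0 : 1;
}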
No, there is not. Remember that references are pointers under the hood, and normally don't control the lifetime of the object they reference (see here for an exception, although it doesn't apply in this case). I would recommend just having a B object, if this is in a piece of code that you need.
Also, you could utilize an object such as a std::shared_ptr from C++11, which destroys the managed object only once both the pointer in the function and the one in the object have been destroyed.
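A small sketch of that shared_ptr-based alternative (assuming C++11 and reusing the question's class names): A stores a std::shared_ptr<B> instead of a reference, so the B it uses cannot go away before A does.
#include <iostream>
#include <memory>
#include <utility>

class B
{
    int Value;
public:
    B(int V) : Value(V) {}
    int GetValue(void) const { return Value; }
};

class A
{
    std::shared_ptr<B> b;
public:
    A(std::shared_ptr<B> ObjectB) : b(std::move(ObjectB)) {}
    int GetValue(void) { return b->GetValue(); }
};

int main()
{
    A a1(std::make_shared<B>(5)); // no temporary-lifetime problem: A shares ownership
    std::cout << a1.GetValue() << std::endl; // prints 5
}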
I have some methods that take a reference to a given object, and some that take a boost::shared_ptr. So far, in my test method, I create a shared_ptr pointing to one of these objects and pass *ptr to the methods expecting a reference. Is it possible to do it the other way round, e.g. create a local object on the stack and then create a shared pointer to it in a safe way, to arrive at the straightforward alternative to the &obj operator with traditional pointers?
If you find you need this, something is probably horribly wrong with your code.
If the functions take a shared pointer, it should be because they need to extend the lifetime of the object. If they don't need to extend the lifetime of the object, they should take a reference.
With what you're doing, they can't extend the lifetime of the object. If they need to, and can't, they may wind up accessing an object that has gone out of scope through a copy of the shared pointer you passed them. Boom.
It's slightly possible this might make sense. It may be that they need to extend the lifespan but you will make sure that the object remains valid longer than the longest they might possibly need to extend it. But I'd still strongly suggest not doing this. It's incredibly fragile and makes all the code you call dependent on exactly how the calling code behaves.
#include <boost/shared_ptr.hpp>
void null_deleter(int *)
{}
int main()
{
int i = 0;
boost::shared_ptr<int> p(&i, &null_deleter);
}
You can pass an appropriate deleter in the constructor of the form:
template<class Y, class D> shared_ptr(Y * p, D d);
The deleter object must do nothing in its operator(), such as the function:
template <typename T>
void no_op(T*) {}
with which you can then construct:
boost::shared_ptr<Foo> f(&obj, no_op<Foo>);
You can use a C++11 lambda function:
boost::shared_ptr<Foo> f(&obj, [](Foo*){});
You can pass null_deleter in the constructor.
#include <boost/shared_ptr.hpp>
#include <boost/core/null_deleter.hpp>
int main()
{
int a = 0;
boost::shared_ptr<int> pi(&a, boost::null_deleter());
}
But watch out for this case - using the object after destruction:
#include <boost/shared_ptr.hpp>
#include <boost/core/null_deleter.hpp>
#include <cstdint>
#include <iostream>

class Y
{
public:
    void tryUse()
    {
        std::cout << "attempt to use :" << (std::uintptr_t)(void*)this << std::endl;
    }

    ~Y()
    {
        std::cout << "destructor: " << (std::uintptr_t)(void*)this << std::endl;
    }
};

struct Y_user
{
    boost::shared_ptr<Y> p;

    ~Y_user()
    {
        std::cout << "Y_user destructor: " << (std::uintptr_t)(void*)this << std::endl;
        if (p.get())
            p->tryUse();
    }
};

int main()
{
    {
        Y_user yu;
        Y y;
        boost::shared_ptr<Y> p (&y, boost::null_deleter() );
        yu.p = p;
    }
}
Will lead to console output like this:
destructor: 140737179995232
Y_user destructor: 140737179995264
attempt to use :140737179995232
I have been stumbling over this problem for a while now. I am trying to return a pointer to an object so I can say
MyObject* obj = Manager::Create(int i, int j);
How do I correctly allocate memory so there are no leaks? I thought I was supposed to call new to make the memory on the heap but I have recently been told otherwise.
“I have been stumbling over this problem for a while now. I am trying to return a pointer to an object so I can say
MyObject* obj = Manager::Create(int i, int j);
How do I correctly allocate memory so there are no leaks? I thought I was supposed to call new to make the memory on the heap but I have recently been told otherwise.”
Since you’re asking about how to allocate memory, and since that happens inside Manager::Create, the only reasonable interpretation I can see is that you’re the one writing the Manager::Create function.
So, first of all, do you really need a factory, and what, if anything, is the “manager” actually managing?
It is my impression that people coming from a Java background have a strong tendency to add needless dynamic allocation and factories and “managers” and singletons and envelope patterns and whatnot, which are generally ungood in C++.
Don’t.
For example, if your obj is only needed in a local scope, use automatic storage (stack based allocation and deallocation), which can be orders of magnitude more efficient than Java-like dynamic allocation:
MyObject obj( i, j );
If this is applicable, then the question “How do I correctly allocate memory so there are no leaks” has a very simple answer in C++: just declare the variable, as above.
This applies even if you have to return such an object from a function. Then just return by value (apparently copying the object). For example as with the foo::reduce function below,
#include <iostream> // std::cout, std::endl
#include <string>   // std::string, std::to_string

namespace foo {
using namespace std;

class MyObject
{
private:
    string description_;
public:
    string description() const { return description_; }

    MyObject( int const x, int const y )
        : description_( "(" + to_string( x + 0LL ) + ", " + to_string( y + 0LL ) + ")" )
    {}
};

ostream& operator<<( ostream& stream, MyObject const& o )
{
    return stream << o.description();
}

int gcd( int a, int b )
{
    return (b == 0? a : gcd( b, a % b ));
}

MyObject reduce( int const a, int const b )
{
    int const gcd_ab = gcd( a, b );
    return MyObject( a/gcd_ab, b/gcd_ab );
}
} // namespace foo

int main()
{
    using namespace foo;
    int const a = 42;
    int const b = 36;

    cout << MyObject( a, b ) << " -> " << reduce( a, b ) << endl;
}
Now let’s see how this concise, simple and efficient code can be made verbose, complex and inefficient by introducing needless dynamic allocation. I write “needless” because most of the traditional reasons for dynamic allocation have been obviated by the containers etc. of the standard C++ library and the facilities of the C++ language. For example, where previously you might have used pointers to avoid costly copying, and consequently got into the question “I need to clean up, but was the object allocated with automatic storage or dynamically?”, with C++03 you could use smart pointers such as boost::shared_ptr to automate proper destruction regardless of the object’s origins, and with C++11 you can use move semantics to largely avoid the copying inefficiencies, so that the problem does not pop up in the first place.
So, the use of dynamic allocation in the code below is, with modern C++, very artificial and contrived; it does not have any practical advantage.
But with this dynamic allocation, even if for this example it is artificial and contrived, proper deallocation must be guaranteed, and the general way to do that is to use a smart pointer such as std::unique_ptr:
#include <iostream> // std::cout, std::endl
#include <memory>   // std::unique_ptr, std::default_delete
#include <string>   // std::string, std::to_string

namespace foo {
using namespace std;

class MyObject
{
friend struct default_delete<MyObject>;
private:
    string description_;
protected:
    virtual ~MyObject() {} // Restricts class to dynamic allocation only.
public:
    typedef unique_ptr<MyObject> Ptr;

    string description() const { return description_; }

    MyObject( int const x, int const y )
        : description_( "(" + to_string( x + 0LL ) + ", " + to_string( y + 0LL ) + ")" )
    {}
};

ostream& operator<<( ostream& stream, MyObject const& obj )
{
    return stream << obj.description();
}

int gcd( int a, int b )
{
    return (b == 0? a : gcd( b, a % b ));
}

MyObject::Ptr reduce( int const a, int const b )
{
    int const gcd_ab = gcd( a, b );
    return MyObject::Ptr( new MyObject( a/gcd_ab, b/gcd_ab ) );
}
} // namespace foo

int main()
{
    using namespace foo;
    int const a = 42;
    int const b = 36;

    MyObject::Ptr const pData( new MyObject( a, b ) );
    MyObject::Ptr const pResult( reduce( a, b ) );

    cout << *pData << " -> " << *pResult << endl;
}
Note the protected destructor and the friend declaration of the code that calls the destructor. The protected destructor ensures that no static or automatic instances can be created, that only dynamic allocation can be used (e.g., one might impose this restriction just to make it easier to implement the class and to ensure that no automatic object is linked into a dynamic data structure). The friend declaration makes the protected destructor accessible to the standard library’s object destruction function – which, however, is unfortunately only used by unique_ptr.
I’m mentioning and exemplifying this because you have a factory function, which is sometimes used to restrict a class to dynamic allocation (especially by folks coming from Java). A factory function is ungood for this purpose, because with n constructors you need n factory functions. In contrast, with the protected destructor you only need a common deleter function, and as shown above, that’s provided by the standard library.
So, upshot, generally the answer to the question “How do I correctly allocate memory so there are no leaks” is to delegate deallocation responsibility to the language or the standard library, or to other library components. First and foremost this means using automatic storage (local variables) and value returns. But when there is a need, it also includes using standard library collection classes such as std::vector and std::map. Only if those do not provide the desired functionality, consider dynamic allocation. And then delegate the deallocation responsibility to smart pointers such as std::unique_ptr and std::shared_ptr.
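A small sketch of that delegation idea (Thing is a hypothetical type): the std::vector owns its elements and the std::unique_ptr owns the single dynamically allocated object, so no manual delete appears anywhere.
#include <memory>
#include <vector>

struct Thing { int x; };

int main()
{
    std::vector<Thing> things(10);           // elements destroyed automatically
    std::unique_ptr<Thing> t(new Thing{42}); // deleted automatically
    return (things.size() == 10 && t->x == 42) ? 0 : 1;
}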
I thought I was supposed to call new to make the memory on the heap but I have recently been told otherwise.
Well... I think you're a bit confused. Yes, you do call new to dynamically allocate memory. However, there are common patterns (see: RAII) that are used to avoid it at all costs, as it is an easy way to shoot yourself in the foot (read: write bugs).
At some point, something has to call new in order to dynamically allocate memory. First, though, you need to ask yourself: does this need to be dynamically allocated? If the answer is no, then declare it like so and move on.
void Foo() {
MyObject obj; // automatic storage space, will be cleaned up when the scope is left
}
Next, why not store the pointer in a std::unique_ptr or something equivalent? That will take care of management for you; again, you don't manage the memory yourself:
// calls delete in the destructor
std::unique_ptr<MyObject> pointer( new MyObject() );
The Boost library has an equivalent class (std::unique_ptr is C++11).
The point is that memory is being managed by the class via its constructor and destructor. You allocate memory dynamically when you create the object and you deallocate it (i.e., call delete) in its destructor. You simply stack allocate these in as tight a scope as possible and you don't have to worry about memory leaks.
I write C++ at work every day and I almost never call new or delete. As a small example, let's examine this class, which is a trivial implementation of a scoped pointer (note: this is not a "correct" implementation and is overly trivial! The real problem is harder to solve than this, but I am using it solely as an example of managing the lifetime of a dynamically allocated object).
template<class T>
class ScopedPointer {
public:
    ScopedPointer(T* obj) {
        m_pointer = obj;
    }

    ~ScopedPointer() {
        delete m_pointer;
    }

    inline T* operator->()
    {
        return( m_pointer );
    }

    inline bool IsValid() const {
        return m_pointer != nullptr;
    }

private:
    T* m_pointer;
};
You may use this class as a wrapper around a pointer to dynamically allocated memory. When it leaves its scope, its destructor will be called and the memory will be cleaned up. Again, this is not production quality code! It is not correct in that it lacks several mechanisms that a real world class would need (copy/ownership semantics, a more advanced deallocator, etc.)
void Foo() {
ScopedPointer<MyObject> ptr( new MyObject() );
ptr->whatever();
} // destructor is called, dynamic memory is freed
Usually you would return a std::unique_ptr with the appropriate deleter, and do something like:
return std::unique_ptr<MyObject, decltype(&Manager::Free)>(Manager::Create(i, j), &Manager::Free);
Or are you trying to write Manager::Create yourself? If so:
std::unique_ptr<MyObject> Manager::Create(int i, int j)
{
return std::unique_ptr<MyObject>(new MyObject(i, j)); // default deleter is appropriate
}
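Hypothetical usage of such a factory (assuming MyObject exists with an (int, int) constructor): ownership is explicit in the return type, and release is automatic.
int main()
{
    std::unique_ptr<MyObject> obj = Manager::Create(1, 2);
    // ... use *obj ...
}   // obj goes out of scope here and the MyObject is deleted automatically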