Hi guys, I just watched a tutorial about the MOVE CONSTRUCTOR (better than a deep copy) and I don't really understand the concept. I'm a beginner, not a pro, so I need your help to understand. Let's start:
First, here is the code from the tutorial:
#include <iostream>
#include <vector>
using namespace std;
class Move {
private:
int *data;
public:
void set_data_value(int d) { *data = d; }
int get_data_value() { return *data; }
// Constructor
Move(int d);
// Copy Constructor
Move(const Move &source);
// Move Constructor
Move(Move &&source) noexcept;
// Destructor
~Move();
};
Move::Move(int d) {
data = new int;
*data = d;
cout << "Constructor for: " << d << endl;
}
// Copy ctor
Move::Move(const Move &source)
: Move {*source.data} {
cout << "Copy constructor - deep copy for: " << *data << endl;
}
// Move ctor
Move::Move(Move &&source) noexcept
: data {source.data} {
source.data = nullptr;
cout << "Move constructor - moving resource: " << *data << endl;
}
OK, HERE IS THE THING: the instructor says "we steal the data and null the pointer", so my question is, what happens when we assign our pointer to nullptr? Is it equal to zero, or can't we reach it any more, or what??? (See the sketch after the code below.)
Move::~Move() {
if (data != nullptr) {
cout << "Destructor freeing data for: " << *data << endl;
} else {
cout << "Destructor freeing data for nullptr" << endl;
}
delete data;
}
int main() {
vector<Move> vec;
vec.push_back(Move{10});
vec.push_back(Move{20});
vec.push_back(Move{30});
vec.push_back(Move{40});
vec.push_back(Move{50});
vec.push_back(Move{60});
vec.push_back(Move{70});
vec.push_back(Move{80});
return 0;
}
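To illustrate the "steal the data and null the pointer" step the question asks about, here is a minimal sketch with raw int pointers (the names data and stolen are just for illustration). Assigning nullptr does not free or change the int; it only makes that particular pointer refer to nothing, and deleting a null pointer is a harmless no-op, which is exactly why the destructor above is safe for moved-from objects.
#include <iostream>

int main() {
    int *data = new int{10};
    int *stolen = data;        // "steal" the resource, like the move constructor does
    data = nullptr;            // the old pointer no longer refers to the int; nothing is freed

    std::cout << *stolen << std::endl;  // the int is still reachable through the new owner
    delete stolen;                      // the heap int is freed exactly once
    delete data;                        // deleting a null pointer does nothing
}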
NULL is more or less a left-over from simpler days, when C++ was much closer to its ancestor C. Since C++ was first standardized (and even before) it was recommended to use 0 for null-pointers. C++11 added nullptr which is a drop-in replacement for both 0 and NULL.
However the type of nullptr is std::nullptr_t which can be useful for templates and function arguments (for overloading).
A really nice book I was recommended recently: Effective Modern C++ by Scott Meyers.
Neither 0 nor NULL has a pointer type.
Consider below code:
void f(int); // three overloads of f
void f(bool);
void f(void*);
f(0); // calls f(int), not f(void*)
f(NULL); // might not compile, but typically calls
// f(int). Never calls f(void*)
nullptr’s actual type is std::nullptr_t
f(nullptr); // calls f(void*) overload
It can also improve code clarity, especially when auto variables are involved.
For example, suppose you encounter this in a code base:
auto result = findRecord( /* arguments */ );
if (result == 0) {}
If you don’t happen to know (or can’t easily find out) what findRecord returns, it may not be clear whether result is a pointer type or an integral type. After all, 0 (what result is tested against) could go either way. If you see the following, on the other hand,
auto result = findRecord( /* arguments */ );
if (result == nullptr) {}
there’s no ambiguity: result must be a pointer type.
nullptr shines especially brightly when templates enter the picture. Suppose you have some functions that should be called only when the appropriate mutex has been locked. Each function takes a different kind of pointer:
int f1(std::shared_ptr<Widget> spw); // call these only when
double f2(std::unique_ptr<Widget> upw); // the appropriate
bool f3(Widget* pw); // mutex is locked
Calling code that wants to pass null pointers could look like this:
std::mutex f1m, f2m, f3m; // mutexes for f1, f2, and f3
using MuxGuard = std::lock_guard<std::mutex>;
{
MuxGuard g(f1m); // lock mutex for f1
auto result = f1(0); // pass 0 as null ptr to f1
} // unlock mutex
{
MuxGuard g(f2m); // lock mutex for f2
auto result = f2(NULL); // pass NULL as null ptr to f2
} // unlock mutex
{
MuxGuard g(f3m); // lock mutex for f3
auto result = f3(nullptr); // pass nullptr as null ptr to f3
} // unlock mutex
The failure to use nullptr in the first two calls in this code is sad, but the code works, and that counts for something. However, the repeated pattern in the calling code—lock mutex, call function, unlock mutex—is more than sad. It’s disturbing. This kind of source code duplication is one of the things that templates are designed to avoid, so let’s templatize the pattern:
template<typename FuncType, typename MuxType, typename PtrType>
auto lockAndCall(FuncType func, MuxType& mutex, PtrType ptr) -> decltype(func(ptr))
{
MuxGuard g(mutex);
return func(ptr);
}
If the return type of this function, auto … -> decltype(func(ptr)), has you scratching your head, note that in C++14 the return type can be reduced to a simple decltype(auto):
template<typename FuncType, typename MuxType, typename PtrType>
decltype(auto) lockAndCall(FuncType func, MuxType& mutex, PtrType ptr)
{
MuxGuard g(mutex);
return func(ptr);
}
Given the lockAndCall template (either version), callers can write code like this:
auto result1 = lockAndCall(f1, f1m, 0); // error!
auto result2 = lockAndCall(f2, f2m, NULL); // error!
auto result3 = lockAndCall(f3, f3m, nullptr); // fine
The fact that template type deduction deduces the “wrong” types for 0 and NULL (i.e., their true types, rather than their fallback meaning as a representation for a null pointer) is the most compelling reason to use nullptr instead of 0 or NULL when you want to refer to a null pointer. With nullptr, templates pose no special challenge. Combined with the fact that nullptr doesn’t suffer from the overload resolution surprises that 0 and NULL are susceptible to, the case is ironclad. When you want to refer to a null pointer, use nullptr, not 0 or NULL.
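The deduction problem can be seen in isolation with a one-template sketch (g and Widget below are hypothetical, not part of the example above): 0 deduces as int and NULL as an integral type, so neither behaves like a pointer inside the template, while nullptr deduces as std::nullptr_t, which converts to any pointer type.
#include <memory>

struct Widget {};

template <typename T>
void g(T ptr) {
    std::shared_ptr<Widget> sp = ptr;   // only compiles if T converts to a pointer type
    (void)sp;
}

int main() {
    g(nullptr);   // T = std::nullptr_t, converts to shared_ptr<Widget>: fine
    // g(0);      // T = int: error, int does not convert to shared_ptr<Widget>
    // g(NULL);   // T = some integral type: same error
}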
Related
I want to make a Dialog handler for my app that will contain a pointer to a method to be invoked when the user answers "yes" and a pointer to a method for "no". The main problem is that these methods can have various arguments, or none at all, so I don't know how to declare this variable.
class Dialog
{
protected:
Dialog()
{
}
static Dialog* singleton;
public:
Dialog(Dialog &other) = delete;
void operator=(const Dialog &) = delete;
static Dialog *instance();
string question;
?? method_yes;
?? method_no;
static bool has_dialog();
static void clear();
};
Dialog* Dialog::singleton = nullptr;
Dialog* Dialog::instance()
{
if (singleton == nullptr) {
singleton = new Dialog();
}
return singleton;
}
bool Dialog::has_dialog()
{
return singleton != nullptr;
}
void Dialog::clear()
{
if (singleton)
{
delete singleton;
singleton = nullptr;
}
}
So there is my class for a dialog with the user. When I want to ask the user something I do
auto yes = []()
{
ExitProcess(0);
};
Dialog::instance()->question = "Do you want to exit?";
Dialog::instance()->method_yes = yes;
And somewhere higher up I have the answer handling
if (Dialog::has_dialog())
// render question and buttons
// if pressed button yes
Dialog::instance()->method_yes();
Dialog::clear();
And what if, for example, I want to manage the exit code? Then my lambda will be
auto yes = [](int code)
{
ExitProcess(code);
};
But then there is a new argument, so I can't just use
void(*method_yes)();
for the declaration.
At the end of the day, C++ is a strongly typed language and you'll have to provide the set of expected possible arguments in your function signature.
Since you don't want that, there are some techniques to circumvent it so let's name a few:
The old (old old) void* trick from C. You declare your function pointer as
void (*fptr)(void* state);
and then you're free to interpret state however you wish in your fptr, e.g. if state==nullptr you can assume there are "no arguments". Note that this approach is not type safe and can cause a lot of headaches if users don't respect the agreed upon protocol.
You bundle all your state in your callable and your function pointer becomes something like std::function<void()>. This way you can write:
std::function<void()> fptr = [code]() { /* ... */ };
This is the nerfed version of the above, meaning your lambdas are now responsible for capturing the state you'd be passing to the function as arguments.
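Applied to the Dialog from the question, option 2 could look roughly like the sketch below (a trimmed-down stand-in for the real Dialog class, with a printout standing in for ExitProcess):
#include <functional>
#include <iostream>
#include <string>

// trimmed-down stand-in for the Dialog in the question
struct Dialog {
    std::string question;
    std::function<void()> method_yes;   // the state lives inside the callable
    std::function<void()> method_no;
};

int main() {
    int code = 3;                       // hypothetical exit code
    Dialog d;
    d.question = "Do you want to exit?";
    d.method_yes = [code]() {           // the int is captured, so the stored signature stays void()
        std::cout << "exiting with code " << code << '\n';
    };
    if (d.method_yes)                   // invoke later; no argument needs to be passed
        d.method_yes();
}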
A pattern I'm using lately involves C++20 designated initializers like so:
#include <iostream>
#include <optional>
#include <string>

struct Argument
{
std::optional<int> code;
std::optional<std::string> name;
std::optional<float> value;
};
void (*fptr)(Argument arg); // Argument is elastic, i.e.
// it can be formed as:
// {} -> no arguments
// {.code=1} -> 1 argument
// {.code=1, .value=2.} -> 2 arguments
// etc
// Fields not mentioned default to
// nullopt, which means you have
// an easy way of telling them apart
int main ()
{
fptr = [](Argument arg) {
std::cout << arg.code.value_or(0) << std::endl;
std::cout << arg.name.value_or("no name") << std::endl;
std::cout << arg.value.value_or(42) << std::endl;
};
fptr({});
std::cout << "-------------\n";
fptr({.name="Garfield"});
std::cout << "-------------\n";
fptr({.code=3, .value=3.14});
std::cout << "-------------\n";
}
This is a type-safe alternative to (1). You declare the expected set of arguments in Argument but since they are optional you can call fptr({}) and mark everything as "non existent" (the no args case) or even initialize one or more arguments explicitly e.g. fptr({.code=3, .value=3.14}). Inside fptr you can inspect whether an optional variable is "filled" and this gives you the freedom to act accordingly (demo).
If all this still seems unattractive, I wrote a post some years ago on how to create overload sets out of lambdas. Essentially the technique allows you to write things like:
auto fptr = overload(
[]{ /*...*/ }, // A
[](int code) { /*...*/ }); // B
fptr(); // Calls A
fptr(22); // Calls B
Again this means that all possible solutions (sets of functions of different types) are known at compile time, but you dodge the pain of creating that set explicitly.
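For reference, the overload helper itself is not shown in the snippet; a minimal C++17 sketch of the usual technique (my assumption about what the linked post builds) looks like this:
#include <iostream>

// aggregate that inherits every lambda and exposes all their call operators
template <class... Ts>
struct overload_set : Ts... { using Ts::operator()...; };

// deduction guide so overload_set{l1, l2} works without naming the closure types
template <class... Ts>
overload_set(Ts...) -> overload_set<Ts...>;

template <class... Ts>
auto overload(Ts... ts) { return overload_set<Ts...>{ts...}; }

int main() {
    auto fptr = overload(
        [] { std::cout << "A\n"; },                              // A
        [](int code) { std::cout << "B(" << code << ")\n"; });   // B
    fptr();     // calls A
    fptr(22);   // calls B
}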
Finally, I'd revisit the design before resorting to such solutions; maybe a simpler path exists, e.g. expressing the exit functions as a hierarchy with a factory method that generates the active function at runtime, or reconsidering why an exit function should be tweakable at runtime at all.
I'm porting an old C++ program to modern C++. The legacy program uses new and delete for dynamic memory allocation. I replaced new with std::unique_ptr, but I'm getting a compilation error when I try to reset the unique_ptr.
Here is the stripped-down version of the program. My aim is to get rid of all the naked new.
#include <memory>
enum class Types {
ONE,
TWO,
};
// based on type get buffer length
int get_buffer_len(Types type) {
if(type == Types::ONE) return 10;
else if(type == Types::TWO) return 20;
else return 0;
}
int main() {
Types type = Types::ONE;
std::unique_ptr<char[]> msg{};
auto len = get_buffer_len(type);
if(len > 0) {
msg.reset(std::make_unique<char[]>(len));
}
// based on type get the actual message
if(type == Types::ONE) {
get_message(msg.get());
}
}
I get the following compilation error:
error: no matching function for call to 'std::unique_ptr<char []>::reset(std::__detail::__unique_ptr_array_t<char []>)'
| msg.reset(std::make_unique<char[]>(len));
| ~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If you look at the reset function, it takes a ptr to memory, not another unique ptr:
// members of the specialization unique_ptr<T[]>
template< class U >
void reset( U ptr ) noexcept;
This function is designed to allow you to reset a unique pointer and simultaneously capture memory that you intend to manage with said unique_ptr. What you are looking to do is assign an r-value unique_ptr to an existing unique_ptr (msg), for which C++ also has an answer:
unique_ptr& operator=( unique_ptr&& r ) noexcept;
Move assignment operator. Transfers ownership from r to *this as if by calling reset(r.release()) followed by an assignment of get_deleter() from std::forward<Deleter>(r.get_deleter()).
So you can instead just do:
msg = std::make_unique<char[]>(len);
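As a small self-contained sketch (with a hard-coded length standing in for get_buffer_len), the move assignment and the equivalent release()/reset() spelling look like this:
#include <memory>

int main() {
    int len = 10;                           // stand-in for get_buffer_len(type)
    std::unique_ptr<char[]> msg{};

    // what the answer suggests: move-assign the freshly created unique_ptr
    msg = std::make_unique<char[]>(len);

    // equivalent, spelled out by hand, matching the "as if by reset(r.release())" wording
    auto tmp = std::make_unique<char[]>(2 * len);
    msg.reset(tmp.release());
}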
This code is just for illustrating the question.
#include <functional>
struct MyCallBack {
void Fire() {
}
};
int main()
{
MyCallBack cb;
std::function<void(void)> func = std::bind(&MyCallBack::Fire, &cb);
}
Experiments with valgrind shows that the line assigning to func dynamically allocates about 24 bytes with gcc 7.1.1 on linux.
In the real code, I have a few handfuls of different structs all with a void(void) member function that gets stored in ~10 million std::function<void(void)>.
Is there any way I can avoid memory being dynamically allocated when doing std::function<void(void)> func = std::bind(&MyCallBack::Fire, &cb); ? (Or otherwise assigning these member function to a std::function)
Unfortunately, allocators for std::function have been dropped in C++17.
Now the accepted solution to avoid dynamic allocations inside std::function is to use lambdas instead of std::bind. That does work, at least in GCC - it has enough static space to store the lambda in your case, but not enough space to store the binder object.
std::function<void()> func = [&cb]{ cb.Fire(); };
// sizeof lambda is sizeof(MyCallBack*), which is small enough
As a general rule, with most implementations, and with a lambda which captures only a single pointer (or a reference), you will avoid dynamic allocations inside std::function with this technique (it is also generally a better approach, as the other answer suggests).
Keep in mind that for this to work, the captured object (cb here) must outlive the std::function, since the lambda only stores a reference to it. Obviously, that is not always possible, and sometimes you have to capture state by (large) copy. If that happens, there is currently no way to eliminate dynamic allocations in std::function, other than tinkering with the STL yourself (obviously not recommended in the general case, but it could be done in some specific cases).
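A minimal sketch of that lifetime pitfall (make_callback is a hypothetical function, not part of the question's code):
#include <functional>

struct MyCallBack { void Fire() {} };

// the lambda only captures a reference, so the referenced object must outlive the std::function
std::function<void()> make_callback() {
    MyCallBack local;
    return [&local] { local.Fire(); };   // dangles: 'local' is destroyed when this returns
}

int main() {
    auto f = make_callback();
    // f();   // undefined behaviour if called: the captured reference dangles
}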
As an addendum to the already existent and correct answer, consider the following:
MyCallBack cb;
std::cerr << sizeof(std::bind(&MyCallBack::Fire, &cb)) << "\n";
auto a = [&] { cb.Fire(); };
std::cerr << sizeof(a);
This program prints 24 and 8 for me, with both gcc and clang. I don't exactly know what bind is doing here (my understanding is that it's a fantastically complicated beast), but as you can see, it's almost absurdly inefficient here compared to a lambda.
As it happens, std::function is guaranteed to not allocate if constructed from a function pointer, which is also one word in size. So constructing a std::function from this kind of lambda, which only needs to capture a pointer to an object and should also be one word, should in practice never allocate.
Run this little hack and it probably will print the amount of bytes you can capture without allocating memory:
#include <iostream>
#include <functional>
#include <cstring>
void h(std::function<void(void*)>&& f, void* g)
{
f(g);
}
template<size_t number_of_size_t>
void do_test()
{
size_t a[number_of_size_t];
std::memset(a, 0, sizeof(a));
a[0] = sizeof(a);
std::function<void(void*)> g = [a](void* ptr) {
if (&a != ptr)
std::cout << "malloc was called when capturing " << a[0] << " bytes." << std::endl;
else
std::cout << "No allocation took place when capturing " << a[0] << " bytes." << std::endl;
};
h(std::move(g), &g);
}
int main()
{
do_test<1>();
do_test<2>();
do_test<3>();
do_test<4>();
}
With gcc version 8.3.0 this prints
No allocation took place when capturing 8 bytes.
No allocation took place when capturing 16 bytes.
malloc was called when capturing 24 bytes.
malloc was called when capturing 32 bytes.
Many std::function implementations will avoid allocations and use space inside the function class itself rather than allocating if the callback it wraps is "small enough" and has trivial copying. However, the standard does not require this, only suggests it.
On g++, a non-trivial copy constructor on a function object, or data exceeding 16 bytes, is enough to cause it to allocate. But if your function object has no data and uses the builtin copy constructor, then std::function won't allocate.
Also, if you use a function pointer or a member function pointer, it won't allocate.
While not directly part of your question, it is part of your example.
Do not use std::bind. In virtually every case, a lambda is better: smaller, better inlining, can avoid allocations, better error messages, faster compiles, the list goes on. If you want to avoid allocations, you must also avoid bind.
I propose a custom class for your specific usage.
While it's true that you shouldn't re-implement existing library functionality, because the library versions will be much better tested and optimized, that advice applies to the general case. If you have a particular situation like the one in your example and the standard implementation doesn't suit your needs, you can explore implementing a version tailored to your specific use case, which you can measure and tweak as necessary.
So I have created a class akin to std::function<void (void)> that works only for methods and has all the storage in place (no dynamic allocations).
I have lovingly called it Trigger (inspired by your Fire method name). Please do give it a more suited name if you want to.
#include <new>           // placement new
#include <type_traits>   // std::aligned_storage_t

// helper alias for method
// can be used in user code
template <class T>
using Trigger_method = auto (T::*)() -> void;
namespace detail
{
// Polymorphic classes needed for type erasure
struct Trigger_base
{
virtual ~Trigger_base() noexcept = default;
virtual auto placement_clone(void* buffer) const noexcept -> Trigger_base* = 0;
virtual auto call() -> void = 0;
};
template <class T>
struct Trigger_actual : Trigger_base
{
T& obj;
Trigger_method<T> method;
Trigger_actual(T& obj, Trigger_method<T> method) noexcept : obj{obj}, method{method}
{
}
auto placement_clone(void* buffer) const noexcept -> Trigger_base* override
{
return new (buffer) Trigger_actual{obj, method};
}
auto call() -> void override
{
return (obj.*method)();
}
};
// in Trigger (below) we need to allocate enough storage
// for any Trigger_actual template instantiation
// since all templates basically contain 2 pointers
// we assume (and test it with static_asserts)
// that all will have the same size
// we will use Trigger_actual<Trigger_test_size>
// to determine the size of all Trigger_actual templates
struct Trigger_test_size {};
}
struct Trigger
{
std::aligned_storage_t<sizeof(detail::Trigger_actual<detail::Trigger_test_size>)>
trigger_actual_storage_;
// vital. We cannot just cast `&trigger_actual_storage_` to `Trigger_base*`
// because there is no guarantee by the standard that
// the base pointer will point to the start of the derived object
// so we need to store separately the base pointer
detail::Trigger_base* base_ptr = nullptr;
template <class X>
Trigger(X& x, Trigger_method<X> method) noexcept
{
static_assert(sizeof(trigger_actual_storage_) >=
sizeof(detail::Trigger_actual<X>));
static_assert(alignof(decltype(trigger_actual_storage_)) %
alignof(detail::Trigger_actual<X>) == 0);
base_ptr = new (&trigger_actual_storage_) detail::Trigger_actual<X>{x, method};
}
Trigger(const Trigger& other) noexcept
{
if (other.base_ptr)
{
base_ptr = other.base_ptr->placement_clone(&trigger_actual_storage_);
}
}
auto operator=(const Trigger& other) noexcept -> Trigger&
{
destroy_actual();
if (other.base_ptr)
{
base_ptr = other.base_ptr->placement_clone(&trigger_actual_storage_);
}
return *this;
}
~Trigger() noexcept
{
destroy_actual();
}
auto destroy_actual() noexcept -> void
{
if (base_ptr)
{
base_ptr->~Trigger_base();
base_ptr = nullptr;
}
}
auto operator()() const
{
if (!base_ptr)
{
// deal with this situation (error or just ignore and return)
}
base_ptr->call();
}
};
Usage:
struct X
{
auto foo() -> void;
};
auto test()
{
X x;
Trigger f{x, &X::foo};
f();
}
Warning: only tested for compilation errors.
You need to thoroughly test it for correctness.
You need to profile it and see if it performs better than other solutions. The advantage of this approach is that, because it's cooked in house, you can tweak the implementation to increase performance in your specific scenarios.
As #Quuxplusone mentioned in their answer-as-a-comment, you can use inplace_function here. Include the header in your project, and then use it like this:
#include "inplace_function.h"
struct big { char foo[20]; };
static stdext::inplace_function<void(), 8> inplacefunc;
static std::function<void()> stdfunc;
int main() {
static_assert(sizeof(inplacefunc) == 16);
static_assert(sizeof(stdfunc) == 32);
inplacefunc = []() {};
// fine
struct big a;
inplacefunc = [a]() {};
// test.cpp:15:24: required from here
// inplace_function.h:237:33: error: static assertion failed: inplace_function cannot be constructed from object with this (large) size
// 237 | static_assert(sizeof(C) <= Capacity,
// | ~~~~~~~~~~^~~~~~~~~~~
// inplace_function.h:237:33: note: the comparison reduces to ‘(20 <= 8)’
}
The following code is a signal implementation copied from APUE with a little modification
namespace
{
using signal_handler = void (*)(int);
signal_handler signal(sigset_t sig, signal_handler);
}
Signal::signal_handler Signal::signal(sigset_t sig, void (*handler)(int))
{
struct sigaction newAction, oldAction;
sigemptyset(&newAction.sa_mask);
newAction.sa_flags = 0;
newAction.sa_handler = handler;
if (sig == SIGALRM)
{
#ifdef SA_INTERRUPT
newAction.sa_flags |= SA_INTERRUPT;
#endif
}
else
{
newAction.sa_flags |= SA_RESTART;
}
if (sigaction(sig, &newAction, &oldAction) < 0)
throw std::runtime_error("signal error: cannot set a new signal handler.");
return oldAction.sa_handler;
}
The above code works fine during my test, but I wanted to make it more like a C++ code, so I changed signal_handler alias to
using signal_handler = std::function<void (int)>;
and also I use
newAction.sa_handler = handler.target<void (int)>();
to replace
newAction.sa_handler = handler;
and now there is a problem. I find newAction.sa_handler is still NULL after
newAction.sa_handler = handler.target<void (int)>();
but I don't know why. Can anyone help me explain this? Thanks.
Here is my test code:
void usr1_handler(int sig)
{
std::cout << "SIGUSR1 happens" << std::endl;
}
void Signal::signal_test()
{
try
{
Signal::signal(SIGUSR1, usr1_handler);
}
catch (std::runtime_error &err)
{
std::cout << err.what();
return;
}
raise(SIGUSR1);
}
Even with the original code, when I run it in Xcode there is no output. But if I run the executable file manually, I can see "SIGUSR1 happens" in the terminal. Why? How can I see the output using Xcode?
The direct answer is that target() is very picky - you must name the type of the target exactly to get a pointer to it, otherwise you get a null pointer. When you set your signal to usr1_handler, that is a pointer to a function (not a function) - its type is void(*)(int), not void(int). So you're simply giving the wrong type to target(). If you change:
handler.target<void (int)>();
to
handler.target<void(*)(int)>();
that would give you the correct target.
But note what target() actually returns:
template< class T >
T* target();
It returns a pointer to the provided type - in this case that would be a void(**)(int). You'd need to dereference that before doing further assignment. Something like:
void(**p)(int) = handler.target<void(*)(int)>();
if (!p) {
// some error handling
}
newAction.sa_handler = *p;
Demo.
However, the real answer is that this makes little sense to do. std::function<Sig> is a type erased callable for the given Sig - it can be a pointer to a function, a pointer to a member function, or even a wrapped function object of arbitrary size. It is a very generic solution. But sigaction doesn't accept just any kind of generic callable - it accepts specifically a void(*)(int).
By creating a signature of:
std::function<void(int)> signal(sigset_t sig, std::function<void(int)> );
you are creating the illusion that you are allowing any callable! So, I might try to pass something like:
struct X {
void handler(int ) { ... }
};
X x;
signal(SIGUSR1, [&x](int s){ x.handler(s); });
That's allowed by your signature - I'm providing a callable that takes an int. But that callable isn't convertible to a function pointer, so it's not something that you can pass into sigaction(), so this is just erroneous code that can never work - this is a guaranteed runtime failure.
Even worse, I might pass something that is convertible to a function pointer, but may not know that that's what you need, so I give you the wrong thing:
// this will not work, since it's not a function pointer
signal(SIGUSR1, [](int s){ std::cout << s; });
// but this would have, if only I knew I had to do it
signal(SIGUSR1, +[](int s){ std::cout << s; });
Since sigaction() limits you to just function pointers, you should limit your interface to it to just function pointers. Strongly prefer what you had before. Use the type system to catch errors - only use type erasure when it makes sense.
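One way to keep the interface at plain function pointers while still writing lambdas at the call site is to rely on the fact that captureless lambdas convert implicitly to void(*)(int). A minimal sketch on a POSIX system (install is a hypothetical wrapper; std::signal stands in for the sigaction-based version above):
#include <csignal>
#include <cstdio>

using signal_handler = void (*)(int);

// hypothetical wrapper that keeps the function-pointer interface, as recommended above
signal_handler install(int sig, signal_handler h)
{
    return std::signal(sig, h);   // std::signal stands in for the sigaction version
}

volatile std::sig_atomic_t got_usr1 = 0;

int main()
{
    // a captureless lambda converts implicitly to void(*)(int), so the call site still reads nicely
    install(SIGUSR1, [](int) { got_usr1 = 1; });
    std::raise(SIGUSR1);
    if (got_usr1)
        std::puts("SIGUSR1 happens");
}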
Here is a little example that will help you understand the mechanism.
#include <iostream>
#include <string>
#include <functional>
void printMyInt(int a)
{
std::cout << "This is your int " << a;
}
int main()
{
std::function<void(int)> f = printMyInt;
void (*const*foo)(int) = f.target<void(*)(int)>();
(*foo)(56);
}
I would basically write the following piece of code. I understand why it can't compile.
A instance; // A is a non-default-constructable type and therefore can't be allocated like this
if (something)
{
instance = A("foo"); // use a constructor X
}
else
{
instance = A(42); // use *another* constructor Y
}
instance.do_something();
Is there a way to achieve this behaviour without involving heap-allocation?
There are better, cleaner ways to solve the problem than explicitly reserving space on the stack, such as using a conditional expression.
However if the type is not move constructible, or you have more complicated conditions that mean you really do need to reserve space on the stack to construct something later in two different places, you can use the solution below.
The standard library provides the aligned_storage trait, such that aligned_storage<sizeof(T), alignof(T)>::type is a POD type of the right size and alignment for storing a T, so you can use that to reserve the space, then use placement-new to construct an object into that buffer:
std::aligned_storage<sizeof(A), alignof(A)>::type buf;
A* ptr;
if (cond)
{
// ...
ptr = ::new (&buf) A("foo");
}
else
{
// ...
ptr = ::new (&buf) A(42);
}
A& instance = *ptr;
Just remember to destroy it manually too, which you could do with a unique_ptr and custom deleter:
struct destroy_A {
void operator()(A* a) const { a->~A(); }
};
std::unique_ptr<A, destroy_A> cleanup(ptr);
Or using a lambda, although this wastes an extra pointer on the stack ;-)
std::unique_ptr<A, void(*)(A*)> cleanup(ptr, [](A* a){ a->~A();});
Or even just a dedicated local type instead of using unique_ptr
struct Cleanup {
A* a;
~Cleanup() { a->~A(); }
} cleanup = { ptr };
Assuming you want to do this more than once, you can use a helper function:
A do_stuff(bool flg)
{
return flg ? A("foo") : A(42);
}
Then
A instance = do_stuff(something);
Otherwise you can initialize using a conditional operator expression*:
A instance = something ? A("foo") : A(42);
* This is an example of how the conditional operator is not "just like an if-else".
In some simple cases you may be able to get away with this standard C++ syntax:
A instance=something ? A("foo"):A(42);
You did not specify which compiler you're using, but in more complicated situations, this is doable using the gcc compiler-specific extension:
A instance=({
something ? A("foo"):A(42);
});
This is a job for placement new, though there are almost certainly simpler solutions you could employ if you revisit your requirements.
#include <iostream>
#include <string>
struct A
{
A(const std::string& str) : str(str), num(-1) {};
A(const int num) : str(""), num(num) {};
void do_something()
{
std::cout << str << ' ' << num << '\n';
}
const std::string str;
const int num;
};
const bool something = true; // change to false to see alternative behaviour
int main()
{
alignas(A) char storage[sizeof(A)]; // alignas keeps the buffer suitably aligned for A
A* instance = 0;
if (something)
instance = new (storage) A("foo");
else
instance = new (storage) A(42);
instance->do_something();
instance->~A();
}
(live demo)
This way you can construct the A whenever you like, but the storage is still on the stack.
However, you have to destroy the object yourself (as above), which is nasty.
Disclaimer: My weak placement-new example is naive and not particularly portable. GCC's own Jonathan Wakely posted a much better example of the same idea.
std::experimental::optional<Foo> foo;
if (condition){
foo.emplace(arg1,arg2);
}else{
foo.emplace(zzz);
}
then use *foo for access. boost::optional if you do not have the C++1z TS implementation, or write your own optional.
Internally, it will use something like std::aligned_storage and a bool to guard "have I been created"; or maybe a union. It may be possible for the compiler to prove the bool is not needed, but I doubt it.
An implementation can be downloaded from github or you can use boost.
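A concrete sketch with C++17 std::optional, using a stand-in A with the two constructors from the question:
#include <optional>
#include <string>

// stand-in for the non-default-constructible A from the question
struct A {
    explicit A(const std::string&) {}
    explicit A(int) {}
    void do_something() {}
};

int main() {
    bool something = true;
    std::optional<A> instance;      // stack storage, nothing constructed yet
    if (something)
        instance.emplace("foo");    // constructor X
    else
        instance.emplace(42);       // constructor Y
    instance->do_something();       // access via *instance or instance->
}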