I am trying to write a templated wrapper class around a stateless lambda. Something like this:
template <class TFuncOp>
class Adapter
{
public:
void Op()
{
TFuncOp func; // not possible before C++20
func();
}
};
Since this isn't possible before default constructible lambdas arrive with C++20, I used this technique to make my class work: Calling a stateless lambda without an instance (only type)
So the final solution looks like this:
#include <cstdio>
template <class TFuncOp>
class Adapter
{
public:
static TFuncOp GetOpImpl( TFuncOp *pFunc = 0 )
{
static TFuncOp func = *pFunc;
return func;
}
void Op()
{
GetOpImpl()();
}
};
template <class TFuncOp>
Adapter<TFuncOp> MakeAdapter(TFuncOp func )
{
// Removing the line below has no effect.
//Adapter<TFuncOp>::GetOpImpl( &func );
return Adapter<TFuncOp>();
}
int main()
{
auto adapter = MakeAdapter( [] { printf("Hello World !\n"); } );
adapter.Op();
return 0;
}
This code works on all major compilers (clang, gcc, msvc), but with one surprising discovery: initialization (or lack thereof) of the static local instance of the lambda in GetOpImpl() has no effect. It works fine either way.
Can anyone explain how this works? Am I invoking UB if I use the static local instance of the lambda without initializing it?
In any case, dereferencing a nullptr is never a good idea, as it is UB.
But we can see that typical implementations generate code which simply works. I'll try to explain why.
First, this has nothing to do with lambdas. It is simply a copy constructor being used on a class that has no data members. Because there is no data, the generated code never needs to access the passed object. In your case you "copy" the object that the pointer TFuncOp *pFunc = 0 points to, i.e. a null pointer, which would crash if the object actually had to be accessed. Since there is no data to access, a typical implementation will not generate any code that dereferences the null pointer at all. But it is still UB.
The same works for any other type in the same way; there is nothing special about a lambda here:
#include <iostream>
struct Empty
{
void Do() { std::cout << "This works the same way" << std::endl; }
// int i; // << if you add some data, you get a seg fault
};
int main()
{
Empty* ptr = nullptr;
Empty empty = *ptr; // you would get a seg fault here if the copy constructor had to access the nullptr, but it typically only does so if there is data to copy
empty.Do();
}
And a lambda that captures nothing is just an empty structure with an operator().
That is why it appears to work.
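For completeness: since C++20 a captureless lambda's closure type is default constructible, so the original Adapter works as written and the static-local trick is no longer needed. A minimal sketch, assuming a C++20 compiler:
#include <cstdio>

template <class TFuncOp>
class Adapter
{
public:
    void Op()
    {
        TFuncOp func; // OK since C++20: stateless closure types are default constructible
        func();
    }
};

template <class TFuncOp>
Adapter<TFuncOp> MakeAdapter(TFuncOp)
{
    return Adapter<TFuncOp>();
}

int main()
{
    auto adapter = MakeAdapter([] { std::printf("Hello World!\n"); });
    adapter.Op();
}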
I'm working on implementing fibers using coroutines implemented in assembler. The coroutines work by cocall to change stack.
I'd like to expose this in C++ using a higher level interface, as cocall assembly can only handle a single void* argument.
In order to handle template lambdas, I've experimented with converting them to a void* and found that while it compiles and works, I was left wondering if it was safe to do so, assuming ownership semantics of the stack (which are preserved by fibers).
#include <iostream>
template <typename FunctionT>
struct Coentry
{
static void coentry(void * arg)
{
// Is this safe?
FunctionT * function = reinterpret_cast<FunctionT *>(arg);
(*function)();
}
static void invoke(FunctionT function)
{
coentry(reinterpret_cast<void *>(&function));
}
};
template <typename FunctionT>
void coentry(FunctionT function)
{
Coentry<FunctionT>::invoke(function);
}
int main(int argc, const char * argv[]) {
auto f = [&]{
std::cerr << "Hello World!" << std::endl;
};
coentry(f);
}
Is this safe and additionally, is it efficient? By converting to a void* am I forcing the compiler to choose a less efficient representation?
Additionally, by invoking coentry(void*) on a different stack after the original invoke(FunctionT) has returned, is there a chance that the stack might no longer be valid to resume? (This would be similar to, say, invoking it within a std::thread, I guess.)
Everything done above is defined behaviour. The only performance hit is that inlining something aliased through a void pointer can be slightly harder.
However, the lambda is an actual value, and if stored in automatic storage it only lasts as long as the enclosing stack frame does.
You can fix this a number of ways. std::function is one, another is to store the lambda in a shared_ptr<void> or unique_ptr<void, void(*)(void*)>. If you do not need type erasure, you can even store the lambda in a struct with deduced type.
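For illustration, the second option (type-erasing the lambda behind a unique_ptr<void, void(*)(void*)>) might look something like this sketch:
#include <memory>
#include <utility>

// Sketch: heap-allocate a copy of the lambda and erase its type behind a
// unique_ptr<void, void(*)(void*)>; the deleter remembers how to destroy it.
template <typename FunctionT>
std::unique_ptr<void, void (*)(void*)> erase(FunctionT f)
{
    return { new FunctionT(std::move(f)),
             [](void* p) { delete static_cast<FunctionT*>(p); } };
}
// Coentry<FunctionT>::coentry can then still be used as the entry point,
// with ptr.get() as the void* argument.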
The first two are easy (the second is sketched above). The third:
#include <utility> // std::move
template <typename FunctionT>
struct Coentry {
FunctionT f;
static void coentry(void * arg)
{
auto* self = reinterpret_cast<Coentry*>(arg);
(self->f)();
}
Coentry(FunctionT fin) : f(std::move(fin)) {}
};
template<class FunctionT>
Coentry<FunctionT> make_coentry( FunctionT f ){ return {std::move(f)}; }
Now keep your Coentry around until the task completes.
The details of how you manage lifetime depend on the structure of the rest of your problem.
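For illustration, usage might look something like this sketch; start_fiber is a hypothetical stand-in for whatever your assembler cocall entry API is, and the Coentry/make_coentry definitions above are assumed:
#include <iostream>

// Hypothetical fiber-start API: takes a C-style entry point and its argument.
void start_fiber(void (*entry)(void*), void* arg);

void example()
{
    auto task = make_coentry([]{ std::cerr << "Hello World!" << std::endl; });
    // `task` must outlive the fiber: keep it alive until the coroutine finishes.
    start_fiber(&decltype(task)::coentry, &task);
}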
I would basically write the following piece of code. I understand why it can't compile.
A instance; // A is a non-default-constructable type and therefore can't be allocated like this
if (something)
{
instance = A("foo"); // use a constructor X
}
else
{
instance = A(42); // use *another* constructor Y
}
instance.do_something();
Is there a way to achieve this behaviour without involving heap-allocation?
There are better, cleaner ways to solve the problem than explicitly reserving space on the stack, such as using a conditional expression.
However if the type is not move constructible, or you have more complicated conditions that mean you really do need to reserve space on the stack to construct something later in two different places, you can use the solution below.
The standard library provides the aligned_storage trait: aligned_storage<sizeof(T), alignof(T)>::type is a POD type of the right size and alignment for storing a T, so you can use that to reserve the space, then use placement new to construct an object into that buffer:
std::aligned_storage<sizeof(A), alignof(A)>::type buf;
A* ptr;
if (cond)
{
// ...
ptr = ::new (&buf) A("foo");
}
else
{
// ...
ptr = ::new (&buf) A(42);
}
A& instance = *ptr;
Just remember to destroy it manually too, which you could do with a unique_ptr and custom deleter:
struct destroy_A {
void operator()(A* a) const { a->~A(); }
};
std::unique_ptr<A, destroy_A> cleanup(ptr);
Or using a lambda, although this wastes an extra pointer on the stack ;-)
std::unique_ptr<A, void(*)(A*)> cleanup(ptr, [](A* a){ a->~A();});
Or even just a dedicated local type instead of using unique_ptr
struct Cleanup {
A* a;
~Cleanup() { a->~A(); }
} cleanup = { ptr };
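Putting the pieces together, a self-contained sketch might look like this (A here is just a stand-in with the two constructors from the question):
#include <new>
#include <type_traits>

struct A {
    explicit A(const char*) {}
    explicit A(int) {}
    void do_something() {}
};

int main()
{
    bool something = true;
    std::aligned_storage<sizeof(A), alignof(A)>::type buf;
    A* ptr = something ? ::new (&buf) A("foo") : ::new (&buf) A(42);
    struct Cleanup { A* a; ~Cleanup() { a->~A(); } } cleanup = { ptr };
    A& instance = *ptr;
    instance.do_something();
}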
Assuming you want to do this more than once, you can use a helper function:
A do_stuff(bool flg)
{
return flg ? A("foo") : A(42);
}
Then
A instance = do_stuff(something);
Otherwise you can initialize using a conditional operator expression*:
A instance = something ? A("foo") : A(42);
* This is an example of how the conditional operator is not "just like an if-else".
In some simple cases you may be able to get away with this standard C++ syntax:
A instance = something ? A("foo") : A(42);
You did not specify which compiler you're using, but in more complicated situations, this is doable using the gcc compiler-specific extension:
A instance = ({
    something ? A("foo") : A(42);
});
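A portable alternative to the compiler-specific statement expression is an immediately invoked lambda, which is standard C++11 (a sketch, reusing A and something from the question):
A instance = [&]() -> A {
    return something ? A("foo") : A(42);
}();
instance.do_something();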
This is a job for placement new, though there are almost certainly simpler solutions you could employ if you revisit your requirements.
#include <iostream>
#include <new>    // placement new
#include <string> // std::string
struct A
{
A(const std::string& str) : str(str), num(-1) {};
A(const int num) : str(""), num(num) {};
void do_something()
{
std::cout << str << ' ' << num << '\n';
}
const std::string str;
const int num;
};
const bool something = true; // change to false to see alternative behaviour
int main()
{
alignas(A) char storage[sizeof(A)]; // alignas so the buffer is suitably aligned for A
A* instance = 0;
if (something)
instance = new (storage) A("foo");
else
instance = new (storage) A(42);
instance->do_something();
instance->~A();
}
(live demo)
This way you can construct the A whenever you like, but the storage is still on the stack.
However, you have to destroy the object yourself (as above), which is nasty.
Disclaimer: My weak placement-new example is naive and not particularly portable. GCC's own Jonathan Wakely posted a much better example of the same idea.
std::experimental::optional<Foo> foo;
if (condition){
foo.emplace(arg1,arg2);
}else{
foo.emplace(zzz);
}
Then use *foo for access. Use boost::optional if you do not have the C++1z TS implementation, or write your own optional.
Internally, it will use something like std::aligned_storage and a bool to guard "have I been created", or maybe a union. It may be possible for the compiler to prove the bool is not needed, but I doubt it.
An implementation can be downloaded from github or you can use boost.
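For illustration only, here is a minimal sketch of the aligned-storage-plus-flag idea mentioned above (names made up; a real optional also handles copies, moves and assignment):
#include <new>
#include <type_traits>
#include <utility>

template <typename T>
class DeferredInit {
    typename std::aligned_storage<sizeof(T), alignof(T)>::type buf_;
    bool constructed_ = false;
public:
    template <typename... Args>
    void emplace(Args&&... args) {
        // Sketch only: assumes emplace is called at most once.
        ::new (&buf_) T(std::forward<Args>(args)...);
        constructed_ = true;
    }
    T& operator*() { return *reinterpret_cast<T*>(&buf_); }
    ~DeferredInit() { if (constructed_) (**this).~T(); }
};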
I have a message class which was previously a bit of a pain to work with: you had to construct the message object, tell it to allocate space for your object, and then populate the space either by construction or memberwise.
I want to make it possible to construct the message object with an immediate, inline new of the resulting object, but to do so with a simple syntax at the call site while ensuring copy elision.
#include <cassert>
#include <cstdint>
#include <cstdlib> // realloc
#include <new>     // placement new
#include <utility> // std::forward
typedef uint8_t id_t;
enum class MessageID { WorldPeace };
class Message
{
uint8_t* m_data; // current memory
uint8_t m_localData[64]; // up to 64 bytes.
id_t m_messageId;
size_t m_size; // amount of data used
size_t m_capacity; // amount of space available
// ...
public:
Message(size_t requestSize, id_t messageId)
: m_data(m_localData)
, m_messageId(messageId)
, m_size(0), m_capacity(sizeof(m_localData))
{
grow(requestSize);
}
void grow(size_t newSize)
{
if (newSize > m_capacity)
{
// (simplified: the existing contents of m_localData are not copied over here)
m_data = static_cast<uint8_t*>(realloc((m_data == m_localData) ? nullptr : m_data, newSize));
assert(m_data != nullptr); // my system uses less brutal mem mgmt
m_capacity = newSize;
}
m_size = newSize;
}
template<typename T>
T* allocatePtr()
{
size_t offset = m_size;
grow(offset + sizeof(T));
return (T*)(m_data + offset);
}
#ifdef USE_CPP11
template<typename T, typename... Args>
Message(id_t messageId, Args&&... args)
: Message(sizeof(T), messageId)
{
// we know m_data points to a large enough buffer
new ((T*)m_data) T (std::forward<Args>(args)...);
}
#endif
};
Pre-C++11 I had a nasty macro, CONSTRUCT_IN_PLACE, which did:
#define CONSTRUCT_IN_PLACE(Message, Typename, ...) \
new ((Message).allocatePtr<Typename>()) Typename (__VA_ARGS__)
And you would say:
Message outgoing(sizeof(MyStruct), MessageID::WorldPeace);
CONSTRUCT_IN_PLACE(outgoing, MyStruct, wpArg1, wpArg2, wpArg3);
With C++11, you would use
Message outgoing<MyStruct>(MessageID::WorldPeace, wpArg1, wpArg2, wpArg3);
But I find this to be messy. What I want to implement is:
template<typename T>
Message(id_t messageId, T&& src)
: Message(sizeof(T), messageId)
{
// we know m_data points to a large enough buffer
new ((T*)m_data) T (src);
}
So that the user uses
Message outgoing(MessageID::WorldPeace, MyStruct(wpArg1, wpArg2, wpArg3));
But it seems that this first constructs a temporary MyStruct on the stack, turning the in-place new into a call to the move constructor of T.
Many of these messages are simple, often POD, and they are often in marshalling functions like this:
void dispatchWorldPeace(int wpArg1, int wpArg2, int wpArg3)
{
Message outgoing(MessageID::WorldPeace, MyStruct(wpArg1, wpArg2, wpArg3));
outgoing.send(g_listener);
}
So I want to avoid creating an intermediate temporary that is going to require a subsequent move/copy.
It seems like the compiler should be able to eliminate the temporary and the move and forward the construction all the way down to the in-place new.
What am I doing that is causing it not to? (GCC 4.8.1, Clang 3.5, MSVC 2013)
You won't be able to elide the copy/move in the placement new: copy elision is entirely based on the idea that the compiler knows at construction time where the object will eventually end up. Also, since copy elision actually changes the behavior of the program (after all, it won't call the respective constructor and destructor even if they have side effects), it is limited to a few very specific cases (listed in 12.8 [class.copy] paragraph 31: essentially when returning a local variable by name, when throwing a local variable by name, when catching an exception of the correct type by value, and when copying/moving a temporary; see the clause for exact details). Since placement new is none of the contexts where the copy can be elided, and the argument to the constructor is clearly not a temporary (it is named), the copy/move will never be elided. Even adding the missing std::forward<T>(...) to your constructor will not cause the copy/move to be elided; at best it turns the copy into a move:
template<typename T>
Message(id_t messageId, T&& src)
: Message(sizeof(T), messageId)
{
// placement new takes a void* anyway, i.e., no need to cast
new (m_data) T (std::forward<T>(src));
}
I don't think you can explicitly specify a template parameter when calling a constructor. Thus, I think the closest you could probably get without constructing the object ahead of time and getting it copied/moved is something like this:
template <typename>
struct Tag {};
template <typename T, typename... A>
Message::Message(Tag<T>, id_t messageId, A&&... args)
    : Message(sizeof(T), messageId) {
    new (this->m_data) T(std::forward<A>(args)...);
}
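Usage would then look something like this (a sketch; MyStruct, the wpArg values and g_listener are the names from the question):
Message outgoing(Tag<MyStruct>{}, MessageID::WorldPeace, wpArg1, wpArg2, wpArg3); // as in the question's examples; strictly, the enum class value needs a cast to id_t
outgoing.send(g_listener);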
One approach which might make things a bit nicer is to use the id_t to map to the relevant type, assuming there is a mapping from message IDs to payload types:
typedef uint8_t id_t;
template <typename T, id_t id> struct Tag {};
struct MessageId {
static constexpr Tag<MyStruct, 1> WorldPeace{};
// ...
};
template <typename T, id_t id, typename... A>
Message::Message(Tag<T, id>, A&&... args)
: Message(sizeof(T), id) {
    new (this->m_data) T(std::forward<A>(args)...);
}
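With that mapping in place, the call site might then reduce to something like this (sketch; g_listener and the wpArg values are from the question):
Message outgoing(MessageId::WorldPeace, wpArg1, wpArg2, wpArg3);
outgoing.send(g_listener);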
Foreword
The conceptual barrier that even C++2049 cannot cross is that you require all the bits that compose your message to be aligned in a contiguous memory block.
The only way C++ can give you that is through the use of the placement new operator. Otherwise, objects will simply be constructed according to their storage class (on the stack or through whatever you define as a new operator).
It means any object you pass to your payload constructor will be first constructed (on the stack) and then used by the constructor (that will most likely copy-construct it).
Avoiding this copy completely is impossible. You may have a forward constructor doing the minimal amount of copy, but still the scalar parameters passed to the initializer will likely be copied, as will any data that the constructor of the initializer deemed necessary to memorize and/or produce.
If you want to be able to pass parameters freely to each of the constructors needed to build the complete message without them being first stored in the parameter objects, it will require
the use of a placement new operator for each of the sub-objects that compose the message,
the memorization of each single scalar parameter passed to the various sub-constructors,
specific code for each object to feed the placement new operator with the proper address and call the constructor of the sub-object.
You will end up with a toplevel message constructor taking all possible initial parameters and dispatching them to the various sub-objects constructors.
I don't even know if this is feasible, but the result would be very fragile and error-prone at any rate.
Is that what you want, just for the benefit of a bit of syntactic sugar?
If you're offering an API, you cannot cover all cases. The best approach is to make something that degrades nicely, IMHO.
The simple solution would be to limit payload constructor parameters to scalar values or implement "in-place sub-construction" for a limited set of message payloads that you can control. At your level you cannot do more than that to make sure the message construction proceeds with no extra copies.
Now the application software will be free to define constructors that take objects as parameters, and then the price to pay will be these extra copies.
Besides, this might be the most efficient approach, if the parameter is something costly to construct (i.e. the construction time is greater than the copy time, so it is more efficient to create a static object and modify it slightly between each message) or if it has a greater lifetime than your function for any reason.
a working, ugly solution
First, let's start with a vintage, template-less solution that does in-place construction.
The idea is to have the message pre-allocate the right kind of memory (local buffer or dynamic) depending on the size of the object.
The proper base address is then passed to a placement new to construct the message contents in place.
#include <cstdint>
#include <cstdio>
#include <new>
typedef uint8_t id_t;
enum class MessageID { WorldPeace, Armaggedon };
#define SMALL_BUF_SIZE 64
class Message {
id_t m_messageId;
uint8_t* m_data;
uint8_t m_localData[SMALL_BUF_SIZE];
public:
// choose the proper location for contents
Message (MessageID messageId, size_t size)
{
m_messageId = (id_t)messageId;
m_data = size <= SMALL_BUF_SIZE ? m_localData : new uint8_t[size];
}
// dispose of the contents if need be
~Message ()
{
if (m_data != m_localData) delete[] m_data;
}
// let placement new know about the contents location
void * location (void)
{
return m_data;
}
};
// a macro to do the in-place construction
#define BuildMessage(msg, id, obj, ... ) \
Message msg(MessageID::id, sizeof(obj)); \
new (msg.location()) obj (__VA_ARGS__);
// example uses
struct small {
int a, b, c;
small (int a, int b, int c) :a(a),b(b),c(c) {}
};
struct big {
int lump[1000];
};
int main(void)
{
BuildMessage(msg1, WorldPeace, small, 1, 2, 3)
BuildMessage(msg2, Armaggedon, big)
}
This is just a trimmed down version of your initial code, with no templates at all.
I find it relatively clean and easy to use, but to each his own.
The only inefficiency I see here is the fixed 64-byte local buffer, which is wasted if the message is too big.
And of course all type information is lost once the messages are constructed, so accessing their contents afterward would be awkward.
About forwarding and construction in place
Basically, the new && qualifier does no magic. To do in-place construction, the compiler needs to know the address that will be used for object storage before calling the constructor.
Once you've invoked an object creation, the memory has been allocated and the && thing will only allow you to use that address to pass ownership of the said memory to another object without resorting to useless copies.
You can use templates to recognize a call to the Message constructor involving a given class passed as message contents, but that will be too late: the object will have been constructed before your constructor can do anything about its memory location.
I can't see a way to create a template on top of the Message class that would defer an object construction until you have decided at which location you want to construct it.
However, you could work on the classes defining the object contents to have some in-place construction automated.
This will not solve the general problem of passing objects to the constructor of the object that will be built in place.
To do that, you would need the sub-objects themselves to be constructed through a placement new, which would mean implementing a specific template interface for each of the initializers, and have each object provide the address of construction to each of its sub-objects.
Now for syntactic sugar.
To make the ugly templating worth the while, you can specialize your message classes to handle big and small messages differently.
The idea is to have a single lump of memory to pass to your sending function. So in case of small messages, the message header and contents are defined as local message properties, and for big ones, extra memory is allocated to include the message header.
Thus the magic DMA used to propel your messages through the system will have a clean data block to work with either way.
Dynamic allocations will still occur once per big message, and never for small ones.
#include <cstdint>
#include <new>
// ==========================================================================
// Common definitions
// ==========================================================================
// message header
enum class MessageID : uint8_t { WorldPeace, Armaggedon };
struct MessageHeader {
MessageID id;
uint8_t __padding; // one free byte here
uint16_t size;
};
// small buffer size
#define SMALL_BUF_SIZE 64
// dummy send function
int some_DMA_trick(int destination, void * data, uint16_t size);
// ==========================================================================
// Macro solution
// ==========================================================================
// -----------------------------------------
// Message class
// -----------------------------------------
class mMessage {
// local storage defined even for big messages
MessageHeader m_header;
uint8_t m_localData[SMALL_BUF_SIZE];
// pointer to the actual message
MessageHeader * m_head;
public:
// choose the proper location for contents
mMessage (MessageID messageId, uint16_t size)
{
m_head = size <= SMALL_BUF_SIZE
? &m_header
: (MessageHeader *) new uint8_t[size + sizeof (m_header)];
m_head->id = messageId;
m_head->size = size;
}
// dispose of the contents if need be
~mMessage ()
{
if (m_head != &m_header) delete[] reinterpret_cast<uint8_t*>(m_head); // storage was allocated as new uint8_t[]
}
// let placement new know about the contents location
void * location (void)
{
return m_head+1;
}
// send a message
int send(int destination)
{
return some_DMA_trick (destination, m_head, (uint16_t)(m_head->size + sizeof (*m_head)));
}
};
// -----------------------------------------
// macro to do the in-place construction
// -----------------------------------------
#define BuildMessage(msg, obj, id, ... ) \
mMessage msg (MessageID::id, sizeof(obj)); \
new (msg.location()) obj (__VA_ARGS__);
// ==========================================================================
// Template solution
// ==========================================================================
#include <utility>
// -----------------------------------------
// template to check storage capacity
// -----------------------------------------
template<typename T>
struct storage
{
enum { local = sizeof(T)<=SMALL_BUF_SIZE };
};
// -----------------------------------------
// base message class
// -----------------------------------------
class tMessage {
protected:
MessageHeader * m_head;
tMessage(MessageHeader * head, MessageID id, uint16_t size)
: m_head(head)
{
m_head->id = id;
m_head->size = size;
}
public:
int send(int destination)
{
return some_DMA_trick (destination, m_head, (uint16_t)(m_head->size + sizeof (*m_head)));
}
};
// -----------------------------------------
// general message template
// -----------------------------------------
template<bool local_storage, typename message_contents>
class aMessage {};
// -----------------------------------------
// specialization for big messages
// -----------------------------------------
template<typename T>
class aMessage<false, T> : public tMessage
{
public:
// in-place constructor
template<class... Args>
aMessage(MessageID id, Args...args)
: tMessage(
(MessageHeader *)new uint8_t[sizeof(T)+sizeof(*m_head)], // dynamic allocation
id, sizeof(T))
{
new (m_head+1) T(std::forward<Args>(args)...);
}
// destructor
~aMessage ()
{
contents().~T();                             // destroy the in-place constructed payload
delete[] reinterpret_cast<uint8_t*>(m_head); // storage was allocated as new uint8_t[]
}
// syntactic sugar to access contents
T& contents(void) { return *(T*)(m_head+1); }
};
// -----------------------------------------
// specialization for small messages
// -----------------------------------------
template<typename T>
class aMessage<true, T> : public tMessage
{
// message body defined locally
MessageHeader m_header;
uint8_t m_data[sizeof(T)]; // no need for 64 bytes here (assumed to sit directly after m_header in memory)
public:
// in-place constructor
template<class... Args>
aMessage(MessageID id, Args...args)
: tMessage(
&m_header, // local storage
id, sizeof(T))
{
new (m_head+1) T(std::forward<Args>(args)...);
}
// destructor: destroy the in-place constructed payload
~aMessage ()
{
    contents().~T();
}
// syntactic sugar to access contents
T& contents(void) { return *(T*)(m_head+1); }
};
// -----------------------------------------
// helper macro to hide template ugliness
// -----------------------------------------
#define Message(T) aMessage<storage<T>::local, T>
// something like typedef aMessage<storage<T>::local, T> Message<T>
// ==========================================================================
// Example
// ==========================================================================
#include <cstdio>
#include <cstring>
// message sending
int some_DMA_trick(int destination, void * data, uint16_t size)
{
printf("sending %d bytes #%p to %08X\n", size, data, destination);
return 1;
}
// some dynamic contents
struct gizmo {
char * s;
gizmo(void) { s = nullptr; };
gizmo (const gizmo& g) = delete;
gizmo (const char * msg)
{
s = new char[strlen(msg) + 8]; // room for the trailing '#' and a few move markers ('*')
strcpy(s, msg);
strcat(s, "#");
}
gizmo (gizmo&& g)
{
s = g.s;
g.s = nullptr;
strcat(s, "*");
}
~gizmo()
{
delete[] s;
}
gizmo& operator=(gizmo g)
{
std::swap(s, g.s);
return *this;
}
bool operator!=(gizmo& g)
{
return strcmp (s, g.s) != 0;
}
};
// some small contents
struct small {
int a, b, c;
gizmo g;
small (gizmo g, int a, int b, int c)
: a(a), b(b), c(c), g(std::move(g))
{
}
void trace(void)
{
printf("small: %d %d %d %s\n", a, b, c, g.s);
}
};
// some big contents
struct big {
gizmo lump[1000];
big(const char * msg = "?")
{
for (size_t i = 0; i != sizeof(lump) / sizeof(lump[0]); i++)
lump[i] = gizmo (msg);
}
void trace(void)
{
printf("big: set to ");
gizmo& first = lump[0];
for (size_t i = 1; i != sizeof(lump) / sizeof(lump[0]); i++)
if (lump[i] != first) { printf(" Erm... mostly "); break; }
printf("%s\n", first.s);
}
};
int main(void)
{
// macros
BuildMessage(mmsg1, small, WorldPeace, gizmo("Hi"), 1, 2, 3);
BuildMessage(mmsg2, big , Armaggedon, "Doom");
((small *)mmsg1.location())->trace();
((big *)mmsg2.location())->trace();
mmsg1.send(0x1000);
mmsg2.send(0x2000);
// templates
Message (small) tmsg1(MessageID::WorldPeace, gizmo("Hello"), 4, 5, 6);
Message (big ) tmsg2(MessageID::Armaggedon, "Damnation");
tmsg1.contents().trace();
tmsg2.contents().trace();
tmsg1.send(0x3000);
tmsg2.send(0x4000);
}
output:
small: 1 2 3 Hi#*
big: set to Doom#
sending 20 bytes #0xbf81be20 to 00001000
sending 4004 bytes #0x9e58018 to 00002000
small: 4 5 6 Hello#**
big: set to Damnation#
sending 20 bytes #0xbf81be0c to 00003000
sending 4004 bytes #0x9e5ce50 to 00004000
Arguments forwarding
I see little point in doing constructor parameters forwarding here.
Any bit of dynamic data referenced by the message contents would have to be either static or copied into the message body, otherwise the referenced data would vanish as soon as the message creator would go out of scope.
If the users of this wonderfully efficient library start passing around magic pointers and other global data inside messages, I wonder how the global system performance will like that. But that's none of my business, after all.
Macros
I resorted to a macro to hide the template ugliness in type definition.
If someone has an idea to get rid of it, I'm interested.
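One way to drop the macro, assuming C++11 alias templates are acceptable, would be a sketch like this (the alias and the macro above cannot coexist under the same name, so the macro would have to go):
// C++11 alias template instead of the Message(T) macro.
template <typename T>
using Message = aMessage<storage<T>::local, T>;

// usage then reads:
// Message<small> tmsg1(MessageID::WorldPeace, gizmo("Hello"), 4, 5, 6);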
Efficiency
The template variation requires an extra forwarding of the contents parameters to reach the constructor. I can't see how that could be avoided.
The macro version wastes 68 bytes of memory for big messages, and some memory for small ones (64 - sizeof (contents object)).
Performance-wise, this extra bit of memory is the only gain the templates offer. Since all these objects are supposedly constructed on the stack and live for a handful of microseconds, it is pretty negligible.
Compared to your initial version, this one should handle message sending more efficiently for big messages. Here again, if these messages are rare and only offered for convenience, the difference is not terribly useful.
The template version maintains a single pointer to the message payload, that could be spared for small messages if you implemented a specialized version of the send function.
Hardly worth the code duplication, IMHO.
A last word
I think I know pretty well how an operating system works and what performance concerns might arise. I wrote quite a few real-time applications, plus some drivers and a couple of BSPs in my time.
I also saw more than once a very efficient system layer ruined by too permissive an interface that allowed application software programmers to do the silliest things without even knowing.
That is what triggered my initial reaction.
If I had my say in global system design, I would forbid all these magic pointers and other under-the-hood mingling with object references, to limit non-specialist users to an innocuous use of system layers, instead of allowing them to inadvertently spread cockroaches through the system.
Unless the users of this interface are template and real-time savvy, they will not understand a bit of what is going on beneath the syntactic sugar crust, and might very soon shoot themselves (and their co-workers, and the application software) in the foot.
Suppose a poor application software programmer adds a puny field to one of his structs and unknowingly crosses the 64-byte barrier. All of a sudden the system performance will crumble, and you will need Mr template & real time expert to explain to the poor guy that what he did killed a lot of kittens.
Even worse, the system degradation might be progressive or unnoticeable at first, so one day you might wake up with thousands of lines of code that did dynamic allocations for years without anybody noticing, and the global overhaul to correct the problem might be huge.
If, on the other hand, all people in your company are munching at templates and mutexes for breakfast, syntactic sugar is not even required in the first place.
I have these plain C functions from a library:
struct SAlloc;
SAlloc *new_salloc();
void free_salloc(SAlloc *s);
Is there any way I can wrap this in C++ to a smart pointer (std::unique_ptr), or otherwise a RAII wrapper ?
I'm mainly curious about the possibilities of the standard library without creating my own wrapper/class.
Yes, you can reuse unique_ptr for this. Just make a custom deleter.
struct salloc_deleter {
void operator()(SAlloc* s) const {
free_salloc(s);
}
};
using salloc_ptr = std::unique_ptr<SAlloc, salloc_deleter>;
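Usage is then straightforward (sketch):
salloc_ptr p(new_salloc()); // free_salloc is called automatically when p goes out of scope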
I like R. Martinho Fernandes' answer, but here's a shorter (but less efficient) alternative:
auto my_alloc = std::shared_ptr<SAlloc>(new_salloc(), free_salloc);
Is there any way I can wrap this in C++ to a smart pointer (std::unique_ptr), or otherwise a RAII wrapper ?
Yes. You need here a factory function, that creates objects initializing the smart pointer correctly (and ensures you always construct pointer instances correctly):
std::shared_ptr<SAlloc> make_shared_salloc()
{
return std::shared_ptr<SAlloc>(new_salloc(), free_salloc);
}
// Note: this doesn't work (see comment from #R.MartinhoFernandes below)
std::unique_ptr<SAlloc> make_unique_salloc()
{
return std::unique_ptr<SAlloc>(new_salloc(), free_salloc);
}
You can assign the result of calling these functions to other smart pointers (as needed) and the pointers will be deleted correctly.
Edit:
Alternately, you could particularize std::make_shared for your SAlloc.
Edit 2:
The second function (make_unique_salloc) doesn't compile. An alternative deleter functor needs to be implemented to support the implementation.
Another variation:
#include <memory>
#include <utility> // std::forward
struct SAlloc {
int x;
};
SAlloc *new_salloc() { return new SAlloc(); }
void free_salloc(SAlloc *s) { delete s; }
struct salloc_freer {
void operator()(SAlloc* s) const { free_salloc(s); }
};
typedef std::unique_ptr<SAlloc, salloc_freer> unique_salloc;
template<typename... Args>
unique_salloc make_salloc(Args&&... args) {
auto retval = unique_salloc( new_salloc() );
if(retval) {
*retval = SAlloc{std::forward<Args>(args)...};
}
return retval;
}
int main() {
unique_salloc u = make_salloc(7);
}
I included a body for SAlloc and the various functions to make it a complete example (http://sscce.org/); the implementation of those doesn't matter.
So long as you can see the members of SAlloc, the above lets you construct them, initializer-list style, at the same time as you make the SAlloc; if you don't pass in any arguments it will zero the entire SAlloc struct.