Detecting stale C++ references in Lua

I'm lead dev for Bitfighter, a game primarily written in C++, but using Lua to script robot players. We're using Lunar (a variant of Luna) to glue the bits together.
I'm now wrestling with how our Lua scripts can know that an object they have a reference to has been deleted by the C++ code.
Here is some sample robot code (in Lua):
if needTarget then                              -- needTarget => global(?) boolean
    ship = findClosest(findItems(ShipType))     -- ship => global lightUserData obj
end
if ship ~= nil then
    bot:setAngleToPoint(ship:getLoc())
    bot:fire()
end
Notice that ship is only set when needTarget is true; otherwise the value from a previous iteration is used. It is quite possible (likely, even, if the bot has been doing its job :-) that the ship will have been killed (and its object deleted by C++) since the variable was last set. If so, C++ will have a fit when we call ship:getLoc(), and will usually crash.
So the question is how to most elegantly handle the situation and limit the damage if (when) a programmer makes a mistake.
I have some ideas. First, we could create some sort of Lua function that the C++ code can call when a ship or other item dies:
function itemDied(deaditem)
    if deaditem == ship then
        ship = nil
        needTarget = true
    end
end
Second, we could implement some sort of reference counting smart pointer to "magically" fix the problem. But I would have no idea where to start with this.
Third, we can have some sort of deadness detector (not sure how that would work) that bots could call like so:
if not isAlive(ship) then
    needTarget = true
    ship = nil -- superfluous, but here for clarity in this example
end
if needTarget then                              -- needTarget => global(?) boolean
    ship = findClosest(findItems(ShipType))     -- ship => global lightUserData obj
end
<...as before...>
Fourth, I could retain only the ID of the ship, rather than a reference, and use that to acquire the ship object each cycle, like this:
local ship = getShip(shipID)                    -- shipID => global ID
if ship == nil then
    needTarget = true
end
if needTarget then                              -- needTarget => global(?) boolean
    ship = findClosest(findItems(ShipType))     -- ship => global lightUserData obj
    shipID = ship:getID()
end
<...as before...>
My ideal situation would also throw errors intelligently. If I ran the getLoc() method on a dead ship, I'd like to trigger error handling code to either give the bot a chance to recover, or at least allow the system to kill the robot and log the problem, hopefully cuing me to be more careful in how I code my bot.
Those are my ideas. I'm leaning towards #1, but it feels clunky (and might involve lots of back and forth because we've got lots of short-lifecycle objects like bullets to contend with, most of which we won't be tracking). It might be easy to forget to implement the itemDied() function. #2 is appealing, because I like magic, but have no idea how it would work. #3 & #4 are very easy to understand, and I could limit my deadness detection only to the few objects that are interesting over the span of several game cycles (most likely a single ship).
This has to be a common problem. What do you think of these ideas, and are there any better ones out there?
Thanks!
Here's my current best solution:
In C++, my ship object is called Ship, whose lifecycle is controlled by C++. For each Ship, I create a proxy object, called a LuaShip, which contains a pointer to the Ship, and Ship contains a pointer to the LuaShip. In the Ship's destructor, I set the LuaShip's Ship pointer to NULL, which I use as an indicator that the ship has been destroyed.
My Lua code only has a reference to the LuaShip, and so (theoretically, at least, as this part is still not working properly) Lua will control the lifecycle of the LuaShip once the corresponding Ship object is gone. So Lua will always have a valid handle, even after the Ship object is gone, and I can write proxy methods for the Ship methods that check for Ship being NULL.
So now my task is to better understand how Luna/Lunar manages the lifecycle of pointers, and make sure that my LuaShips do not get deleted when their partner Ships get deleted if there is still some Lua code pointing at them. That should be very doable.
Actually, it turned out not to be doable (at least not by me). What did seem to work was to decouple the Ship and the LuaShip objects a little. Now, when the Lua script requests a LuaShip object, I create a new one and hand it off to Lua, and let Lua delete it when it's done with it. The LuaShip uses a smart pointer to refer to the Ship, so when the Ship dies, that pointer gets set to NULL, which the LuaShip object can detect.
It is up to the Lua coder to check that the Ship is still valid before using it. If they do not, I can trap the situation and throw a stern error message, rather than having the whole game crash (as was happening before).
Now Lua has total control over the lifecycle of the LuaShip, C++ can delete Ships without causing problems, and everything seems to work smoothly. The only drawback is that I'm potentially creating a lot of LuaShip objects, but it's really not that bad.
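Here is a rough sketch of that arrangement, using std::weak_ptr in place of the game's own smart pointer; Point, pushPoint and the member names are illustrative, not Bitfighter's actual API:
#include <lua.hpp>
#include <memory>

struct Point { float x = 0, y = 0; };              // stand-in for the game's point type

// Illustrative helper: push a point onto the Lua stack as two numbers.
static int pushPoint(lua_State* L, const Point& p) {
    lua_pushnumber(L, p.x);
    lua_pushnumber(L, p.y);
    return 2;
}

class Ship {
public:
    Point getLoc() const { return loc_; }
private:
    Point loc_;
};

class LuaShip {
public:
    explicit LuaShip(const std::shared_ptr<Ship>& ship) : ship_(ship) {}

    bool isAlive() const { return !ship_.expired(); }

    // Proxy method: instead of crashing on a dead Ship, raise a Lua error that
    // the robot (or the surrounding error handler) can recover from.
    int getLoc(lua_State* L) const {
        if (auto ship = ship_.lock())
            return pushPoint(L, ship->getLoc());
        return luaL_error(L, "Ship is no longer valid");
    }

private:
    std::weak_ptr<Ship> ship_;   // "goes NULL" automatically when the Ship dies
};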
If you are interested in this topic, please see the mailing list thread I posted about a related concept, that ends in some suggestions for refining the above:
http://lua-users.org/lists/lua-l/2009-07/msg00076.html

I don't think you have a problem on your Lua side, and you should not be solving it there.
Your C++ code is deleting objects that are still being referenced. No matter how they're referenced, that's bad.
The simple solution may be to let Lunar clean up all your objects. It already knows which objects must be kept alive because the script is using them, and it seems feasible to let it also do GC for random C++ objects (assuming smart pointers on the C++ side, of course - each smart pointer adds to Lunar's reference count).

Our company went with solution number four, and it worked well for us. I recommend it. However, in the interests of completeness:
Number 1 is solid. Let the ship's destructor invoke some Lunar code (or mark that it should be invoked, at any rate), and then complain if you can't find it. Doing things this way means that you'll have to be incredibly careful, and maybe hack the Lua runtime a bit, if you ever want to run the game engine and the robots in separate threads.
Number 2 isn't as hard as you think: write or borrow a reference-counting pointer on the C++ side, and if your Lua/C++ glue is accustomed to dealing with C++ pointers it'll probably work without further intervention, unless you're generating bindings by inspecting symbol tables at runtime or something. The trouble is, it'll force a pretty profound change in your design; if you're using reference-counted pointers to refer to ships, you have to use them everywhere - the risks inherent in referring to ships with a mixture of bare pointers and smart ones should be obvious. So I wouldn't go that route, not as late in the project as you seem to be.
Number 3 is tricky. You need a way to determine whether a given ship object is alive or dead even after the memory representing it has been freed. All the solutions I can think of for that problem basically devolve into number 4: you can let dead ships leave behind some kind of token that's copied into the Lua object and can be used to detect deadness (you'd keep dead objects in a std::set or something similar), but then why not just refer to ships by their tokens?
In general, you can't detect whether a particular C++ pointer points to an object that's been deleted, so there's no easy magical way to solve your problem. Trapping the error of calling ship:getLoc() on a deleted ship is possible only if you take special action in the destructor. There's no perfect solution to this problem, so good luck.
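For completeness, here is a hedged sketch of the token/ID registry that number 4 (and the token variant of number 3) boils down to; ShipRegistry and getShip are illustrative names, not the game's actual API:
#include <cstdint>
#include <unordered_map>

class Ship;

class ShipRegistry {
public:
    void add(std::uint32_t id, Ship* ship) { ships_[id] = ship; }
    void remove(std::uint32_t id)          { ships_.erase(id); }   // call from ~Ship()

    // Returns nullptr once the ship has been destroyed; the Lua binding can map
    // that to nil so the script knows to re-target.
    Ship* getShip(std::uint32_t id) const {
        auto it = ships_.find(id);
        return it != ships_.end() ? it->second : nullptr;
    }

private:
    std::unordered_map<std::uint32_t, Ship*> ships_;
};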

This is an old question, but the right solution, IMO, is to have lua_newuserdata() create a shared_ptr or weak_ptr via either boost::shared_ptr/boost::weak_ptr or C++11's std::shared_ptr/std::weak_ptr. From there, you create a reference whenever you need it, or fail if the weak_ptr is unable to lock() a shared_ptr. For example (using Boost's shared_ptr here, since this is an old question where you probably do not have C++11 support yet, though for new projects where possible I'd recommend C++11's shared_ptr):
using MyObjectPtr = boost::shared_ptr<MyObject>;
using MyObjectWeakPtr = boost::weak_ptr<MyObject>;
auto mySharedPtr = boost::make_shared<MyObject>();
auto userdata = static_cast<MyObjectWeakPtr*>(lua_newuserdata(L, sizeof(MyObjectWeakPtr)));
new(userdata) MyObjectWeakPtr(mySharedPtr);
And then when you need to get a C++ object:
auto weakObjPtr = static_cast<MyObjectWeakPtr*>(
    luaL_checkudata(L, 1, "MyObject.Metatable"));
luaL_argcheck(L, weakObjPtr != nullptr, 1, "'MyObjectWeakPtr' expected");
auto& weakObj = *weakObjPtr;
// If you're using a weak_ptr, this is required!!!! If your userdata is a
// shared_ptr, you can just act on the shared_ptr after luaL_argcheck()
if (auto obj = weakObj.lock()) {
    // You have a valid shared_ptr, the C++ object is alive and you can
    // dereference it like a normal shared_ptr.
} else {
    // The C++ object went away; you can safely garbage collect the userdata.
}
It's critical that you don't forget to destroy the weak_ptr in your Lua __gc metamethod:
static int
myobject_lua__gc(lua_State* L) {
    auto weakObjPtr = static_cast<MyObjectWeakPtr*>(
        luaL_checkudata(L, 1, "MyObject.Metatable"));
    luaL_argcheck(L, weakObjPtr != nullptr, 1, "'MyObjectWeakPtr' expected");
    weakObjPtr->~MyObjectWeakPtr();   // run the weak_ptr's destructor in place
    return 0;
}
Don't forget to make use of macros or template metaprogramming to avoid much of the code duplication re: static_cast<>, luaL_argcheck(), etc.
Use shared_ptr when you need to keep the C++ object alive for as long as the lua object also exists. Use weak_ptr when C++ may reap the object and it's okay for it to disappear out from under lua's feet. ALWAYS use either shared_ptr or weak_ptr when the life of an object is not known and needs to be managed automatically by refcount.
Tip: have your C++ class inherit from boost::enable_shared_from_this or std::enable_shared_from_this because it enables use of shared_from_this().
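For illustration, a minimal sketch of what that enables (MyObject and self() are hypothetical names):
#include <memory>

class MyObject : public std::enable_shared_from_this<MyObject> {
public:
    // Hand out a shared_ptr that shares ownership with the existing owners,
    // instead of accidentally creating a second, independent control block.
    std::shared_ptr<MyObject> self() { return shared_from_this(); }
};

// Usage (the object must already be owned by a shared_ptr, e.g. created with
// std::make_shared, before shared_from_this() is called):
//   auto obj   = std::make_shared<MyObject>();
//   auto again = obj->self();   // same control block as obj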

I agree with MSalters, I really don't think you should be freeing the memory from the C++ side. Lua userdata supports the __gc metamethod to give you a chance to clean things up. If the GC is not aggressive enough you can tweak it a bit, or run it manually with a small step size, more often. The Lua GC is not deterministic, so if you need resources released promptly you will need a function that you can call to release them (which will also be called by __gc, with appropriate checks).
You might also want to look into using weak tables for your ship references so that you don't have to assign EVERY reference to nil to get it freed. Have one strong reference (say, in a list of all active ships) and make all the others weak references. When a ship is destroyed, set a flag on the ship that marks it as such, then set the reference to nil in the active ships table. Then, when the other ship wants to interact, your logic is the same except you check for:
if ship == nil or ship.destroyed then
    ship = findClosest(findItems(ShipType))
end

Related

Is there a way to optimize shared_ptr for the case of permanent objects?

I've got some code that is using shared_ptr quite widely as the standard way to refer to a particular type of object (let's call it T) in my app. I've tried to be careful to use make_shared and std::move and const T& where I can for efficiency. Nevertheless, my code spends a great deal of time passing shared_ptrs around (the object I'm wrapping in shared_ptr is the central object of the whole caboodle). The kicker is that pretty often the shared_ptrs are pointing to an object that is used as a marker for "no value"; this object is a global instance of a particular T subclass, and it lives forever since its refcount never goes to zero.
Using a "no value" object is nice because it responds in nice ways to various methods that get sent to these objects, behaving in the way that I want "no value" to behave. However, performance metrics indicate that a huge amount of the time in my code is spent incrementing and decrementing the refcount of that global singleton object, making new shared_ptrs that refer to it and then destroying them. To wit: for a simple test case, the execution time went from 9.33 seconds to 7.35 seconds if I stuck nullptr inside the shared_ptrs to indicate "no value", instead of making them point to the global singleton T "no value" object. That's a hugely important difference; run on much larger problems, this code will soon be used to do multi-day runs on computing clusters. So I really need that speedup. But I'd really like to have my "no value" object, too, so that I don't have to put checks for nullptr all over my code, special-casing that possibility.
So. Is there a way to have my cake and eat it too? In particular, I'm imagining that I might somehow subclass shared_ptr to make a "shared_immortal_ptr" class that I could use with the "no value" object. The subclass would act just like a normal shared_ptr, but it would simply never increment or decrement its refcount, and would skip all related bookkeeping. Is such a thing possible?
I'm also considering making an inline function that would do a get() on my shared_ptrs and would substitute a pointer to the singleton immortal object if get() returned nullptr; if I used that everywhere in my code, and never used * or -> directly on my shared_ptrs, I would be insulated, I suppose.
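A minimal sketch of that helper idea; T, gNoValue and access() are illustrative stand-ins for the central type, its global "no value" instance, and the wrapper function:
#include <memory>

struct T { /* ... */ };
T gNoValue;                                        // the immortal "no value" singleton

inline T* access(const std::shared_ptr<T>& p) {
    return p ? p.get() : &gNoValue;                // an empty pointer reads as "no value"
}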
Or is there another good solution for this situation that hasn't occurred to me?
Galik asked the central question that comes to mind regarding your containment strategy. I'll assume you've considered that and have reason to rely on shared_ptr as a communal containment strategy for which no alternative exists.
I have two suggestions which may seem controversial. What you've defined is that you need a type of shared_ptr that never holds a nullptr, but std::shared_ptr doesn't do that, and I checked various versions of the STL to confirm that the custom deleter provided is not an entry point to a solution.
So, consider either making your own smart pointer, or adopting one that you change to suit your needs. The basic idea is to establish a kind of shared_ptr which can be instructed to point its shadow pointer to a global object it doesn't own.
You have the source to std::shared_ptr. The code is uncomfortable to read. It may be difficult to work with. It is one avenue, but of course you'd copy the source, change the namespace and implement the behavior you desire.
However, one of the first things all of us did in the middle 90's when templates were first introduced to the compilers of the epoch was to begin fashioning containers and smart pointers. Smart pointers are remarkably easy to write. They're harder to design (or were), but then you have a design to model (which you've already used).
You can implement the basic interface of shared_ptr to create a drop in replacement. If you used typedefs well, there should be a limited few places you'd have to change, but at least a search and replace would work reasonably well.
These are the two means I'm suggesting, both ending up with the same feature. Either adopt shared_ptr from the library, or make one from scratch. You'd be surprised how quickly you can fashion a replacement.
If you adopt std::shared_ptr, the main theme would be to understand how shared_ptr determines when it should decrement. In most implementations shared_ptr must reference a node, which my version calls a control block (_Ref). The node owns the object to be deleted when the reference count reaches zero, but naturally shared_ptr skips that if _Ref is null. However, operators like -> and *, or the get function, don't bother checking _Ref; they just return the shadow pointer, or _Ptr in my version.
Now, _Ptr will be set to nullptr (or 0 in my source) when a reset is called. Reset is called when assigning to another object or pointer, so this works even when assigning nullptr. The point is that, for this new type of shared_ptr you need, you could simply change the behavior such that whenever that happens (a reset to nullptr), you set _Ptr, the shadow pointer in shared_ptr, to the "no value" global object's address.
All uses of *, get, or -> will return the _Ptr of that "no value" object, and will behave correctly when used in another assignment, or when reset is called again, because those functions don't rely upon the shadow pointer to act upon the node; and since in this special condition that node (or control block) will be nullptr, the rest of shared_ptr will behave as though it were correctly pointing to nullptr - that is, not deleting the global object.
Obviously it sounds crazy to alter std::shared_ptr toward such application-specific behavior, but frankly that's what performance work tends to make us do; otherwise we do strange things, like abandoning C++ occasionally in order to obtain the more raw speed of C, or assembler.
Modifying std::shared_ptr source, taken as a copy for this special purpose, is not what I would choose (and, factually, I've faced other versions of your situation, so I have made this choice several times over decades).
To that end, I suggest you build a policy based smart pointer. I find it odd I suggested this earlier on another post today (or yesterday, it's 1:40am).
I refer to Alexandrescu's book from 2001 (I think it was Modern C++...and some words I don't recall). In that he presented loki, which included a policy based smart pointer design, which is still published and freely available on his website.
The idea should have been incorporated into shared_ptr, in my opinion.
Policy based design is implemented as the paradigm of a template class deriving from one or more of its parameters, like this:
template< typename T, typename B >
class TopClass : public B {};
In this way, you can provide B, from which the object is built. Now, B may have the same construction; it may also be a policy level which derives from its second parameter (or multiple derivations, however the design works).
Layers can be combined to implement unique behaviors in various categories.
For example:
std::shared_ptr and std::weak_ptr are separate classes which interact as a family with others (the nodes or control blocks) to provide smart pointer service. However, in a design I used several times, these two were built by the same top-level template class. The difference between a shared_ptr and a weak_ptr in that design was the attachment policy supplied as the second parameter to the template. If the type is instantiated with the weak attachment policy as the second parameter, it's a weak pointer. If it's given a strong attachment policy, it's a smart pointer.
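A minimal sketch of that arrangement (all names are illustrative, and deletion of the payload, assignment and thread safety are omitted):
#include <cstddef>

struct ControlBlock { std::size_t strong = 0, weak = 0; };

struct StrongAttachment {
    void attach(ControlBlock& cb) { ++cb.strong; }
    void detach(ControlBlock& cb) { --cb.strong; }   // payload deleted at zero (omitted)
};

struct WeakAttachment {
    void attach(ControlBlock& cb) { ++cb.weak; }
    void detach(ControlBlock& cb) { --cb.weak; }
};

template <typename T, typename AttachPolicy>
class Handle : public AttachPolicy {                 // the handle derives from its policy
public:
    Handle(T* p, ControlBlock* cb) : ptr_(p), cb_(cb) { this->attach(*cb_); }
    Handle(const Handle& other) : ptr_(other.ptr_), cb_(other.cb_) { this->attach(*cb_); }
    ~Handle() { this->detach(*cb_); }

    T* get() const { return ptr_; }

private:
    T* ptr_;
    ControlBlock* cb_;
};

// The "shared" and "weak" flavours differ only in the policy parameter:
// Handle<MyObject, StrongAttachment> vs Handle<MyObject, WeakAttachment>.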
Once you create a policy designed template, you can introduce layers not in the original design (expanding it), or to "intercept" behavior and specialize it like the one you currently require - without corrupting the original code or design.
The smart pointer library I developed had high performance requirements, along with a number of other options including custom memory allocation and automatic locking services to make writing to smart pointers thread safe (which std::shared_ptr doesn't provide). The interface and much of the code is shared, yet several different kinds of smart pointers could be fashioned simply by selecting different policies. To change behavior, a new policy could be inserted without altering the existing code. At present, I use both std::shared_ptr (which I used when it was in boost years ago) and the MetaPtr library I developed years ago, the latter when I need high performance or flexible options, like yours.
If std::shared_ptr had been a policy based design, as loki demonstrates, you'd be able to do this with shared_ptr WITHOUT having to copy the source and move it to a new namespace.
In any event, simply creating a shared pointer which points the shadow pointer to the global object on reset to nullptr, leaving the node pointing to null, provides the behavior you described.
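For illustration only: that "shadow pointer set, control block null" state is also what std::shared_ptr's aliasing constructor produces, so a sketch of the intended behavior (with a hypothetical global gNoValue) could look like this:
#include <memory>

struct T { /* ... */ };
T gNoValue;                               // global "no value" instance, lives forever

// The aliasing constructor shares ownership with an *empty* shared_ptr (so there
// is no control block and no reference counting), while storing &gNoValue as the
// raw pointer, so *, -> and get() all return the sentinel.
std::shared_ptr<T> noValue() {
    return std::shared_ptr<T>(std::shared_ptr<T>(), &gNoValue);
}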

Split interface of a library (ChessGame with Figures etc.) vs the user's right to delete every pointer

Sometimes it's convenient to split interface of some system/library in more than one class.
For example, consider the idea of a library for playing Chess. Its interface would use (and deliver to players) a different object for every single game and, during a game, another object for every figure.
In Java there wouldn't be such a problem. But in C++, a library user can delete (or attempt to delete) every pointer he gets. Even a shared_ptr/weak_ptr.
What do you think about such situations? Should my interface use wrapper classes so that deleting isn't dangerous?
What is the usual way to handle such dilemmas?
Is there a way that STL smart pointers would help? I heard that they should be used always and only to express ownership, so they seem to have nothing to do with this issue (Chess is owner of SingleGame, SingleGame is owner of every Figure).
PS did I use correct tags/subject?
You can't stop a user from breaking stuff. As others have suggested, use smart pointers. With C++11, there is no reason not to use them in new code. If the user still breaks it, that's their fault. You can't design a library that is completely foolproof. You can just do your best to dissuade foolish behavior.
As others have said, smart pointers (or other RAII schemes) are often a great idea. They can clearly indicate ownership and at the same time provide an automatic mechanism for managing it. Try using such if you can.
But really, no reasonable C++ programmer should be blindly calling delete on every pointer they get. When they use a library/API/whatever which returns a pointer/handle/resource/etc., they should read its documentation to learn whether or not they will be responsible for deallocation, and if so, which technique should be used.
So at a minimum, just make sure your public interface clearly indicates when ownership is passed to the caller and what method they should use for cleanup.
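As an illustrative sketch of such an interface, reusing the question's class names (Chess, SingleGame, Figure) with hypothetical member functions:
#include <cstddef>
#include <memory>
#include <vector>

class Figure { /* ... */ };

class SingleGame {
public:
    // SingleGame owns its figures; callers receive non-owning pointers and must
    // never delete them.
    Figure* figureAt(std::size_t i) const {
        return i < figures_.size() ? figures_[i].get() : nullptr;
    }
private:
    std::vector<std::unique_ptr<Figure>> figures_;
};

class Chess {
public:
    // Ownership of every game stays with the Chess object; the caller only gets
    // a reference that is valid for the lifetime of the Chess instance.
    SingleGame& newGame() {
        games_.emplace_back(new SingleGame());
        return *games_.back();
    }
private:
    std::vector<std::unique_ptr<SingleGame>> games_;
};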

C++ implicit data sharing [duplicate]

I am building a game engine library in C++. A little while back I was using Qt to build an application and was rather fascinated with its use of Implicit Sharing. I am wondering if anybody could explain this technique in greater detail or could offer a simple example of this in action.
The key idea behind implicit sharing is what is more commonly called copy-on-write. The idea behind copy-on-write is to have each object serve as a wrapper around a pointer to the actual implementation. Each implementation object keeps track of the number of pointers into it. Whenever an operation is performed on the wrapper object, it's just forwarded to the implementation object, which does the actual work.
The advantage of this approach is that copying and destruction of these objects are cheap. To make a copy of the object, we just make a new instance of a wrapper, set its pointer to point at the implementation object, and then increment the count of the number of pointers to the object (this is sometimes called the reference count, by the way). Destruction is similar - we drop the reference count by one, then see if anyone else is pointing at the implementation. If not, we free its resources. Otherwise, we do nothing and just assume someone else will do the cleanup later.
The challenge in this approach is that it means that multiple different objects will all be pointing at the same implementation. This means that if someone ends up making a change to the implementation, every object referencing that implementation will see the changes - a very serious problem. To fix this, every time an operation is performed that might potentially change the implementation, the operation checks to see if any other objects also reference the implementation by seeing if the reference count is identically 1. If no other objects reference the object, then the operation can just go ahead - there's no possibility of the changes propagating. If there is at least one other object referencing the data, then the wrapper first makes a deep-copy of the implementation for itself and changes its pointer to point to the new object. Now we know there can't be any sharing, and the changes can be made without a hassle.
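A minimal copy-on-write sketch of the scheme just described (WordList and its members are illustrative; the use_count() check assumes single-threaded use):
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

class WordList {
public:
    WordList() : impl_(std::make_shared<Impl>()) {}

    // Reading never copies; many WordList objects can share one Impl.
    std::size_t size() const { return impl_->words.size(); }

    // Writing detaches first if anyone else still references the Impl.
    void add(const std::string& w) {
        detach();
        impl_->words.push_back(w);
    }

private:
    struct Impl { std::vector<std::string> words; };

    void detach() {
        if (impl_.use_count() > 1)                  // someone else shares it
            impl_ = std::make_shared<Impl>(*impl_); // deep copy for ourselves
    }

    std::shared_ptr<Impl> impl_;   // shared_ptr supplies the reference count
};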
If you'd like to see some examples of this in action, take a look at lecture examples 15.0 and 16.0 from Stanford's introductory C++ programming course. It shows how to design an object to hold a list of words using this technique.
Hope this helps!

Long delegation chains in C++

This is definitely subjective, but I'd like to try to avoid it becoming argumentative. I think it could be an interesting question if people treat it appropriately.
In several of my recent projects I've implemented architectures where long delegation chains are a common thing.
Dual delegation chains can be encountered very often:
bool Exists = Env->FileSystem->FileExists( "foo.txt" );
And triple delegation is not rare at all:
Env->Renderer->GetCanvas()->TextStr( ... );
Delegation chains of higher order exist but are really scarce.
In the examples above, no NULL run-time checks are performed, since the objects used are always there: they are vital to the functioning of the program and are explicitly constructed when execution starts. Basically, I split a delegation chain in these cases:
1) I reuse the object obtained through a delegation chain:
{ // make C invisible to the parent scope
    clCanvas* C = Env->Renderer->GetCanvas();
    C->TextStr( ... );
    C->TextStr( ... );
    C->TextStr( ... );
}
2) An intermediate object somewhere in the middle of the delegation chain should be checked for NULL before usage. Eg.
clCanvas* C = Env->Renderer->GetCanvas();
if ( C ) C->TextStr( ... );
I used to fight case (2) by providing proxy objects, so that a method can be invoked on a non-NULL object, leading to an empty result.
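A rough sketch of that "null object" proxy idea; clCanvas and TextStr come from the examples above, while NullCanvas and GetCanvas are illustrative:
class clCanvas {
public:
    virtual ~clCanvas() {}
    virtual void TextStr(const char* s) = 0;
};

class NullCanvas : public clCanvas {
public:
    void TextStr(const char*) override {}   // swallow the call: the "empty result"
};

// The getter hands back the null object instead of NULL, so callers can always
// invoke TextStr() without checking.
clCanvas* GetCanvas(clCanvas* real) {
    static NullCanvas nullCanvas;
    return real ? real : &nullCanvas;
}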
My questions are:
Is either of cases (1) or (2) a pattern or an antipattern?
Is there a better way to deal with long delegation chains in C++?
Here are some pros and cons I considered while making my choice:
Pros:
it is very descriptive: it is clear from a single line of code where the object came from
long delegation chains look nice
Cons:
interactive debugging is laborious, since it is hard to inspect more than one temporary object in the delegation chain
I would like to know other pros and cons of the long delegation chains. Please, present your reasoning and vote based on how well-argued opinion is and not how well you agree with it.
I wouldn't go so far as to call either an anti-pattern. However, the first has the disadvantage that your variable C remains visible even after it is no longer logically relevant (overly generous scoping).
You can get around this by using this syntax:
if (clCanvas* C = Env->Renderer->GetCanvas()) {
    C->TextStr( ... );
    /* some more things with C */
}
This is allowed in C++ (while it's not in C) and allows you to keep proper scope (C is scoped as if it were inside the conditional's block) and check for NULL.
Asserting that something is not NULL is by all means better than getting killed by a SegFault. So I wouldn't recommend simply skipping these checks, unless you're 100% sure that that pointer can never ever be NULL.
Additionally, you could encapsulate your checks in an extra free function, if you feel particularly dandy:
#include <cassert>

template <typename T>
T notNULL(T value) {
    assert(value);
    return value;
}
// e.g.
notNULL(notNULL(Env)->Renderer->GetCanvas())->TextStr();
In my experience, chains like that often contain getters that are less than trivial, leading to inefficiencies. I think that (1) is a reasonable approach. Using proxy objects seems like overkill. I would rather see a crash on a NULL pointer than use a proxy object.
Such long chains of delegation should not happen if you follow the Law of Demeter. I've often argued with some of its proponents that they were holding themselves to it too conscientiously, but if you've come to the point of wondering how best to handle long delegation chains, you should probably be a little more compliant with its recommendations.
Interesting question, I think this is open to interpretation, but:
My Two Cents
Design patterns are just reusable solutions to common problems which are generic enough to be widely applied in object oriented (usually) programming. Many common patterns will start you out with interfaces, inheritance chains, and/or containment relationships that will result in you using chaining to call things to some extent. The patterns are not trying to solve a programming issue like this though - chaining is just a side effect of them solving the functional problems at hand. So, I wouldn't really consider it a pattern.
Equally, anti-patterns are approaches that (in my mind) counter-act the purpose of design patterns. For example, design patterns are all about structure and the adaptability of your code. People consider a singleton an anti-pattern because it (often, not always) results in spider-web like code due to the fact that it inherently creates a global, and when you have many, your design deteriorates fast.
So, again, your chaining problem doesn't necessarily indicate good or bad design - it's not related to the functional objectives of patterns or the drawbacks of anti-patterns. Some designs just have a lot of nested objects even when designed well.
What to do about it:
Long delegation chains can definitely be a pain in the butt after a while, and as long as your design dictates that the pointers in those chains won't be reassigned, I think saving a temporary pointer to the point in the chain you're interested in is completely fine (function scope or less preferably).
Personally though, I'm against saving a permanent pointer to a part of the chain as a class member as I've seen that end up in people having 30 pointers to sub objects permanently stored, and you lose all conception of how the objects are laid out in the pattern or architecture you're working with.
One other thought - I'm not sure if I like this or not, but I've seen some people create a private (for your sanity) function that navigates the chain so you can recall that and not deal with issues about whether or not your pointer changes under the covers, or whether or not you have nulls. It can be nice to wrap all that logic up once, put a nice comment at the top of the function stating which part of the chain it gets the pointer from, and then just use the function result directly in your code instead of using your delegation chain each time.
Performance
My last note would be that this wrap-in-function approach, as well as your delegation chain approach, both suffer from performance drawbacks. Saving a temporary pointer lets you avoid the extra two dereferences, potentially many times, if you're using these objects in a loop. Equally, storing the pointer from the function call will avoid the overhead of an extra function call every loop cycle.
For bool Exists = Env->FileSystem->FileExists( "foo.txt" ); I'd rather go for an even more detailed breakdown of your chain, so in my ideal world, there are the following lines of code:
Environment* env = GetEnv();
FileSystem* fs = env->FileSystem;
bool exists = fs->FileExists( "foo.txt" );
and why? Some reasons:
readability: my attention gets lost by the time I read to the end of the line in the case of bool Exists = Env->FileSystem->FileExists( "foo.txt" ); It's just too long for me.
validity: regardless of your assurance that the objects are always there, if your company hires a new programmer tomorrow and he starts writing code, the day after tomorrow the objects might not be there. These long lines are pretty unfriendly; new people might get scared of them and do something interesting such as optimising them... which will take a more experienced programmer extra time to fix.
debugging: if by any chance (and after you have hired the new programmer) the application throws a segmentation fault somewhere in the long chain, it is pretty difficult to find out which object was the guilty one. The more detailed the breakdown, the easier it is to find the location of the bug.
speed: if you need to make lots of calls for the same chain elements, it might be faster to "pull out" a local variable from the chain instead of calling a "proper" getter function for it. I don't know if your code is production or not, but it seems to be missing the "proper" getter functions; instead it seems to access the attributes directly.
Long delegation chains are a bit of a design smell to me.
What a delegation chain tells me is that one piece of code has deep access to an unrelated piece of code, which makes me think of high coupling, which goes against the SOLID design principles.
The main problem I have with this is maintainability. If you're reaching two levels deep, that is two independent pieces of code that could evolve on their own and break under you. This quickly compounds when you have functions inside the chain, because they can contain chains of their own - for example, Renderer->GetCanvas() could be choosing the canvas based on information from another hierarchy of objects and it is difficult to enforce a code path that does not end up reaching deep into objects over the life time of the code base.
The better way would be to create an architecture that obeyed the SOLID principles and used techniques like Dependency Injection and Inversion Of Control to guarantee your objects always have access to what they need to perform their duties. Such an approach also lends itself well to automated and unit testing.
Just my 2 cents.
If it is possible, I would use references instead of pointers, so delegates are guaranteed to return valid objects or throw an exception.
clCanvas & C = Env.Renderer().GetCanvas();
For objects which might not exist, I would provide additional methods such as Has, Is, etc.
if ( Env.HasRenderer() ) clCanvas* C = Env.Renderer().GetCanvas();
If you can guarantee that all the objects exist, I don't really see a problem in what you're doing. As others have mentioned, even if you think that NULL will never happen, it may just happen anyway.
This being said, I see that you use bare pointers everywhere. What I would suggest is that you start using smart pointers instead. When you use the -> operator, a smart pointer will usually throw if the pointer is NULL. So you avoid a SegFault. Not only that, if you use smart pointers, you can keep copies and the objects don't just disappear under your feet. You have to explicitly reset each smart pointer before the pointer goes to NULL.
This being said, it wouldn't prevent the -> operator from throwing once in a while.
Otherwise I would rather use the approach proposed by AProgrammer. If object A needs a pointer to object C pointed to by object B, then the work that object A is doing is probably something that object B should actually be doing. So A can guarantee that it has a pointer to B at all times (because it holds a shared pointer to B, which thus cannot go NULL), and so it can always call a function on B to do action Z on object C. In function Z, B knows whether it always has a pointer to C or not. That's part of B's implementation.
Note that with C++11 you have std::shared_ptr<>, so use it!

Weak reference to a scoped_ptr?

Generally I follow the Google style guide, which I feel aligns nicely with the way I see things. I also, almost exclusively, use boost::scoped_ptr so that only a single manager has ownership of a particular object. I then pass around naked pointers, the idea being that my projects are structured such that the managers of said objects are always destroyed after the objects that use them are destroyed.
http://google-styleguide.googlecode.com/svn/trunk/cppguide.xml#Smart_Pointers
This is all great, however I was just bitten by a nasty little memory stomp bug where the owner just so happened to be deleted before objects that were using it were deleted.
Now, before everyone jumps up and down that I'm a fool for this pattern ("why don't I just use shared_ptr?", etc.), consider the point that I don't want to have undefined owner semantics. Although shared_ptr would have caught this particular case, it sends the wrong message to users of the system. It says, "I don't know who owns this, it could be you!"
What would have helped me would have been a weak pointer to a scoped pointer. In effect, a scoped pointer that has a list of weak references, that are nulled out when the scoped pointer destructs. This would allow single ownership semantics, but give the using objects a chance to catch the issue I ran into.
So at the expense of an extra 'weak_refs' pointer for the scoped_ptr and an extra pointer for the 'next_weak_ptr' in the weak_ptr, it would make a neat little single owner, multiple user structure.
It could maybe even just be a debug feature, so in 'release' the whole system just turns back into a normally sized scoped_ptr and a standard single pointer for the weak reference.
So..... my QUESTIONS after all of this are:
1. Is there such a pointer/pattern already in STL/Boost that I'm missing, or should I just roll my own?
2. Is there a better way that still meets my single ownership goal?
Cheers,
Shane
2. Is there a better way, that still meets my single ownership goal?
Do use a shared_ptr, but as a class member so that it's part of the invariant of that class and the public interface only exposes a way to obtain a weak_ptr.
Of course, pathological code can then retain their own shared_ptr from that weak_ptr for as long as they want. I don't recommend trying to protect against Machiavelli here, only against Murphy (using Sutter's words). On the other hand, if your use case is asynchronous, then the fact that locking a weak_ptr returns a shared_ptr may be a feature!
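A minimal sketch of that suggestion (Manager and Widget are illustrative names):
#include <memory>

class Widget { /* ... */ };

class Manager {
public:
    Manager() : widget_(std::make_shared<Widget>()) {}

    // Users can observe, but the Manager alone controls the lifetime.
    std::weak_ptr<Widget> widget() const { return widget_; }

private:
    std::shared_ptr<Widget> widget_;   // the single owning reference
};

// Usage: lock() yields a shared_ptr while the Widget is alive, or an empty
// pointer once the Manager has released it:
//   if (auto w = mgr.widget().lock()) { /* use *w */ }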
Although shared_ptr would have caught this particular case, it sends the wrong message to users of the system. It says, "I don't know who owns this, it could be you!"
A shared_ptr doesn't mean "I don't know who owns this". It means "We own this." Just because one entity does not have exclusive ownership does not mean that anyone can own it.
The purpose of a shared_ptr is to ensure that the pointer cannot be destroyed until everyone who shares it is in agreement that it ought to be destroyed.
Is there such a pointer/patten already in stl/boost that I'm missing, or should I just roll my own?
You could use a shared_ptr exactly the same way you use a scoped_ptr. Just because it can be shared doesn't mean you have to share it. That would be the easiest way to work; just make single-ownership a convention rather than an API-established rule.
However, if you need a pointer that is single-owner and yet has weak pointers, there isn't one in Boost.
I'm not sure that weak pointers would help that much. Typically, if a component X uses another component Y, X must be informed of Y's demise, not just to nullify the pointer, but perhaps to remove it from a list, or to change its mode of operation so that it no longer needs the object. Many years ago, when I first started C++, there was a flurry of activity trying to find a good generic solution. (The problem was called relationship management back then.) As far as I know, no good generic solution was ever found; at least, every project I've worked on has used a hand-built solution based on the Observer pattern.
I did have a ManagedPtr on my site, when it was still up, which behaved about like what you describe. In practice, except for the particular case which led to it, I never found a real use for it, because notification was always needed. It's not hard to implement, however: the managed object derives from a ManagedObject class, and gets all of the pointers (ManagedPtr, not raw pointers) it hands out from it. The pointer itself is registered with the ManagedObject class, and the destructor of the ManagedObject class visits them all and "disconnects" them by setting the actual pointer to null. And of course, ManagedPtr has an isValid function so that the client code can test before dereferencing. This works well (regardless of how the object is managed; most of my entity objects "own" themselves, and do a delete this in response to some specific input), except that you tend to leak invalid ManagedPtrs (e.g. whenever the client keeps the pointer in a container of some sort, because it may have more than one), and clients still aren't notified if they need to take some action when your object dies.
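A very rough sketch of that ManagedObject/ManagedPtr idea (all names follow the description above; copying of handles, notification and thread safety are omitted):
#include <algorithm>
#include <vector>

class ManagedObject;

class ManagedPtr {
public:
    explicit ManagedPtr(ManagedObject* obj);
    ~ManagedPtr();
    ManagedPtr(const ManagedPtr&) = delete;            // copying omitted to keep the sketch short
    ManagedPtr& operator=(const ManagedPtr&) = delete;

    bool isValid() const { return target_ != nullptr; }
    ManagedObject* get() const { return target_; }

private:
    friend class ManagedObject;
    ManagedObject* target_;
};

class ManagedObject {
public:
    virtual ~ManagedObject() {
        // "Disconnect" every outstanding handle instead of leaving it dangling.
        for (ManagedPtr* p : handles_) p->target_ = nullptr;
    }

private:
    friend class ManagedPtr;
    std::vector<ManagedPtr*> handles_;   // handles currently pointing at us
};

inline ManagedPtr::ManagedPtr(ManagedObject* obj) : target_(obj) {
    if (target_) target_->handles_.push_back(this);
}

inline ManagedPtr::~ManagedPtr() {
    if (target_) {
        auto& v = target_->handles_;
        v.erase(std::remove(v.begin(), v.end(), this), v.end());
    }
}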
If you happen to be using Qt, QPointer is basically a weak pointer to a QObject. It connects itself to the "I just got destroyed" event in the pointed-to value, and invalidates itself automatically as needed. That's a seriously hefty library to pull in for what amounts to bug tracking, though, if you aren't already jumping through Qt's hoops.