Smart pointers in Windows programming - C++

Excluding the STL, I have only found CComPtr in C++ Windows programming. Are there any other types of smart pointers in the Windows SDK? Thanks.

First, the STL's and Boost's smart pointers are available on Windows, and there's nothing wrong with using those.
Speaking of purely Windows-specific facilities, COM interface pointers, with their AddRef/Release lifetime-management model, lend themselves readily to smart pointers. There are several smart pointer classes in Windows-specific libraries geared towards storing COM interface pointers. In addition to ATL's CComPtr<>, there is _com_ptr_t<> from Microsoft's Native COM support, and MFC's COleDispatchDriver; the latter is hardly ever used since the advent of Native COM. With the exception of CComPtr, those are used together with type library import facilities.
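As an illustration, here is a minimal sketch of holding a COM interface through CComPtr. The interface and CLSID used (IFileOpenDialog and CLSID_FileOpenDialog, Vista and later) are just familiar examples, and COM is assumed to be initialized already:

    #include <atlbase.h>     // CComPtr
    #include <shobjidl.h>    // IFileOpenDialog, CLSID_FileOpenDialog (Vista+)

    HRESULT ShowOpenDialog()
    {
        // Assumes CoInitializeEx has already been called on this thread.
        CComPtr<IFileOpenDialog> dialog;                        // starts out null
        HRESULT hr = dialog.CoCreateInstance(CLSID_FileOpenDialog);
        if (FAILED(hr))
            return hr;
        return dialog->Show(nullptr);                           // operator-> forwards to the interface
    }                                                           // ~CComPtr calls Release() automatically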

In the Windows SDK (specifically in ATL), there are also CAutoPtr (single-item allocation) and CAutoVectorPtr (array allocation).
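A small sketch of how those two are typically used (note that CAutoPtr, like the old std::auto_ptr, transfers ownership when copied):

    #include <atlbase.h>   // CAutoPtr, CAutoVectorPtr

    void Sketch()
    {
        CAutoPtr<int> single(new int(42));    // freed with delete at end of scope
        CAutoVectorPtr<char> buffer;
        if (buffer.Allocate(256))             // allocated with new[], freed with delete[]
        {
            char* raw = buffer;               // implicit conversion to the raw pointer
            raw[0] = '\0';
        }
    }                                         // both allocations are released here automatically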

The MSDN article states that CComPtr is designed to be used with COM objects only. The Boost smart pointers are commonly used as a platform-independent C++ smart pointer library. Since the concept of smart pointers isn't bound to a particular OS, there's really no need to use a smart pointer implementation tied to Windows, even if Windows is the only platform you plan to develop the application for.

Related

Should I use smart pointers for my application and library?

There is so much literature about smart pointers, and I have read as much as I could. I just want a simple answer.
I have used raw pointers in my 3D renderer engines, and now I have to share some widgets between renderers, so this is where smart pointers come in. Please guide me: should I update my entire software/library to use smart pointers (std::shared_ptr)? What would the cons be in that case? I know the pros; I just want to know whether there are any cons, and what things are important to consider during the transition from raw pointers to smart pointers. Please be specific. Thanks.
Generally, they are a useful tool, but not every task is suited to a single tool. That said, here are a few things you should consider:
Learning about smart pointers is a valuable skill. Knowing them is the basis for applying them where appropriate and ignoring them in the few cases where they are not.
Smart pointers are more than just std::shared_ptr. There is also std::unique_ptr; as a minimum, also look into std::make_shared, std::make_unique (C++14) and std::enable_shared_from_this.
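A minimal illustration of those ownership models:

    #include <memory>

    struct Widget { int id = 0; };

    void sketch()
    {
        auto sole    = std::make_unique<Widget>();   // exactly one owner, no reference count
        auto shared1 = std::make_shared<Widget>();   // shared ownership, one control block
        auto shared2 = shared1;                      // reference count is now 2
    }                                                // everything is destroyed automatically here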
When passing std::shared_ptr as a parameter, make sure you don't add superfluous, synchronized reference-count manipulations; e.g., pass const std::shared_ptr<...>& when the callee doesn't need to take ownership.
Even when the standard smart pointers are not the right tool, know how they work and create your own RAII wrappers; that is still better than managing error-prone raw pointers by hand.
When interfacing with existing C-style APIs, you can often still use smart pointers and hand the raw pointer down to those APIs only where needed, using .get() on the smart pointer.
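A small sketch tying the previous two points together; the Widget type and the C-style draw function are made up for illustration:

    #include <cstdio>
    #include <memory>

    struct Widget { int id = 0; };

    // Hypothetical C-style API that only understands raw pointers.
    void c_api_draw(const Widget* w) { std::printf("drawing %d\n", w ? w->id : -1); }

    // Taking const std::shared_ptr& avoids copying the pointer and therefore
    // avoids touching the (synchronized) reference count on every call.
    void render(const std::shared_ptr<Widget>& w)
    {
        c_api_draw(w.get());    // hand down the raw pointer; ownership stays with the caller
    }

    int main()
    {
        auto w = std::make_shared<Widget>();
        render(w);
    }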
There is much to say about the pros of smart pointers, and as others have already said, I would recommend using them, where appropriate, in your applications.
When it comes to the API of libraries, I would say: it depends. If you are distributing your library in source form, then yes, you might use smart pointers in your API and your users might benefit from them. If, however, you want to distribute your library as a DLL, smart pointers are not the right tool for its public interface. That's because they are class templates, and templates are instantiated in the client's code rather than shipped inside the DLL. The users of your library would be forced to use the exact same compiler and standard library implementation that you used to build the DLL, and that might not be what you want. Hence, for the APIs of DLLs, I would think twice before using smart pointers.
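To make that concrete, here is a sketch with hypothetical names of a DLL exporting a plain C surface while the client still gets smart-pointer convenience on its own side of the boundary:

    #include <memory>

    // Hypothetical C-style DLL surface: only plain functions and raw pointers
    // cross the module boundary (stubbed out inline here so the sketch compiles).
    struct Renderer { /* opaque to real clients */ };
    extern "C" Renderer* renderer_create()              { return new Renderer; }
    extern "C" void      renderer_destroy(Renderer* r)  { delete r; }

    // Client-side convenience: a smart pointer that wraps the C API but never
    // appears in the exported interface itself.
    struct RendererDeleter {
        void operator()(Renderer* r) const { renderer_destroy(r); }
    };
    using RendererPtr = std::unique_ptr<Renderer, RendererDeleter>;

    int main()
    {
        RendererPtr renderer(renderer_create());
    }   // renderer_destroy is called automatically here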

Only plain pointers used in Qt API

I've been working with Qt for some days now, and I wonder why all of their APIs use plain pointers instead of their own smart pointers, such as QSharedPointer.
Wouldn't it be more consistent to use them?
QSharedPointer has been available since Qt 4.5. In Qt, QObjects organize themselves into object trees. When you create a QObject with another object as its parent, the former is added to the latter's children list and destroyed in the latter's destructor. So you do not need to use QSharedPointer, with its overhead.
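To make the parent-child model concrete, a minimal sketch:

    #include <QObject>

    void sketch()
    {
        QObject* parent = new QObject;           // owns its children
        QObject* child  = new QObject(parent);   // added to parent's children list
        Q_UNUSED(child);
        delete parent;                           // child is deleted here as well, no smart pointer needed
    }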
Why should QSharedPointer be used when in Qt APIs object ownership is usually exclusive to one object? There is no need for sharing.
A more appropriate question would be why Qt uses raw pointers instead of smart pointers (be they Qt's or C++11's), and the reason for this is simple: those are newer features, and even though Qt 5 was released after C++11 (and employs it internally), rewriting everything to use smart pointers would not only be tedious, it would also break backward compatibility of user code.
Overall, Qt's APIs seem to be somewhat lacking and incoherent in this regard. For example, it is a major inconvenience that Qt's smart pointers are not supported in QtQuick, which uses its own private smart pointer implementation, so you must have ownership managed either by the QML engine or by C++; you cannot really share it across the two.

Are shared_ptr and the reference counting in iOS the same idea?

I'm not very experienced with either C++ or iOS, so I'm just curious whether reference counting works basically the same way in Boost shared pointers and in NSObject.
From what I gather here, using ARC is very similar to using std::shared_ptr ("strong" pointers) and std::weak_ptr ("weak" pointers).
Use the former freely, and avoid the latter where you can. In any case, prefer std::unique_ptr if possible.
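Roughly, the strong/weak mapping looks like this (a sketch using the standard library, not Apple's implementation):

    #include <iostream>
    #include <memory>

    int main()
    {
        std::shared_ptr<int> strong = std::make_shared<int>(7);  // "strong" reference, keeps the object alive
        std::weak_ptr<int>   weak   = strong;                    // "weak" reference, does not

        if (auto locked = weak.lock())        // like reading a weak reference under ARC:
            std::cout << *locked << '\n';     // you either get a strong reference or nothing

        strong.reset();                       // last strong reference gone, the object is destroyed
        std::cout << std::boolalpha << weak.expired() << '\n';   // prints true
    }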
(Also, I am somewhat astonished that you had to release pointers manually when programming for iOS. In the 21st century.)
I'm not very experienced with C++, so I may not be completely correct about shared_ptr, but to me they don't seem alike. In Obj-C there are two options. Manual memory management: you manually increment and decrement the reference counts of your objects; no magic occurs here. And the new ARC, which is mostly a compile-time feature, while shared_ptr is purely a runtime mechanism.

Is it possible to introduce Automatic Reference Counting (ARC) to C++?

Objective-C has introduced a technology called ARC to free the developer from the burden of memory management. It sounds great; I think C++ developers would be very happy if g++ also had this feature.
ARC allows you to put the burden of memory management on the (Apple LLVM 3.0) compiler, and never think about retain, release and autorelease ever again.
So, if LLVM 3.0 can do that, I think g++ could also free C++ developers from the tough job of memory management, right?
Are there any difficulties in introducing ARC to C++?
What I mean is: if we don't use smart pointers and just use new/new[], is it possible for a compiler to do something for us to prevent memory leaks? For example, could it change the new into a smart pointer automatically?
C++ has the concept of Resource Acquisition Is Initialization (RAII), and intelligent use of this technique saves you from explicit resource management.
C++ already provides std::shared_ptr, which implements reference counting.
Also, there is a host of other smart pointers which employ RAII to make your life easier in C++.
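For example, a tiny RAII wrapper around a C resource (here a FILE*, closed by a custom deleter) already removes the need for any explicit cleanup:

    #include <cstdio>
    #include <memory>

    struct FileCloser { void operator()(std::FILE* f) const { std::fclose(f); } };

    int main()
    {
        std::unique_ptr<std::FILE, FileCloser> file(std::fopen("log.txt", "w"));
        if (file)
            std::fputs("hello\n", file.get());
    }   // fclose runs here automatically, even if an exception were thrown above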
This is a good question. ARC is much more than just an implementation of smart pointers. It is also different from garbage collection, in that it does give you full control over memory management.
With ARC you know exactly when objects are going to be released. The reason people think this isn't true is that there's no explicit "release" call that you write, but you know when the compiler will insert one. And it isn't in some garbage-collection step; it happens inline, when objects are considered no longer needed.
ARC includes a compiler pass that analyzes the code and tries to remove redundant sequences of incrementing and decrementing reference counts. This could probably be achieved by an optimizing C++ compiler, given a smart pointer implementation that its optimizer could see through.
ARC also relies on the semantics of Objective-C. Firstly, pointers are annotated to say whether they are strong or weak. This could also be done in C++, simply by having two different pointer classes (or using smart and vanilla pointers). Secondly, it relies on naming conventions for Objective-C methods to know whether their return values should be implicitly weak or strong, which means it can work alongside non-ARC code (ARC needs to know whether your non-ARC code intended to return an object with a +1 reference count, for example). If your "C ARC" didn't have to sit alongside non-"C ARC" code, you wouldn't need this.
The last thing ARC gives you is really good compile-time analysis of your code, pointing out where it thinks leaks may be. This would be difficult to add to C++ code, but could be added to the C++ compiler.
There's no need. We have shared pointers that do this for us. In fact, we have a range of pointer types for a variety of different circumstances, but shared pointers mimic exactly what ARC is doing.
See:
std::shared_ptr<>
boost::shared_ptr<>
Recently I wrote some Objective-C++ code using Clang and was surprised to find that Objective-C pointers were actually handled as non-POD types in C++ that I could use in my C++ classes without issues.
They were actually freed automatically in my destructors!
I used this to store weak references in std::vectors because I couldn't think of a way to hold an NSArray of weak references.
Anyway, it seems to me that Clang implements ARC in Objective-C by emulating C++ RAII and smart pointers. When you think about it, every NSObject* under ARC is essentially a smart pointer (akin to Boost's intrusive_ptr) in C++.
The only difference I can see between ARC and smart pointers is that ARC is built into the language. They have the same semantics besides that.
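A sketch of that analogy using boost::intrusive_ptr: the reference count lives inside the object itself, much like an NSObject's retain count (the Node type is made up for illustration):

    #include <atomic>
    #include <boost/intrusive_ptr.hpp>

    struct Node {
        std::atomic<int> refs{0};
    };
    // boost::intrusive_ptr finds these two free functions via ADL.
    void intrusive_ptr_add_ref(Node* n) { n->refs.fetch_add(1, std::memory_order_relaxed); }
    void intrusive_ptr_release(Node* n)
    {
        if (n->refs.fetch_sub(1, std::memory_order_acq_rel) == 1)
            delete n;
    }

    int main()
    {
        boost::intrusive_ptr<Node> a(new Node);   // count: 1
        boost::intrusive_ptr<Node> b = a;         // count: 2
    }                                             // count drops to 0, the Node is deleted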
One of the reasons C++ is used at all is full control over memory management. If you don't want that in a particular situation there are smart pointers to do the managing for you.
Managed-memory solutions exist, but in the situations where C++ is rightfully chosen (large-scale applications), they are not a viable option.
What's the advantage of using ARC rather than full garbage collection? There was a concrete proposal for garbage collection before the committee; in the end, it wasn't handled because of lack of time, but there seems to be a majority of the committee (if not truly a consensus) in favor of adding garbage collection to C++.
Globally, reference counting is a poor substitute for true garbage collection: it's expensive in terms of run time, and it needs special code to handle cycles. It's applicable in specific limited cases, however, and C++ offers it via std::shared_ptr, at the request of the programmer, when he knows it's applicable.
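The "special code to handle cycles" usually means breaking the back edge with std::weak_ptr; a minimal sketch:

    #include <memory>

    struct Child;
    struct Parent {
        std::shared_ptr<Child> child;     // owning edge
    };
    struct Child {
        std::weak_ptr<Parent> parent;     // back edge is weak, so no ownership cycle
    };

    int main()
    {
        auto p   = std::make_shared<Parent>();
        p->child = std::make_shared<Child>();
        p->child->parent = p;             // with a shared_ptr here, neither object would ever be freed
    }                                     // both objects are destroyed here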
Take a look at Qt. Qt has implemented this kind of feature by leveraging its object hierarchy: you can new an object and assign a parent to it, and Qt will manage the memory for you.
There are already some implementations of similar technologies for C++, e.g., the Boehm-Demers-Weiser garbage collector.
C++11 also defines minimal support for garbage collection (an API including std::declare_reachable and std::get_pointer_safety) for anyone wishing to plug in their own collector.
In the vast majority of cases, techniques like smart pointers can do the job of painless memory management for C++ developers.
Microsoft's C++/CX has ARC for ref classes. Embarcadero has two C++ compilers, one of which has ARC.

Why aren't Win32 applications very object-oriented, and why are there so many pointers?

This might be a dumb question to some of you, and maybe I have asked it the wrong way, because I am new to C++. But I notice that when working in a lot of Win32 applications, you use a lot of resources that are pointers. Why do you always have to acquire an object's pointer? Why not instantiate a new instance of the class? Speaking of that, I notice that in most cases you never instantiate new objects, but always call methods that return a pointer. What if that pointer is being used somewhere else? Couldn't you mess something up if you alter that pointer while it is being used somewhere else?
The Windows APIs were designed for C, which was and still is the most widely used language for system programming; C APIs are the de facto standard for system APIs, and for this reason almost all other languages have had, and still have, some way to call external C functions, so exposing a C API helps keep the system compatible with other languages.
C APIs need only a simple ABI, consisting of little more than the calling convention to use for functions (and something about structure layout). C++ and other object-oriented languages, on the contrary, require a complex ABI that must define how objects are laid out in memory, how inheritance is handled, how the vtable is laid out, how exceptions propagate, where RTTI data goes, and so on. Moreover, not all languages are object-oriented, and using APIs designed for C++ from non-object-oriented languages can be a real pain (if you have ever used COM from C, you know what I mean).
As an aside, when Windows was initially designed, C++ wasn't so widespread on PCs, and even C wasn't used that much: in fact, a large part of Windows 3.11 and many applications were still written in assembly, since the memory and CPU constraints of the era were very tight; compilers were also less smart than they are now, especially the C++ ones. On machines where hand-written assembly was often the only workable solution, the overhead of C++ was simply unacceptable.
As for the pointers: the Windows APIs almost always use handles, i.e. opaque pointers, to be able to change the underlying nature of every resource without affecting existing applications, and to stop applications from messing around with internal structures. It doesn't matter if the structure the window manager uses to represent a window internally changes: all applications simply use an HWND, which is always the size of a pointer. You may think of this as a kind of PIMPL idiom.
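Here is a sketch of the handle pattern itself, with made-up names: the caller only ever sees an opaque handle, so the internal structure can change between versions without breaking anything:

    // Hypothetical "system side" of a handle-based API, in the spirit of HWND/HDC.
    typedef struct WidgetImpl* HWIDGET;        // opaque handle: just a pointer-sized token

    struct WidgetImpl { int width, height; };  // layout known only to the system side

    extern "C" HWIDGET CreateWidget(void)                       { return new WidgetImpl{0, 0}; }
    extern "C" void    ResizeWidget(HWIDGET w, int cx, int cy)  { w->width = cx; w->height = cy; }
    extern "C" void    DestroyWidget(HWIDGET w)                 { delete w; }

    // Application side: works only with the handle, never with WidgetImpl.
    int main()
    {
        HWIDGET w = CreateWidget();
        ResizeWidget(w, 640, 480);
        DestroyWidget(w);
    }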
That said, Windows is in some ways object-oriented (see, for example, the whole "window class" concept or, at a deeper level, the inner workings of the NT kernel, which are heavily based on the "object" concept), but its most basic APIs, being simple C functions, somewhat hide this OO nature. The shell, on the other hand, having been designed many years later, is written mostly in C++ and provides a genuinely object-oriented COM interface.
Interestingly, in COM you can see all the tradeoffs you must face when building a cross-language but still C++-biased object-oriented interface: the result is quite complicated, in some respects ugly, and not really simple to use from any language. The Windows APIs, instead, being simple functions, are generally easier to call.
If you are interested in a system based on C++ APIs, you may have a look at Haiku; personally, this is one of the aspects that make that project quite interesting to me.
By the way, if you are going to do Win32 programming just with the APIs, you'd better get a good book to get used to these "particularities" and to other Win32 idioms. Two well-known ones are the Rector-Newcomer and the Petzold.
Because the Win32 APIs are written in plain C, not C++. That way, a program in almost any language can call those APIs.
Plus, there is no simple mechanism for using objects across different modules and different languages; e.g., you can't export a C++ class to Python. Of course, there are technologies like OLE/COM, but they are still built on plain C, and they are a bit complicated to use.
On the other hand, calls to plain C functions are standardized, so you can call routines from a DLL or static lib from any language.
Win32 was designed to work with the C language, not C++.
That's why you will see return types using the typedef'd BOOL instead of bool, for example.
bool is specific to C++ and doesn't exist in C.
For Microsoft's object-oriented wrapper of Win32, see MFC.
A newer framework from Microsoft since then is the .Net Framework.
The .Net framework is based on managed code though and does not run natively. The most modern way to do GUI programming on Windows is WPF or even Silverlight.
The most modern way to do unmanaged GUI programming is still using MFC, although some people still prefer using straight Win32.
Note that working with pointers is not specific to C; it is still very common in C++.
The first reason is that passing pointers around is cheap: a pointer is 4 bytes on x86 and 8 bytes on x64, while the structure or class it points to can occupy far more memory. Instantiating a new copy of an object for every call would mean reserving new memory again and again, which isn't efficient in terms of either speed or memory consumption.
Another way is to pass references to objects, or smart pointers, or similar constructs. But the Win32 API was designed in the C era, so this is where it stands now ;)
As for potentially messing things up with pointers: it's possible, of course, but most of the time their lifetime is explicitly stated in the API (when it isn't obvious).
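A typical instance of that style: you pass a pointer to a structure you own, Windows fills it in through that pointer, and nothing larger than a pointer crosses the call (GetWindowRect and GetDesktopWindow are real Win32 functions, used here only as an example):

    #include <windows.h>
    #include <cstdio>

    int main()
    {
        RECT rc;                                        // caller-owned structure
        if (GetWindowRect(GetDesktopWindow(), &rc))     // Windows writes through the pointer
            std::printf("desktop: %ldx%ld\n", rc.right - rc.left, rc.bottom - rc.top);
    }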
Probably because the Win32 API is "older" than mainstream object-oriented programming; it's not a C++ API at its core.
It sounds like you should try one of the many OO wrappers, like MFC or .NET.
The Windows API is plain old C, hence the use of pointers everywhere. Also, the reason you ask Windows for a pointer is that Windows needs to keep track of all objects: it allocates things and hands you a pointer (or sometimes just a numeric ID) to let you work with them.
Having C functions as the API allows both C and C++ programmers to use it.
The Windows APIs go way back; C was the dominant language back then.
HWND, HANDLE, and HDC are all a modest attempt at object-like data types (built on structs). The C FAQ has a question on this: http://c-faq.com/struct/oop.html.
To understand pointers, you might want to read the cplusplus.com tutorial on pointers.