Which is more efficient: a Win32 function or a similar CRT function in a VC++ app? - c++

I started Win32 programming for fun because I like complex things and I like programming (this is all Charles Petzold's and Jeffrey Richter's fault for writing such beautiful books), and maybe because I have a thing for performance code.
Now, the real question. I'll use the example of GetEnvironmentVariable() [a Win32 API function] and getenv() [a standard CRT function].
Both of these return the value of an environment variable provided as an argument.
So which one would be more efficient, or in other words, which one has a shorter call stack, or which one is more direct? Think of some function being called a million times.
I believe that one of them maps to the other. Am I right, or am I missing something here?
Summary: when programming against the Win32 API, if there are functions available in both the API and the C/C++ libraries that offer the same functionality, which one should I use?
Thanks.

For most apps, it's unlikely that use of one or other API will be the primary performance concern.
The CRT and the C++ Standard Library are mapped onto Win32 APIs, so using Win32 directly will be slightly more efficient. If you need to write portable C code, though, use the CRT.
In C++, most often, using the Standard Library allows easier production of idiomatically-correct code, and that outweighs any marginal performance gain from going direct to Win32.
getenv is perhaps not a great example because the mapping to Win32 is trivial. Consider instead reproducing <iostream> using the Win32 APIs, and the benefit of a good library becomes clearer.

Stick with the CRT. It maps to the WinAPI, but not necessarily directly. For example, printf() might map to WriteConsole, but with buffering for performance. If GetEnvironmentVariable() doesn't need any code wrapped around it, then getenv() will have the same performance; and if it does (such as buffering), then the CRT will provide it.

Both functions are likely to be similar in performance, probably both ending up reading the values from the registry. But more importantly, there is no reason they should ever become a critical performance issue: the registry is a database, and if you need to use some value from it again and again, you cache it in a variable.

Related

Win32 API functions vs. their CRT counterparts (e.g. CopyMemory vs. memcpy)

In writing Win32 C/C++ code, is there any advantage (e.g. performance?) in using Windows-specific functions like lstrcpyn or CopyMemory instead of the corresponding CRT functions (aside from portability of CRT functions)?
At least some CRT functions use the Win32 functions internally. Also, the CRT requires additional initialization (e.g. thread-specific data for functions like strtok) and cleanup that you might not want to have happen.
You could create a plain Win32 application, without any dependency on anything else including the CRT (much like you could create a plain NT application using NTDLL.DLL - I think smss.exe of Windows is such a process BTW).
That said, I think that for most applications it doesn't matter.
UPDATE: Since people seem to get so hung up on the differences between individual functions, in particular memcpy vs. CopyMemory, I would like to add that not all functions in the CRT are wrappers around those in Win32. Naturally, some can be implemented without any help from Win32 (memcpy is actually a good example of that), while others (sensibly) can't. Something that, I believe, @Merdad hinted at in his answer.
So, portability aside, I don't think performance is the next best argument for or against using the CRT.
You should choose what fits best, and typically that will be the CRT. And there is nothing speaking against using individual Win32 functions (with CRT equivalents) where you see fit.
It depends on the function and your requirements.
For things like memcpy, there isn't any point whatsoever to choosing the Windows-specific versions. Stick with standard C to keep it simple and portable.
For other things like mbstowcs, you might need MultiByteToWideChar instead -- depending on what functionality you need.
Personally I go for the C versions if possible, and only go for Win32 versions afterwards -- because there's really no reason to write Windows-specific code when it could be written portably.

file handling routines on Windows

Is it allowed to mix different file handling functions in one system, e.g.
fopen() from cstdio
open() from fstream
CreateFile from Win API ?
I have a large application with a lot of legacy code, and it seems that all three methods are used within it. What are the potential risks and side effects?
Yes, you can mix all of that together. It all boils down to the CreateFile call in any case.
Of course, you can't pass a file pointer to CloseHandle and expect it to work, nor can you expect a handle opened from CreateFile to work with fclose.
Think of it exactly the same way you think of malloc/free vs new/delete in C++. Perfectly okay to use concurrently so long as you don't mix them.
It is perfectly OK to use all of these file methods, as long as they don't need to interact. The minute you need to pass a file opened with one method into a function that assumes a different method, you'll find that they're incompatible.
As a matter of style I would recommend picking one and sticking to it, but if the code came from multiple sources that may not be possible. It would be a big refactoring effort to change the existing code, without much gain.
Your situation isn't that uncommon.
Code that is designed to be portable is usually written using standard file access routines (fopen, open, etc). Code that is OS-specific is commonly written using that OS's native API. Your large application is most likely a combination of these two types of code. You should have no problem mixing the file access styles in the same program as long as you remember to keep them straight (they are not interchangeable).
The biggest risk involved here is probably portability. If you have legacy code that has been around for a while, it probably uses the standard C/C++ file access methods, especially if it pre-dates the Win32 API. Using the Win32 API is acceptable, but you must realize that you are binding your code to the scope and lifetime of that API. You will have to do extra work to port that code to another platform. You will also have to re-work this code if, say, in the future Microsoft obsoletes the Win32 API in favor of something new. The standard C/C++ methods will always be there, constant and unchanging. If you want to help future-proof your code, stick to standard methods and functions as much as possible. At the same time, there are some things that require the Win32 API and can't be done using standard functions.
If you are working with a mix of C-style, C++-style, and Win32-style code, then I would suggest separating (as best as is reasonably possible) your OS-specific code and your portable code into separate modules with well-defined APIs. If you have to re-write your Win32 code in the future, this can make things easier.

Unmanaged C++: How to dynamically load code?

Our industry is in high-performance distributed parallel computing. We have an unmanaged C++ application being developed using Visual Studio 2008.
Our application (more like a framework) is supposed to be able to dynamically load code (algorithms) developed by 3rd parties (there can be many dlls) that conforms to our interface specification, and calls the loaded code to get some results.
Think of it like you want to call a sin(x) function, but there are many different implementations of sin(x) that you could use.
I have a few questions as I'm very new to this area of dynamically loading code:
Is dll (dynamic-link library) the answer to this type of requirement?
If the 3rd-party used a different type of IDE to create the dll compared to mine (say Eclipse CDT, C++Builder, Visual C++ 6.0, etc), would the dll still work with my application?
Our application is supposed to work cross-platform as well (should be able to run on Linux), would using QLibrary be the most logical way to abstract away all the platform-specific dll loading?
(OPTIONAL) Are there some unforeseen problems that I might encounter?
1) Generally, yes. If the API gets complex and multi-object, I'd use COM or a similar mechanism. But if you have only a little state to manage, or can go completely state-free, a pure DLL interface is fine.
2) Use a suitable calling convention (stdcall) and data types. I would not even assume that the implementation has to be in C++. That means char/wchar_t, explicitly-sized ints (e.g. int32), float and double, and C-style arrays of them.
3) Can't say.
4)
No cross-boundary memory allocations: free what you allocate, and let the plugin free what it allocated.
Your API design has a big influence on achievable performance and implementation effort. Don't make the functions too small, give the implementation some freedom in how it handles certain things, define an error protocol, threading requirements, etc.
[edit]
Also, if you declare structures, look into alignment and compiler options to control this (usually #pragma pack). This is required so clients see the same layout.
The compiler usually "mangles" the names of exported symbols (e.g. adds an underscore for the stdcall convention). Typically, this is controlled through a .def file that is passed to the linker.
Yes, it is. But there are many issues.
May or may not. Firstly, you should be aware of calling conventions. Secondly, since there is no standardized ABI for C++, you should stick to plain C at the interface level (btw, COM technology was all about this -- making a standardized ABI). Thirdly, each plugin may have its own CRT, and so problems may appear; there is a good article about this on MSDN.
Yes, this will help with loading dynamic libraries, but not with the problems in (2). Although, those problems are mostly Windows-specific.
In my experience, QLibrary is the best answer to your questions.
It provides a simple interface and takes care of all the platform-specific details.
However, that means the plugins must also be written using Qt.

C library vs WinApi

Many of the standard c library (fwrite, memset, malloc) functions have direct equivalents in the Windows Api (WriteFile, FillMemory/ ZeroMemory, GlobalAlloc).
Apart from portability issues, what should be used, the CLIB or Windows API functions?
Will the C functions call the Windows Api functions or is it the other way around?
There's nothing magical about the C library. It's just a standardized API for accessing common services from the OS. That means it's implemented on top of the OS, using the APIs provided by the OS.
Use whichever makes sense in your situation. The C library is portable, Win32 isn't. On the other hand, Win32 is often more flexible, and exposes more functionality.
The functions aren't really equivalent with the exception of some simple things like ZeroMemory.
GlobalAlloc, for example, gives you memory, but it was also used for shared memory transfer under Win16. Parts of this functionality still exist.
WriteFile will write not only to files but also to (among other things) named pipes -- something fwrite or write can't directly do.
I'd say use the C library functions if possible, and the Windows functions only if you need the extra functionality or get a performance improvement.
This will make porting to other platforms easier later on.
It's probably more information than you're looking for (and maybe not exactly what you asked), but Catch22.net has an article entitled "Techniques for reducing Executable size" that may help point out the differences between Win32 API calls and C runtime calls.
Will the C functions call the winapi functions or is it the other way around?
The C functions (which are implemented in a user-mode library) call the WINAPI functions (which are implemented in the O/S kernel).
If you're going to port your application across multiple platforms I would say that you should create your own set of wrappers, even if you use the standard C functions. That will give you the best flexibility when switching platforms as your code will be insulated from the underlying system in a nicer way.
Of course that means if you're only going to program for Windows platforms then just use the Windows functions, and don't use the standard C library functions of the same type.
In the end you just need to stay consistent with your choice and you'll be fine.
The C functions call the system API; the Standard C Runtime Library (or CRT) is an API used to standardize among systems.
Internally, each system designs its own API directly using system calls or drivers. If you have a commercial version of Visual C++, it used to provide the CRT source code; this makes interesting reading.
A few additional points on some examples:
FillMemory, ZeroMemory
Neither these nor the C functions are system calls, so either one might be implemented on top of the other, or they could even have different implementations, coming from a common source or not.
GlobalAlloc
Since malloc() is built on top of operating system primitives exposed by its API, it would be interesting to know whether malloc() and direct usage of such allocators coexist happily without problems. I can imagine some reasons why malloc might silently assume that the heap it accesses is contiguous, even if I would call that a design bug, even if it were documented, unless the additional cost of the safety were significant.
Well, I'm currently trying to avoid including fstream, sstream, iostream and many C standard library headers, and using the Win32 APIs instead, because including any of these libraries increases the size of the output executable from about 10 KB to about 500 KB.
But sometimes it's better to use the C standard library to make your code cross-platform.
So I think it depends on your goal.

Windows API spying/hijacking techniques

I'm interested in using API spying/hijacking to implement some core features of a project I'm working on. It's been mentioned in this question as well, but that wasn't really on topic, so I figured it'd be better with a question of its own.
I'd like to gather as much information as possible on this, different techniques/libraries (MS Detours, IAT patching) or other suggestions.
Also, it'd be especially interesting to know if someone has any real production experience of using such techniques -- can they be made stable enough for production code, or is this strictly a technique for research? Does it work properly over multiple versions of Windows? How bug-prone is it?
Personal experiences and external links both appreciated.
I implemented syringe.dll (LGPL) instead of MS Detours (we did not like the license requirements or the huge payment for x64 support). It works fantastically well; I ported it from Win32 to Win64, and we have been using it in our off-the-shelf commercial applications for around 2 years now.
We use it for very simple reasons, really: to provide a presentation framework for re-packing and re-branding the same compiled application as many different products. We do general filtering and replacement of strings, general resources, toolbars, and menus.
Being LGPL'd, we supply the source, copyright etc., and only dynamically link to the library.
Hooking standard WinAPI functions is relatively safe since they're not going to change much in the near future, if at all; Microsoft does its best to keep the WinAPI backwards compatible between versions.
Standard WinAPI hooking, I'd say, is generally stable and safe.
Hooking anything else, as in the target program's internals, is a different story.
Regardless of the target program, the hooking itself is usually a solid practice. The weakest link of the process is usually finding the correct spot, and hanging on to it.
The smallest change in the application can and will change the addresses of functions, not to mention dynamic libraries and so forth.
In game hacking, where hooking is standard practice, this has been defeated to some degree with "sigscanning", a technique first developed by LanceVorgin on the somewhat infamous MPC boards. It works by scanning the executable image for the static parts of a function: the actual instruction bytes that won't change unless the function's action is modified.
Sigscanning is obviously better than using static address tables, but it will also fail eventually, when the target application is changed enough.
An example implementation of sigscanning in C++ can be found here.
I've been using standard IAT hooking techniques for a few years now; it works well, has been nice and stable, and ported to x64 with no problems. The main problems I've had have been more to do with how I inject the hooks in the first place: it took a fair while to work out how best to suspend managed processes at the 'right' point in their start-up so that injection was reliable and early enough. My injector uses the Win32 debug API, and whilst this made it easy to suspend unmanaged processes, it took a bit of trial and error to get managed processes suspended at an appropriate time.
My uses for IAT hooking have mostly been for writing test tools. I have a deadlock detection program which is detailed here: http://www.lenholgate.com/blog/2006/04/deadlock-detection-tool-updates.html, a GetTickCount() controlling program which is available for download here: http://www.lenholgate.com/blog/2006/04/tickshifter-v02.html, and a time-shifting application which is still under development.
Something a lot of people forget is that Windows DLLs are compiled as hot-patchable images (MSDN).
Hot-patching is the best way to do WinAPI detours, as it's clean and simple and preserves the original function, meaning no inline assembly needs to be used, only slightly adjusted function pointers.
A small hot-patching tutorial can be found here.