Obtain version of Chilkat

In order to obtain the current version of the Chilkat library I'm using, I need to use the CkGlobal object.
Do I need to call the UnlockComponent method first in order to use it?

Most Chilkat classes have a Version property, including CkGlobal: https://chilkatsoft.com/refdoc/vcCkGlobalRef.html#prop23
You can always access properties before the lib is unlocked. This is because in many development environments, the property values may be shown at design time.
In any case, you should unlock once at the start of your program by calling UnlockBundle. (The individual UnlockComponent methods that exist in older Chilkat classes are deprecated and don't need to be called if CkGlobal.UnlockBundle is called once.) See the links to "Unlocking Chilkat" at https://www.chilkatsoft.com/readme.asp


How does CRYPTO_num_locks() return the required number of locks?

https://linux.die.net/man/3/crypto_num_locks says that CRYPTO_num_locks() returns the required number of locks. Previously I used CRYPTO_NUM_LOCKS, a macro with the value 41, to construct a mutex array, up to OpenSSL 1.0.2s. Now (OpenSSL 1.1.1d) they have introduced
# define CRYPTO_num_locks() (1)
So, from my understanding, the macro now has the value 1, which means I surely can't use it to size a runtime array.
I could change the value of the macro inside crypto.h, but I wanted to know why OpenSSL changed the value returned for the number of locks. I went through their GitHub repository https://github.com/openssl/openssl and the changelog https://www.openssl.org/news/changelog.html#x13.
These made the switch to a function-style macro somewhat clearer, but I still can't answer two questions:
Why is the value of CRYPTO_num_locks() set to only (1)? And would it be safe for me to change it to, say, 41, or something else?
If this was supposed to remain a macro anyway, what was the point of removing the earlier macro (CRYPTO_NUM_LOCKS)?
It is useful to look at the 1.1.1d definition of CRYPTO_num_locks in include/openssl/crypto.h and its vicinity:
* On the other hand, the locking callbacks are no longer used. Consequently,
* the callback management functions can be safely replaced with no-op macros.
*/
# define CRYPTO_num_locks() (1)
# define CRYPTO_set_locking_callback(func)
# define CRYPTO_get_locking_callback() (NULL)
# define CRYPTO_set_add_lock_callback(func)
# define CRYPTO_get_add_lock_callback() (NULL)
So whatever your code does with that macro and the ones related to it, it is no longer relevant.
Looking even further up in that file, it turns out that this is all conditionally defined under the value of a macro called OPENSSL_API_COMPAT:
# if OPENSSL_API_COMPAT < 0x10100000L
This macro is intended to indicate whether application code is allowed to use older OpenSSL constructs. Instead of changing any OpenSSL header files, it would be better to set this macro to 0x10100000L (or even higher) when compiling the application and then work through any constructs that are no longer available. That would ensure that the code no longer uses any deprecated 1.0.2 constructs.
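A tiny sketch of how that guard behaves when compiling application code (the value is hard-coded here for illustration; in a real build you would pass -DOPENSSL_API_COMPAT=0x10100000L on the compiler command line, and HAVE_LEGACY_LOCKING is a made-up marker, not an OpenSSL macro):

```cpp
#include <cassert>

#define OPENSSL_API_COMPAT 0x10100000L

// Mirror of the guard in include/openssl/crypto.h:
#if OPENSSL_API_COMPAT < 0x10100000L
// Pre-1.1.0 API level: the no-op compatibility macros are visible.
# define CRYPTO_num_locks() (1)
# define HAVE_LEGACY_LOCKING 1
#else
// 1.1.0+ API level: the legacy names are simply not defined, so any
// remaining use of CRYPTO_num_locks() fails to compile, which is how
// you find the deprecated constructs to remove.
# define HAVE_LEGACY_LOCKING 0
#endif

int main() {
    assert(HAVE_LEGACY_LOCKING == 0);
    return 0;
}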
For a very brief confirmation of what happened to the OpenSSL thread-safety model between 1.0.2 and 1.1.0, see the answer to the OpenSSL FAQ question Is OpenSSL thread-safe?:
Yes but with some limitations; for example, an SSL connection cannot
be used concurrently by multiple threads. This is true for most
OpenSSL objects.
For version 1.1.0 and later, there is nothing further
you need do.
For earlier versions than 1.1.0, it is necessary for your
application to set up the thread callback functions. To do this, your
application must call CRYPTO_set_locking_callback(3) and one of the
CRYPTO_THREADID_set... API's. See the OpenSSL threads manpage for
details and "note on multi-threading" in the INSTALL file in the
source distribution.

Check if a DLL is signed (C++)

I am trying to check whether a DLL is signed, given its file path. I see that there are pre-existing solutions for this type of problem using WinVerifyTrust; however, when I tried it against "C:\Windows\System32\kernel32.dll" it said: "The file "C:\Windows\System32\kernel32.dll" is not signed.", although kernel32 should be a signed DLL. I am on Windows 7, FYI.
This is the source code to the function I called: https://msdn.microsoft.com/en-us/library/windows/desktop/aa382384(v=vs.85).aspx
How can I fix the function?
Yes, WinVerifyTrust is the correct function to use, but you have to be prepared to call it twice.
First call it with WTD_CHOICE_FILE; if that succeeds, you are done. If not, you must call it again with WTD_CHOICE_CATALOG (CryptCATAdminCalcHashFromFileHandle + CryptCATAdminEnumCatalogFromHash + CryptCATCatalogInfoFromContext), because some Windows files, especially non-PE files, do not embed the certificate information. (You can also try to find the catalog info first to avoid calling it twice, but I assume this is slower.)
There are various threads (this and this) on the Sysinternals forum, which is perhaps the best resource for questions related to this.

How to deprecate a function when the return type changes (C++)

What strategies are there for deprecating functions when their return type needs to change? For example, I have:
BadObject foo(int); // Old function: BadObject is being removed.
Object foo(int); // New function.
Object and BadObject are very different internally, and swapping their return types will break code for current users of my library. I'm aiming to avoid that.
I can mark BadObject foo(int) deprecated, and give users time to change affected code.
However, I can't overload foo based on return type. foo is very well named, and it doesn't need to take extra parameters. How can I add the new function to my library whilst maintaining the old version, at least for a while?
What's the strategy for deprecating the old function without breaking too much dependent code, while giving users time to migrate to the new version? Ideally I'd keep the current function name and parameter list, because it's named quite well now. This feels like it should be a reasonably common problem: what's a decent way to solve it?
This solution will force you to change your function names, but it's a compromise between your old users and your new ones.
So: rename the old foo to deprecatedFoo and your new foo to foo2 (or anything you want). Then, in the header file you ship with your library, you can simply:
#define foo deprecatedFoo
and inside the function itself do:
#warning ("This function is deprecated. Use 'foo2' or change the #define in LINE in file HEADER.")
Users of the old version won't have to change their code and will be issued a warning; new users will probably listen and change the #define (or call foo2 directly) in order to use the new implementation.
In the next version you'll just delete the old foo and the define.
I think a classic example is Boost's Spirit.
From their FAQ:
While introducing Spirit V2 we restructured the directory structure in
order to accommodate two versions at the same time. All of
Spirit.Classic now lives in the directory
boost/spirit/home/classic
where the directories above contain forwarding headers to the new
location allowing to maintain application compatibility. The
forwarding headers issue a warning (starting with Boost V1.38) telling
the user to change their include paths. Please expect the above
directories/forwarding headers to go away soon.
This explains the need for the directory
boost/spirit/include
which contains forwarding headers as well. But this time the headers
won't go away. We encourage application writers to use only the
includes contained in this directory. This allows us to restructure
the directories underneath if needed without worrying application
compatibility. Please use those files in your application only. If it
turns out that some forwarding file is missing, please report this as
a bug.
You can ease migration by keeping the new and old versions in separate directories and using forwarding headers to maintain compatibility. Users will eventually be forced to use the new headers.
SDL 2.0 has a different approach. They don't provide a compatibility layer but instead a migration guide walking the users through the most dramatic changes. In this case, you can help users understand how they need to restructure their code.
What if you make your Object class inherit from BadObject (which you keep temporarily)? Then old user code won't know about the change, so it won't break, provided that your new "foo" function still returns the objects correctly.

How can I pass C++ objects to DLLs with a different _ITERATOR_DEBUG_LEVEL

My executable makes calls to a number of DLLs that I wrote myself. Because of 3rd-party C++ libs used by these DLLs, I cannot freely choose the compiler settings for all of them. Therefore in some DLLs _ITERATOR_DEBUG_LEVEL is set to 2 (the default in debug builds), but in my executable _ITERATOR_DEBUG_LEVEL is set to 0, because of serious performance problems.
When I now pass a std::string to the DLL, the application crashes as soon as the DLL tries to copy it to a local std::string object, because the memory layout of the string object in the DLL is different from that in my executable. So far I work around this by passing C strings. I even wrote a small class that converts a std::map<std::string, int> to and from a temporary C-data representation in order to pass such data to and from the DLL. This works.
How can I overcome this problem? I want to pass more kinds of classes and containers, and for several reasons I do not want to work with _ITERATOR_DEBUG_LEVEL = 2.
The problem is that std::string and the other containers are template classes. They are instantiated at compile time in each binary, and in your case they are instantiated differently. You could say they are not the same types.
To fix this you have several solutions, but they all follow the same rule of thumb: don't expose any template code in the headers shared between binaries.
You could create specific interfaces just for this purpose, or simply make sure your headers don't expose template types and functions. You can still use template types inside your binaries; just don't expose them to other binaries.
std::string in interfaces can be replaced by const char *. You can still use std::string internally; just ask for const char * in interfaces and use std::string::c_str() to expose your data.
For maps and other containers you'll have to provide functions that allow external code to manipulate the internal container, like "Find( const char* key );".
The main problem will be template members of your exposed classes. A way to fix this is the PImpl idiom: create (or generate) an API, in headers, that only exposes what can be done with your binaries, and make sure that API holds pointers to the real objects inside your binaries. The API is what gets used outside, but inside your library you can code with whatever you want. The DirectX API and other OS APIs are built that way.
It is not recommended to use a C++ interface with complex types (STL, ...) with 3rd-party libs if you get them only in binary form or if they need special compile settings that differ from yours.
As you wrote, the implementation can differ with the same compiler depending on your settings, and with different compilers the situation gets even worse.
If possible, compile the 3rd-party lib with your compiler and your settings.
If that's not possible, you may write a wrapper DLL that is compiled with the same compiler and settings as the 3rd-party lib and provides a C-data interface for you. In your project you can then write another wrapper class so that you make your function calls with STL objects and have them converted and transferred in the background.
My own experience with flags like _SECURE_SCL and _ITERATOR_DEBUG_LEVEL is that they must be consistent if you attempt to pass an STL object across DLL boundaries.
However, I think you may be able to pass an STL object to a DLL that has a smaller _ITERATOR_DEBUG_LEVEL, since you can probably give an STL object instantiated in a debug DLL to a DLL compiled in release mode.
EDIT 07/04/2011
Apparently Visual Studio 2010 provides some niceties to detect mismatches in _ITERATOR_DEBUG_LEVEL. I have not watched the video yet.
http://blogs.msdn.com/b/vcblog/archive/2011/04/05/10150198.aspx

How to mimic the "multiple instances of global variables within the application" behaviour of a static library but using a DLL?

We have an application written in C/C++ which is broken into a single EXE and multiple DLLs. Each of these DLLs makes use of the same static library (utilities.lib).
Any global variable in the utility static library will actually have multiple instances at runtime within the application. There will be one copy of the global variable per module (ie DLL or EXE) that utilities.lib has been linked into.
(This is all known and good, but it's worth going over some background on how static libraries behave in the context of DLLs.)
Now my question: we want to change utilities.lib so that it becomes a DLL. It is becoming very large and complex, and we wish to distribute it in DLL form instead of .lib form. The problem is that, for this one application, we wish to preserve the current behaviour in which each application DLL has its own copy of the global variables within the utilities library. How would you go about doing this? Actually we don't need this for all the global variables, only some; but it wouldn't matter if we got it for all.
Our thoughts:
There aren't many global variables within the library that we care about; we could wrap each of them with an accessor that does some funky trick of figuring out which DLL is calling it. Presumably we can walk up the call stack and fish out the HMODULE for each return address until we find one that doesn't belong to utilities.dll. Then we could return a different version depending on the calling DLL.
We could mandate that callers set a particular global variable (maybe also thread local) prior to calling any function in utilities.dll. The utilities DLL could then use this global variable value to determine the calling context.
We could find some way of loading utilities.dll multiple times at runtime. Perhaps we'd need to make multiple renamed copies at build time, so that each application DLL can have its own copy of the utilities DLL. This negates some of the advantages of using a DLL in the first place, but there are other applications for which this "static library" style behaviour isn't needed and which would still benefit from utilities.lib becoming utilities.dll.
You are probably best off simply having utilities.dll export additional functions to allocate and deallocate a structure that contains the variables, and then having each of your worker DLLs call those functions at runtime when needed, such as during the DLL_PROCESS_ATTACH and DLL_PROCESS_DETACH stages of DllMain(). That way, each DLL gets its own local copy of the variables and can pass the structure back to utilities.dll functions as an additional parameter.
The alternative is to simply declare the individual variables locally inside each worker DLL directly, and then pass them into utilities.dll as input/output parameters when needed.
Either way, do not have utilities.dll try to figure out context information on its own. It won't work very well.
If I were doing this, I'd factor out all stateful global variables - I would export a COM object or a simple C++ class that contains all the necessary state, and each DLL export would become a method on your class.
Answers to your specific questions:
You can't reliably do a stack trace like that: due to optimizations like tail-call optimization or FPO, you cannot determine who called you in all cases. You'll find that your program works in debug, works mostly in release, but crashes occasionally.
I think you'll find this difficult to manage, and it also puts a demand that your library can't be reentrant with other modules in your process - for instance, if you support callbacks or events into other modules.
This is possible, but it completely negates the point of using DLLs. Rather than renaming, you could copy the DLL into distinct directories and load it via its full path.