Logging of ATL class objects - C++

I have a pretty large DLL, developed in C++ using the Microsoft Active Template Library (ATL).
I'm trying to gather test data from this library at runtime, so I can build some good unit tests for it later. Since some of the classes are ridiculously big, it would be very tedious to manually add logging to every member function and to log the state of hundreds of member variables after each call.
Of course, I don't need to log absolutely everything. I could just log the state after the most frequently used member functions have run. However, this library needs to be ported to a more modern stack in the future, so it would be good to have solid unit tests.
Is there a way to dump whole ATL objects from memory to a file that can later be easily analyzed? Are there any tools or libraries available for this kind of task?
Are there even ATL classes that could help me with this?

Related

How to add boost.asio to a Windows Universal app project?

How can I add boost.asio to the shared components of a Windows Universal project?
Do I need to create a separate project and include the header files there, or is there a simpler way?
Thanks!
While I can't get into the specifics of Universal Apps too much (I'm not an authority on that subject), I can tell you this: boost::asio is a header-only library. That means that by simply including the headers in your C++ project, the code is merged directly into your main assembly. I highly recommend using it that way.
If you're going to include this header-only library in another DLL that you then include in your main app, things are going to get messy. First, you have the headache of building binaries for each target (x86, x64 and ARM) and maintaining those dependencies, but beyond that, the real headache is what you have to go through to make boost::asio function when it is loaded from a shared assembly.
In order to do this, you need to define in your code a special static object of the winsock_init type that lives inside ::asio. ::asio uses an internal, static, customized reference counter, based on interlocked exchanges, to track its own usage. When the counter first rises above zero, calls to things such as WSAStartup() are made to ensure that the library plays nice with Winsock. When the counter reaches zero again, WSACleanup() is called for the same reason.
The winsock_init structure circumvents this functionality, so it's up to you to call these functions correctly, and manually, from within your shared assembly; otherwise you're going to completely break Asio, AND your application will fail compliance testing for app store deployment.
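The pattern being described looks roughly like this (a minimal sketch; winsock_init lives in Asio's detail namespace, which is not a stable public API, so verify this against your Boost version):

    // In exactly one translation unit of the DLL: suppress Asio's automatic
    // WSAStartup()/WSACleanup() reference counting. After this, the DLL is
    // responsible for initializing and tearing down Winsock itself.
    #include <boost/asio/detail/winsock_init.hpp>

    boost::asio::detail::winsock_init<>::manual manual_winsock_init;

    // Somewhere in the DLL's own initialization path:
    //   WSADATA wsa_data;
    //   ::WSAStartup(MAKEWORD(2, 2), &wsa_data);
    // and in its shutdown path:
    //   ::WSACleanup();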
Also, whenever you wrap ::asio into a shared assembly, you need to include a special source file exactly once within the DLL, and then you need to define a handful of special Boost config macros both in the DLL project and in any project that uses this ::asio DLL.
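For reference, the "special source file" and config macros look roughly like this in recent Boost releases (check the Boost.Asio documentation for your exact version):

    // asio_impl.cpp -- compiled exactly once, inside the DLL project.
    // BOOST_ASIO_DYN_LINK (or BOOST_ASIO_SEPARATE_COMPILATION for a static
    // library) must also be defined in every project that includes the
    // Asio headers, not just here.
    #define BOOST_ASIO_DYN_LINK
    #include <boost/asio/impl/src.hpp>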
My advice, again, is to simply include the headers in your primary assembly, and then you're not introducing all of these headaches. Another alternative is to use C++/CLI (or Managed C++, whatever it's called these days) and directly access the .NET socket classes from within your mixed C++ code.
See here for more details about compiling ASIO into a separate assembly if you really want to suffer all the pain I've described.

Precompile script into objects inside C++ application

I need to give my users the ability to write mathematical computations into the program. I plan to have a simple text interface with a few buttons, including ones to validate the script grammar, save, etc.
Here's where it gets interesting. The functions the user writes need to execute at multi-megabyte line speeds in a communications application. So I need the speed of a compiled language but the usability of a script; a fully interpreted language just won't cut it.
My idea is to precompile the saved user modules into objects at initialization of the C++ application. I could then use these objects to execute the code when called upon. Here are the workflows I have in mind:
1) Testing (initial writing) of script: write code in editor, save, compile into object (testing grammar), run with test I/O, edit code
2) Use of Code (Normal operation of application): Load script from file, compile script into object, Run object code, Run object code, Run object code, etc.
I've looked into several off-the-shelf interpreters but can't find what I'm looking for. I considered Java, as it is pretty fast, but I would need to load the Java virtual machine, which means passing objects between C++ and the VM... the interface is the bottleneck here. I really need to create a native C++ object running C++ code if possible. I also need to be able to run the code effectively on multiple processors, in a controlled manner.
I'm not looking for the whole explanation on how to pull this off, as I can do my own research. I've been stalled for a couple days here now, however, and I really need a place to start looking.
As a last resort, I will create my own scripting language to fulfill the need, but that seems a waste with all the great interpreters out there. I've also considered taking an existing open-source compiler and slicing it up for the functionality I need... just not saving the compiled results to disk... I don't know. I would prefer to use a mainstream language if possible... but that's not required.
Any help would be appreciated. I know this is not your run-of-the-mill idea, but someone has to have done it before.
Thanks!
P.S.
One thought that occurred to me while writing this: what about using a true C compiler to create object code, save it to disk as a DLL, then reload and run it inside "my" code? Can you do that with MS Visual Studio? I need to look at the licensing of the compiler... and at how to reload the library dynamically while the main application continues to run... hmmmm. I could then just group the "functions" created by the user into library groups. OK, that's enough of this particular brain dump...
A possible solution could be to use gcc (MinGW, since you are on Windows) and build a DLL out of your user-defined code. The DLL should export just one function. You can use the Win32 API to handle the DLL (LoadLibrary/GetProcAddress, etc.). At the end of this job you have a C-style function pointer. The problem now is the arguments: if your computation has just one parameter you can do a cast to double (*funct)(double), but if you have many parameters you need to match them.
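A minimal sketch of that load/resolve/call cycle (the DLL name "user_calc.dll" and the exported function "compute" are placeholders for this example):

    #include <windows.h>
    #include <stdio.h>

    /* Signature the user-compiled DLL is assumed to export. */
    typedef double (*compute_fn)(double);

    int main(void)
    {
        /* e.g. built beforehand with: gcc -shared -o user_calc.dll user_calc.c */
        HMODULE lib = LoadLibraryA("user_calc.dll");
        if (!lib) {
            fprintf(stderr, "LoadLibrary failed: %lu\n", GetLastError());
            return 1;
        }

        compute_fn compute = (compute_fn)GetProcAddress(lib, "compute");
        if (!compute) {
            fprintf(stderr, "GetProcAddress failed: %lu\n", GetLastError());
            FreeLibrary(lib);
            return 1;
        }

        printf("compute(2.0) = %f\n", compute(2.0));

        /* FreeLibrary unloads the DLL, which is what makes the
           recompile-and-reload cycle described below possible. */
        FreeLibrary(lib);
        return 0;
    }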
I think I've found a way to do this using standard C.
1) Standard C needs to be used, because when it is compiled into a DLL, the resulting interface is cross-compatible with multiple compilers. I plan to do my primary development with MS Visual Studio and to compile objects in my application using gcc (the Windows port).
2) I will expose certain variables to the user (inputs and outputs) and standardize them across units. This allows multiple units to be developed with the same interface.
3) The user will only write the inside of the function, using standard C syntax and grammar. I will then wrap that code with text that fully defines the function and its environment (remember those variables I intend to expose?); see the sketch after this list. I can also group multiple functions into a single executable unit (DLL) using name parameters.
4) When the user wishes to test their function, I unload the DLL from memory, compile their code with my wrappers in gcc, and then reload the DLL into memory and run it. I would let them define inputs and outputs for testing.
5) Once the test/create step is complete, I have a compiled library which can be loaded at run time and handled via pointers. The inputs and outputs would be standardized, so I would always know what my I/O is.
6) The only problem with standardized I/O is that some of the inputs and outputs are likely to go unused. I need to see if I can put default values in, or something.
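As an illustration of step 3, the generated wrapper might look something like this (the function name, the __declspec export, and the A/B/C/X/Y/Z parameter names are assumptions for the sketch; only the marked body comes from the user):

    /* Generated wrapper around the user's code. */
    __declspec(dllexport) void compute(const double *A, const double *B,
                                       const double *C, double *X,
                                       double *Y, double *Z)
    {
        /* ---- begin user-written code ---- */
        *X = *A + *B;
        *Y = *A * *C;
        *Z = 0.0;  /* unused outputs could default to zero (step 6) */
        /* ---- end user-written code ---- */
    }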
So, to sum up:
Think of an app with a text box and a few buttons. You are told that your inputs are named A, B, and C and that your outputs are X, Y, and Z, of specified types. You then write a function using standard C code and functions from the specified libraries (I'm thinking math, etc.).
So now you're done... you see a few boxes below to define your inputs. You fill them in and hit the TEST button. This wraps your code in a function context, unloads the existing DLL from memory (if it exists), and compiles your code along with any other functions in the same group (another parameter you can define; basically just a name, as far as the user is concerned). It then runs the function through a function pointer, using the inputs defined in the UI. The outputs are sent back to the user so they can determine whether their function works. Any compilation errors are also reported to the user.
Now it's time to run for real. Of course I've kept track of which functions are where, so I dynamically open the DLL and load all the functions into memory via function pointers. I start shoving data into one side, and the functions give me the answers I need. There would be some overhead to track I/O and to make sure the functions are called in the right order, but the execution would happen at compiled-machine-code speed... which is my primary requirement.
Now, I have explained what I think will work in two different ways. Can you think of anything that would keep this from working, or perhaps any advice, gotchas, or lessons learned that would help me out? Anything from the type of interface, to tips on dynamically loading DLLs in this manner, to using the gcc compiler this way, would be most helpful.
Thanks!

Deciding about constructed objects at compilation time

I have the following problem to solve.
I have a component A with sub-components B, C, and D. Using CMake, I build (or don't build) B, C, and D, depending on the current platform configuration. My CMake setup generates makefiles (for component A) that link only those components that were built in the given CMake run: if component B was built, it is linked into the executable; if not, it is left out. The same goes for C and D.
All of B, C, and D provide implementations of an interface used by component A. Component A must manage the objects created by B, C, and D, keep those objects in some map, and use the proper object at the proper time.
Question:
I want a simple, reliable mechanism for automatically registering the objects that implement A's interface, just as it works now with linking: only the modules that were built get linked. In the same way, I would like the objects to be registered in component A only when their components were compiled.
It is hard for me to explain. The idea is simple: build a map of those objects at compilation time, so that only the components that were compiled deliver their objects to this map.
I have used designs similar to how Objective-C and Smalltalk implement methods.
In C++, methods == member functions, and they must be defined at compile time. So even though the interface can be extended with mechanisms such as the preprocessor, the same configuration must also affect any clients of the class, or they simply won't link.
So I use a message-passing system to invoke methods on objects. If A is the main class, and you compile in C and D but not B, then the message processor of A will only respond to messages that have handlers registered by C and D.
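A stripped-down sketch of that idea, using a singleton handler map plus a static registrar object per component (all the names here are hypothetical; the answer's real system is more elaborate):

    #include <functional>
    #include <map>
    #include <string>

    // Component A's message processor: a map from message names to handlers.
    class MessageProcessor {
    public:
        using Handler = std::function<void(const std::string& payload)>;

        static MessageProcessor& instance() {
            static MessageProcessor self;
            return self;
        }

        void registerHandler(const std::string& message, Handler h) {
            handlers_[message] = std::move(h);
        }

        bool dispatch(const std::string& message, const std::string& payload) {
            auto it = handlers_.find(message);
            if (it == handlers_.end())
                return false;  // no component compiled in for this message
            it->second(payload);
            return true;
        }

    private:
        std::map<std::string, Handler> handlers_;
    };

    // In component C's translation unit: this static object runs its
    // constructor at program start-up, so the handler exists exactly when
    // C was compiled and linked in.
    namespace {
    struct RegisterC {
        RegisterC() {
            MessageProcessor::instance().registerHandler(
                "c.doWork",
                [](const std::string& /*payload*/) { /* C's implementation */ });
        }
    } registerC;
    }

One gotcha: if B, C, and D are linked in as static libraries, the linker may drop the unreferenced registrar objects, so they may need to be forced in (for example, with whole-archive linking).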
This type of design does require having a messaging system of some sort. There are numerous existing systems such as Google Protocol Buffers and Apache Thrift. I chose to design one since I wanted even more runtime configurability than most existing systems allow (many of these messaging systems have IDL compilers involved).
However, it did allow me to get closer to the OO realm than the mixed-paradigm language C++ typically permits.

TDD - Creating a new class in an empty project to make dependencies explicit as they are added

Using TDD, I'm considering creating a (throw-away) empty project as a test harness/container for each new class I create, so that it exists in a little private bubble.
When I have a dependency and need to pull in something else from the wider project, I then have to do some work to add it to my clean project file, and I'm forced to think about that dependency. Assuming my class has a single responsibility, I ought not to have to do this very much.
Another benefit is an almost instant compile / test / edit cycle.
Once I'm happy with the class, I can then add it to the main project/solution.
Has anyone done anything similar before or is this crazy?
I have not done this in general (creating an empty project to test a new class), although it could happen if I don't want to modify the current projects in my editor.
The advantages could be:
you are sure not to modify the main project, or commit something by accident
you know with certainty that there are no dependencies
The drawbacks could be:
it costs some time...
as soon as you want to add one dependency on your main project, you instantly get all the classes in that project... not what you want
thinking about dependencies is usual; we normally don't need an empty project to do so
some tools check your project dependencies to verify that they follow a set of rules; it could be better to use one of those (as they can be used not only when starting a class, but also later on)
the private-bubble concept can also be found in the form of import statements
current development environments on current machines already give you extra-fast operations... if not, you could do something about it (tell us more...)
when you're OK with the class, you need to copy your main and test classes into your regular project. This can cost you time, especially as the package might not be adequate (the simplest possible one in your early case, because your project is empty, but it has to fit your regular project later).
Overall, I'm afraid this would not be a timesaver... :-(
I have been to a presentation on Endeavour. One of the concepts it depends heavily upon is the decoupling you suggest:
each service in a separate solution, with its own testing harness
Endeavour is, in a nutshell, a powerful development environment/plugin for VS which helps achieve these things. Among a lot of other stuff, it also hooks into/creates a nightly build from SourceSafe to determine which DLLs build, and places those in a shared folder.
When you create code which depends on another service, you don't reference the VS project but the compiled DLL in the shared folder.
By doing this, a few of the drawbacks suggested by KLE are resolved:
Projects depending on your code reference the DLL instead of your project (build-time win)
When your project fails to build, it does not break integration; the others depend on a DLL which is still available from the last working build
All classes visible? Nope, not with this setup.
Middle ground:
You REALLY have to think about dependencies, more than in 'simple' setups.
It still costs time.
But of course there is also a downside:
it's not easy to detect circular dependencies
I am currently in the process of thinking about how to achieve the benefits of this setup without the full-blown install of Endeavour, because it's a pretty massive product which does an awful lot (not all of which you'll need).

Benefits of exporting a class from a dll vs. static library

I have a C++ class I'm writing now that will be used all over a project I'm working on. I have the option to put it in a static library or to export the class from a DLL. What are the benefits/penalties of each approach? The only one I can think of is compiled code size, which I don't really care about. Thanks!
Advantages of a DLL:
You can have multiple different exes that access this functionality, so you will have a smaller project size overall.
You can dynamically update your component without replacing the whole exe. If you do this, though, be careful that the interface remains the same.
Sometimes, as in the case of the LGPL, you are forced into using a DLL.
You could have some components written in C#, Python or other languages that tie into your DLL.
You can build programs that consume your DLL and work with different versions of the DLL. For example, you could check whether a function exists in a certain operating system DLL and only call it if it exists, and otherwise do some other processing (see the sketch below this list).
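A classic instance of that last pattern (GetTickCount64 exists in kernel32.dll only on Windows Vista and later, so checking for it at run time lets the same binary run on older systems):

    #include <windows.h>
    #include <stdio.h>

    typedef ULONGLONG (WINAPI *GetTickCount64Fn)(void);

    void printUptime(void)
    {
        /* kernel32.dll is always loaded, so GetModuleHandle is enough. */
        HMODULE kernel32 = GetModuleHandleA("kernel32.dll");
        GetTickCount64Fn pGetTickCount64 =
            (GetTickCount64Fn)GetProcAddress(kernel32, "GetTickCount64");

        if (pGetTickCount64)
            printf("uptime: %llu ms\n", pGetTickCount64());
        else  /* fall back to the 32-bit API on older Windows */
            printf("uptime: %lu ms\n", GetTickCount());
    }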
Advantages of Static library:
You cannot have DLL versioning problems that way.
Less to distribute; you aren't forced into a full installer if you only have a small application.
You don't have to worry about anyone else tying into your code that would have been accessible if it was a DLL.
Easier to develop a static library as you don't need to worry about exports and imports.
Memory management is easier.
One of the most significant and often unnoted features of dynamic libraries on Windows is that a DLL can have its own heap (for example, when it statically links its own copy of the C runtime). This can be an advantage or a disadvantage depending on your point of view, but you need to be aware of it: memory allocated inside the DLL should be freed inside the DLL. Note also that a global variable in a DLL is normally per-process; it is shared among the processes attaching to that library only if it is placed in a shared data section, which can be a useful form of de facto interprocess communication or the source of an obscure run-time error.
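A common convention that follows from the separate-heap issue is to export paired create/destroy functions, so that allocation and deallocation happen on the same side of the DLL boundary (a sketch; the Widget type and the function names are made up):

    // In the DLL:
    struct Widget { int value = 0; };

    extern "C" __declspec(dllexport) Widget* CreateWidget()
    {
        return new Widget();  // allocated on the DLL's heap
    }

    extern "C" __declspec(dllexport) void DestroyWidget(Widget* w)
    {
        delete w;             // freed on the same heap it came from
    }

Callers then never free the DLL's memory themselves; they hand the pointer back to DestroyWidget.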