Experience with IBPP interface for Firebird database - c++

I'd like to ask those of you with experience in Firebird and IBPP (especially the latter). I found a lot of positive posts about Firebird, but I'm having trouble deciding about IBPP. The interface itself is clean and simple, but it seems the project doesn't have much activity going on (maybe because it's very stable).
Would you recommend IBPP for production environment?
Is it thread-safe?
Any known bugs?
Thanks.

In addition to the points Milan mentioned:
There is currently no way to use more than one client library when connecting to different databases, or even to specify which client library will be used. There is a certain hard-coded sequence of client library locations that are probed, and the first one that is found will be used for all connections. An IBPP version changing this has been hinted at for a very long time, but hasn't arrived yet. SVN trunk contains some code to deal with this, but I'd say that's alpha quality at most.
And all of this holds true for Windows only, as on all other platforms the Firebird client library isn't loaded at runtime anyway.
The library isn't thread-safe. That doesn't matter for the most part, as you should let each thread have its own connection, transaction and other assorted objects anyway. But IBPP uses its own smart pointer implementation, which is neither completely exception-safe nor thread-safe. Still, as long as you initialize the library from the main thread (before any other thread is created) and create and destroy IBPP objects in the same thread (so absolutely no sharing of objects with other threads!), using IBPP in multiple threads should work fine.
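A minimal sketch of that pattern, assuming the usual IBPP factory functions (the server, path and credentials are placeholders):

    #include <thread>
    #include "ibpp.h"

    // Each thread builds, uses and destroys its own IBPP objects;
    // nothing IBPP-related is shared between threads.
    void worker()
    {
        IBPP::Database db = IBPP::DatabaseFactory(
            "localhost", "C:\\data\\mydb.fdb",   // placeholder server/path
            "SYSDBA", "masterkey");              // placeholder credentials
        db->Connect();

        IBPP::Transaction tr = IBPP::TransactionFactory(db);
        tr->Start();

        IBPP::Statement st = IBPP::StatementFactory(db, tr);
        st->Execute("SELECT COUNT(*) FROM RDB$RELATIONS");

        tr->Commit();
        db->Disconnect();
    }

    int main()
    {
        // Per the advice above, any first-time initialization of IBPP
        // should happen here, before the workers are spawned.
        std::thread t1(worker), t2(worker);
        t1.join();
        t2.join();
    }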
If you can live with the points above (they may not matter to you at all), it is certainly ready for production use. You can always change the things you run into, as we did for FlameRobin too.

IBPP is very stable and I would recommend it for production. That is, if you're going to use it for regular applications.
If you want to build an admin tool or something similar, be prepared to go inside and get your hands dirty, as some of the newer features (e.g., the Firebird 2.5 stuff) that are API improvements rather than SQL are not supported. For example, it is missing a layer that would expose the new Trace API.
Anyway, go ahead and use it. I have had a bunch of IBPP applications in production for years, and, as Douglas wrote, FlameRobin uses IBPP and it works flawlessly (at least as far as the DB layer is concerned).
The only thing to be careful about is NUMERIC fields, which Firebird stores internally as an integer plus a scale. IBPP exposes those via C/C++ double, but also via 16/32/64-bit integers. Be very careful when retrieving such values, as you will get no warning. For example, if you have a DECIMAL(18,2) field with the value 254.00 in it and you accidentally read it into an integer, you will get 25400, not 254. Make sure you either read those in as double or apply the scale yourself later. The latter can be useful: you can safely convert 25400 to a string and then insert the decimal point, so you don't lose precision to double (it all depends on the kind of application and which digits count, of course).
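For illustration, a sketch of both paths, assuming IBPP-style Get() overloads (the column index is hypothetical):

    #include <cstdint>
    #include <string>
    #include "ibpp.h"

    // Fetch a DECIMAL(18,2) column holding 254.00 (column 1, hypothetical).
    std::string readDecimal(IBPP::Statement& st)
    {
        double d = 0.0;
        st->Get(1, d);       // double path: d == 254.0

        int64_t raw = 0;
        st->Get(1, raw);     // integer path: raw == 25400, and no warning!

        // Scale manually to keep exact digits (positive values only here).
        return std::to_string(raw / 100) + "." +
               (raw % 100 < 10 ? "0" : "") + std::to_string(raw % 100);
    }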

I can't really tell from experience because I've never used IBPP.
But apparently it's used by the FlameRobin project, so I'd trust it to be 'stable enough'.

Related

Point Cloud Library's multiple-inheritance with single inheritance restriction

The C++ plugin API I work in is bad enough without STL/exception handling, but it also forbids multiple inheritance. In other words, I can build with it if I don't mind my plugin crashing the host application on startup, or I can go single-inheritance and have it crash on the first direct instance of multiple inheritance in PCL (of which there is only one in my plugin code, but that is all it takes, one supposes, and yes, it is a required instance).
I assume that any multiple inheritance used within the PCL libs is isolated (since they appear to use this feature often), but as soon as I use something with it directly: crash.
There seem to be very few options. I can try to find another library for point cloud surface meshing with a commercial-use license (ha!), or actually write a separate executable using PCL that is called from the plugin to do the work and pass the results back to the plugin (horrendous, platform-dependent, and not an integrated solution). This entire enterprise is becoming loathsome. So much time and effort expended researching, preparing, learning, adjusting projects, and carefully setting this up, only to find that it won't work under these conditions.
If you have an alternative BSD-licensed library to suggest, that would be great. If you think I should instead go for a command-line application that is launched to do the processing, arguments for that would be great to hear as well. I support both Windows and Mac OS X.
I ended up going the external-executable route: I save the point cloud to the PCD format from the application, run the executable to load and process the file, and it outputs the results in OBJ format for the application to use. It is still a horrid solution, but at least it works.
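For what it's worth, the shell-out itself can stay tiny; a sketch (the tool name and argument order are invented):

    #include <cstdlib>
    #include <string>

    // Save the cloud to pcdPath first, then let a small external
    // PCL-based tool do the meshing and write objPath.
    bool meshPointCloud(const std::string& pcdPath, const std::string& objPath)
    {
        std::string cmd = "pcl_mesher \"" + pcdPath + "\" \"" + objPath + "\"";
        return std::system(cmd.c_str()) == 0;  // 0 = tool reported success
    }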

hibernate-like saving state of a program

Is there any way in C++, Java, or Python that would allow me to save the state of my program, no questions asked? For example, I've spent an hour learning how to save a tree-like structure into a file. Very educational, but I feel I should be able to just do:
saveState(file);
And the "file" would contain whole memory my program uses. Just like operating system's "hibernate" or "suspend-to-disk" feature. I know about boost serialization, this is probably not what I'm looking for.
What you most likely want is what we call serialization or object marshalling. There are a whole butt load of academic problems with data/object serialization that you can easily google.
That being said, given the right library (probably a very platform-specific one) you could take a true snapshot of your running program, similar to what an OS-specific hibernate does. Here is an SO answer for doing that on Linux: https://stackoverflow.com/a/12190830/318174
To do that kind of snapshotting, though, you will most likely need a process external to the process you want to save. I highly recommend you don't do that. Instead, read up on serialization or object marshalling in your language of choice (by the way, welcome to SO; don't tag every language... that pisses people off)... hint: most people these days pick JSON.
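For the tree example, a minimal sketch of the JSON route, assuming the nlohmann/json library (all names here are illustrative):

    #include <fstream>
    #include <string>
    #include <vector>
    #include <nlohmann/json.hpp>

    // A small tree-like structure with JSON (de)serialization hooks.
    struct Node {
        std::string name;
        std::vector<Node> children;
    };

    void to_json(nlohmann::json& j, const Node& n)
    {
        j = nlohmann::json{{"name", n.name}, {"children", n.children}};
    }

    void from_json(const nlohmann::json& j, Node& n)
    {
        j.at("name").get_to(n.name);
        j.at("children").get_to(n.children);
    }

    // The saveState(file) the question wished for, for this one type.
    void saveState(const Node& root, const std::string& file)
    {
        std::ofstream(file) << nlohmann::json(root).dump(2);
    }

    Node loadState(const std::string& file)
    {
        nlohmann::json j;
        std::ifstream in(file);
        in >> j;
        return j.get<Node>();
    }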
I think that what you describe would be a feature that few people would actually want to use for a real system. Usually you want to save something so it can be transmitted, or so you can stop running the program, or guard against the possibility that the program quits (or power fails).
In most production systems one wants to make the writes to disk small and incremental so that the system can remain responsive, and writing inconsistent data can be avoided. Writing ALL memory to disk on a regular basis would probably result in lots of non-responsive time. You would need to lock the entire system to avoid inconsistent state.
Writing your own persistence is tedious and error-prone, however, so you may find this SO question of interest: Persisting graph data (Java)
There are a couple of frameworks for this. Check out Google Protocol Buffers if you need support for Java, Python, and C++: https://developers.google.com/protocol-buffers/ I've used it in some projects and it works well.
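For a feel of the generated API: given a hypothetical tree.proto containing message Tree { string name = 1; repeated Tree children = 2; }, the generated C++ class exposes the standard serialization calls:

    #include <fstream>
    #include "tree.pb.h"   // generated from the hypothetical tree.proto

    void save(const Tree& t, const char* path)
    {
        std::ofstream out(path, std::ios::binary);
        t.SerializeToOstream(&out);   // standard generated-message method
    }

    bool load(Tree* t, const char* path)
    {
        std::ifstream in(path, std::ios::binary);
        return t->ParseFromIstream(&in);
    }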
There's also Thrift (originally from Facebook): http://thrift.apache.org/ I don't have any experience with it, though.
Another option is what @QuentinUK suggests: use a class that inherits from something streamable and/or write streaming operators/functions.
I'd use a framework.
Here's your problem:
http://en.wikipedia.org/wiki/Address_space_layout_randomization
Back in ancient history (16-bit DOS programs with extenders), compilers used to support "based" pointers which stored relative addresses. These were safe to serialize en masse. And applications did so, saving both code and data, the serialized modules were called "overlays".
Today, you'd need based pointer support in your toolchain (resulting in every pointer access requiring an extra adjustment), or else to go through all the data, distinguishing the pointers from the other data (how?) and adjusting them to their new storage location, in case the OS already loaded some library at the same address your old program had used for its heap. In modern "managed" environments, where pointers already have to be identified for the garbage collector, this is feasible even if not commonly done. In native code, it's very difficult, although that metadata is created to enable relocation of shared libraries.
So instead people end up walking their entire data structures manually, and converting object links (pointers) into something that can be restored on the other end, even though the object has a new address (again, because the old address may have been used for a shared library).
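A common shape of that manual conversion is to replace pointers with indices into a pool (often called "swizzling"), so the links survive the move to new addresses; a minimal sketch:

    #include <cstddef>
    #include <vector>

    // Links are stored as pool indices instead of raw pointers, so the
    // whole pool can be written out and read back at any address.
    struct Node {
        static constexpr std::size_t npos = static_cast<std::size_t>(-1);
        int value = 0;
        std::size_t left = npos;    // index into the pool, npos = no child
        std::size_t right = npos;
    };

    // Serializing is now just dumping plain integers, node by node;
    // on load, the indices are valid no matter where the vector lands.
    std::vector<Node> pool;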
Note that many processors have features to support based addressing... and that since based addressing is no longer common, compilers went ahead and used those pointer arithmetic features to speed up user code.
Yes, derive objects from a streamable class and add the streaming functions. Then you can stream everything to disk. You will need a library for this such as MFC.
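A non-MFC sketch of the same idea (in MFC the equivalent is deriving from CObject with DECLARE_SERIAL and overriding Serialize(CArchive&)):

    #include <iostream>

    // Base class every persistent object derives from.
    class Streamable {
    public:
        virtual ~Streamable() {}
        virtual void save(std::ostream& os) const = 0;
        virtual void load(std::istream& is) = 0;
    };

    class Counter : public Streamable {
        int count_ = 0;
    public:
        void save(std::ostream& os) const override { os << count_ << '\n'; }
        void load(std::istream& is) override { is >> count_; }
    };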

Creating a limited use version of a program in VC++

Our company helps migrate client software from other languages to C++. We provide them C++ source code for their application along with header files and compiled libraries for runtime support functions. We charge for both the migration and the runtime. Recently a potential client asked us to migrate one of a number of systems they have. This system contains 7 programs, and we would like to limit the runtime so that only these 7 programs can access it. We can time-limit the runtime by putting an encrypted expiration date in the object library, but since we have to provide the source code for the converted programs, we are having difficulty coming up with a way to limit access to a specific set of programs. Obviously, anything we put into the source code to identify the program could be copied to any other program, so the only hope seems to be having the runtime library discover some set of characteristics about the programs and then validating them against a set of characteristics embedded in the runtime library. As I understand it, C++ has very little reflection capability (RTTI is all I could find), so I wanted to ask if anyone has faced a similar problem and found a way to solve it. Thanks in advance for any suggestions.
Based on the two answers, a little clarification seems in order. We fully expect the client to modify the source code, and normally we provide them an unrestricted version of the runtime libraries. This particular client requested a version that was limited to a single system and is happy to enter into a license that restricts the use of the runtime library to that system. Therefore a discussion of the legal issues isn't relevant. The issue is a technical one: given a license that is limited to a single system, and given that the client has the source to the calling programs but not to the runtime, is there a way to limit access to the runtime to the set of programs comprising that system, thus enforcing the terms of the license?
If they're not supposed to make further changes to the programs, why did you give them the source code? And if they are expected to continue changing the programs (i.e. maintenance), who decides whether a change constitutes a new program that's not allowed to use the library?
There's no technical way to enforce that licensing model.
There's possibly a legal way -- in the code that loads/enables the library, write a comment "This is a copy protection measure". Then DMCA forbids them from including that code into other programs (in the USA). But IANAL, and I don't think DMCA is valid anyway.
Consult a lawyer to find out what rights you have under the contract/bill of sale to restrict their use.
The most obvious answer I can think of is to get the name and/or path of the calling process and simply compare it to the 7 "allowed" programs in your support library. Certainly, they could create a new process with the same name, but they might not know to do so.
Another level could be to further compare the executable size against the known size for that application. (You'll likely want to allow a reasonably wide range around the expected size, in case they make changes to the source code, and/or compile with different options.)
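A sketch of both checks from inside the runtime library (the allowed name and size range are made up):

    #include <windows.h>
    #include <string>

    bool callerLooksLicensed()
    {
        // Full path of the executable that loaded us.
        char path[MAX_PATH];
        if (!GetModuleFileNameA(nullptr, path, MAX_PATH))
            return false;

        std::string p(path);
        if (p.rfind("program1.exe") == std::string::npos)  // made-up name
            return false;

        // Compare the on-disk size against the expected ballpark.
        WIN32_FILE_ATTRIBUTE_DATA fad;
        if (!GetFileAttributesExA(path, GetFileExInfoStandard, &fad))
            return false;
        return fad.nFileSizeLow > 400000 && fad.nFileSizeLow < 600000;
    }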
As another thought, you might try adding some seemingly benign strings into the app's resources. ("Copyright 2011 ~Your Corporation Name~")-- You can then scan the parent executable for the magic strings. If they create a new product, they might not think to create this resource.
Finally, as already noted by Ben, if you are giving them the source code, there are likely no foolproof solutions to this problem. (As he said, at what point does "modified" code become a new application?) The best you will likely be able to do is to add enough small roadblocks that they won't bother trying to use that lib for another product. It likely depends on how determined and/or lucky they are.
Why not just technically limit the use of the runtime to one system? There are many software protection solutions out there, one that comes to my mind is SmartDongle.
Now the runtime could still be used by any other program on that machine, but I think this should be a minor concern, no?

Using new Windows features with fallback

I've been using dynamic libraries and the GetProcAddress approach for quite some time, but it always seems a tedious, IntelliSense-hostile, and ugly way to do things.
Does anyone know a clean way to import new features while staying compatible with older OSes?
Say I want to use an XML library which is part of Vista. I call LoadLibraryW, and then I can use the functions if the returned module handle is non-null.
But I really don't want to go the typedef void (WINAPI *PFNFOOOBAR)(int, int, int); and PFNFOOOBAR foo = reinterpret_cast<PFNFOOOBAR>(GetProcAddress(hModule, "somecoolfunction")); route, all that times 50.
Is there a non-hackish solution with which I could avoid this mess?
I was thinking of adding coolxml.lib to the project settings, then including coolxml.dll in the delay-load DLL list and, maybe, copying the few function signatures I will use into the file that needs them. Then I'd check the LoadLibraryW return value for non-null, and if it's non-null, branch to the Vista code path just like regular program flow.
But I'm not sure if LoadLibrary and delay-loading can work together, and if the branching will not mess things up in some cases. Also, I'm not sure if this approach will work at all, or whether it will cause problems after upgrading to the next SDK.
IMO, LoadLibrary and GetProcAddress are the best way to do it.
(Make some wrapper objects which take care of that for you, so you don't pollute your main code with that logic and ugliness.)
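Something along these lines (the DLL and function names are invented for the example):

    #include <windows.h>

    // Invented signature for a Vista-only export we want to call.
    typedef HRESULT (WINAPI *PFNCOOLFUNCTION)(int flags);

    // Wrapper object that hides the LoadLibrary/GetProcAddress plumbing.
    class CoolXmlApi {
    public:
        CoolXmlApi()
            : module_(LoadLibraryW(L"coolxml.dll"))   // invented DLL name
            , coolFunction_(nullptr)
        {
            if (module_)
                coolFunction_ = reinterpret_cast<PFNCOOLFUNCTION>(
                    GetProcAddress(module_, "CoolFunction"));
        }
        ~CoolXmlApi() { if (module_) FreeLibrary(module_); }

        bool available() const { return coolFunction_ != nullptr; }

        HRESULT CoolFunction(int flags)
        {
            // Falls back cleanly when running on an older OS.
            return coolFunction_ ? coolFunction_(flags) : E_NOTIMPL;
        }

    private:
        HMODULE module_;
        PFNCOOLFUNCTION coolFunction_;
    };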
DelayLoad brings with it security problems (see this OldNewThing post) (edit: though not if you ensure you never call those APIs on older versions of windows).
DelayLoad also makes it too easy to accidentally depend on an API which won't be available on all targets. Yes, you can use tools to check which APIs you call at runtime but it's better to deal with these things at compile time, IMO, and those tools can only check the code you actually exercise when running under them.
Also, avoid compiling some parts of your code with different Windows header versions, unless you are very careful to segregate code and the objects that are passed to/from it.
It's not absolutely wrong -- and it's completely normal with things like plug-in DLLs where two entirely different teams probably worked on the two modules without knowing what SDK version each other targeted -- but it can lead to difficult problems if you aren't careful, so it's best avoided in general.
If you mix header versions you can get very strange errors. For example, we had a static object which contained an OS structure which changed size in Vista. Most of our project was compiled for XP, but we added a new .cpp file whose name happened to start with A and which was set to use the Vista headers. That new file then (arbitrarily) became the one which triggered the static object to be allocated, using the Vista structure sizes, but the actual code for that object was built using the XP structures. The constructor thought the object's members were in different places to the code which allocated the object. Strange things resulted!
Once we got to the bottom of that, we banned the practice entirely; everything in our project uses the XP headers, and if we need anything from the newer headers we manually copy it out, renaming the structures if needed.
It is very tedious to write all the typedef and GetProcAddress stuff, and to copy structures and defines out of headers (which seems wrong, but they're a binary interface so not going to change) (don't forget to check for #pragma pack stuff, too :(), but IMO that is the best way if you want the best compile-time notification of issues.
I'm sure others will disagree!
PS: Somewhere I've got a little template I made to make the GetProcAddress stuff slightly less tedious... Trying to find it; will update this when/if I do. Found it, but it wasn't actually that useful. In fact, none of my code even used it. :)
Yes, use delay loading. That leaves the ugliness to the compiler. Of course you'll still have to ensure that you're not calling a Vista function on XP.
Delay loading is the best way to avoid using LoadLibrary() and GetProcAddress() directly. Regarding the security issues mentioned, about the only thing you can do about that is use the delay load hooks to make sure (and optionally force) the desired DLL is being loaded during the dliNotePreLoadLibrary notification using the correct system path, and not relative to your app folder. Using the callbacks will also allow you to substitute your own fallback implementations in the dliFailLoadLib/dliFailGetProc notifications when the desired API function(s) are not available. That way, the rest of your code does not have to worry about platform differences (or very little).
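A rough sketch of such a hook using delayimp.h (the DLL name is invented, and note that newer CRTs declare the hook pointers as non-const, so check your delayimp.h):

    #include <windows.h>
    #include <delayimp.h>

    // Force our delay-loaded DLL to come from the system directory, and
    // get a chance to substitute fallbacks for missing exports.
    FARPROC WINAPI MyDliHook(unsigned dliNotify, PDelayLoadInfo pdli)
    {
        switch (dliNotify) {
        case dliNotePreLoadLibrary:
            if (lstrcmpiA(pdli->szDll, "coolxml.dll") == 0) {  // invented
                wchar_t path[MAX_PATH];
                if (GetSystemDirectoryW(path, MAX_PATH)) {
                    lstrcatW(path, L"\\coolxml.dll");
                    return reinterpret_cast<FARPROC>(LoadLibraryW(path));
                }
            }
            break;
        case dliFailGetProc:
            // Could return a pointer to our own stub implementation here.
            break;
        }
        return nullptr;  // keep the default delay-load behavior
    }

    ExternC const PfnDliHook __pfnDliNotifyHook2 = MyDliHook;
    ExternC const PfnDliHook __pfnDliFailureHook2 = MyDliHook;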

Global (process wide) properties in Win32

I am trying to share some data across DLLs in a project which has an extremely complicated dependency structure (numerous DLLs).
I want to be able to associate a key with some data in one part of the application, and then extract that data by supplying the appropriate key in some other part of the app. In a way, one can say that I am looking for something similar to Java's System.setProperty()/getProperty().
I was sure that the Process APIs would give me some access to a process-wide buffer, but I had no luck. Any ideas?
(I know that the clean solution is to introduce a new DLL and to link it properly to the existing DLLs. Unfortunately, this type of solution is beyond the mandate of my team).
You don't need fancy APIs for that. Windows has a much older API precisely for this kind of stuff; these things are known as "atoms". You'd use functions such as AddAtom and FindAtom. By default, atoms are process-wide.
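For example (the key string is made up, and note that atoms only round-trip strings):

    #include <windows.h>
    #include <iostream>

    int main()
    {
        // Register a string in the process-local atom table.
        ATOM key = AddAtomA("MyApp.ConfigPath=C:\\temp\\app.ini");

        // Any other module in the same process can find it again.
        ATOM found = FindAtomA("MyApp.ConfigPath=C:\\temp\\app.ini");
        if (found) {
            char buffer[256];
            GetAtomNameA(found, buffer, sizeof(buffer));
            std::cout << buffer << "\n";
        }

        DeleteAtom(key);
    }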
To be clear: there is one EXE with multiple DLLs in only one process, but multiple modules. So you aren't looking for inter-process communication.
In answer I see two strategies:
1. Use Windows API atoms, which are slightly limited (basically only string data) but can work within or between processes.
2. If you write a DLL which contains your speculated setProperty/getProperty functionality, you don't have to recompile ALL the other DLLs (which is presumably what is beyond your team's mandate); you only need to recompile those DLLs which actually use the new set/getProperty features (which is presumably within your team's power). So this seems a direct and powerful solution (a sketch follows below).
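A sketch of what that DLL could export (the function names are as speculated above; everything else is an assumption):

    // properties.cpp -- built into one shared DLL that all modules link to.
    #include <windows.h>
    #include <map>
    #include <mutex>
    #include <string>

    namespace {
        std::mutex g_lock;                          // guard cross-DLL access
        std::map<std::string, std::string> g_props; // one copy per process
    }

    extern "C" __declspec(dllexport)
    void setProperty(const char* key, const char* value)
    {
        std::lock_guard<std::mutex> guard(g_lock);
        g_props[key] = value;
    }

    extern "C" __declspec(dllexport)
    BOOL getProperty(const char* key, char* buffer, DWORD size)
    {
        std::lock_guard<std::mutex> guard(g_lock);
        auto it = g_props.find(key);
        if (it == g_props.end() || it->second.size() + 1 > size)
            return FALSE;
        lstrcpynA(buffer, it->second.c_str(), static_cast<int>(size));
        return TRUE;
    }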