Blocking use of functions dynamically in a C++ library - c++

We have a library created as both a .lib and a .dll (it's a big library written completely in C++ for the Windows platform). Users can use the library in their programs, their libraries, or wherever else they want.
But I want to restrict some of the functions to some of the users.
For example, let's say:
Our library has three functions: foo(), bar() and hoo().
User A pays for the functions foo(), bar() and hoo().
User B pays for the functions bar() and hoo().
So, when we give B the library files (headers/libs/DLLs, etc.),
we can create a copy of our library, delete the foo() function and everything related to it, and send that to B,
OR we can send him the whole library with some kind of mechanism that blocks him from using foo().
The first way is not good because it's a huge amount of work and we have to be careful about dependencies. Even if we know for sure that bar() and hoo() do not depend on foo(), it's still a headache to remove things and ship a customized version of the lib, which also means more testing. Maintenance will be even more problematic, and the SVN repository will become chaotic too.
The second way is the best method, I guess. But how do we do it?
And what if B pays for foo() later? Then I would have to let him use it.
I guess that now you understand the problem. The two ways are just my opinions, and maybe my conclusions about them are wrong too. So, I am asking if anyone has any ideas/suggestions on this matter.

I would just create one version of the .dll including headers and let the user download the import libs for what he has paid for. For everything else, a lawyer is the best tool.
Someone buys an additional module? Let them download the additional import lib.
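As a rough sketch of what that looks like on customer B's side (the header name vendor.h and the library file names are assumptions for illustration): he gets the full header and the single DLL, but only the import libraries for the modules he paid for, so anything else fails at link time.
// Customer B's program: only bar.lib and hoo.lib were delivered, both resolving into the one vendor DLL.
#include "vendor.h"              // single public header declaring foo(), bar() and hoo()
#pragma comment(lib, "bar.lib")  // import library for the bar module
#pragma comment(lib, "hoo.lib")  // import library for the hoo module

int main()
{
    bar();    // fine: resolved through bar.lib into the DLL
    hoo();    // fine: resolved through hoo.lib into the DLL
    // foo(); // would fail at link time: foo.lib was never shipped to customer B
}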
Any kind of protection/lock (whether it's a simple check for the has_paid boolean, a public key scheme, or a proxy library) that you build in can be circumvented. Some are more trivial to bypass, some are a bit harder.
By not giving them the necessary import libraries for, say, module foo, you give a gentle hint to the honest customer.
Will some people still cheat you for your money? Of course they will (if your product is interesting enough), but these people would do that either way. They might as well pirate the complete package from the beginning. They're the kind of people who wouldn't pay for your product anyway. That's where you use a lawyer.
Will the majority of people cheat you for your money? Unlikely. The people who are willing to pay for your product are unlikely to cheat you for half of the license fee, at least intentionally. It's a small gain for a large risk. You could rather easily find out that someone is cheating you.
They're people who have an identity, and they have something to lose. They would not want to be involved in a lawsuit costing a few hundred thousand (which means losing their assets), nor have the negative publicity involved.
Cheating might happen unintentionally, but since you're not shipping the necessary import libs, even that isn't possible.
You can see your users, in a very simplified manner, as one of three categories:
We are a small software company with 5 developers, we've been in the business for 3 years and our company has an equity of 50,000. We already paid 500 for the foo license, the bar license would cost us another 500. You know that we're using your library, since we bought foo. And we know that you know. A lawsuit would cost us approximately 50,000 -- oh shit, let's rather pay an extra 500.
We are Microsoft. I'll get fired if someone finds out that I'm using this library illegitimately. Now what will my supervisor say if I disturb him in his after-lunch sleep, over a budget of another 500? Darn, we might need another two boring meetings to decide. Let's see, I guess we can book it on this account...
I'm a kool cracka d00d. I am so cool, but I'm still going to use your library for a program that my 5 friends in school will use. License? Licenses are for losers. You won't find out my name anyway.
The first two have something to lose, the third one doesn't. The first two will pay you and won't (intentionally) cheat you. The last one you couldn't care less about.

I would try to implement this using the Windows forwarder-DLL mechanism (see the section on export forwarding). This mechanism allows you to build a kind of proxy library that you can distribute and that does 'nothing' but forward the real code execution to other libraries. That has the advantage of providing one single, common interface while differentiating between users and their 'permission' to use an API in the underlying library layers.
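A minimal sketch of such a proxy, assuming MSVC and a made-up RealLib.dll that holds the actual implementation; the proxy you ship contains no code of its own, just forwarding entries in its export table:
// proxy.cpp -- compiled into the proxy DLL you hand to the customer.
// Each forwarded export tells the Windows loader to resolve the symbol in RealLib.dll instead.
#pragma comment(linker, "/export:bar=RealLib.bar")
#pragma comment(linker, "/export:hoo=RealLib.hoo")
// A customer who has not licensed foo() receives a proxy built without this line:
// #pragma comment(linker, "/export:foo=RealLib.foo")
The equivalent .def-file entries would be "bar = RealLib.bar" and "hoo = RealLib.hoo".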

Third-party licensing libraries exist for exactly this purpose. I can name Flex (no affiliation), which I know has this exact functionality you're looking for.

You could invent a public/private key "unlock" mechanism. But that's likely a lot of work for little gain. Anything I propose would not be impossible for someone to hack.
Why not just split the code binaries up in accordance with the different licensing schemes you are offering? That is: Customer A above gets foo.lib, bar.lib, and hoo.lib. Customer B only gets bar.lib and hoo.lib. Both get a "sharedcode.lib" that has all the common dependencies.

Related

c++ share project but hide the core of the code

I want to share part of my code with someone so he can add some features to it
but I want to keep the core of the code secret.
The structure of my code is the following (the arrows mean #include
"header.hpp"):
main.cpp ---> Private.hpp ---> OtherPrivate1.hpp ... OtherPrivateN.hpp ---+
                                                                          +---> SomeOtherPrivate.hpp
main.cpp ---> Shared.hpp  ---> OtherShared1.hpp  ... OtherSharedM.hpp  ---+
Private.hpp contains only one template class, called Private. Shared.hpp contains one normal class whose definition is in Shared.cpp. All other files contain either one template class or one normal class. At some point in the inclusion pattern, both parts include some private files.
I've never created libraries, but from what I understand there is no need to create a library for a template class. Plus, as my goal is to hide everything that is private, having a library plus a header file that contains the definitions is not useful.
So I think that using a library is not what I want, unless there is a way to hide the definitions of the template files.
Is there a way to share only some parts of my code so that the other contributors can compile, test and run the code, but don't have access to the other parts?
Edit
The reason I want to do this is that I've been developing this code for two years and now an intern is joining me, but I'm not sure that he's going to stay and I don't want to give him everything.
ReEdit
The answers are heading toward law, licences, NDAs and stuff like that... I'm happy I learned a bit about this stuff, but it was not my initial question. I don't really have a security concern; it's more about readability and simplification. I don't want to spend 3 months explaining my whole code to my intern if he's not staying...
I believe you should not do that. Either share the full source code, or define and document an API, and give a binary shared library implementing that documented API. If you want others to contribute to your code, give them access to all the source code.
BTW, templates cannot really be hidden.
If you want someone else to contribute to your code, you should give access to your source code.
If you really want to hide things (IMHO it is a bad thing to do, and it will certainly repel potential contributors), you might build some plugin infrastructure, as sketched below.
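A minimal sketch of what such a plugin infrastructure might look like (all names are invented for illustration): the contributor only ever sees a small interface header, while the private core ships as a prebuilt binary that hands out objects implementing it.
// plugin_api.hpp -- the only header shared with the contributor.
struct Plugin {
    virtual ~Plugin() = default;
    virtual void run() = 0;          // whatever operations you choose to expose
};

// Implemented inside your closed, prebuilt library; the contributor links against the
// binary and programs against the Plugin interface without seeing the private headers.
extern "C" Plugin* create_plugin();
Note that, as pointed out above and below, this does not help for templates, whose definitions must stay in the headers you ship.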
Getting external contributions is always a hard thing. My opinion is that these days making your library free software is the only way to attract external contributors. Remember that competent developers are a scarce resource; why would they contribute to your hidden thing when there are lots of potential free software projects welcoming external contributions?
Lastly, a person seeing or having access to your source code does not necessarily have the right to contribute to it or to run your code (in particular, copyright law separates the ability to read code from the legal permission to compile and run it on a computer). If you are scared of your intern, it is a legal or a social or a relational or a personal issue, not a technical one. (You probably should contact a lawyer for the legal aspects and/or a therapist for the relational/social/personal aspects, not ask here.)
My opinion is that if you want external contributions you should publish your software with a free software license, and if you just want to have an intern (or collaborator) working on your code base you should give him access to the entire source code (you may ask a lawyer to draw up a written contract about mutual obligations, like an NDA, but that is off-topic here). Having someone work on your code without showing all of it to them is crazy, inefficient, and insulting. You are likely to ruin your reputation by behaving that way with your intern. If I were your intern, I would be pissed off and wouldn't speak well of you! Trust is even more important than contribution (but is a prerequisite).
A person having seen your source code does not have any rights to it, except those granted by law, by license (e.g. a free software license) or by contract.
Details are of course specific to the legal system relevant for your case (so are off-topic here). Ask your lawyer!
NB
You are probably confusing "abstractions" (a very useful way to design software) with "information hiding" or "secrets"; they are not the same. The point is that bugs appear at the interfaces and intersections of abstractions and are usually whole-program issues (e.g. memory leaks, undefined behavior, etc.). Read more about leaky abstractions. This is why a developer should have access to all of your source code, not just some of it. The alternative would be a very well specified and perfectly documented API, and that is very hard to get right.
I guess that this is a one-person, two-year project, so probably fewer than 150 thousand lines of source code. You should not need to spend 3 months explaining it (and if you do, it means that your software is under-designed and needs serious refactoring, which your intern could help you with if you explain and show all of your source).
If you have template classes, you are pretty much unable to keep that code secret. This is because of the nature of templates, which allows types/methods/functions to be created at compile time, so their definitions must be visible to the user's compiler. As for the other files, I think you have to look at using the Factory design pattern or something similar.
If the law of your country allows it, you can ask your intern to sign a contract that forbids him from disclosing or using anything he has produced or seen while working with your code. If he fails to keep the secret, then he will need to compensate you for every loss that results.
This costs you nothing, and your intern might do a better job when he sees the whole thing.

link C++ static library on specified computer

I developed a proprietary business algorithm as a static library, and my other developers write non-critical code that links against this static library at compile time. I want to ensure that only my company's computers (Linux) can link against this static library, to prevent it from being stolen and abused.
If there is no good solution, any other suggestions are appreciated!
Thank you very much!
Very difficult to do this in a way that isn't easily bypassed by someone with some skill in doing such things. A simple solution would be to check the network card's MAC address and refuse to run if it's "wrong" - but of course, anyone wanting to use your code would then just patch the binary to match their MAC address. And you would have to recompile the code whenever you use it on another computer.
Edit based on comment: No, the linker won't check the MAC-address, but the library can have code in it that checks the MAC-address, and then prints a message.
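For what it's worth, a minimal sketch of such a check on Linux; the interface name "eth0" and the hard-coded EXPECTED_MAC are assumptions, and a real build would embed the licensed machine's address:
#include <cstdio>
#include <cstring>
#include <net/if.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <unistd.h>

static const unsigned char EXPECTED_MAC[6] = {0x00, 0x1A, 0x2B, 0x3C, 0x4D, 0x5E};

// Called from the library's initialisation code; refuses to proceed on an unlicensed machine.
bool machine_is_licensed()
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    if (fd < 0) return false;

    ifreq ifr{};
    std::strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

    bool ok = ioctl(fd, SIOCGIFHWADDR, &ifr) == 0 &&
              std::memcmp(ifr.ifr_hwaddr.sa_data, EXPECTED_MAC, 6) == 0;
    close(fd);

    if (!ok) std::fprintf(stderr, "This library is not licensed for this machine.\n");
    return ok;
}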
A solution more likely to work is to request permission from a server somewhere. If you also use some crypto services to contact the server, along with a two-part key, it can be pretty difficult to break. There are commercial products based on this, and if that didn't work, they wouldn't exist - but I'm sure there are people who can bypass/fake those too.
As always, it comes down to the compromise between "how important it is to protect" vs. "how hard is it to break the protection". Games companies spend tons of money to make the game uncheatable and hard to copy, but within days, there are people who have bypassed it. And that's not even state secrets - government agencies (CIA, FBI, KGB, MI5, etc) that can have 100 really bright people in a room trying to break something will almost certainly break into whatever it is, almost no matter what it is - it just isn't worth the effort unless it's something REALLY important [and of course, then it is also well protected by both physical and logical protection mechanisms - you don't just log onto an FBI server from the internet, without some extra security, for example].

Does symbol visibility protect shared library from abuse/crack?

The GCC visibility feature enables us to strip from our shared library those APIs that we don't want the customer to see. Indeed, we can't use dlopen to call those hidden functions, but I wonder whether this is secure enough to protect our sensitive APIs.
I just want a brief explanation of the reliability/security of hidden APIs in a shared library, so that I can balance the effort against the risk. I ask this only because I can't find an adequate description of this concern in the GCC documentation.
The genuine purpose of the visibility attribute is that the library doesn't expose parts of itself that aren't meant to be used directly. It makes very little difference to anyone trying to crack it. They will still have to disassemble the code, and it's not terribly hard to figure out the entry points for functions (have a look yourself!). Yes, it's hard work to work your way through megabytes of code, but someone with experience will know what sort of things to look for and can probably skip over a huge amount of code.
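For reference, a short sketch of the feature under discussion, assuming the library is built with -fvisibility=hidden so that only explicitly marked symbols end up in the shared object's dynamic symbol table:
// Build with: g++ -shared -fPIC -fvisibility=hidden mylib.cpp -o libmylib.so
#define API_EXPORT __attribute__((visibility("default")))

__attribute__((visibility("hidden")))
int internal_helper(int x) { return x * 2; }   // not exported: invisible to dlsym,
                                               // but its machine code is still inside the .so

API_EXPORT int public_entry(int x)             // exported: reachable via dlopen/dlsym
{
    return internal_helper(x) + 1;
}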
As long as someone can disassemble the code, they can hack it. It may take longer, but it can be done. A more secure protection is to run the code on a server; that's how Diablo 3 protects itself, and it works well enough. However, cracking something is a question of motivation. If your program is good enough and a hacker is determined enough, somebody will crack it. You can only increase the time it takes.

Easiest way to limit executable to running on a certain computer

I am trying to create an executable that will only run on one machine. What is the easiest way to achieve this? A license file? Or is there a machine address much like the MAC for network connections I could hardcode into the executable?
If it will run on only one machine, then... simply secure the machine and only store the executable on this machine.
That's, unfortunately, about the only reliable method.
Longer answer:
bits can be copied
programs can be disassembled
disassembly can be reverse engineered (though it sometimes takes a long time)
the cost of reverse engineering can be made higher than the value produced by the program, but the protection effort needed to achieve that may itself cost more than the value the program produces
If you look long enough at the software industry:
DRM: fail
Licenses: fail
Licenses with web activation: fail
If it's worth cracking, it'll get cracked.
The easiest way would be to make it illegal via licencing. Trying to enforce this technically is impossible, and only hurts your users (user?): What if they reinstall the OS, or change their network card, or upgrade their CPU?
There is no solution that is 100% effective, and there is no solution that is "easiest" and also "highly effective." There typically is a continuum of "effective" and "drives users crazy" that you need to be very careful about.
The MAC address is not a horrible place to start. It's not that difficult to change your MAC address, but if you have multiple instances of the same MAC on the same subnet, the machines won't work, so it's good for keeping many people on the same subnet from running your product without licenses. The problem with MAC is that on desktops, the MAC changes if the network card changes; that ticks off your paying users.
A license file is generally better than modifying the binary. You should sign the license file, however, so that it can't be easily modified.
Your application itself is always the weakest link. A dedicated attacker will just remove the test from your application. There is no universal solution to this problem. A good approach is always around "keeping honest people honest" by making it easy to license your application correctly, and easy for the user to determine if they've done so. You can easily spend huge amounts of money trying to annoy people who will never pay you anyway.
I think my answer to another question applies here.
This is a legal issue, not a technological issue. Your goal should be to make it easy for people who want make sure they have valid licenses to your software. Rather than securing your code against people who want to steal it, you should focus on helping customers that are worried about accidentally using it without a proper license.
I'll also repeat my comment from yet another question:
I think "keeping honest people honest" is the right mind set to
approach this problem. Nothing can be cryptographically secure, but
having some sort of unique key or number for each license can actually
make it easier for business customers to account for their software,
and that adds value to your product. Onerous DRM (that doesn't work)
aimed at thwarting criminals (who'd never pay anyway) is just an
obstacle to paying customers.
If you want to create only one copy of your executable file and you have access to the machine on which it will be installed, then it is fine to hardcode the MAC address into your executable. If you want to distribute more than one copy of your executable and you don't have access to all the machines, then you might code the executable to demand a license file generated from the MAC address of the machine. It is a sort of software activation.

Copy-protecting a static library

I will soon be shipping a paid-for static library, and I am wondering if it is possible to build in any form of copy protection to prevent developers copying the library.
Ideally, I would like to prevent the library being linked into an executable at all, if (and only if!) the library has been illegitimately copied onto the developer's machine. Is this possible?
Alternatively, it might be acceptable if applications linked to an illegitimate copy of the library simply didn't work; however, it is very important that this places no burden on the users of these applications (such as inputting a license key, using a dongle, or even requiring an Internet connection).
The library is written in C++ and targets a number of platforms including Windows and Mac.
Do I have any options?
I agree with other answers that a fool-proof protection is simply impossible. However, as a gentle nudge...
If your library is precompiled, you could discourage excessive illegitimate use by requiring custom license info in the API.
Change a function like:
jeastsy_lib::init()
to:
jeastsy_lib::init( "Licenced to Foobar Industries", "(hex string here)" );
Where the first parameter identifies the customer, and the second parameter is an MD5 or other hash of the first parameter with a salt.
When your library is purchased, you would supply both of those parameters to the customer.
To be clear, this is an easily-averted protection for someone smart and ambitious enough. Consider it a speed bump on the path to piracy. It may convince potential customers that purchasing your software is the easiest path forward.
A C++ static library is a terribly bad redistributable.
It's a bit tangential, but IMO it should be mentioned here. There are many compiler options that need to match the caller's:
Ansi/Unicode,
static/dynamic CRT linking,
exception handling enabled/disabled,
representation of member function pointers
LTCG
Debug/Release
That's up to 64 configurations!
Also they are not portable across platforms even if your C++ code is platform independent - they might not even work with a future compiler version on the same platform! LTCG creates huge .lib files. So even if you can omit some of the choices, you have a huge build and distribution size, and a general PITA for the user.
That's the main reason I wouldn't consider buying anything that comes with static libraries only, much less something that adds copy protection of any sort.
Implementation ideas
I can't think of any better fundamental mechanism than Shmoopty's suggestion.
You can additionally "watermark" your builds, so that if you detect a library "in the wild", you can determine whom you sold that copy to. (However, what are you going to do? Write angry e-mails to a potentially innocent customer?) Also, this requires some effort; an easily locatable sequence of bytes that does not affect execution won't help much.
You also need to protect yourself against LIB "unpacker" tools. However, the linker should still be able to remove unused functions.
General thoughts
Implementing a decent protection mechanism takes great care and some creativity, and I haven't yet seen a single one that does not create additional support costs and require tough social decisions. Every hour spent on copy protection is an hour not spent improving your product. The market for C++ code isn't exactly huge, and I see a lot of work here that your customers would have to pay for.
When I buy code, I happily pay for documentation, support, source code and other signs of "future proofness". Not so much for licencing.
Ideally, I would like to prevent the library being linked into an executable at all, if (and only if!) the library has been illegitimately copied onto the developer's machine. Is this possible?
How would you determine whether your library has been "illegitimately copied" at link time?
Remembering that none of your code is running when the linker does its work.
So, given that none of your code is running, we can't do anything at compile or link time. That leaves trying to determine whether the library was illegitimately copied onto the linking machine, from a completely unrelated target machine. And I'm still not seeing any way of making the two situations distinguishable, even if you were willing to impose burdens like "requires internet access" on the end-user.
My conclusion is that fuzzy lollipop's suggestion of "make something so useful that people want to buy it" is the best way to "copy-protect" your code library.
Copy protection, and in this case execution protection, by definition "places a burden on the user". There is no way to get around that. The best form of copy protection is to write something so useful that people feel compelled to buy it.
You can't do what you want (perfect copy protection that places no burden on anyone except the people illegally copying the work).
There's no way for you to run code at link time with the standard linkers, so there's no way to determine at that point whether you're OK or not.
That leaves run-time, and that would mean requiring the end-users to validate somehow, which you've already determined is a non-starter.
Your only options are: ship it as-is and hope developers don't copy it too much, OR write your own linker and try to get people to use that (just in case it isn't obvious: That's not going to work. No developer in their right mind is going to buy a library that requires a special linker).
If you are planning to publish an expensive framework you might look into using FLEXlm.
I'm not associated with them, but I have seen it used in various expensive frameworks, often targeting Silicon Graphics hardware.
A couple of ideas... (these have some major drawbacks, though, which should be obvious)
At compile time: put the library file on a network share, and grant file permissions only to the developers you've sold it to.
At run time: compile the library to work only on certain machines, e.g. check the UIDs or MAC addresses or something similar.
I will soon be shipping a paid-for static library
The correct answer to your question is: don't bother with copy protection until you prove that you need it.
You say that you are "soon to be shipping a paid-for static library." Unless you have proven that you have people who are willing to steal your technology, implementing copy protection is irrelevant. An uneasy feeling that "there are people out there who will steal it" is not proof it will be stolen.
The hardest part of starting up a business is creating a product people will pay for. You have not yet proven that you have done that; ergo copy protection is irrelevant.
I'm not saying that your product has no value. I am saying that until you try to sell it, you will not know whether it has value or not.
And then, even if you do sell it, you will not know whether people steal it or not.
This is the difference between being a good programmer and being a good business owner.
First, prove that someone wants to steal your product. Then, if someone wants to steal it, add copy protection and keep improving your product.
I have only done this once. This was the method I used. It is far from foolproof, but I felt it was a good compromise. It is similar to the answer of Drew Dorman.
I would suggest providing an initialisation routine that requires the user to provide their email and a key linked to that email, and then providing a way for anyone using the product to view that email information.
I used this method on a library that I use when writing plugins for AfterEffects. The initialisation routine builds the message shown in the "About" dialog for the plugin, and I made this message display the given email.
The advantages of this method in my eyes are:
A client is unlikely to pass on their email and key because they don't want their email associated with products they didn't write.
They could circumvent this by signing up with a burner email, but then they don't get their email associated with products they do write, so again this seems unlikely.
If a version with a burner email gets distributed then people might try it, then decide they want to use it, but need a version associated to their email so might buy a copy. Free advertising. You may even wish to do this yourself.
I also wanted to ensure that when I provide plugins to a company, they can't give my library to their internal programmers to write plugins themselves, based on my years of expertise. To do this I also linked the plugin name to the key. So a key will only work for a specific plugin name and developer email.
To expand on Drew's answer - to do this you take the user's email when they sign up, tag a secret set of characters on the end, and then hash it. You give the user the hash. The secret set of characters is the same for all users and is known to your library, but the email makes the hash unique. When a user initialises the library with their email and the hash, your library appends the secret characters, hashes the result, and checks it against the hash the user provided. This way you do not need a custom build for every user.
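A minimal sketch of that scheme, with std::hash standing in for a real cryptographic hash such as MD5 or SHA-256, and APP_SECRET as a made-up salt compiled into the library:
#include <functional>
#include <string>

static const std::string APP_SECRET = "some-secret-salt";   // known only to you and the library

// Run by the vendor when a customer signs up: issue them this key for their email.
std::string make_key(const std::string& email)
{
    return std::to_string(std::hash<std::string>{}(email + APP_SECRET));
}

// Run inside the library's init(): accepts only email/key pairs that the vendor issued.
bool check_key(const std::string& email, const std::string& key)
{
    return make_key(email) == key;
}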
In the end I felt anything more complex than this would be futile, as someone who really wanted to crack my library would probably be better at it than I would be at defending it. This method just stops a casual pirate from easily taking my library.