I'm searching for an open source C++ library to protect commercial software against cracking and the like. Do you know of one?
I have a phrase I like to use for these types of situations: "You can't solve a social problem with a technological solution". If someone is sufficiently motivated to do something you don't like, you can't stop them. The harder you make it to do something, the harder they'll try to get around your barrier. In the end, the only way is to diminish their motivation, and that needs a social solution.
Effectively preventing software from being cracked is an incredibly hard cat and mouse game. With every advancement you can make protecting your program, someone is going to figure it out and get around it. After all, your program does have to run on a computer, and if the computer can understand what it's doing, given enough time, a sufficiently motivated person can do it.
I'm not saying that crack-protecting isn't useful. If you make it hard enough, it will take crackers so long to subvert your software that once they do, that version's so out of date that it's useless. But doing this right is very difficult, and unfortunately there are no simple band-aid solutions that a lay-person can just slap on. Like Tom said, any "just stick it in" method of crack proofing can be just as easily be "snipped right out". Your program needs to be designed from the start to have an anti-cracking approach.
With no intention to insult you: if you're asking this question, then it's clear that you don't know enough about software protection to design it in or to use it effectively, and you clearly aren't prepared for the arms race you'd need to fight to keep your software strongly protected.
Most likely whatever you're writing isn't worth the effort of locking it down to an extreme level. Take the simple approach. Your goal should be to keep honest people honest. Just write a plain old, simple verification routine to check if the user's key, when combined with the user's name, address, and other info, passes some checksum. For every copy you sell, take the user's info, generate the key and give that to them. Change the checksum with each version so users can't use old keys. If you must, combine that with some type of periodic "phone home" system over the internet, where a list of leaked (and thus rescinded) keys is published.
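For illustration, here's a minimal sketch of such a keep-honest-people-honest check. The key format, the rolling hash, and the function names are all hypothetical; a real scheme would pick its own details, and the point is only to deter casual copying, not determined crackers.

    #include <cstdint>
    #include <cstdio>
    #include <string>

    // Derive a 16-bit checksum from the licensee's details plus a
    // per-version salt (changing the salt invalidates old keys).
    uint16_t licenseChecksum(const std::string& name, const std::string& email,
                             uint16_t versionSalt) {
        uint32_t h = versionSalt;
        for (char c : name + "|" + email)
            h = h * 31 + static_cast<unsigned char>(c);  // simple rolling hash
        return static_cast<uint16_t>(h ^ (h >> 16));
    }

    // Assumed key format: anything ending in 4 hex digits, e.g. "AXK3-09FC".
    // The key is valid if those digits match the derived checksum.
    bool keyIsValid(const std::string& key, const std::string& name,
                    const std::string& email, uint16_t versionSalt) {
        char expected[5];
        std::snprintf(expected, sizeof expected, "%04X",
                      static_cast<unsigned>(licenseChecksum(name, email, versionSalt)));
        return key.size() >= 4 && key.compare(key.size() - 4, 4, expected) == 0;
    }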
Keep in mind that phone home systems tend to piss off your honest customers, and it's far worse to burn a good customer with bad copy protection (a sale you'll never get again) than to keep a non-customer from getting a copy of your program (a sale you wouldn't have gotten anyway).
Sure, a cracker can get around it, but crackers aren't your customers. You can't stop them or change their motivations. In the end, they're going to do what they're going to do.

Sure, a bad customer could give out their key, or use it against the terms of your license (like running it on too many computers). A key could leak out or be stolen, and a bad person could use it, or they could use a cracked version. You can't prevent those things from happening, but you do have the legal system (a social solution) to deal with them when they do.

The important thing is to avoid spending too much effort on locking things down, and to avoid making things too draconian for your legitimate customers. After all, you write software for those customers, not for the crackers. Pissing off your customers to make things just a little harder for the crackers is never worth it.
Yeah, you see, that's the thing. If such a library existed, it would use predefined library functions that would be very easy for a cracker to detect. This is exactly why Apple doesn't provide sample code for its App Store protection on the Mac: an open-source library for it would make apps easier to crack, not harder. After all, if you used such a library you probably wouldn't add any extra protection of your own, and a cracker could then make a generic crack for all software using it.
I would like to start selling some software I have developed in C++. The first line of protection will be the fact that C++ produces an executable. Within that, I will also apply algorithmic and manual obfuscation techniques to make it very hard to understand even once cracked.
With regards to licensing, my plan is to create an API you can send a request to. The data will include your license key and your device fingerprint. Upon receiving this data, the API will check for the license key in the database, and ensure the device fingerprint matches the fingerprint stored. If it does, it will reply with some sort of cryptographic response that must match a certain pattern. The client will then check if that response matches the pre-determined pattern, and if it does the software will be allowed to be used. If it does not, the user will be locked out. And this response will be empty if the API check failed, so that will also cause the user to be locked out.
I am aware that this is not unbreakable, but I would like to make it as difficult to break as possible without investing a ridiculous amount of time. The reason I wanted to add some cryptographic response is so the user can't just spoof the response from my server. Although I will also be using HTTPS on top of that. If this is a good idea, what sort of cryptographic check would you recommend?
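One common shape for such a check is an HMAC over the request data with a secret shared between client and server. The sketch below is an illustration, not a recommendation: the message layout is made up, it assumes OpenSSL, and any secret embedded in the client binary can ultimately be extracted by a determined cracker.

    #include <cstddef>
    #include <openssl/crypto.h>
    #include <openssl/evp.h>
    #include <openssl/hmac.h>
    #include <string>

    // Client side: verify that the server's reply is
    // HMAC-SHA256(secret, licenseKey + "|" + nonce). The client picks a
    // fresh nonce per request so a captured reply can't simply be replayed.
    bool responseIsAuthentic(const std::string& licenseKey,
                             const std::string& nonce,
                             const unsigned char* serverMac, size_t macLen,
                             const std::string& sharedSecret) {
        std::string msg = licenseKey + "|" + nonce;
        unsigned char mac[EVP_MAX_MD_SIZE];
        unsigned int len = 0;
        HMAC(EVP_sha256(), sharedSecret.data(),
             static_cast<int>(sharedSecret.size()),
             reinterpret_cast<const unsigned char*>(msg.data()), msg.size(),
             mac, &len);
        // Constant-time compare to avoid leaking where a mismatch occurs.
        return macLen == len && CRYPTO_memcmp(mac, serverMac, len) == 0;
    }

A signature scheme (the server signs with a private key, the client verifies with an embedded public key) would arguably be stronger than a shared secret, since a public key lifted from the binary doesn't let anyone forge responses; the attack then becomes patching the check itself, which is why the check shouldn't live in one obvious function.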
The idea of the fingerprint is to prevent users from using the software on multiple computers at a time. I'm not quite sure what to use for this, but I was thinking of hashing a combination of the MAC address, computer name and something else. Any suggestions?
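A sketch of what that hashing might look like, assuming OpenSSL for SHA-256; how you actually collect the MAC address, hostname, and so on is platform-specific and just passed in as plain strings here:

    #include <cstddef>
    #include <openssl/sha.h>
    #include <string>

    static std::string toHex(const unsigned char* d, size_t n) {
        static const char* hex = "0123456789abcdef";
        std::string out;
        for (size_t i = 0; i < n; ++i) {
            out += hex[d[i] >> 4];
            out += hex[d[i] & 0x0F];
        }
        return out;
    }

    // Hash several machine identifiers together into one opaque fingerprint.
    std::string deviceFingerprint(const std::string& macAddress,
                                  const std::string& hostname,
                                  const std::string& osSerial) {
        std::string blob = macAddress + "|" + hostname + "|" + osSerial;
        unsigned char digest[SHA256_DIGEST_LENGTH];
        SHA256(reinterpret_cast<const unsigned char*>(blob.data()),
               blob.size(), digest);
        return toHex(digest, SHA256_DIGEST_LENGTH);
    }

Bear in mind that hashing everything together means any single change (a new network card, a renamed machine) invalidates the fingerprint; some license servers instead match on a subset of identifiers to tolerate hardware changes.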
Is there anything else I should be doing to protect my software?
Thanks.
Don't waste your time. It's impossible to stop everyone, and even if you stop 99.999% of the people from cracking it, it only takes a single person to crack it and upload it to all the pirate sites. And the harder you make it, the more it will annoy legitimate users.
I work professionally on a software licensing system. I can tell you that it's not easy to make a protection system that is strong enough to discourage people before they break it.

Yes, all systems are crackable. It's only a matter of time before someone finds a way to bypass the security. Our job is to make that as hard as possible, giving them as few clues as possible.
I will also apply algorithmic and manual obfuscation techniques to make it very hard to understand even once cracked.
The goal is not to understand the application, but to run it without a valid license.
With regards to licensing, my plan is to create an API you can send a request to. The data will include your license key and your device fingerprint. Upon receiving this data, the API will check for the license key in the database, and ensure the device fingerprint matches the fingerprint stored.
What you're describing is called a license server. It holds the licenses and makes sure that the number of users does not exceed what those licenses allow.
and ensure the device fingerprint matches the fingerprint stored
Those fingerprints are called hostids, and there are many types of them: BIOS id, hard drive serial number, MAC address, dongle (a USB stick with the license on it), the username running the application, etc. Most of them are pretty easy to forge. But as I said, the goal is to slow crackers down as much as possible.
I am aware that this is not unbreakable.
That's very wise of you.
but I would like to make it as difficult to break as possible without investing a ridiculous amount of time
You've cat to be kitten me.
Unless the license server is on the same network as your software, the software won't be able to run without an internet connection. That might not be an issue for you, but it is for many companies.
I'm not saying it's a bad idea. Writing such a system is a great exercise, and I highly recommend it to every programmer, but it's no piece of cake.
I need to interview a candidate with over 8 years of experience in Linux using C/C++.
What would be the best way to judge such a candidate?
Do I need to test his understanding of algorithms?
Do I need to test his programming skills by asking him to write a program?
How should I test his understanding of Linux?
It depends entirely on what you want him to do. You haven't said anything about the position you are hiring for, but if, say, you want him to write C#, then you need him to prove his adaptability.
Do you need him to write (or modify or bugfix) algorithms? If not, then it is pointless determining how good at them he is.
On the other hand, in order to understand his abilities, you may be better off talking to him about a domain that he is familiar with. You should certainly get him to describe a recent project that he has been involved in, what his contribution was, what the challenges were, what went well, what lessons he learnt.
"Over 8 years of experience in Linux using C/C++" is a fairly vague requirement without reasons for the time length. What are the specific reasons for that time length? Would you prefer more C/C++ experience if some of it were BSD or Solaris or other Unix? Would you prefer less time or a wider experience with different distributions; would you prefer 5 years experience with Red Hat or 7 years experience spanning Red Hat, Debian, SUSE, Gentoo, and others. What are you trying to get from the person you hire, that relates to the amount of time?
The best way to judge a candidate, any candidate, is on how well he can do the job, not how good the qualifications are. You mentioned Lead Developer, owning a product feature and eventually new features. What sort of feature? A highly responsive and adaptive UI? A UI-free recursive data mining calculation? Offline document scanning/indexing code? Custom device drivers?
Basic understanding of algorithms is important, but that can be tested easily in a phone interview. The ability to map out an algorithm for problem solving, and clearly state the reasons for preferring one over another is much more useful, and harder to test.
Testing his programming skills by asking him to write a program is a fairly useful BS-indicator; there are quite a few people who are adept at slinging manure but can't actually write a line of code. Another useful test is to give him some code with a defect and ask him what's wrong with it, and how he would fix it.
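A hypothetical example of such a defect snippet (not from any real interview): the candidate should spot the off-by-one that reads past the end of the array, explain why it's undefined behaviour, and say how they'd fix it.

    #include <cstddef>

    int sum(const int* values, std::size_t count) {
        int total = 0;
        for (std::size_t i = 0; i <= count; ++i)  // BUG: should be i < count
            total += values[i];
        return total;
    }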
To test his understanding of Linux, I would start with a basic BS test: fire up a Linux box and ask him to perform some basic tasks, including maybe writing and compiling "Hello world". This will identify the BS artists. Then I would go on to some stock questions showing that he understands the basics of the Linux design: some file system knowledge, some knowledge of tools, how he would add removable-device permissions for a user using SELinux, and how he'd configure access to an application that needs elevated privileges so that users without those privileges can use it.
But ultimately, these are all pretty generic ideas; IMHO, it's much more useful to think in terms of "what do we want the candidate to accomplish", than "how do we test basic skills".
Maybe you should focus on what you need. Can he help you? Has he solved problems similar to yours? What are his expectations, what are yours?
I interview people like this all the time. The answer is that no matter how much experience he has, you must prove to yourself that he is capable of the job.
Joel Spolsky is right, hiring badly is destructive to a team and organization. It should be avoided at all costs.
The more I think about it, the more I begin to think good professional developers must be good communicators - in their code and with people. Think of the old saying - the more you know, the more you realise you don't know.
That's not to say you want somebody who isn't confident: but neither do you want someone that is cocky and unwilling to interact with others.
Recently someone asked in this posting whether they should become a programmer. No matter how a programmer starts out, they will likely learn from many mistakes they've made, and as a result have an element of humility about themselves and about development in general.
A good programmer continues to learn and keeps a relatively open mind.
There was a similar question discussed around collaboration tools, but one point wasn't fully agreed upon. Now that we have all of these collaboration and documentation tools (wikis, SharePoint sites, blogs, etc.) to keep track of project plans, business requirements, technical documentation, and so on, the question is: should we ever delete this data? As organizations evolve and reorganize, and people come and go, a lot of this data becomes out of date, no longer relevant, or simply incorrect.
One thought is that there may be useful stuff inside this data, so keep it around and preserve the information as it was at the time; it would be good to have the historical context.

An opposing argument is that this data creates too much noise and makes it hard for people to find the latest, up-to-date information.
Thoughts?
We recently dealt with exactly this problem on our internal wiki. It's really important to keep the ratio of signal-noise high, or you will find users will stop using the tool for content, and will find alternative channels. The vast majority of all user searches on an internal knowledge base will be for current information. This strongly suggests that current information should be the easy-to-find default, and out of date content should be dealt with or made less accessible.
For example, in our organisation, there was a widespread perception that 'most' of the information on our intranet was out of date, and therefore could not be relied on. This led to immense inefficiencies as individuals felt there was no option other than to contact one another directly, call meetings, make personal notes etc., in order to obtain current information. The combined administrative burden on the organisation was huge.
We chose to explicitly deprecate content which was no longer relevant, but had historical value. These pages are prominently marked with a 'deprecated' box at the top of the wiki page, and archived. They are still linked from their logical wiki sections for reference, but are clearly mothballed, and can be easily ignored if not required.
This makes it very clear that the information is not up-to-date. For truly useless old docs (as determined by the original author, or by the wiki maintainer, i.e. me), we delete. But even in these cases, the pages are not truly gone. We use MediaWiki, which preserves the full history of every deleted page. These pages are still available to administrators, but the benefit of deletion is that they don't appear in searches, and can't be navigated to by ordinary users.
The result for us has been a clear win. We now have an intranet which is genuinely useful to actual users. In the end that's much more important than worrying about endless 'what if this obsolete information is somehow relevant in the future' questions. The vast majority of it will never be required, by anyone, ever.
In short, don't be afraid to rigorously prune old stuff. The signal-to-noise ratio is what really matters.
I suppose a big part of the question is "can we afford to never delete?" as in, does the org have the drive space?
Memory is cheap, but drive space allocation can sometimes be conservative, probably to discourage projects and departments from being sloppy.

I would say that if the space is there, always back up and version, because with enterprise stuff, having a paper trail and history is more likely to pay off than to be a waste of space. Somewhere in the terabytes of data that will never get seen again, there is a line of code or documentation or an email that will be priceless when it's needed.
Having said that, I also think redundancy should be avoided. If your wiki has seven articles on basically the same thing, that is not the same as a backup, because it means having to update seven places for every change, and this will lead to misinformation that counts against the value of a backup. If someone needs to know how something worked 2 years ago and pulls up the article that didn't get updated (or was just wrong), the entire backup system has become a risk instead of an asset.

Ironically, I do think that when fixing redundancy, the redundant copies should become part of the backup. This is where my viewpoints obviously clash, which is why I think it's important to a) always try to centralize sources and have things point to them, and b) fix redundancies early. If you can somehow tie them all together so that a search for the needed info will make the seeker aware of the other six articles, that would be an ideal patch, so long as it didn't create a crutch.
Long story short, it's better to archive data that never gets used than to delete it and wish you hadn't.
I am a technical team leader of a small programming team, working on a project for an external client.
I was recently asked to produce written evaluations of my team members. I feel uncomfortable doing this, because I don't see myself as a management person and never thought of my colleagues much deeper than "A is reliable and B is a lazy bum".
But I am expected to produce more elaborate stuff to be read by actual managers, and my manager hinted that the purpose of this is rather to test my evaluation skills.
Any hints or resources on how to produce a quality evaluation? Are there standardized forms? How should I address this?
Thank you.
I have found that Joel's Professional Development Ladder and this construx site provided great advice on how to start. It helps to understand the various knowledge areas and what developers are expected to know and do. You can then evaluate developers on how competent they are in various knowledge areas and assign them a level accordingly.
You of course have to evaluate their work ethic and attitude etc which have nothing to do with development as such.
First thing, don't be intimidated by the task. Second, you are a team lead, so your opinion of the people counts; it may be a test, but you should be up to it. Third, if you were doing this informally over a coffee and your boss asked you about someone you would probably have no trouble chatting for a few minutes about your observations of them and what you thought were their strengths and weaknesses. That's what you should write down in your review notes.
Ask your boss if there is a standard format - if you are in a large organisation HR might have forms and/or systems in place for these sorts of reviews. Otherwise, just give him a paragraph or two in plain English (or your language of choice) on what you think.
You can add colour to your reports by citing work they have done and where they have succeeded or failed.
Some golden rules...
don't get personal
try and be objective and fair
don't hide the truth, however uncomfortable
Good luck, it's all part of stepping up to be a manager and is fun in a way - your opinion is counting.
Tough question! I would suggest you first look back at evaluations that have been performed by your manager on YOU. This is usually a good example of what you are expected to produce for your team mates. If you have not had any formal evaluation yet, I suggest you look to your HR department, or management for a copy of a standard template for such purposes. Most large companies have them.
Evaluating team members can be tricky, especially as a team leader and not a 'front line' manager. Remember the following,
Be honest, with them and yourself
Evaluate based on performance not gut feeling, or emotion
Never ever evaluate someone better simply because you 'like' them or have empathy for their situation. It always comes back to you in the end.
Edit:
Some further things I thought of; it's been a while since I did evals as a team lead...
When evaluating performance, look at not only what the person needs to improve, but also what they have done well. Try to present both sides of the story (even if you feel the person is a lazy bum)
Look at quantifiable results.. what has the person PRODUCED and how useful was it to the team as a whole. Remember, even if they pump out thousands of lines of code, that doesn't mean it was all useful, maintainable or even worth the time.
Good luck!
You could conduct a 360 degree feedback with your team (http://en.wikipedia.org/wiki/360-degree_feedback), motivating each team member to give feedback to his colleagues (and you).
As I've increasingly absorbed Agile thinking into the way I work, yagni ("you aren't going to need it") seems to become more and more important. It seems to me to be one of the most effective rules for filtering out misguided priorities and deciding what not to work on next.
Yet yagni seems to be a concept that is barely whispered about here at SO. I ran the obligatory search, and it only shows up in one question title - and then in a secondary role.
Why is this? Am I overestimating its importance?
Disclaimer. To preempt the responses I'm sure I'll get in objection, let me emphasize that yagni is the opposite of quick-and-dirty. It encourages you to focus your precious time and effort on getting the parts you DO need right.
Here are some off-the-top ongoing questions one might ask.
Are my Unit Tests selected based on user requirements, or framework structure?
Am I installing (and testing and maintaining) Unit Tests that are only there because they fall out of the framework?
How much of the code generated by my framework have I never looked at (but still might bite me one day, even though yagni)?
How much time am I spending working on my tools rather than the user's problem?
When pair-programming, the observer's value often lies in saying "yagni".
Do you use a CRUD tool? Does it allow (nay, encourage) you to use it as an _RU_ tool, or a C__D tool, or are you creating four pieces of code (plus four unit tests) when you only need one or two?
TDD has subsumed YAGNI in a way. If you do TDD properly, that is, only write those tests that result in required functionality, then develop the simplest code to pass the test, then you are following the YAGNI principle by default. In my experience, it is only when I get outside the TDD box and start writing code before tests, tests for things that I don't really need, or code that is more than the simplest possible way to pass the test that I violate YAGNI.
In my experience the latter is my most common faux pas when doing TDD -- I tend to jump ahead and start writing code to pass the next test. That often results in me compromising the remaining tests by having a preconceived idea based on my code rather than the requirements of what needs to be tested.
YMMV.
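To make the "simplest code that passes the test" idea concrete, here is a minimal sketch of that rhythm. The fizz-buzz-style function and plain assert() are illustrative stand-ins, not anyone's real test suite:

    #include <cassert>
    #include <string>

    // The simplest code that passes the tests written so far: no
    // speculative parameters, no handling of cases no test demands.
    std::string classify(int n) {
        if (n % 15 == 0) return "fizzbuzz";
        if (n % 3 == 0)  return "fizz";
        if (n % 5 == 0)  return "buzz";
        return std::to_string(n);
    }

    int main() {
        // Tests written first, encoding only required behaviour.
        assert(classify(3) == "fizz");
        assert(classify(5) == "buzz");
        assert(classify(15) == "fizzbuzz");
        assert(classify(7) == "7");
        return 0;
    }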
Yagni and KISS (keep it simple, stupid) are essentially the same principle. Unfortunately, I see KISS mentioned about as often as I see "yagni".
In my part of the wilderness, the most common cause of project delays and failures is poor execution of unnecessary components, so I agree with your basic sentiment.
The freedom to change drives YAGNI. In a waterfall project, the mantra is "control scope". Scope is controlled by establishing a contract with the customer. Consequently, the customer stuffs everything they can think of into the scope document, knowing that changes to scope will be difficult once the contract has been signed. As a result, you end up with applications that have a laundry list of features, not a set of features that have value.
With an agile project, the product owner builds a prioritized product backlog. The development team builds features based on priority i.e., value. As a result, the most important stuff get built first. You end up with an application that has features that are valued by the users. The stuff that is not important falls off the list or doesn't get done. That is YAGNI.
While YAGNI is not a practice, it is a result of the prioritized backlog. The business partner values the flexibility this affords the business, given that they can change and reprioritize the product backlog from iteration to iteration. It is enough to explain that YAGNI is the benefit gained when we readily accept change, even late in the process.
The problem I find is that people tend to bucket even writing factories or using DI containers (unless you already have one in your codebase) under YAGNI. I agree with JB King there. For many people I've worked with, YAGNI seems to be a license to cut corners and write sloppy code.
For example, I was writing a PinPad API to abstract PIN pads from multiple models and manufacturers. I found that unless I had the overall structure in place, I couldn't even write my unit tests. Maybe I'm not a very seasoned practitioner of TDD. I'm sure there will be differing opinions on whether what I did was YAGNI or not.
I have seen a lot of posts on SO referencing premature optimization which is a form of yagni, or at least ydniy (you don't need it yet).
I don't see YAGNI as the opposite of quick-and-dirty, really. It is doing just what is needed and no more, not planning as if the software you write has to last 50 years. It may come up rarely because there aren't really that many questions to ask around it, at least to my mind. It's similar to the "don't repeat yourself" and "keep it simple, stupid" rules, which have become common but aren't necessarily dissected and analyzed in 101 ways. Some things are simple enough that you usually get them soon after a little practice. And some habits develop behind the scenes; if you turn around and look, you may notice they are just another way of stating the same rules.