Alternatives to Common Criteria evaluation

There are some criticisms of the international Common Criteria, such as [Under-attack].
In your opinion, what are the pros and cons of developing IT products with CC?

I'm a Common Criteria evaluator for the BSI (Germany) and NIAP (USA) schemes. I've had a small amount of experience, but I think I'm qualified enough to answer this question.
Pros:
The first and foremost plus to developing with CC is to be able to do business with the US government. I die inside every time I say this to someone, because I'd really like the top reason to be security. But alas...
Secondly, it enormously increases the quality of your design documentation because much of the CC revolves around analyzing documentation, and good docs are a requirement. Find a good lab, and they may do all of that for you.
It will make you aware of security questions you never thought about, like how does the customer know the product I shipped to them is really from me and not someone impersonating me?
Lastly, sadly, it will improve the technical security of the product. Get a good lab, and you will leave with a very strong secure product and a certification. Get a bad one and you'll just leave with a certification.
Cons:
Extremely expensive. Unless you have the deep-enough coffers to absorb a hit of hundreds of thousands of dollars, you're not cut out for CC. However, if you have the intention of working with the federal government, you may get them to pay your way if they really like your product.
Extremely time consuming. Our evaluations last 9-16 months, depending on the complexity of the product and the evaluation assurance level. To give you an idea, a general linux distribution at EAL 4 could take a full year to complete.
The certificate only applies to an exact version number of your product. Make an update and the cert is invalid. (However, it's up to the requisitioning officer in the DoD whether to accept the patched product, so not all hope is lost.)
It is almost worthless anywhere outside the federal market.
Depending on the scheme you pick, you'll be facing certain kinds of politics, lack of resources, and extra requirements. Best thing to do is find a good lab who will help you through everything.
Note that I'm giving you pros and cons from the developer's point of view. There is a different set of pros and cons when talking about how the criteria themselves are set up technically and how effective they are.


How do I prove to my stakeholders and manager that my software works? [closed]

What do software engineers encounter after another stressful release? Well, the first thing we encounter in our group are the bugs we have released out into the open. The biggest problem that we as software engineers encounter after a stressful release is spaghetti code, also called the big ball of mud.
The time and money to chase perfection are seldom available, nor should they be. To survive, we must do what it takes to get our software working and out the door on time. Indeed, if a team completes a project with time to spare, today’s managers are likely to take that as a sign to provide less time and money or fewer people the next time around.
You need to deliver quality software on time, and under budget
Cost: Architecture is a long-term investment. It is easy for the people who are paying the bills to dismiss it, unless there is some tangible immediate benefit, such as a tax write-off, or unless surplus money and time happens to be available. Such is seldom the case. More often, the customer needs something working by tomorrow. Often, the people who control and manage the development process simply do not regard architecture as a pressing concern. If programmers know that workmanship is invisible, and managers don't want to pay for it anyway, a vicious circle is born.
But if this were really the case, then every long-term software project would eventually lead to a big ball of mud.
We know that this does not always happen. How come? Because the statement that managers do not regard architecture as a pressing concern is false, at least nowadays. Managers in the IT field know very well that maintainability is key to the business.
business becomes dependent upon the data driving it. Businesses have become critically dependent on their software and computing infrastructures. There are numerous mission critical systems that must be on-the-air twenty-four hours a day/seven days per week. If these systems go down, inventories can not be checked, employees can not be paid, aircraft cannot be routed, and so on. [..]
Therefore it is at the heart of the business to seek ways to keep systems far away from the big ball of mud: to keep the system maintainable, and to make sure the system actually works and that you, as a programmer, can prove it does. Does your manager ask whether you have finished your coding today, or whether the release that has fixes A, B and C can be done today, or does she ask whether the software that will be released actually works? And have you proved it works? With what?
Now for my question:
What ways do we have to prove to our managers and/or stakeholders that our software works? Are the green lights of our unit tests good enough? If yes, won't that only prove that our big ball of mud is still doing what we expect it to do? That the software is maintainable? How can you prove your design is right?
[Added later]
Chris Pebble's answer below is putting my team on the right track. Quality assurance is definitely the thing we are looking for. Thanks Chris. Having a QA policy agreed with the stakeholders is then the logical result of what my team is looking for.
The follow-up question is what should all be in that QA policy?
Having the build server running, visible to my stakeholders
Having the build server not only 'just build' but also run the tests that are part of the QA policy
Having agreement from my stakeholders on our development process (of which developers reviewing each other's code is a part)
more..
Some more information: the team I'm leading is building web services that are consumed by other software teams. That is why a breaking web service immediately costs money. When the developers of the presentation-layer team, or the actual testers, can't move forward, we are in immediate stress and have to fix bugs ASAP, which in turn leads to quick hacks..
[Added later]
Thanks for all the answers. It is indeed about 'trust'. We cannot do a release if the software is not trusted by the stakeholders, who are actively testing our software themselves using the website that consumes our web service.
When issues arise, the first question our testers ask is: is it a service-layer problem or a presentation-layer problem? Which directs me towards a QA policy that ensures our software is OK for the tests they are doing.
So, the only way I can (now) envision enabling trust with testers is to:
- Talk with the current test team, go over the tests that they are able to execute manually (from their test scripts and scenarios), and make sure that our team already has those tests as unit tests checked against our web service. That would be a good starting point for a 'sign-off' before we do a release that the presentation-layer team has to integrate. It will take some effort to make clear that creating automated tests for all those scenarios will take time, but it will definitely be useful to ensure that what we build actually works.
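To make that concrete, here is a minimal sketch (Python, using unittest and requests; the base URL, endpoint and field names are invented, not our real service) of what one of those automated sign-off tests could look like:

    import unittest
    import requests

    BASE_URL = "https://staging.example.com/api"  # hypothetical staging endpoint


    class CustomerServiceSignOffTests(unittest.TestCase):
        """Automated versions of the testers' manual scenarios, run against
        the web service before the presentation-layer team integrates."""

        def test_existing_customer_lookup_returns_200_and_name(self):
            # Scenario: the tester looks up a known customer and checks the name field.
            response = requests.get(f"{BASE_URL}/customers/42", timeout=5)
            self.assertEqual(response.status_code, 200)
            self.assertEqual(response.json()["name"], "Acme Ltd")

        def test_missing_customer_lookup_returns_404(self):
            # Scenario: the tester asks for a customer that does not exist.
            response = requests.get(f"{BASE_URL}/customers/999999", timeout=5)
            self.assertEqual(response.status_code, 404)


    if __name__ == "__main__":
        unittest.main()

Running exactly these checks on the build server before sign-off would give the testers a concrete "the service layer is OK for scenario X" statement to point to.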
I'm part of a team working on a large project for a governmental client.
The first module of phase 1 was a huge disaster: the team was unmanaged, there wasn't a QA team, and the developers were not motivated to work better. Instead, the manager kept shouting and deducting wages from people who didn't work overtime!!
The client, huh, don't ask about that; they were really pissed off, but they stayed with our company because they know that no one understands the business as we do.
So what was the solution:
First, we separated management from the programmers and put in a friendly team leader.
Second, we got a qualified QA team. In the first few weeks, bugs were in the hundreds.
Third, we put 2-3 developers on a support team; their responsibility is not to take on any new tasks, just to fix bugs and work directly with QA.
Fourth, motivate the guys; sometimes it's not about the money or extra vacation, sometimes a good word will be perfect. Small example: after working 3 days in a row for almost 15 hours a day, the team leader made a note to the manager. Two days later I received a letter from the CEO thanking me for my efforts and giving me 2 vacation days.
We will soon deliver the 4th module of the system, and as one of the support team I can say it's at least 95% bug-free, which is a huge jump from our first modules.
Today we have a powerful development team, qualified QA and expert bug fixers.
Sorry for the long story, but that's how our team (over 4 months) proved to the manager and the client that we are reliable and just needed the right environment.
You can't prove it beyond the scope of the tests, and unless there is a bulletproof specification (which there never is), the tests never prove anything beyond the obvious.
What you can do as a team is approach your software design in a responsible manner and not give in to the temptation of writing bad code to please managers, demanding the necessary resources and realistic timelines, and treating the whole process as a craft as much as a job. The finest renaissance sculptors knew no one would see the backs of statues placed in the corners of cathedrals, but they still took the effort to make sure they weren't selling themselves short.
As a team the only way to prove your software is reliable is to build up a track record: do things correctly from the start, fix bugs before implementing new features, never give in to the quick hack fix, and make sure everyone shares the same enthusiasm and respect for the code.
In all but trivial cases, you cannot 'prove' that your software is correct.
That's the role of User Acceptance Testing: to show that an acceptable level of usefulness has been reached.
I think this is putting the cart before the horse. It's tantamount to a foot soldier trying to explain to the general what battle maneuvers are and why protecting your flanks is important. If management can't tell the difference between quality code and a big ball of mud, you're always going to end up delivering a big ball of mud.
Unfortunately it is completely impossible to "prove" that your software works bug-free (the Windows XP commercials always annoyed me by announcing "the most secure version of Windows ever"; that's impossible to prove at release). It's up to management to set up and enforce a QA process and establish metrics for what a deliverable product actually looks like and what level of bugs or unexpected behavior is acceptable in the final release.
That said, if you're a small team and set your own QA policies with little input from management I think it would be advantageous to write out a basic QA process and have management sign off on it. For our web apps we currently support 4 browsers -- and management knows this -- so when the application breaks in some obscure handheld browser everyone clearly understands that this is not something we designed the application to support. It also provides good leverage for hiring additional development or test resources when management decides it wants to start testing for x.
As Billy Joel once said, "It's always been a matter of trust."
You need to understand that software development is "black magic" to everyone except those who write software. It's not obvious (actually, it's quite counter-intuitive) to the rest of your company that many of your initiatives lead to increasing quality and reducing the risk of running over time and/or budget.
The key is to build a trusting, respectful relationship between development and other parts of the business. How do you build this trust? Well that's one of those touchy-feely people problems... you will need to experiment a little. I use the following tools often:
Process visibility - make sure everyone knows what you are doing and how things are progressing. Also, make sure everyone can see the impact of any changes that occur during development.
Point out the little victories - to build up trust, point out when things went exactly as you planned. Try to find situations where you had to make a judgement call and use the term "mitigated the risk" with your management.
Do not say, "I told you so." - let's say that you told management that you needed 2 months to undertake some task and they say, "well you only have three weeks." The outcome is not likely to be good (assuming your estimate is accurate). Make management aware of the problem and document all the things you had to do to try to meet the deadline. When quality is shown to be poor, you can work through the issues you faced (and recorded) in a professional manner rather than pointing the finger and saying, "I told you so."
If you have a good relationship with your manager, you can suggest that they read some books specific to software development so they can understand industry best practices.
Also, I would point out to your boss that not allowing you to act as a professional software developer is hurting your career. You really want to work somewhere that lets you grow professionally rather than somewhere that turns you into a hacker.

What is the most useful way to document assessment of technological choices for a business problem?

I would like to know if there are any templates for doing this in a clear and concise way that gives the gist of the application, its inner workings, and how it meets the business needs. I do not want to write a mythological story, so I'm looking for any new ways of doing this.
Mostly this is about documenting what you actually need from the system. You can't make a good choice if you don't know what you need.
Here is a doc-style approach.
This is a decision matrix approach outline. The formatting is rough, but this is a good approach. This one has better formatting, but is not about software (it doesn't really matter).
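To show the idea behind a decision matrix, here is a minimal sketch (the criteria, weights and scores are invented for illustration) of the weighted scoring such a matrix boils down to:

    # Hypothetical criteria with weights (summing to 1.0) and 1-5 scores per option.
    criteria_weights = {
        "fit to requirements": 0.4,
        "total cost": 0.3,
        "vendor support": 0.2,
        "ease of integration": 0.1,
    }

    scores = {
        "Package A": {"fit to requirements": 4, "total cost": 3,
                      "vendor support": 5, "ease of integration": 2},
        "Package B": {"fit to requirements": 3, "total cost": 5,
                      "vendor support": 3, "ease of integration": 4},
    }

    def weighted_total(option_scores):
        # Each criterion contributes score * weight; the highest total wins.
        return sum(option_scores[c] * w for c, w in criteria_weights.items())

    for option, option_scores in scores.items():
        print(f"{option}: {weighted_total(option_scores):.2f}")

The numbers matter less than the fact that the criteria and weights are written down and agreed before anyone looks at the totals.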
I'm not exactly sure if this is what you are asking for, but check out this paper. It's a sample implementation of the CMMI's "Decision and Analysis Resolution" process area. It basically documents a method for comparing alternatives, reaching a decision, and documenting that decision.
The SEI's site has the original definition of DAR (see page 181), as well as a pretty good presentation about it. You have to realize that their whole goal is to help companies define their processes, not to push a particular process. So the documents you find there tend to be pretty high level, discussing the goals that your process should achieve and the specific practices that should be covered.
Consult Eric Evans' "Domain Driven Design". At the end of the day, you're going to have to use your experience and judgment - and that of your team - to make the design decisions big and small, but Evans recommends formulating a one-page manifesto, written in business terms, to share with biz types that explains the value of your view of the domain to the business.

Another one about measuring developer performance [closed]

I know the question about measuring developer performance has been asked to death, but please bear with me. I know the age old debate about how you cannot measure performance of developers, but the reality is, at our company there is a "need" to do that one way or another.
I work for a relatively small company (small in terms of developers), and management felt the need to measure developer performance based on "functionality that passes test (QA) at first iteration".
We somehow managed to convince them that this was a bad idea for various reasons, and instead settled on measuring developers by whether the code they put into test passes all of its unit tests. Since in our team there is no "requirement" per se to develop unit tests beforehand, we felt it was an opportunity to formalise the need to develop unit tests - i.e. put some incentive on developers to write unit tests.
My problem is this: since arguably we will not be releasing code to QA that does not pass all unit tests, how can one reasonably measure developer performance based on unit tests? Based on unit tests, what makes a good developer stand out?
Functionality that fails although the unit tests pass?
Not writing unit tests for a given functionality at all, or not writing adequate unit tests?
Quality of unit test written?
Number of Unit tests written?
Any suggestions would be much appreciated. Or am I completely off the mark in this kind of performance measurement?
The question is not "what do we measure?"
The question is "What is broken?"
Followed by "how do we measure the breakage?"
Followed by "how do we measure the improvement?"
Until you have something you're trying to fix, here's what happens.
You pick something to measure.
People respond by doing what "looks" best according to that metric.
You realize you're measuring the wrong thing.
Specifically.
"functionalities that pass test (QA) at first iteration" Which means what? Save the code until it HAS to work. Later looks better. So, delay until you pass QA on the first iteration.
"Functionality that fail although unit test passes?" This appears to be "incomplete unit tests". So you overtest everything. Take plenty of time to write all possible tests. Slow down delivery so you're not penalized by this measurement.
"Not writing unit test for a given functionality at all, or not adequate unit tests written?" Not sure how you measure this, but it sounds the same as the previous one.
"Quality of unit test written?" Subjective measurement. Always a good plan. Define how you're going to measure quality, and you'll get stuff that maximizes that specific measurement. Want more comments? Count those. What more whitespace? Count that.
"Number of Unit tests written?" Nothing motivates me to write redundant tests like counting the number of tests. I can easily copy and paste nearly identical code if it makes me look good according to this metric.
You get what you measure. No matter what metric you put in place, you will find that the specific thing measured will subvert most other quality concerns. Whatever you measure, be absolutely sure you want people to maximize that measurement while reducing others.
Edit
I'm not saying "Don't Measure". I'm saying "you get what you measure". Pick a metric that you want maximized at the expense of others. It's not hard to pick a metric. Just know the consequence of telling management what to measure.
I would argue that unit tests are a quality tool and not a productivity tool. If you want both to encourage unit testing and to give management a productivity metric, make unit testing mandatory to get code into production, and report on productivity based on the code/features that make it into production over a given time frame (weekly, bi-weekly, whatever). If we take as a given that people will game any system, then design the game to meet your goals.
I think Joel had it spot-on when he said that this sort of measurement will be gamed by your developers. It will not achieve what it set out to and you will likely end up with quality suffering (from the perception of everyone using the system) whilst your measurements of quality all suggest things have never been better!
Edit: You say that management are demanding this. You are a small company; your management cannot afford to have everyone up sticks and leave. Tell them that this is rubbish and you'll play no part in it.
If the whole idea is so that they can rank people to make them redundant (it sounds like it might be at this time), just ask them how many people have to go, and then choose the developers you believe to be the worst, using your intelligence and judgement and not some dumb rule of thumb.
For some reason the defect black market comes to mind... although this is somewhat in reverse.
Any system based on metrics when it comes to developers simply isn't going to work, because it isn't something you can measure using conventional methods. Whatever you try to put in place with regards to anything like this will be gamed (because solving problems is what we do all day, and this is just another problem to be solved) and it will be detrimental to your code (for example I wrote a simple spelling corrector the other day with about 5 unit tests which were sufficient to check it worked, but if I was measured on unit tests I could have spent another day writing another 100 which would all pass but would add no value).
You need to work out why management want this system in place. If it's to give rewards then you should have a look at Joel Spolsky's article about incentive pay which is not far off the mark from what I've seen (think about bonus day and see how many people are really happy -- none as they just got what they thought they deserved -- and how many people are really pissed off -- anyone who got less than they thought they deserved).
To quote Steve Yegge:
shouldn't there be a rule that companies aren't allowed to do things that have been formally ridiculed in a Dilbert comic?
There was a study I read in the newspaper here at home in Norway. In a nutshell, it said that office-type jobs generally had no benefit from performance pay, the reason being that measuring performance in most office-type jobs is almost impossible.
However, simpler jobs like strawberry picking benefit from performance pay because it is really easy to measure performance. Nobody is going to feel bad because a high performer gets higher pay, because everybody can clearly see that he or she has picked more berries.
In an office it is not always clear that the other person did a better job, and so a lot of people will be demotivated. They tested performance pay on teachers and found that it gave negative results: people who got higher pay often didn't see why they did better than others, and the ones who got lower pay usually couldn't see why they got less.
What they did find, though, was that non-monetary rewards usually helped: getting encouraging words from the boss for a job well done, etc.
Read iCon for how Steve Jobs managed to get people to perform. Basically he made people believe that they were part of something big and were going to change the world. That is what makes people put in an effort and perform. I don't think developers will put in a lot of effort for just money. It has to be something they really believe in and/or think is fun or enjoyable.
If you are going to tie people's pay to their unit test performance, the results are not going to be good.
People are going to try to game the system.
What I think you are after is:
You want people to deploy code that works and has a minimum number of bugs
You want the people that do that consistently to be rewarded
Your system will accomplish neither.
By tying people's pay to whether or not their tests fail, you are creating a disincentive to writing tests. Why would someone write code that, at best, yields no benefit, and at worst limits their salary? The overall incentive will be to keep the size of the test bed minimal, so that the likelihood of failure is minimized.
This means that you will get more bugs, except they will be bugs you just don't know about.
It also means that you will be rewarding people that introduce bugs, rather than those that prevent them.
Basically you'll get the opposite of your objectives.
These are my initial thoughts on your four specific questions:
Tricky, this one. At first glance it looks OK, but if the code passes its unit tests then, unless the developers are cheating (see below) or the tests themselves are wrong, it's difficult to see how you'd demonstrate this.
This seems like the best approach. All functions should have a unit test and inspection of the code should be able to reveal which ones are present and which are absent. However, one drawback could be that the developers write an empty test (i.e. one that just returns "passed" without actually testing anything). You might have to invest in lengthy code reviews to spot this one.
How are you going to assess quality? Who is going to assess quality? This assumes that your QA team has access to highly skilled independent developers - which may be true, but seems unlikely.
Counting the number of anything (lines of code, unit tests written) is a non-starter. Developers will simply write large numbers of useless tests.
I agree with oxbow_lakes, and in fact the other answers that have appeared since I started writing this - most forms of measurement will be gamed or worse resented by developers.
I believe time is the only, albeit subjective, way to measure a developer's performance.
Given enough time in any one company, good developers will stand out. Project leaders will know who their best assets are. Bad developers will be exposed, given enough time. Unfortunately, therein lies the ultimate problem: enough time.
Basic psychology - People work to incentives. If my chances of getting a bonus / keeping my job / whatever are based on the number of tests I write, I'll write tons of meaningless tests - probably at the expense of actually doing my real job, which is getting a product out the door.
Any other basic metric you can come up with will suffer the same problem and be equally meaningless.
If you insist on "rating" devs, you could use something a bit more lateral. Scores on one of the MS certification tests perhaps (which has the side effect of getting people trained up). At least that's objective and independently verified by a neutral third party so you can't "game" it. Of course that score also bears no resemblance to the person's effectiveness in your team but it's better than an arbitrary internal measurement.
You might also consider running code through some sort of complexity measurement tool (simpler==better) and scoring people on their results. Again, it has the effect of helping people to become better coders, which is what you really want to achieve.
Poor Ash...
Kudos for using managerial ignorance to push something completely unrelated, but now you have to come up with a feasible measure.
I cannot come up with any performance measurement that is not ridiculous or easily gamed, and unit tests cannot change that. Since Kopecks and the Black Market were linked within minutes, I'd rather give you ammunition for not requiring individual performance measurements:
First, software is an optimization between conflicting goals. Evaluating one or a few of them - like how many tests come up during QA - will lead to severe tradeoffs in other areas that hurt the final product.
Second, teamwork means more than just the product of a few individuals glued together. The synergistic effects cannot be tracked back to the effort or skill of a single individual - and when developing software in a team, they have huge impact.
Third, the total cost of software unfolds only after time. Maintenance, scalability, compatibility with new platforms, interaction with future products all carry a significant long term cost. Measuring short term cost (year-over-year, or release to production) does not cover the long term cost at all, and once the long term cost is known it is pointless to track it back to the originator.
Why not have each developer "vote" on their colleagues: who helped us achieve our goals most in the last year? Why not trust you (as, apparently, their manager or lead) to judge their performance?
There is a combination of a few factors around the unit tests that should make it fairly easy for someone outside the development group to keep a scorecard, measuring the following:
1) How well do the unit tests cover the code, and any common input data that may be entered for UI elements? This may seem like a basic thing, but it is a good starting point and is something that can be quantified easily with tools like nCover, I think.
2) Are boundary conditions tested, e.g. nulls for parameters or letters instead of numbers and other basic validation tests? This is also something that can be quantified easily by looking at the parameters of various methods, as well as by having coding standards that stop people bypassing it (e.g. by making all of an object's methods besides the constructor take 0 parameters, so there is nothing to boundary-test).
3) Granularity of a unit test: does the test check for one specific case rather than trying to cover lots of different cases in one test? Are there test classes containing thousands of lines of code?
4) Grade the code and tests in terms of readability and maintainability. Would someone new have to spend days figuring out what is going on, or is the code somewhat self-documenting? Examples would include method and class names being meaningful, and documentation being present.
Those last 3 things are what I suspect a manager, team lead or someone else outside the group of developers could rank and handle. There may be ways to game this, but the question is: what end results do you want? I'm thinking well-documented, high-quality, easily understood code = good code.
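To make points 1 and 2 a bit more concrete, here is a small sketch (made-up function and tests; Python's coverage.py stands in for whatever coverage tool your stack uses, e.g. nCover on .NET) of boundary-condition tests that someone outside the team can count and check off:

    import unittest

    def parse_quantity(text):
        """Parse a quantity field from the UI; rejects negative values."""
        value = int(text)  # raises ValueError for letters, TypeError for None
        if value < 0:
            raise ValueError("quantity must be non-negative")
        return value

    class TestParseQuantityBoundaries(unittest.TestCase):
        def test_zero_is_allowed(self):
            self.assertEqual(parse_quantity("0"), 0)

        def test_negative_is_rejected(self):
            with self.assertRaises(ValueError):
                parse_quantity("-1")

        def test_letters_are_rejected(self):
            with self.assertRaises(ValueError):
                parse_quantity("abc")

        def test_none_is_rejected(self):
            with self.assertRaises(TypeError):
                parse_quantity(None)

Running something like "coverage run -m unittest" followed by "coverage report" (or the nCover equivalent) then produces a number a non-developer can track, with all the caveats about gaming discussed elsewhere on this page.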
Look up Deming and Total Quality Management for his thoughts on why performance appraisals should not be done at all for any job.
How about this instead: assume all employees are acceptable employees unless proven otherwise.
If someone does something unacceptable or does not perform to the level you need, write them up as a performance problem. Determine how many write-ups they get before you boot them out of the company.
If someone does something well, write them up for doing something good. If you want to offer a bonus, give it at the time the good performance happens. Even better, make sure you announce when people get an attaboy; people will work towards getting them. Sure, you will have the political types who will try to game the system and get written up on the basis of others' achievements, but you get that anyway in any system. By announcing who got them at the time of the good performance, you have removed the secrecy that allows the office-politics players to function best. If everyone knows Joe did something great and you reward Mary instead, people will start to speak up about it. At the very least, Joe and Mary might both get an attaboy.
Each year, give everyone the same percentage pay raise, since you have only retained the workers who have acceptable performance and you have rewarded the outstanding employees throughout the year whenever they did something good.
If you are stuck with measuring, then measure how many times you wrote someone up for poor performance and how many times you wrote someone up for good performance. Then you have to be careful to be reasonably objective about it, and write up even the people who aren't your friends when they do well, and the people who are your friends when they do badly. But face it, the manager is going to be subjective in the process no matter how much you insist on objective criteria, because there are no objective criteria in the real world.
Definitely, and following the accepted answer: unit tests are not a good way to measure development performance. In fact, they can be an investment with little to no return.
… Automated tests, per se, do not increase our code quality but do need code output
– From Measuring a Developer's impact
Reporting on productivity based on the code/features that make it into production over a given time frame, and making unit tests mandatory, is actually a good system. The problem is that you get little feedback from it, and there may be too many excuses for not meeting a goal. Also, features / refactors / enhancements can be of very different sizes and natures, so in most occasions comparing them is neither fair nor relevant for the organisation.
Using a version control system such as Git, we can atomize the minimum unit of valuable work into commits / PRs. Visualization (as in the quote linked above) is a better and more noble objective for management than a flat ladder or metric against which to compare their developers.
Don't try to measure raw output. Try to understand the developers' work; go visualize it.

What are some useful criteria for deciding on which software package to go for?

How does one go about choosing a software vendor after having seen many presentations from many software vendors, from a user preference perspective?
I ask this question on behalf of a friend who has been put in charge of making such an evaluation, without any prior experience. I thought the experience of the SO community might generate something considerably more useful than a bit of googling.
Domain specificity is not important, but if it helps at all, the systems currently being evaluated are Treasury systems and ERP (Enterprise Resource Planning).
Thanks in advance for any help/ideas offered.
A common tool for software evaluation is a SWOT analysis:
SWOT Analysis is a strategic planning method used to evaluate the Strengths, Weaknesses, Opportunities, and Threats involved in a project or in a business venture.
[...]
Strengths: attributes of the organization that are helpful to achieving the objective.
Weaknesses: attributes of the organization that are harmful to achieving the objective.
Opportunities: external conditions that are helpful to achieving the objective.
Threats: external conditions which could do damage to the business's performance.
See if you can find existing users of the packages either through people you actually know or through forums online. See what the pain points and advantages are. Also, you should press the vendors with questions about your specific usage scenario and demand real answers and not just sales/marketing spin; you're trying to see if the package will actually solve your problem.
You should find out how their customer support works. Something might break in the product and you'll need technical support. If it's not there when you need it you risk just sitting there waiting and losing time and money.
I would want to know if the vendor is willing to give at least some of your money back if the system fails to deliver to pre-agreed criteria.
Are they going to be involved or committed?
(They say with breakfast the chicken is involved but the pig is committed)

Inform potential clients about security vulnerabilities?

We have a lot of open discussions with potential clients, and they frequently ask about our level of technical expertise, including the scope of work for our current projects. The first thing I do in order to gauge the level of expertise of the staff they have now or have previously used is to check for security vulnerabilities like XSS and SQL injection. I have yet to find a potential client who isn't vulnerable, but I started to wonder: would they actually think this investigation was helpful, or would they think, "um, these guys will trash our site if we don't do business with them"? Non-technical folks get scared pretty easily by this stuff, so I'm wondering, is this a show of good faith or a poor business practice?
I would say that surprising people by suddenly penetration-testing their software may bother them, if only because they didn't know about it ahead of time. I would say if you're going to do this (and I believe it's a good thing to do), inform your clients ahead of time. If they seem a little distraught by this, tell them the benefits of checking for human error from the attacker's point of view in a controlled environment. After all, even the most securely minded make mistakes: the Debian PRNG vulnerability is a good example of this.
I think this is a fairly subjective decision and different prospects would react differently if you told them.
I think an idea might be to let them know after they have given business to someone else.
At least this way, the ex-prospect will not think that you are trying to pressure them into giving you the business.
I think the problem with this would be that it is quite hard to check for XSS without messing up their site. Also, things like SQL injection could be quite dangerous. If you stuck to appending SELECTs you might not have too much of a problem, but then the question is: how do you know it's even executing the injected SQL?
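For what it's worth, one low-impact way to answer that last question (sketched below with an entirely hypothetical URL and parameter, and only something to run with the written authorization discussed in the next answer) is to send a lone single quote and look for database error text in the response, which tells you the input reached the SQL layer without modifying any data:

    import requests

    URL = "https://www.example.com/search"  # hypothetical prospect page; test only with permission

    DB_ERROR_SIGNS = ["SQL syntax", "ODBC", "ORA-", "SQLSTATE", "mysql_fetch"]

    def naive_sql_error_probe(param="q"):
        """Send a lone single quote and check whether a database error leaks back.
        Crude, but it answers "is my input reaching the SQL layer?" non-destructively."""
        response = requests.get(URL, params={param: "'"}, timeout=10)
        return any(sign in response.text for sign in DB_ERROR_SIGNS)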
From the way you described it, it seems like a poor business practice that could be a beneficial one with some modification.
First off, any vulnerability assessment or penetration test you conduct on a customer should be agreed upon in writing by that customer, period. This covers your actions legally. Without a written agreement, if you inadvertently cause damage (application crash, denial-of-service, data leak, etc) during your inspection, you are liable and could be charged (under US law; other countries have different standards).
Even if you do not cause damage, a clueless or potentially malicious customer could take you to court claiming damages; a clueless judge might just award them.
If you have written authorization to do so, then a free vulnerability assessment to attract potential customers sounds like a show of good faith and demonstrates what you want -- your skills.