Django Project Done and Working. Now What?

I just finished what I would call a small Django project, and pretty soon it's going live. It's only 6 models, but there's a fairly complex view layer and a lot of record saving and retrieving.
Of course, setting aside the obviously huge number of bugs that will probably fill my inbox to the top, what would be the next step towards a website with the best possible performance? What could be tweaked?
I've been using JMeter a lot recently and feel confident that I have a good baseline for future performance comparisons, but the thing is: I'm not sure where best to start, since I'm a greedy bastard who wants to do the least work possible and get the best results.
For instance, should I take an infrastructure approach, like a distributed database, or should I go after the code itself; and in that case, is there anything specific that results in better performance? In your experience, what pays off more?
As a personal contribution: I sometimes have the impression that some operations, when done through Django's signals, are faster than the usual view way. But hey, I'm biased. I freaking loooove signals. :)
Personal anecdotes like mine are welcome as a way to stimulate some research, but fact-based opinions are much more appreciated. :)
Thanks very much.

here is what we did...
used django-debug-toolbar to analyze performance of each page (# of queries and response times)
used the Django cache framework...most importantly memcached (see the settings sketch after this list)
used Firebug's Page Speed add-on to optimize HTTP page loads
used Google Analytics for general site usage stats (find out what's being used)
used the Apache HTTP server benchmarking tool (ab) for quick performance stats
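To make the caching piece concrete, here is a minimal sketch of pointing Django's cache framework at memcached and caching one expensive view. The server address, key prefix, model, and view names are placeholders, and the backend class name depends on your Django version (PyMemcacheCache is the Django 3.2+ name; older releases ship MemcachedCache instead):

```python
# settings.py -- wire the cache framework to a local memcached instance.
# LOCATION and KEY_PREFIX are placeholders for your deployment.
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.memcached.PyMemcacheCache",
        "LOCATION": "127.0.0.1:11211",
        "KEY_PREFIX": "mysite",
    }
}

# views.py -- cache a hypothetical expensive listing view for five minutes.
from django.views.decorators.cache import cache_page
from django.shortcuts import render

from myapp.models import Report   # hypothetical model


@cache_page(60 * 5)
def report_list(request):
    # The queryset only runs when the cached response has expired.
    reports = Report.objects.order_by("-created")[:50]
    return render(request, "reports/list.html", {"reports": reports})
```

On a cache hit the view never executes, so django-debug-toolbar will show the query count dropping to zero, which gives you a quick before/after comparison.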
In general, don't try to optimize performance up front. First, collect usage/performance stats, then pick off the most rewarding changes (effort vs. benefit) until you get decent results. The goal should be to avoid unnecessary complexity (distributed databases, etc.).
Then, if you still aren't happy, consider these (in order): more RAM (goes a long way), a dedicated database server, load balancing across multiple app servers (using Perlbal, etc.), a dedicated media server, and so on. See these for more details (deployment guide, performance tips).
good luck...

Now what?
Deploy. If you have an MVP, that is.
Other thoughts:
You didn't mention anything about testing. Do you have unit tests? Do you feel that the test coverage is adequate? I'd recommend reading Karen M. Tracey's book Django 1.1 Testing and Debugging. (A minimal test sketch follows these points.)
Have you watched Jacob Kaplan-Moss's Deployment Workshop?
Have you done any usability testing? You can check out the Joel Test article by Joel Spolsky, or you can read Rocket Surgery Made Easy or Don't Make Me Think, both by Steve Krug.
Speaking of Spolsky, how does your process rank on the Joel Test?
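Since the project in question is Django, a first unit test can be as small as the sketch below; the Article model, its fields, and the URL name are hypothetical stand-ins, and the reverse import assumes a recent Django (older versions used django.core.urlresolvers):

```python
# tests.py -- a minimal Django test, run with `python manage.py test`.
from django.test import TestCase
from django.urls import reverse   # django.core.urlresolvers on old Django versions

from myapp.models import Article  # hypothetical model


class ArticleListTests(TestCase):
    def setUp(self):
        # Each test runs against a fresh test database.
        Article.objects.create(title="Hello", body="First post")

    def test_list_page_shows_articles(self):
        response = self.client.get(reverse("article-list"))
        self.assertEqual(response.status_code, 200)
        self.assertContains(response, "Hello")
```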
I know that your question was slanted toward performance, and it may seem that my thoughts aren't performance related. However, thinking about some of these seemingly unrelated items may lead you in a direction that will impact performance. For instance, usability testing may reveal that a certain feature could be reduced in scope yielding better performance due to less data being delivered to the end-user.

Related

Twisted, Django, Comet (Orbited): the interaction of the upper and middle levels

I'm developing a monitoring system (something like a real-time web app), and the question is about system architecture.
A device connects to the server and sends information about the state of the controlled parameters. The server should save the information to the database and notify the Comet server. The Comet server sends a message to the user saying that new data is available, and the user gets the new information.
What's the best way to analyze and save the information about the device state (creating alarm messages if needed):
The Twisted app itself analyzes the data and interacts with the DB (adbapi) and the Comet server (Orbited).
Twisted pushes the received data to Django (how to push?), and Django analyzes it, saves it to the DB, and sends a "NEW" flag to Orbited.
Your suggestions are welcome if there is a better way.
You can find more information in the pictures below:
This question is fairly open ended. Someone could probably write a dozen pages on each of the options you described, and that much again on a handful of other approaches as a bonus.
Instead of doing that, I'll take an alternate route.
Make sure you have a good understanding of your requirements. Think about which approach is going to be easiest for you (or for the developers on your team) to satisfy those requirements. Take that approach, documenting the overall idea and unit testing everything you write (preferably using TDD).
When you're done, you might not have the optimal solution, but you'll have a solution, and 99 times out of 100 that's indistinguishable from being optimal.
If I do think about your proposed approaches a little bit, then what mostly occurs to me is that they don't differ from each other very much. Your analysis is just some Python code somewhere that you're going to invoke. Whether you invoke it closer to some Twisted-using code or closer to some Django-using code doesn't seem to make a huge difference to the outcome. Perhaps some part of your requirements would make one approach better than the other. However, if you have unit tests and understand your requirements, then I expect you'll actually find it quite easy to switch between those two approaches.
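To make that concrete, here is a rough sketch of what "just some Python code somewhere" can look like: a plain analysis function with no Twisted or Django imports, plus a unit test that runs without any server. The field names and alarm threshold are invented for illustration.

```python
# analysis.py -- plain Python, callable from a Twisted protocol handler
# or from a Django view alike. Field names and threshold are made up.
def analyze_reading(reading, alarm_threshold=100.0):
    """Return the parsed reading and whether it should raise an alarm."""
    value = float(reading["value"])
    return {
        "device_id": reading["device_id"],
        "value": value,
        "alarm": value > alarm_threshold,
    }


# test_analysis.py -- unit-testable with no server running at all.
import unittest

from analysis import analyze_reading


class AnalyzeReadingTests(unittest.TestCase):
    def test_alarm_raised_above_threshold(self):
        result = analyze_reading({"device_id": "dev-1", "value": "150"})
        self.assertTrue(result["alarm"])

    def test_no_alarm_below_threshold(self):
        result = analyze_reading({"device_id": "dev-1", "value": "50"})
        self.assertFalse(result["alarm"])


if __name__ == "__main__":
    unittest.main()
```

Whichever of your two options you pick, a module like this stays the same; only the thin glue that calls it and writes to the database changes.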
After you've implemented something, you'll have a much better understanding of the trade-offs involved and you'll be in a better position to decide if one implementation is going to work better or worse than another.
Note that unit tests are a pretty essential part of this idea. Without them, you won't really know if you've implemented your requirements, you won't know if your functionality still works after any particular refactoring, and refactoring itself will be harder because your units will not be as well-defined and isolated as they would be if you were doing test-driven development.

Using ColdFusion frameworks

Can anyone expound on disadvantages, if there are any, to using a ColdFusion development framework? I'm developing an application traditionally, and I'm tempted to use a framework having seen how simple some things can be done.
I'm new to ColdFusion and frameworks in general. I want to understand the implications of using a framework, including advantages and disadvantages.
Disadvantages:
learning curve (pick a lean framework to reduce this)
a front controller makes for ugly URLs, and often needs URL rewriting at the web server layer
risk of the framework being discontinued (no support, hard to maintain, breaks on new CF versions)
framework bugs (pick a popular framework with good & fast support)
harder to debug sometimes, since actions are generally not a .cfm anymore. Tip: make use of cfdump and cfabort to see the dump in the controller layer
some frameworks take longer to reinit. Since most frameworks cache the configuration and controller layer for performance, you'll need to reinit all the time during the development phase. CF9 eases this problem because it is much faster.
lastly, sometimes you'll be using the framework's API, an abstraction over CFML, and miss out on the native ColdFusion way of solving the same problem.
Performance is generally a non-issue. Don't worry.
Henry's already given a good answer, but I would just like to pick up on this part of your question:
But does it not come with a performance tax?
The performance overhead of a framework is negligible.
In fact, you may even get better performance from frameworks such as ColdBox, which have built-in caching.
Remember, most frameworks are mature codebases used by lots of people - most likely, your newly written untested code is going to be the culprit, not the framework.
However, as a general rule (not specific to frameworks) performance is not a problem unless you've got measurable results that say it is.
i.e. don't just think "I'm going to do X instead of Y because I think it'll be faster" - go with the simplest option that meets users' needs, and only change it if you can prove that it has a performance problem and that your proposed solution is better.
It depends on the nature of the project you are working on. I think it's always advisable to use a framework for better code organization, scalability, conventions, and so on. If you are starting an enterprise-level application, then ColdBox is the best framework as far as my experience goes. It has a bigger learning curve, but it's worth learning. If it's a simple start-up project, then FW/1 is good. You can find a list here:
http://www.riaxe.com/blog/top-coldfusion-frameworks/

How to reduce the time spent on testing?

I just looked back through a project that was nearly finished recently and found a very serious problem: I spent most of the bank's time testing the code, reproducing the different situations that "may" cause errors.
Do you have any ideas or experience to share on how to reduce the time spent on testing, so that development goes much more smoothly?
I've tried to follow the test-driven concept for all my code, but I found it really hard to achieve; I really need some help from the senior guys here.
Thanks
Re: all
Thanks for the answers so far. Initially my question was how to reduce the time spent on general testing, but now the problem is down to how to write efficient automated test code.
I will try to improve my skill at writing the test suite to cut down this part of the time.
However, I still really struggle with how to reduce the time I spend reproducing errors. For instance, in a standard blog project it's easy to reproduce the situations that may cause errors, but a complicated bespoke internal system may "never" be easy to test throughout. Is it worth it? Do you have any ideas on how to build a test plan for this kind of project?
Thanks for the further answers still.
Test driven design is not about testing (quality assurance). It has been poorly named from the outset.
It's about having machine runnable assumptions and specifications of program behavior and is done by programmers during programming to ensure that assumptions are explicit.
Since those tasks have to be done at some point in the product lifecycle, it's simply a shift of the work. Whether it's more or less efficient is a debate for another time.
What you refer to I would not call testing. Having strong TDD does mean that the testing phase does not have to be relied upon as heavily for errors, which would be caught long before they reach a test build (as they are by experienced programmers with a good spec and responsive stakeholders in a non-TDD environment).
If you think the upfront tests (the runnable spec) are a serious problem, I guess it comes down to how much the respective stages of development are expected to cost in time and money.
I think I understand. Above the developer-test level, you have the customer test level, and it sounds like, at that level, you are finding a lot of bugs.
For every bug you find, you have to stop, take your testing hat off, put your reproduction hat on, and figure out a precise reproduction strategy. Then you have to document the bug, perhaps put it in a bug-tracking system. Then you have to put the testing hat back on. In the meantime, you've lost whatever setup you were working on and lost track of where you were on whatever test plan you were following.
Now - if that didn't have to happen - if you had far fewer bugs - you could zip along right through testing, right?
It's doubtful that GUI-driving test automation will help with this problem. You'll spend a great amount of time recording and maintaining the tests, and those regression tests will take a fair amount of time to return the investment. Initially, you'll go much slower with GUI-driving, customer-facing tests.
So (I submit) that what might really help is higher /initial/ code quality coming out of development activities. Micro-tests -- also called developer-tests or test-driven-development in the original sense - might really help with that. Another thing that can help is pair programming.
Assuming you can't grab someone else to pair, I'd spend an hour looking at your bug tracking system. I would look at the past 100 defects and try to categorize them into root causes. "Training issue" is not a cause, but "off by one error" might be.
Once you have them categorized and counted, put them in a spreadsheet and sort. Whatever root cause occurs most often is the root cause you prevent first. If you really want to get fancy, multiply each root cause by some number representing the amount of pain it causes. (Example: if in those 100 bugs you have 30 typos on menus, which are easy to fix, and 10 hard-to-reproduce JavaScript errors, you may want to fix the JavaScript issue first.)
This assumes you can apply some magical 'fix' to each of those root causes, but it's worth a shot. For example: transparent icons in IE6 may be broken because IE6 cannot easily process .png files, so have a version-control trigger that rejects .pngs on checkin and the issue is fixed.
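If your bug tracker can export the defect list, the counting-and-weighting step is only a few lines of Python; the category names and pain weights below are invented for illustration.

```python
# Sketch of the "categorize, count, weight" step. Categories and pain
# weights are made up; substitute whatever your last 100 defects suggest.
from collections import Counter

defect_categories = [
    "menu typo", "off by one", "js error", "menu typo", "js error",
    # ...one entry per categorized defect...
]

pain = {"menu typo": 1, "off by one": 3, "js error": 8}

counts = Counter(defect_categories)
ranked = sorted(counts.items(),
                key=lambda item: item[1] * pain.get(item[0], 1),
                reverse=True)

for cause, n in ranked:
    print(f"{cause}: {n} defects, weighted score {n * pain.get(cause, 1)}")
```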
I hope that helps.
The Subversion team has developed some pretty good test routines, by automating the whole process.
I've begun using this process myself, for example by writing tests before implementing the new features. It works very well, and generates consistent testing through the whole programming process.
SQLite also has a decent test system with some very good documentation about how it's done.
In my experience with test-driven development, the time saving comes well after you have written out the tests, or at least after you have written the base test cases. The key thing here is that you actually have to write out your automated tests. The way you phrased your question leads me to believe you weren't actually writing automated tests. After you have your tests written, you can easily go back later and update the tests to cover bugs they didn't previously find (for better regression testing), and you can easily and relatively quickly refactor your code with peace of mind that the code will still work as expected after you have substantially changed it.
You wrote:
"Thanks for the answers above here, initially my question was how to reduce the time on general testing, but now, the problem is down to how to write the efficient automate test code."
One method that has been proven in multiple empirical studies to work extremely well to maximize testing efficiency is combinatorial testing. In this approach, a tester will identify WHAT KINDS of things should be tested (and input it into a simple tool) and the tool will identify HOW to test the application. Specifically, the tool will generate test cases that specify what combinations of test conditions should be executed in which test script and the order that each test script should be executed in.
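I'm simplifying, but the core idea can be sketched in a few lines of Python: list the kinds of things to vary, then greedily pick full combinations until every pair of values appears in at least one test. The parameters below are invented, and real tools use much better algorithms, but the sketch shows how a small pairwise set falls out of a much larger full product.

```python
# A naive greedy pairwise ("all-pairs") selector, for illustration only.
# Parameter names and values are made up; real tools are far more sophisticated.
from itertools import combinations, product

parameters = {
    "browser": ["IE", "Firefox", "Chrome"],
    "os": ["Windows", "Linux"],
    "account": ["admin", "guest", "readonly"],
}

names = list(parameters)
all_candidates = [dict(zip(names, values)) for values in product(*parameters.values())]

# Every pair of values (across two different parameters) must be covered once.
uncovered = {
    ((a, va), (b, vb))
    for a, b in combinations(names, 2)
    for va in parameters[a]
    for vb in parameters[b]
}

def pairs_of(test):
    return {((a, test[a]), (b, test[b])) for a, b in combinations(names, 2)}

chosen = []
while uncovered:
    # Greedily take the candidate covering the most still-uncovered pairs.
    best = max(all_candidates, key=lambda t: len(pairs_of(t) & uncovered))
    chosen.append(best)
    uncovered -= pairs_of(best)

print(f"{len(chosen)} tests instead of {len(all_candidates)}")
for test in chosen:
    print(test)
```

For the three parameters above this picks roughly half of the full 18 combinations, and the gap widens quickly as more parameters and values are added.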
In the August 2009 IEEE Computer article I co-wrote with Dr. Rick Kuhn, Dr. Raghu Kacker, and Dr. Jeff Lei, for example, we highlight a 10-project study I led where one group of testers used their standard test design methods and a second group of testers, testing the same application, used a combinatorial test case generator to identify test cases for them. The teams using the combinatorial test case generator found, on average, more than twice as many defects per tester hour. That is strong evidence for efficiency. In addition, the combinatorial testers found 13% more defects overall. That is strong evidence for quality/thoroughness.
Those results are not unusual. Additional information about this approach can be found at http://www.combinatorialtesting.com/clear-introductions-1 and in our tool overview here. It contains screenshots and an explanation of how the tool makes testing more efficient by identifying a subset of tests that maximizes coverage.
Also, a free version of our Hexawise test case generator can be found at www.hexawise.com/users/new.
There is nothing inherently wrong with spending a lot of time testing if you are testing productively. Keep in mind, test-driven development means writing the (mostly automated) tests first (this can legitimately take a long time if you write a thorough test suite). Running the tests shouldn't take much time.
It sounds like your problem is that you are not doing automated testing. Using automated unit and integration tests can greatly reduce the amount of time you spend testing.
First, it's good that you recognise that you need help -- now go and find some :)
The idea is to use the tests to help you think about what the code should do; they're part of your design time.
You should also think about the total cost of ownership of the code. What is the cost of a bug making it through to production rather than being fixed first? If you're in a bank, are there serious implications about getting the numbers wrong? Sometimes, the right stuff just takes time.
One of the hardest things about any project of significant size is designing the underlying architecture and the API. All of this is exposed at the level of unit tests. If you're writing your tests first, then that aspect of design happens when you're coding your tests, rather than the program logic. This is compounded by the added effort of making code testable. Once you've got your tests, the program logic is usually quite obvious.
That being said, there seem to be some interesting automatic test builders on the horizon.

Does anyone have any useful resources to share or tips to offer for developing a MUD?

As a hobby project I am trying to create a derivative of ROM (Diku/Merc based, now defunct). I would appreciate it if anybody who has done something similar has some useful resources to share or tips to offer. I'm finding that a lot of the resources, such as mailing lists, are no longer active, and many links are dead.
I've picked ROM because that is what I am familiar with as a player, but the source is more complicated than anything I have come across, and I wouldn't mind picking a codebase that is easier to understand. Any recommendations before I dive in in earnest would also be appreciated.
As for MUDding communities in general, I don't know of much beyond The Mud Connector, because I've always been in more of a user/player role than a developer one. A forgiving and active place where I can get answers to my questions is what I value most.
After extensive research I've decided to go with a tbaMUD codebase. I may elaborate later, but very broadly:
Coding experience is more important than experience as a player, and this has convinced me to abandon my roots. I wanted a well-documented, reasonably modern, manageable codebase undergoing active development, and this seems to fit the bill.
Anyway, MUDs are truly a labour of love, and you have to have a few screws loose if you plan to run one. Moreover, the glory days have passed (it seems like many MUDs shut down en masse around 2000), and in my opinion the community is largely inactive and fragmented. An excerpt from some of the tbaMUD docs sums this up nicely:
So, you're sure you want to run your own MUD? If you're already an old hand at playing MUDs and you've decided you want to start one of your own, here is our advice: sleep on it, try several other MUDs first. Work your way up to an admin position and see what running a MUD is really about. It is not all fun and games. You actually have to deal with people, you have to babysit the players, and be constantly nagged about things you need to do or change. Running a MUD is extremely time consuming if you do it well, if you are not going to do it well then don't bother. Just playing MUDs is masochistic enough, isn't it? Or are you trying to shave that extra point off your GPA, jump down that one last notch on your next job evaluation, or get rid of that pesky Significant Other for good? If you think silly distractions like having friends and seeing daylight are preventing you from realizing your full potential in the MUD world, being a MUD Administrator is the job for you.
Anyway, I don't have high hopes for success, but this is something I will find interesting; it will improve my code-fu and keep me busy for many years to come :D
There is no active ROM developer mailing list, so tba definitely is a better choice. There was some effort to clean up ROM with the RaM project.
Dead Souls sees active development as well (the main dev is a hero in my eyes for the amount of work he produces).
I would not recommend MUCK as the userbase is rather small. However that is not to say there isn't good work being done -- look up the user Valente on the code subforum of the wora.netlosers.com forum, as he's probably one of the foremost MUCK developers at the moment.
However if you thought that ROM was complicated I should caution you about tackling an established/canon codebase for any purpose other than getting a familiarity with mud servers. For actual development you may be better off with a barebones codebase such as NakedMUD (C/Python) or even something slimmer than that such as Socketmud (ports in many languages).
There are of course dozens of mud servers you can look at; all will be educational in some manner, but in the beginning stages it won't be obvious what is good practice and what is not. You may want to look up ColdC (similar to LP) and TeensyMUD (Ruby) to study. The author of Teensy, Jon Lambert, has a useful developer site up at http://sourcery.dyndns.org/.
However, you'll find very experienced ROM and tba (i.e., Circle) developers at MudBytes, and I'll second Sam in saying that it is the most active MUD developer site currently. It's a little surprising, but in the last year there has been significant growth in activity at MB. I think people are coming into the fold, so to speak, and gathering at MB. There is also a good-sized code repository at MB.
Your other options are The Mudconnector which you already know, Top Mud Sites which has a somewhat smaller crowd of mostly developers (typically of established and long-running muds), and Mudlab, which is much quieter but usually with a good signal to noise ratio. MudGamers is an interesting new site with a fairly quiet forum, but a new approach to creating a more contemporary-looking portal for playing muds.
Not to be overlooked is the archive for the old mud-dev mailing list. There is a staggering amount of information to be gleaned there. The raw archive can be found at muddev.wishes.net/. Richard Tew also has done some noble work in combing through old usenet archives to find valuable mud development related threads, which you can find through his mud tag at posted-stuff.blogspot.com/search/label/mud.
I should note that many muds use the IMC chat network to link muds (MB has a portal to this as well on the front page of their site). Once your mud is running it can be useful to get on IMC if you're in need of real-time chat to fix a problem (of course, there are many IMC channels and you'll want to choose which one you use prudently).
Despite the fact that muds today are niche at best and unheard of at worst, there is no shortage of new muds in development. They offer a design and programming challenge that is still accessible to the solo developer, unlike any graphical game of equal size or complexity.
Furthermore you shouldn't be discouraged if it feels like you'll never release a playable game. Like many larger projects you may start and abandon it many times over, but you'll be building proficiencies across a wide spectrum of programming skillsets and applications -- not many projects will allow you to take such a whole systems approach. Good luck!
An active community seems to be around for the Dead Souls MUDlib:
http://en.wikipedia.org/wiki/Dead_Souls_MUDlib
I was an old player of Nightmare LPMud, which sadly disappeared. I'm not much into the coding of these MUDs, but I have been following this community loosely, just due to so many positive MUDding memories.
Take a look at Nameless MUCK. It's a solid piece of software.
First, concentrate on getting or finding a solid telnet socket library; this is generally the main protocol for a MUD.
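As a starting point, a line-oriented TCP server (which is most of what a MUD needs before you layer real telnet option negotiation on top) is short in Python with asyncio; the port, greeting, and echo behaviour below are placeholders for a real command parser.

```python
# A minimal line-oriented TCP server as a MUD skeleton (Python asyncio).
# Real telnet option negotiation (IAC sequences) is deliberately ignored.
import asyncio


async def handle_player(reader, writer):
    writer.write(b"Welcome to the stub MUD. Type 'quit' to leave.\r\n")
    await writer.drain()
    while True:
        line = await reader.readline()
        if not line:                      # client disconnected
            break
        command = line.decode("ascii", errors="ignore").strip()
        if command.lower() == "quit":
            writer.write(b"Goodbye.\r\n")
            await writer.drain()
            break
        # Placeholder: hand the command to your game engine here.
        writer.write(f"You said: {command}\r\n".encode("ascii"))
        await writer.drain()
    writer.close()
    await writer.wait_closed()


async def main(host="0.0.0.0", port=4000):
    server = await asyncio.start_server(handle_player, host, port)
    async with server:
        await server.serve_forever()


if __name__ == "__main__":
    asyncio.run(main())
```

You can poke at it with `telnet localhost 4000`; ROM and tbaMUD do the same job in C with select()-based loops, which are worth reading for comparison.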
Next, create a FULL list of features that you want to implement; you should probably get some sort of feature or bug tracking system set up (even if it is a spreadsheet). Then prioritize the features based on dependencies between systems.
Check out http://www.gamasutra.com for some architectural discussions on creating games in general, creating basic AI, character systems, and multi-player games.
Once you understand the theory, it is just a butt load of programming to build in everything you want to support.
I'd make the MUD engine abstract enough to run behind a terminal client, a web-based Ajax client, and maybe stand-alone clients - i.e., don't tie the front end in with the actual game logic. I'm not averse to a MUD actually using a decent font for the text, and real graphics where necessary instead of ASCII (as interstitials, or to make notes on the bulletin board look like notes, etc., not in place of the text-based interface).
You might also want to write converters from existing MUD script file formats into your own format, so that you don't have to spend ages creating zones.
I find the problem with MUDs is that there is too much emphasis on killing NPCs, and not many puzzles or other interesting aspects. So a more interesting, story-oriented engine (possibly to the extent of sharding zones for single-player or single-team use) could be a nice feature to have.
I will take this opportunity to recommend MudBytes, which is probably the most active MUD developer site available right now.

Another one about measuring developer performance [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 3 years ago.
I know the question about measuring developer performance has been asked to death, but please bear with me. I know the age-old debate about how you cannot measure the performance of developers, but the reality is that at our company there is a "need" to do it one way or another.
I work for a relatively small company (small in terms of developers), and management felt the need to measure developer performance based on "functionality that passes test (QA) at first iteration".
We somehow managed to convince them that this was a bad idea for various reasons, and instead settled on measuring developers by whether the code they put into test passes all of its unit tests. Since in our team there is no "requirement" per se to develop unit tests up front, we felt it was an opportunity to formalise the need to develop unit tests, i.e. to put some incentive on developers to write them.
My problem is this: since arguably we will not be releasing code to QA that does not pass all unit tests, how can one reasonably measure developer performance based on unit tests? Based on unit tests, what makes a good developer stand out?
Functionality that fails although the unit tests pass?
Not writing unit tests for a given piece of functionality at all, or not writing adequate unit tests?
Quality of the unit tests written?
Number of unit tests written?
Any suggestions would be much appreciated. Or am I completely off the mark in this kind of performance measurement?
The question is not "what do we measure?"
The question is "What is broken?"
Followed by "how do we measure the breakage?"
Followed by "how do we measure the improvement?"
Until you have something you're trying to fix, here's what happens.
You pick something to measure.
People respond by doing what "looks" best according to that metric.
You realize you're measuring the wrong thing.
Specifically.
"functionalities that pass test (QA) at first iteration" Which means what? Save the code until it HAS to work. Later looks better. So, delay until you pass QA on the first iteration.
"Functionality that fail although unit test passes?" This appears to be "incomplete unit tests". So you overtest everything. Take plenty of time to write all possible tests. Slow down delivery so you're not penalized by this measurement.
"Not writing unit test for a given functionality at all, or not adequate unit tests written?" Not sure how you measure this, but it sounds the same as the previous one.
.
"Quality of unit test written?" Subjective measurement. Always a good plan. Define how you're going to measure quality, and you'll get stuff that maximizes that specific measurement. Want more comments? Count those. What more whitespace? Count that.
"Number of Unit tests written?" Nothing motivates me to write redundant tests like counting the number of tests. I can easily copy and paste nearly identical code if it makes me look good according to this metric.
You get what you measure. No matter what metric you put in place, you will find that the specific thing measured will subvert most other quality concerns. Whatever you measure, be absolutely sure you want people to maximize that measurement while reducing others.
Edit
I'm not saying "Don't Measure". I'm saying "you get what you measure". Pick a metric that you want maximized at the expense of others. It's not hard to pick a metric. Just know the consequence of telling management what to measure.
I would argue that unit tests are a quality tool and not a productivity tool. If you want both to encourage unit testing and to give management a productivity metric, make unit testing mandatory to get code into production, and report on productivity based on code/features that make it into production over a given time frame (weekly, bi-weekly, whatever). If we take as a given that people will game any system, then design the game to meet your goals.
I think Joel had it spot-on when he said that this sort of measurement will be gamed by your developers. It will not achieve what it set out to and you will likely end up with quality suffering (from the perception of everyone using the system) whilst your measurements of quality all suggest things have never been better!
Edit: You say that management are demanding this. You are a small company; your management cannot afford for everyone to up sticks and leave. Tell them that this is rubbish and you'll play no part in it.
If the whole idea is so that they can rank people to make them redundant (it sounds like it might be at this time), just ask them how many people have to go and then choose those developers you believe to be the worst, using your intelligence and judgement and not some dumb rule of thumb.
For some reason the defect black market comes to mind... although this is somewhat in reverse.
Any system based on metrics when it comes to developers simply isn't going to work, because it isn't something you can measure using conventional methods. Whatever you try to put in place with regards to anything like this will be gamed (because solving problems is what we do all day, and this is just another problem to be solved) and it will be detrimental to your code (for example I wrote a simple spelling corrector the other day with about 5 unit tests which were sufficient to check it worked, but if I was measured on unit tests I could have spent another day writing another 100 which would all pass but would add no value).
You need to work out why management want this system in place. If it's to give rewards then you should have a look at Joel Spolsky's article about incentive pay which is not far off the mark from what I've seen (think about bonus day and see how many people are really happy -- none as they just got what they thought they deserved -- and how many people are really pissed off -- anyone who got less than they thought they deserved).
To quote Steve Yegge:
shouldn't there be a rule that companies aren't allowed to do things that have been formally ridiculed in a Dilbert comic?
There was a study I read recently in the newspaper here at home in Norway. In a nutshell, it said that office-type jobs generally see no benefit from performance pay, the reason being that measuring performance in most office-type jobs is almost impossible.
However, simpler jobs like strawberry picking benefit from performance pay, because it is really easy to measure performance. Nobody is going to feel bad because a high performer gets higher pay, because everybody can clearly see that he or she has picked more berries.
In an office it is not always clear that the other person did a better job, and so a lot of people will be demotivated. They tested performance pay on teachers and found that it gave negative results. People who got higher pay often didn't see why they did better than others, and the ones who got lower pay usually couldn't see why they got less.
What they did find, though, was that non-monetary rewards usually helped: getting encouraging words from the boss for a job well done, etc.
Read iCon on how Steve Jobs managed to get people to perform. Basically he made people believe that they were part of something big and were going to change the world. That is what makes people put in an effort and perform. I don't think developers will put in a lot of effort for just money. It has to be something they really believe in and/or think is fun or enjoyable.
If you are going to tie people's pay to their unit test performance, the results are not going to be good.
People are going to try to game the system.
What I think you are after is:
You want people to deploy code that works and has a minimum number of bugs
You want the people that do that consistently to be rewarded
Your system will accomplish neither.
By tying people's pay to whether or not their tests fail, you are creating a disincentive to writing tests. Why would someone write code that, at best, yields no benefit, and at worst limits their salary? The overall incentive will be to keep the size of the test bed minimal, so that the likelihood of failure is minimized.
This means that you will get more bugs, except they will be bugs you just don't know about.
It also means that you will be rewarding people that introduce bugs, rather than those that prevent them.
Basically you'll get the opposite of your objectives.
These are my initial thoughts on your four specific questions:
Tricky, this one. At first glance it looks OK, but if the code passes its unit tests then, unless the developers are cheating (see below) or the tests themselves are wrong, it's difficult to see how you'd demonstrate this.
This seems like the best approach. All functions should have a unit test and inspection of the code should be able to reveal which ones are present and which are absent. However, one drawback could be that the developers write an empty test (i.e. one that just returns "passed" without actually testing anything). You might have to invest in lengthy code reviews to spot this one.
How are you going to assess quality? Who is going to assess quality? This assumes that your QA team has access to highly skilled independent developers - which may be true, but seems unlikely.
Counting the number of anything (lines of code, unit tests written) is a non-starter. Developers will simply write large numbers of useless tests.
I agree with oxbow_lakes, and in fact the other answers that have appeared since I started writing this - most forms of measurement will be gamed or worse resented by developers.
I believe time is the only, albeit subjective, way to measure a developer's performance.
Given enough time in any one company, good developers will stand out. Project leaders will know who their best assets are. Bad developers will be exposed given enough time. Unfortunately, therein lies the ultimate problem: enough time.
Basic psychology - People work to incentives. If my chances of getting a bonus / keeping my job / whatever are based on the number of tests I write, I'll write tons of meaningless tests - probably at the expense of actually doing my real job, which is getting a product out the door.
Any other basic metric you can come up with will suffer the same problem and be equally meaningless.
If you insist on "rating" devs, you could use something a bit more lateral. Scores on one of the MS certification tests perhaps (which has the side effect of getting people trained up). At least that's objective and independently verified by a neutral third party so you can't "game" it. Of course that score also bears no resemblance to the person's effectiveness in your team but it's better than an arbitrary internal measurement.
You might also consider running code through some sort of complexity measurement tool (simpler==better) and scoring people on their results. Again, it has the effect of helping people to become better coders, which is what you really want to achieve.
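As a rough illustration of the complexity idea (not an endorsement of scoring people on it), here is what a cyclomatic-complexity report can look like for a Python codebase using the radon library; the "src" directory name is a placeholder.

```python
# Sketch: report the worst cyclomatic complexity per file with radon
# (pip install radon). "src" is a placeholder for your source tree.
from pathlib import Path

from radon.complexity import cc_visit

for path in sorted(Path("src").rglob("*.py")):
    blocks = cc_visit(path.read_text())
    worst = max((block.complexity for block in blocks), default=0)
    print(f"{path}: worst block complexity {worst}")
```

Similar tools exist for most languages, and trending the numbers over time is usually more useful than ranking individuals on them.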
Poor Ash...
Kudos for using managerial ignorance to push something completely unrelated, but now you have to come up with a feasible measure.
I cannot come up with any performance measurement that is not ridiculous or easily gamed. Unit tests cannot change that. Since Kopecks and the Black Market were linked within minutes, I'd rather give you ammunition for not requiring individual performance measurements:
First, Software is an optimization between conflicting goals. Evaluating one or a few of them - like how many tests come up during QA - will lead to severe tradeoffs in other areas that hurt the final product.
Second, teamwork means more than just the product of a few individuals glued together. The synergistic effects cannot be tracked back to the effort or skill of a single individual - and when developing software in a team, they have huge impact.
Third, the total cost of software unfolds only after time. Maintenance, scalability, compatibility with new platforms, interaction with future products all carry a significant long term cost. Measuring short term cost (year-over-year, or release to production) does not cover the long term cost at all, and once the long term cost is known it is pointless to track it back to the originator.
Why not have each developer "vote" on their colleagues: who helped us achieve our goals most in the last year? Why not trust you (as - apparently - their manager or lead) to judge their performance?
There are a combination of factors relating to the unit tests that should be fairly easy for someone outside the development group to put on a scorecard, measuring the following:
1) How well do the unit tests cover the code and any common input data that may be entered for UI elements? This may seem like a basic thing, but it is a good starting point and is something that can be quantified easily with tools like NCover, I think. (See the coverage sketch after this list.)
2) Are boundary conditions tested, e.g. nulls for parameters, or letters instead of numbers, and other basic validation tests? This is also something that can be quantified easily by looking at the parameters of various methods, as well as having coding standards to prevent bypassing things here, e.g. making all of an object's methods besides the constructor take 0 parameters and thus have no boundary tests.
3) Granularity of a unit test. Does the test check for one specific case rather than trying to cover lots of different cases in one test? Do test classes contain thousands of lines of code?
4) Grade the code and tests in terms of readability and maintainability. Would someone new have to spend days figuring out what is going on, or is the code somewhat self-documenting? Examples would include method names and class names being meaningful and documentation being present.
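On point 1, NCover handles coverage on the .NET side; as a hedged, language-agnostic illustration, this is roughly what the same check looks like in Python with coverage.py. The "myapp" package, the "tests" directory, and the 80% bar are arbitrary examples, not recommendations.

```python
# Sketch: measure unit-test coverage programmatically with coverage.py
# (pip install coverage). Package name, test directory, and the 80% bar
# are arbitrary examples.
import unittest

import coverage

cov = coverage.Coverage(source=["myapp"])
cov.start()

# Discover and run the test suite while coverage is recording.
suite = unittest.defaultTestLoader.discover("tests")
unittest.TextTestRunner().run(suite)

cov.stop()
cov.save()

total = cov.report()   # prints a per-file table and returns the total percentage
if total < 80:
    raise SystemExit("Coverage fell below the agreed 80% bar")
```

The resulting number drops straight into the kind of scorecard described above, though everything said earlier in this thread about gaming metrics applies to it too.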
The last three things are what I suspect a manager, team lead, or someone else outside the group of developers could rank and handle. There may be ways to game this and exploit things, but the question is what end results you want to have. I'm thinking well-documented, high-quality, easily understood code = good code.
Look up Deming and Total Quality Management for his thoughts on why performance appraisals should not be done at all for any job.
How about this instead: assume all employees are acceptable employees unless proven otherwise.
If someone does something unacceptable or does not perform to the level you need, write them up as a performance problem. Determine how many write-ups they get before you boot them out of the company.
If someone does something well, write them up for doing something good. If you want to offer a bonus, give it at the time the good performance happens. Even better, make sure you announce when people get an attaboy; people will work towards getting them. Sure, you will have the political types who will try to game the system and get written up on the basis of others' achievements, but you get that anyway in any system. By announcing who got them at the time of the good performance, you have removed the secrecy that allows the office-politics players to function best. If everyone knows Joe did something great and you reward Mary instead, people will start to speak up about it. At the very least, Joe and Mary might both get an attaboy.
Each year, give everyone the same percentage pay raise, since you have only retained the workers who have acceptable performance and you have rewarded the outstanding employees throughout the year any time they did something good.
If you are stuck with measuring, then measure how many times you wrote someone up for poor performance and how many times you wrote someone up for good performance. Then you have to be careful to be reasonably objective about it and write up even the people who aren't your friends when they do well, and the people who are your friends when they do badly. But face it, the manager is going to be subjective in the process no matter how much you insist on objective criteria, because there are no objective criteria in the real world.
Definitely, and following the accepted answer, unit tests are not a good way to measure development performance. In fact, they can be an investment with little to no return.
… Automated tests, per se, do not increase our code quality but do need code output
– From Measuring a Developer's impact
Reporting on productivity based on code/features that make it into production over a given time frame, and making unit tests mandatory, is actually a good system. The problem is that you get little feedback from it, and there might be too many excuses for missing a goal. Also, features, refactors, and enhancements can be of very different sizes and natures, so on most occasions it wouldn't be fair to compare them as equally relevant for the organisation.
Using a version control system such as git, we can atomize the minimum unit of valuable work into commits/PRs. Visualization (as in the article linked above) is a better and nobler objective for management, rather than having a flat ladder or metric to compare developers against.
Don't try to measure raw output. Try to understand developer work; go and visualize it.