Do you really think that money is not supposed to be a motivator in a startup? - business-rules

I have always heard this motto from many entrepreneurs: do what you are passionate about, don't do anything just for the money!
I agree that money is just a medium of exchange, but I nevertheless consider it an important factor.
What do you advise/think, IT entrepreneurs: focus on the market/money, or just on what you are passionate about?

Startups, in general, take up all your waking time, are high-stress, require reasonably high-quality and creative work, and offer no guaranteed return. For something like that, most people need all the motivation they can get, and large quantities of money (not guaranteed) aren't enough. If you're not passionate about what you're doing, you're likely to fail, and almost certain to be miserable.
That being said, if what you're passionate about isn't something people will pay for, leave it as a hobby.

Don't do something you don't want to do just for money. If it's something you want to do, take the money.
If very large bags of money are involved, preferably with a large dollar sign printed on the sides, that overrules everything else.

You're looking at this as a black/white issue, when it's really a grey issue. I would argue that for a business, you should do something you're passionate about, but that also has a reasonable expectation of profitability.
Money is not the main objective, but it is necessary to keep your options open.

There's always a happy medium. I think it's important to do what you are passionate about... but still be able to put food on the table. If your passion means living with rat traps and letting your children starve, then you may have to suck it up and find something you are less passionate about. Or, maybe do what you are passionate about during the day but deliver pizzas at night to compensate. However, I think it is fine if doing what you love means buying a beater car and taking stay-cations instead of going to places with swim-up bars.

Everyone has their break-even point. Let's face it, if someone is offering you 1 billion dollars to do something despicable, you very well might do it.
So, in the end it is all about what you value.
When it comes to ME:
I personally believe that money is important; it provides security and peace of mind. I also find it very hard to sleep well at night unless I am sure that my job is secure. That said, I couldn't bring myself to get up in the morning if I loathed what I was doing. You also don't have to do something for the rest of your life. If it propels you towards a larger goal, taking a less than desirable job for a year or two might be a means to an end.
So, saying "don't do anything for money" is a little myopic. In my oh so humble opinion.

Find that point where the two collide... money and your technical passion... but don't forget that managing it all is outside both realms.

Unless you have a lot of capital to back your ideas, I think money has to be more of the motivator. But you need to find a good mix between money and market. You can come up with a great idea that the market won't support, or find a booming market but have no great idea to build for it. For any application I build, I look at the audience of the utility/website, etc., check whether there is already a utility doing something similar, and see how well it works, sells, etc. That usually tells me whether the market is saturated with tools or whether my idea could sell as well. Don't do just what you're passionate about; you need a good mix. You can always pursue the passionate ideas as a side job while you're creating solid apps that make you the money.

A whole lot of people followed their passion without a viable profit-making model during the .com bubble. As a result, the Internet is littered with sites that raised twenty million dollars, built something cool, and then vanished due to lack of revenue.
If people are putting money into a new business, it's invariably to get money out. If you don't make a profit, they eventually stop funding your hobby.


How do you understand a large chunk of code?

I am a fresh college graduate who just started my job. During my ramp-up period, I need to learn a lot of product code. There are some design docs, but they do not help much.
Can you provide some general techniques to browse and understand huge product code (specifically C++)?
Run it through doxygen. This will generate html documentation which will be helpful even if the code does not have proper doxygen-style comments.
Another piece of good advice is to look through the unit tests, if there are any. If there are no unit tests, a good way to understand the code is to write your own. The effort to do this will pay for itself many times over.
Use every method available to you (in no particular priority):
Use the product itself and understand what it does
Talk to the devs that maintained it or have worked with it previously
Debug through it and see how data flows and how classes interact ("when I click this button, what exactly happens, who is responsible?")
Look at architecture, UML, or class diagrams
One of my favorites: create your own diagrams of class hierarchies, class interactions, general control flow, high-level components, process/DLL interactions, object lifetimes and management
If they're not totally out-of-date, read the dev/test/user specs (goes well with #1)
Read the documentation on it
Most of all: be tenacious and persistent. If you don't put in the work, don't expect to understand it. If you don't understand something, dig and dig until you do. Software is not magic, it's just hard work :)
Some people will tell you to start with the data structures, but in a large system even that's not terribly helpful much of the time. I can think of four major points:
Take your time. Often, it's more like a whole series of gestalt shifts than it is a single, linear, gradual understanding. So be patient.
No matter how big it is, you should be able to put a breakpoint in and walk it in a debugger. Even in a large, complicated, multi-threaded system, you should be able walk through and see what's happening.
Ask for bugs, and start fixing them, no matter how crazy they seem. It's akin to dropping yourself into a foreign country; you'll pick up the language eventually.
Find a mentor. A jungle guide is invaluable.
I think there have been a few good responses already. My 2c worth...
Not sure what you class as huge (10 KLOC, 1000 KLOC, 10000 KLOC, etc), but one would hope that this is broken down in some way and is not a monolithic single program. Perhaps your management has some guidance on which 'module(s)' you are most likely to be spending time in at the moment. Hopefully this can help break down the problem scope.
Firstly, before you try to understand the code, try to understand the product. What does it do? Then how does it do it? What does it interact with? Then how does it interact? etc...
When getting to the code try to understand the high level design and philosophy first, and work on the breadth before the depth. I agree with some of the above re fixing some bugs, but I also strongly suggest you continue to get a handle on the high level even if you need to get into the details to fix some bugs.
I also agree with the above in terms of generating some diagrams for yourself if you can't find any already in existence. And then share them, perhaps on a team/product wiki. I'm curious as to why the existing doco does not help very much. Typically this is because that type of doco was generated from the early concepts and the product no longer bears any similarity to them; if that is not the case, think about what you can contribute. One assumes that someone else will soon be where you are today, and you are in an ideal position to know what essential doco is missing!
If the product is actually 'huge' then you have to accept that you will never be able to hold all of it in your head, so the best thing you can do is be familiar enough to know where to start looking (comes back to understanding the product, and approaching code breadth first).
This is obviously a pretty common question, and it's similar to this one (and the questions related to it): How to understand the design and code flow of any product quickly?
Dig through some of those answers / comments, for starters. Else, we'll just end up repeating them. :)

(student) interview questions - programming for a robotics lab

My robotics lab is looking for programmers to work on some projects we have at the moment.
We nailed down the requirements (mainly C++ and experience with OpenGL and 3D), but due to obvious money constraints we can't afford to hire Great Developers. Instead we're going to settle for Talented Students, offering them projects for their dissertation/thesis and hoping for some fresh ideas and creativity from their side. We can also afford to pay students who just graduated (first job experience).
So my question is:
In your experience, how did you spot a Talented Student (computer scientist or engineer)? What questions did you ask? What else helped you find a candidate who turned out to be a Good Programmer? (note: they might not know much about a specific language, but might have the ability to learn pretty fast)
or, if you were the interviewee,
Which questions were asked that made you jump on the bandwagon? Or, if you had an awful experience, what - in retrospect - was an obvious warning signal that you ignored?
Please note that I am not looking for an argumentative answer. We can talk all day long about what's best for us and never agree.
Instead, I'm looking for tales from your experience. Anecdotes, stories, hints, everything will help.
Background:
A bit more background: working for academia here is slightly different than working for the private sector (here = Italy). There are no 'deadlines' to 'sell' a product; instead, it's all proof-of-concept based. Nothing you start working on has the guarantee to be functional.
A comic best describes it: reinventing the wheel
I am considering doing Coding Questions for their interview, but all my colleagues are scoffing at me (too scary, nobody will ever come to work for us again, nobody really knows how to code, etc.).
Coding-wise, programming done by researchers is ... ugly. I am fighting to get a version control system in constant use, people have to be chased down to report bugs and document their code, everything is coded-so-that-it-works, and we rarely go back to old code to 'fix bugs'. Basically, once it's somewhat working, the project is closed and people go work on another project.
Lots has been reinvented and rewritten over and over again (just because nobody knew it was already there). People come and go, future is uncertain, but we play with robots so it's very cool :)
Furthermore, being really understaffed, nobody can follow you and guide you in your project. At best you're the one that has to come up with a plan, background literature and a working prototype.
Hence, we are looking for people that:
have some background to get started
can be highly independent
do want to learn and build their own expertise in new fields
Actually, here's my best advice:
Recruit among your students.
Since you work for an academic institution I assume that either you or your colleagues teach. This provides you with a wealth of information about what potential recruits are capable of -- how fast they learn, how motivated they are, what they are good and bad at, what the code they turn in for lab assignments and projects looks like, etc.
Firstly, in industry, coding questions are very much the norm - I'd be worried if coding questions weren't asked!
I've been responsible for doing technical interviews for about ten years now. And yes, I ask coding questions. But the questions themselves aren't really the point. What I'm more interested in is getting an idea of whether the candidate can think, and articulate their thoughts.
One question I ask (assuming the simple earlier stuff has gone well) is about inheritance hierarchies. There is no one right answer, although there are a lot of wrong ones. But what's important is how they approach the question, and the points they come up with in favour of one design over another.
Background knowledge is useful, in that it shows that they have an interest in the area in which you're working - but really knowledge can be gained. Intelligence is much more important.
However it's quite possible to have intelligent people who are impossible to work with! I haven't figured out how to determine who they are.
I did such a project when I was a student: i.e. a 4-month project, working half-time. It was not about robots, per se.
I think that the most obvious requirement is motivation/passion. Since they'll be mostly on their own you will need them to be somewhat independent and able to think for themselves, this requires motivation first and foremost.
In order to determine whether the candidate is motivated or not, begin by asking them about the project itself. If they only gave it a cursory glance, they're likely not motivated. Also look at their experience / courses: optional courses in CS, projects they've done, etc... anything indicating that they really care about CS / development in general and are not there just because they've heard it paid well.
Then comes the question of ability. Like you said, it might not be easy to spot those who will be smart enough to figure stuff out by themselves and DO things. Once again, you can ask them about past projects, having them detail the issues they faced and how they solved them.
Finally, I agree with you that some demonstration of their abilities is in order. They might be a bit tense initially, so I would do this at the end; by that point the interview is already going and the earlier questions will have given them a chance to relax.
You don't necessarily need them to do coding questions; I think it's mostly about reasoning. Try to pick problems related to your area of work, for example one you really had in the past, and get them to analyze the problem. If possible they should take the lead and ask you questions about the problem itself here.
"We've had an issue with the robot not being able to analyze the images the camera took; it could not correctly determine the moving objects itself. Do you have an idea how you would do that?"
Then you'll need to get them to think about a solution. You need a whiteboard here; ask them to think aloud so that you can follow their reflection. You'll probably need to nudge them a bit from time to time to keep them on track. Their reaction to your input is also a key point here, since you want them to be able to accept criticism and build on it; otherwise you might have issues directing them afterward.
Frankly, try to avoid asking them for the quicksort algorithm, or the introsort, or the radix sort... If they need sorting, they'll just fire up their computer and browse the internet. On the other hand, getting them to analyze an existing algorithm unknown to them (for example, the median of 5 sort) and checking that they understand why it works, could be worth it. If they need to work on their own, they'll need to be able to learn on their own too :)
As others say, try to hire someone that's motivated!
For master thesis students I put more emphasis on knowing the basic skills (programming, knowing how to use version control) as they don't stay on long enough to learn everything along the way.
If they're going to work mostly on their own and you have no special requirements on language, I wouldn't focus much on language questions. But every decent programmer knows at least one language fairly well, so get a sample of their prior work or make them code some simple application to test that they don't suck.
I'd focus more on algorithms and data structures. Ask rudimentary stuff that every programmer should know: when to use a list and when to use a vector, why summing a row-major matrix by iterating over the columns first is bad, basic complexity analysis questions, etc. That will weed out many of the bad ones.
Ask some design questions too perhaps, e.g. what is "coupling" and why is it bad, ask them if they know what a design pattern is, etc.
Check that the applicants have a solid grasp of linear algebra and coordinate system changes in particular if they're going to work with any 3D stuff like OpenGL. In my experience, learning the API is simple, wrapping your head around how the transformations work less so.
Obviously, if you expect them to perform any real robotics-specific work, you should check for that knowledge as well. E.g. estimation (understanding simple EKF and particle filters is a requirement in my book), control theory, pattern recognition, machine learning, vision, or whatever is useful for the particular task.
If I were hiring someone for theoretical work I would perhaps loosen up on the CS/programming skills and focus more on math knowledge. Someone with solid math skills will pick up the CS easily, and programming is just programming.
Ask for references or to see some of the prior work. Many great students already have some interesting project to show after graduation.
I'm not sure how common this is at universities in general, but I would look for a games programming (or robotics) course on the transcript where the candidate, as a student, succeeded in completing a project with a team. I would ask the candidate to describe how that project worked (important technical details) and the role he had in the team. The only way to really tell whether someone is good at something is to see what happens when they try it, and since you're in academia, recruiting students, that should not be a problem.

Another one about measuring developer performance [closed]

I know the question about measuring developer performance has been asked to death, but please bear with me. I know the age old debate about how you cannot measure performance of developers, but the reality is, at our company there is a "need" to do that one way or another.
I work for a relatively small company (small in terms of developers), and management felt the need to measure developer performance based on "functionality that passes test (QA) at first iteration".
We somehow managed to convince them that this was a bad idea for various reasons, and instead settled on measuring developers by whether the code they put into test passes all unit tests. Since in our team there is no "requirement" per se to develop unit tests beforehand, we felt it was an opportunity to formalise the need to develop unit tests - i.e. put some incentive on developers to write them.
My problem is this: since arguably we will not be releasing code to QA that does not pass all unit tests, how can one reasonably measure developer performance based on unit tests? Based on unit tests, what makes a good developer stand out?
Functionality that fails although unit tests pass?
Not writing unit tests for a given functionality at all, or not writing adequate unit tests?
Quality of unit tests written?
Number of unit tests written?
Any suggestions would be much appreciated. Or am I completely off the mark in this kind of performance measurement?
The question is not "what do we measure?"
The question is "What is broken?"
Followed by "how do we measure the breakage?"
Followed by "how do we measure the improvement?"
Until you have something you're trying to fix, here's what happens.
You pick something to measure.
People respond by doing what "looks" best according to that metric.
You realize you're measuring the wrong thing.
Specifically.
"functionalities that pass test (QA) at first iteration" Which means what? Save the code until it HAS to work. Later looks better. So, delay until you pass QA on the first iteration.
"Functionality that fail although unit test passes?" This appears to be "incomplete unit tests". So you overtest everything. Take plenty of time to write all possible tests. Slow down delivery so you're not penalized by this measurement.
"Not writing unit test for a given functionality at all, or not adequate unit tests written?" Not sure how you measure this, but it sounds the same as the previous one.
"Quality of unit test written?" Subjective measurement. Always a good plan. Define how you're going to measure quality, and you'll get stuff that maximizes that specific measurement. Want more comments? Count those. What more whitespace? Count that.
"Number of Unit tests written?" Nothing motivates me to write redundant tests like counting the number of tests. I can easily copy and paste nearly identical code if it makes me look good according to this metric.
You get what you measure. No matter what metric you put in place, you will find that the specific thing measured will subvert most other quality concerns. Whatever you measure, but absolutely sure you want people to maximize that measurement while reducing others.
Edit
I'm not saying "Don't Measure". I'm saying "you get what you measure". Pick a metric that you want maximized at the expense of others. It's not hard to pick a metric. Just know the consequence of telling management what to measure.
I would argue that unit tests are a quality tool and not a productivity tool. If you want both to encourage unit testing and to give management a productivity metric, make unit testing mandatory to get code into production, and report on productivity based on code/features that make it into production over a given time frame (weekly, bi-weekly, whatever). If we take it as a given that people will game any system, then design the game to meet your goals.
I think Joel had it spot-on when he said that this sort of measurement will be gamed by your developers. It will not achieve what it set out to and you will likely end up with quality suffering (from the perception of everyone using the system) whilst your measurements of quality all suggest things have never been better!
edit. You say that management are demanding this. You are a small company; your management cannot afford everyone to up sticks and leave. Tell them that this is rubbish and you'll play no part in it.
If the whole idea is so that they can rank people to make them redundant (it sounds like it might be at this time), just ask them how many people have to go, and then choose the developers you believe to be the worst, using your intelligence and judgement and not some dumb rule-of-thumb.
For some reason the defect black market comes to mind... although this is somewhat in reverse.
Any system based on metrics when it comes to developers simply isn't going to work, because it isn't something you can measure using conventional methods. Whatever you try to put in place with regards to anything like this will be gamed (because solving problems is what we do all day, and this is just another problem to be solved) and it will be detrimental to your code (for example I wrote a simple spelling corrector the other day with about 5 unit tests which were sufficient to check it worked, but if I was measured on unit tests I could have spent another day writing another 100 which would all pass but would add no value).
You need to work out why management want this system in place. If it's to give rewards then you should have a look at Joel Spolsky's article about incentive pay which is not far off the mark from what I've seen (think about bonus day and see how many people are really happy -- none as they just got what they thought they deserved -- and how many people are really pissed off -- anyone who got less than they thought they deserved).
To quote Steve Yegge:
shouldn't there be a rule that companies aren't allowed to do things that have been formally ridiculed in a Dilbert comic?
There was a study I read about in the newspaper here at home in Norway. In a nutshell, it said that office-type jobs generally see no benefit from performance pay, the reason being that measuring performance in most office-type jobs is almost impossible.
However, simpler jobs like strawberry picking do benefit from performance pay, because it is really easy to measure performance. Nobody is going to feel bad about a high performer getting higher pay when everybody can clearly see that he or she has picked more berries.
In an office it is not always clear that the other person did a better job, and so a lot of people will be demotivated. They tested performance pay on teachers and found that it gave negative results: people who got higher pay often didn't see why they did better than others, and the ones who got lower pay usually couldn't see why they got lower.
What they did find, though, was that non-monetary rewards usually helped: getting encouraging words from the boss for a job well done, etc.
Read iCon on how Steve Jobs managed to get people to perform. Basically he made people believe that they were part of something big and were going to change the world. That is what makes people put in an effort and perform. I don't think developers will put in a lot of effort for just money. It has to be something they really believe in and/or think is fun or enjoyable.
If you are going to tie people's pay to their unit test performance, the results are not going to be good.
People are going to try to game the system.
What I think you are after is:
You want people to deploy code that works and has a minimum number of bugs
You want the people that do that consistently to be rewarded
Your system will accomplish neither.
By tying people's pay to whether or not their tests fail, you are creating a disincentive to writing tests. Why would someone write code that, at best, yields no benefit, and at worst limits their salary? The overall incentive will be to keep the size of the test bed minimal, so that the likelihood of failure is minimized.
This means that you will get more bugs, except they will be bugs you just don't know about.
It also means that you will be rewarding people that introduce bugs, rather than those that prevent them.
Basically you'll get the opposite of your objectives.
These are my initial thoughts on your four specific questions:
Tricky, this one. At first glance it looks OK, but if the code passes its unit tests then, unless the developers are cheating (see below) or the test itself is wrong, it's difficult to see how you'd demonstrate this.
This seems like the best approach. All functions should have a unit test, and inspection of the code should be able to reveal which tests are present and which are absent. However, one drawback could be that the developers write an empty test, i.e. one that just returns "passed" without actually testing anything (see the sketch after this list). You might have to invest in lengthy code reviews to spot this one.
How are you going to assess quality? Who is going to assess quality? This assumes that your QA team has access to highly skilled independent developers - which may be true, but seems unlikely.
Counting the number of anything (lines of code, unit tests written) is a non-starter. Developers will simply write a large number of useless tests.
I agree with oxbow_lakes, and in fact the other answers that have appeared since I started writing this - most forms of measurement will be gamed or, worse, resented by developers.
I believe time is the only, albeit subjective, way to measure a developer's performance.
Given enough time in any one company, good developers will stand out. Project leaders will know who their best assets are. Bad developers will be exposed given enough time. Unfortunately, therein lies the ultimate problem: enough time.
Basic psychology - People work to incentives. If my chances of getting a bonus / keeping my job / whatever are based on the number of tests I write, I'll write tons of meaningless tests - probably at the expense of actually doing my real job, which is getting a product out the door.
Any other basic metric you can come up with will suffer the same problem and be equally meaningless.
If you insist on "rating" devs, you could use something a bit more lateral. Scores on one of the MS certification tests perhaps (which has the side effect of getting people trained up). At least that's objective and independently verified by a neutral third party so you can't "game" it. Of course that score also bears no resemblance to the person's effectiveness in your team but it's better than an arbitrary internal measurement.
You might also consider running code through some sort of complexity measurement tool (simpler==better) and scoring people on their results. Again, it has the effect of helping people to become better coders, which is what you really want to achieve.
Poor Ash...
Kudos for using managerial ignorance to push something completely unrelated, but now you have to come up with a feasible measure.
I cannot come up with any performance measurement that is not ridiculous or easily gamed. Unit tests cannot change it. Since Kopecks and Black Market were linked within minutes, I'd rather give you ammunition for not requiring individual performance measurements:
First, Software is an optimization between conflicting goals. Evaluating one or a few of them - like how many tests come up during QA - will lead to severe tradeoffs in other areas that hurt the final product.
Second, teamwork means more than just the product of a few individuals glued together. The synergistic effects cannot be tracked back to the effort or skill of a single individual - and when developing software in a team, they have huge impact.
Third, the total cost of software unfolds only after time. Maintenance, scalability, compatibility with new platforms, interaction with future products all carry a significant long term cost. Measuring short term cost (year-over-year, or release to production) does not cover the long term cost at all, and once the long term cost is known it is pointless to track it back to the originator.
Why not have each developer "vote" on their colleagues: who helped us achieve our goals most in the last year? Why not trust you (as - apparently - their manager or lead) to judge their performance?
A combination of a few factors around the unit tests should make it fairly easy for someone outside the development group to keep a scorecard measuring the following:
1) How well do the unit tests cover the code and any common input data that may be entered for UI elements? This may seem like a basic thing, but it is a good starting point and is something that can be quantified easily with tools like NCover, I think.
2) Are boundary conditions tested, e.g. nulls for parameters or letters instead of numbers and other basic validation tests (see the sketch after this list)? This can also be quantified easily by looking at the parameters of various methods, as well as by having coding standards to prevent bypassing things here, e.g. making all of an object's methods besides the constructor take 0 parameters and thus have no boundary tests.
3) Granularity of a unit test. Does the test check for one specific case rather than trying to do lots of different cases in one test? Do test classes contain thousands of lines of code?
4) Grade the code and tests in terms of readability and maintainability. Would someone new have to spend days figuring out what is going on, or is the code somewhat self-documenting? Examples would include method names and class names being meaningful, and documentation being present.
The last three things are what I suspect a manager, team lead, or someone else outside the group of developers could rank and handle. There may be ways to game this to exploit things, but the question is what end results you want. I'm thinking well-documented, high-quality, easily understood code = good code.
Look up Deming and Total Quality Management for his thoughts on why performance appraisals should not be done at all for any job.
How about this instead: assume all employees are acceptable employees unless proven otherwise.
If someone does something unacceptable or does not perform to level you need, write them up as a performance problem. Determine how many writeups they get before you boot them out of the company.
If someone does something well, write them up for doing something good. If you want to offer a bonus, give it at the time the good performance happens. Even better, make sure you announce when people get an attaboy. People will work towards getting them. Sure, you will have the political types who will try to game the system and get written up on the basis of others' achievements, but you get that anyway in any system. By announcing who got them at the time of the good performance, you have removed the secrecy that allows the office politics players to function best. If everyone knows Joe did something great and you reward Mary instead, people will start to speak up about it. At the very least Joe and Mary might both get an attaboy.
Each year, give everyone the same percentage pay raise as you have only retained the workers who have acceptable performance and you have rewarded the outstanding employees through the year anytime they did something good.
If you are stuck with measuring, then measure how many times you wrote someone up for poor performance and how many times you wrote someone up for good performance. Then you have to be careful to be reasonably objective about it, and write up even the people who aren't your friends when they do well and the people who are your friends when they do badly. But face it, the manager is going to be subjective in the process no matter how you insist on objective criteria, because there are no objective criteria in the real world.
Definitely. And, following the accepted answer: unit tests are not a good way to measure development performance. They can in fact be an investment with little to no return.
… Automated tests, per se, do not increase our code quality but do need code output
– From Measuring a Developer's impact
Reporting on productivity based on code/features that make it into production over a given time frame, and making unit tests mandatory, is actually a good system. The problem is that you get little feedback from it, and there may be too many excuses for missing a goal. Also, features / refactors / enhancements can be of very different sizes and natures, so in most occasions it wouldn't be fair to compare them as equally relevant to the organisation.
Using a version control system such as git, we can atomize the minimum unit of valuable work into commits / PRs. Visualization (as in the quote linked above) is a better and more noble objective for management than a flat ladder or metric to compare their developers against.
Don't try to measure raw output. Try to understand developer work, go visualize it.

Inform potential clients about security vulnerabilities?

We have a lot of open discussions with potential clients, and they ask frequently about our level of technical expertise, including the scope of work for our current projects. The first thing I do, in order to gauge the level of expertise of the staff they have now or have previously used, is to check for security vulnerabilities like XSS and SQL injection. I have yet to find a potential client who is vulnerable, but I started to wonder: would they actually think this investigation was helpful, or would they think, "um, these guys will trash our site if we don't do business with them"? Non-technical folks get scared pretty easily by this stuff, so I'm wondering: is this a show of good faith, or a poor business practice?
I would say that surprising people by suddenly penetration-testing their software may bother them, if only for the fact that they didn't know ahead of time. I would say if you're going to do this (and I believe it's a good thing to do), inform your clients ahead of time that you're going to do it. If they seem a little distraught by this, tell them the benefits of checking for human error from the attacker's point of view in a controlled environment. After all, even the most security-minded make mistakes: the Debian PRNG vulnerability is a good example of this.
I think this is a fairly subjective decision and different prospects would react differently if you told them.
I think an idea might be to let them know after they have given business to someone else.
At least this way, the ex-prospect will not think that you are trying to pressure them into giving you the business.
I think the problem with this would be that it would be quite hard to do checks for XSS without messing up their site. Also, things like SQL injection could be quite dangerous. If you stuck to appending SELECTs, you might not have too much of a problem, but then the question is: how do you know it's even executing the injected SQL?
From the way you described it, it seems like a poor business practice that could be a beneficial one with some modification.
First off, any vulnerability assessment or penetration test you conduct on a customer should be agreed upon in writing by that customer, period. This covers your actions legally. Without a written agreement, if you inadvertently cause damage (application crash, denial-of-service, data leak, etc) during your inspection, you are liable and could be charged (under US law; other countries have different standards).
Even if you do not cause damage, a clueless or potentially malicious customer could take you to court claiming damages; a clueless judge might just award them.
If you have written authorization to do so, then a free vulnerability assessment to attract potential customers sounds like a show of good faith and demonstrates what you want -- your skills.

How to make junior programmers write tests? [closed]

We have a junior programmer that simply doesn't write enough tests.
I have to nag him every two hours, "have you written tests?"
We've tried:
Showing that the design becomes simpler
Showing it prevents defects
Making it an ego thing saying only bad programmers don't
This weekend 2 team members had to come to work because his code had a NULL reference and he didn't test it
My work requires top quality stable code, and usually everyone 'gets it' and there's no need to push tests through. We know we can make him write tests, but we all know the useful tests are those written when you're into it.
Do you know of more motivations?
This is one of the hardest things to do. Getting your people to get it.
Sometimes one of the best ways to help junior level programmers 'get it' and learn the right techniques from the seniors is to do a bit of pair programming.
Try this: on an upcoming project, pair the junior guy up with yourself or another senior programmer. They should work together, taking turns "driving" (being the one typing at the keyboard) and "coaching" (looking over the shoulder of the driver and pointing out suggestions, mistakes, etc. as they go). It may seem like a waste of resources, but you will find:
That these guys together can produce code plenty fast and of higher quality.
That if your junior guy learns enough to "get it" with a senior guy directing him along the right path (e.g. "OK, now before we continue, let's write a test for this function."), it will be well worth the resources you commit to it.
Maybe also have someone in your group give the Unit Testing 101 presentation by Kate Rhodes; I think it's a great way to get people excited about testing, if delivered well.
Another thing you can do is have your Jr. Devs practice the Bowling Game Kata, which will help them learn Test-Driven Development. It is in Java, but could easily be adapted to any language.
Have a code review before every commit (even if it's a 1 minute "I've changed this variable name"), and as part of the code review, review any unit tests.
Don't sign off on the commit until the tests are in place.
(Also - If his work wasn't tested - why was it in a production build in the first place? If it's not tested, don't let it in, then you won't have to work weekends)
For myself, I have started insisting that every bug I find and fix be expressed as a test:
"Hmmm, that's not right..."
Find possible problem
Write a test, show that the code fails
Fix the problem
Show that the new code passes
Loop if the original problem persists
I try to do this even while banging stuff out, and I get done in about the same time, only with a partial test suite already in place.
(I don't live in a commercial programming environment, and am often the only coder working a particular project.)
Imagine I am a mock programmer, named... Marco. Imagine I graduated school not that long ago, and never really had to write tests. Imagine I work in a company that doesn't really enforce or ask for this. OK? Good! Now imagine that the company is switching to using tests, and they are trying to get me in line with this. I will give a somewhat snarky reaction to the items mentioned so far, as if I hadn't done any research on this.
Let's get this started with the creator:
Showing that the design becomes simpler.
How can writing more make things simpler? I would now have to keep tabs on more cases, etc. This makes it more complicated, if you ask me. Give me solid details.
Showing it prevents defects.
I know that. This is why they are called tests. My code is good, and I checked it for issues, so I don't see where those tests would help.
Making it an ego thing saying only bad programmers don't.
Ohh, so you think I am a bad programmer just because I don't do as much testing. I'm insulted and positively annoyed at you. I would rather have assistance and support than sayings.
#Justin Standard: At the start of a new project, pair the junior guy up with yourself or another senior programmer.
Ohh, this is so important that resources will be spent making sure I see how things are done, and that someone assists me with how things are done. This is helpful, and I might just start doing it more.
#Justin Standard: Read Unit Testing 101 presentation by Kate Rhodes.
Ahh, that was an interesting presentation, and it made me think about testing. It hammered some points in that I should consider, and it might have swayed my views a bit.
I would love to see more compelling articles, and other tools to assist me in getting in line with thinking this is the right way to do things.
#Dominic Cooney: Spend some time and share testing techniques.
Ahh, this helps me understand what is expected of me as far as techniques, and it puts more items in my bag of knowledge, that I might use again.
#Dominic Cooney: Answer questions, examples and books.
Having a point person (or people) to answer questions is helpful; it might make me more likely to try. Good examples are great: they give me something to aim for and something to use as a reference. Books directly relevant to this are great references.
#Adam Hayle: Surprise Review.
Say what? You sprung something on me that I am completely unprepared for. I feel uncomfortable with this, but will do my best. I will now be scared and mildly apprehensive of this coming up again, thank you. The scare tactic might have worked, but it does have a cost. However, if nothing else works, this might just be the push that is needed.
#Rytmis: Items are only considered done when they have test cases.
Ohh, interesting. I see I really do have to do this now, otherwise I'm not completing anything. This makes sense.
#jmorris: Get Rid / Sacrifice.
glares, glares, glares - There is a chance I might learn, and with support and assistance, I can become a very important and functional part of the team. This is one of my handicaps now, but it won't be for long. However, if I just don't get it, I understand that I will have to go. I think I will get it.
In the end, the support of my team will play a large part in all this. Having a person take their time to assist me and get me started on good habits is always welcome. Then, afterwards, having a good support net would be great. It would always be appreciated to have someone come by a few times afterwards and go over some code, to see how everything is flowing; not a review per se, but more of a friendly visit.
Reasoning, Preparing, Teaching, Follow up, Support.
I've noticed that a lot of programmers see the value of testing on a rational level. If you've heard things like "yeah, I know I should test this but I really need to get this done quickly" then you know what I mean. However, on an emotional level they feel that they get something done only when they're writing the real code.
The goal, then, should be to somehow get them to understand that testing is in fact the only way to measure when something is "done", and thus give them the intrinsic motivation to write tests.
I'm afraid that's a lot harder than it should be, though. You'll hear a lot of excuses along the lines of "I'm in a real hurry, I'll rewrite/refactor this later and then add the tests" -- and of course, the followup never happens because, surprisingly, they're just as busy the next week.
Here's what I would do:
First time out... "we're going to do this project jointly. I'm going to write the tests and you're going to write the code. Pay attention to how I write the tests, coz that's how we do things around here and that's what I'll expect of you."
Following that... "You're done? Great! First let's look at the tests that are driving your development. Oh, no tests? Let me know when that is done and we'll reschedule looking at your code. If you're needing help to formulate the tests let me know and I'll help you."
He's already doing this. Really. He just doesn't write it down. Not convinced? Watch him go through the standard development cycle:
Write a piece of code
Compile it
Run to see what it does
Write the next piece of code
Step #3 is the test. He already does testing, he just does it manually. Ask him this question: "How do you know tomorrow that the code from today still works?" He will answer: "It's such a little amount of code!"
Ask: "How about next week?"
When he hasn't got an answer, ask: "How would you like your program to tell you when a change breaks something that worked yesterday?"
That's what automatic unit testing is all about.
As a junior programmer, I'm still trying to get into the habit of writing tests. Obviously it's not easy to pick up new habits, but thinking about what would make this work for me, I have to +1 the comments about code reviews and coaching/pair programming.
It may also be worth emphasising the long-term purpose of testing: ensuring that what worked yesterday is still working today, and next week, and next month. I only say that because in skimming the answers I didn't see that mentioned.
In doing code reviews (if you decide to do that), make sure your young dev knows it's not about putting him down, it's about making the code better. Because that way his confidence is less likely to get damaged. And that's important. On the other hand, so is knowing how little you know.
Of course, I don't really know anything. But I hope the words have been useful.
Edit: [Justin Standard]
Don't put yourself down, what you have to say is pretty much right on.
On your point about code reviews: what you will find is that not only will the junior dev learn in the process, but so will the reviewers. Everyone in a code review learns if you make it a collaborative process.
Frankly, if you are having to put that much effort into getting him to do something, then you may have to come to terms with the thought that he may just not be a good fit for the team, and may need to go. Now, this doesn't necessarily mean firing him... it may mean finding someplace else in the company where his skills are more suited. But if there is no place else... you know what to do.
I'm assuming he is also a fairly new hire (< 1 year) and probably recently out of school... in which case he may not be accustomed to how things work in a corporate setting. That's the sort of thing most of us could get away with in college.
If this is the case, one thing I've found works is to have a sort of "surprise new hire review." It doesn't matter if you've never done it before... he won't know that. Just sit him down and tell him you are going to go over his performance and show him some real numbers... take your normal review sheet (you do have a formal review process, right?) and change the heading if you want so it looks official, and show him where he stands. If you show him in a very formal setting that not doing tests is adversely affecting his performance rating, as opposed to just "nagging" him about it, he will hopefully get the point. You've got to show him that his actions will actually affect him, be it pay-wise or otherwise.
I know, you may want to stay away from doing this because it's not official... but I think you are within reason to do it and it's probably going to be a whole lot cheaper than having to fire him and recruit someone new.
As a junior programmer myself, I thought that I'd reveal what it was like when I found myself in a similar situation to your junior developer.
When I first came out of uni, I found that it had severely under-equipped me to deal with the real world. Yes, I knew some Java basics and some philosophy (don't ask), but that was about it. When I first got my job it was a little daunting, to say the least. Let me tell you, I was probably one of the biggest cowboys around; I would hack together a little bug fix / algorithm with no comments / testing / documentation and ship it out the door.
I was lucky enough to be under the supervision of a kind and very patient senior programmer. Luckily for me, he decided to sit down with me and spend 1-2 weeks going through my very hacked-together code. He would explain where I'd gone wrong, and the finer points of C and pointers (boy, did that confuse me!). We managed to write a pretty decent class/module in about a week. All I can say is that if the senior dev hadn't invested the time to help me along the right path, I probably wouldn't have lasted very long.
Happily, 2 years down the line, I would hope that some of my colleagues might even consider me an average programmer.
Take home points
Most Universities are very bad at preparing students for the real world
Paired programming really helped me. That's not to say that it will help everyone, but it worked for me.
Assign them to projects that don't require "top quality stable code" if that's your concern and let the jr. developer fail. Have them be the one to 'come in on the weekend' to fix their bugs. Have lunch a lot and talk about software development practices (not lectures, but discussions). In time they will acquire and develop the best practices to do the tasks they are assigned.
Who knows, they might even come up with something better than the techniques your team currently uses.
If the junior programmer, or anyone, doesn't see the value in testing, then it will be hard to get them to do it...period.
I would have made the junior programmer sacrifice his weekend to fix the bug. His actions (or lack thereof) are not affecting him directly. Also, make it apparent that he will not see advancement and/or pay increases if he doesn't improve his skills in testing.
In the end, even with all your help, encouragement, mentoring, he might not be a fit for your team, so let him go and look for someone who does get it.
I second RodeoClown's comment about code reviewing every commit. Once he's done it a fair few times he'll get in the habit of testing stuff.
I don't know if you need to block commits like that though.
At my workplace everyone has free commit to everything, and all SVN commit messages (with diffs) are emailed to the team.
Note: you really want the thunderbird colored-diffs addon if you plan on doing this.
My boss or myself (the 2 'senior' coders) will end up reading over the commits, and if there's any stuff like "you forgot to add unit tests" we just flick an email or go and chat to the person, explaining why they needed unit tests or whatever. Everyone else is encouraged to read the commits too, as it's a great way of seeing what's going on, but the junior devs don't comment so much.
You can help encourage people to get into the habit of this by periodically saying things like "Hey, bob, did you see that commit I did this morning, I found this neat trick where you can do blah blah whatever, read the commit and see how it works!"
NB: We have 2 'senior' devs and 3 junior ones. This may not scale, or you might need to adjust the process a bit with more developers.
It's his mentor's responsibility to teach him/her. How well are you teaching him/her HOW to test? Are you pair programming with him? The junior more than likely doesn't know HOW to set up a good test for xyz.
As a junior fresh out of school, he knows many concepts, some technique, some experience. But in the end, all a junior is is POTENTIAL. Almost every feature they work on will involve something new that they have never done before. Sure, the junior may have done a simple State pattern for a project in class, opening and shutting "doors", but never a real-world application of the patterns.
He/she will only be as good as how well you teach. If they were able to "just get it", do you think they would have taken a junior position in the first place?
In my experience, juniors are hired and given almost the same responsibility as seniors, but are just paid less and then ignored when they start to falter. Forgive me if I seem bitter; it's 'cause I am.
Make code coverage part of the reviews.
Make "write a test that exposes the bug" a prerequisite to fixing a bug.
Require a certain level of coverage before code can be checked in.
Find a good book on test-driven development and use it to show how test-first can speed development.
Lots of psychology and helpful "mentoring" techniques but, in all honestly, this just boils down to "write tests if you want to still have a job, tomorrow."
You can couch it in whatever terms you think are appropriate, harsh or soft, it doesn't matter. But the fact is, programmers are not paid to just throw together code & check it in -- they're paid to carefully put together code, then put together tests, then test their code, THEN check the whole thing in. (At least that's what it sounds like, from your description.)
Hence, if someone is going to refuse to do their job, explain to them that they can stay home, tomorrow, and you'll hire someone who WILL do the job.
Again, you can do all this gently, if you think that's necessary, but a lot of people just need a big hard slap of Life In The Real World, and you'd be doing them a favor by giving it to them.
Good luck.
Change his job description for a while to be solely writing and maintaining tests. I've heard that many companies do this with fresh, inexperienced people for a while when they start.
Additionally, issue a challenge while he's in that role: write tests that will a) fail on the current code and b) fulfill the requirements of the software. Hopefully it'll cause him to create some solid and thorough tests (improving the project) and make him better at writing tests for when he re-integrates into core development.
edit> fulfill the requirements of the software, meaning that he's not just writing tests to purposely break the code in cases the code was never intended or required to handle.
If your colleague lacks experience writing tests maybe he or she is having difficulty testing beyond simple situations, and that is manifesting itself as inadequate testing. Here's what I would try:
Spend some time and share testing techniques, like dependency injection, looking for edge cases, and so on with your colleague.
Offer to answer questions about testing.
Do code reviews of tests for a while. Ask your colleague to review changes of yours that are exemplary of good testing. Look at their comments to see if they're really reading and understanding your test code.
If there are books that fit particularly well with your team's testing philosophy give him or her a copy. It might help if your code review feedback or discussions reference the book so he or she has a thread to pick up and follow.
I wouldn't especially emphasize the shame/guilt factor. It is worth pointing out that testing is a widely adopted, good practice and that writing and maintaining good tests is a professional courtesy so your team mates don't need to spend their weekends at work, but I wouldn't belabor those points.
If you really need to "get tough", then institute an impartial system; nobody likes to feel like they're being singled out. For example, your team might require code to maintain a certain level of test coverage (gameable, but at least automatable); require new code to have tests; require reviewers to consider the quality of tests when doing code reviews; and so on. Instituting that system should come from team consensus. If you moderate the discussion carefully, you might uncover other underlying reasons why your colleague's testing isn't what you expect.
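If the team does agree on a coverage requirement, the gate can be automated rather than policed by hand. A minimal sketch assuming Python, the third-party coverage.py package, and a tests/ directory; the 80% threshold is an arbitrary example:

```python
import sys
import unittest

import coverage  # third-party: the coverage.py package

THRESHOLD = 80.0  # team-agreed minimum, in percent (arbitrary here)

cov = coverage.Coverage()
cov.start()

# Run the whole suite under coverage measurement.
result = unittest.TextTestRunner().run(
    unittest.TestLoader().discover("tests"))

cov.stop()
cov.save()
total = cov.report()  # prints a report and returns the total percentage

# Fail the build if any test fails or coverage dips below the threshold.
if not result.wasSuccessful() or total < THRESHOLD:
    sys.exit(1)
```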
@jsmorris
I once had the senior developer and "architect" berate me and a tester (it was my first job out of college) by email for not staying late and finishing such an "easy" task the night before. We had been at it all day and called it quits at 7pm; I had been thrashing since 11am and had pestered every member of our team for help at least twice.
I responded and cc'd the team with:
"I've been disappointed in you for a month now. I never get help from the team. I'll be at the coffee shop across the street if you need me. I'm sorry i couldn't debug the 12 parameter, 800 line method that just about everything is dependent on."
After cooling off at the coffee shop for an hour, i went back in the office, grabbed my crap and left. After a few days they called me asking if I was coming in, I said I would but I had an interview, maybe tomorrow.
"So your quitting then?"
On your source repository: use hooks that run before each commit (a pre-commit hook for SVN, for example).
In that hook, check for the existence of at least one test case for each method. Use a convention for unit test organisation that you can easily enforce via the pre-commit hook.
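Here's a rough sketch of such a pre-commit hook, written in Python around svnlook. The "foo.py must be accompanied by test_foo.py in the same commit" convention is just an assumed example; adapt the check to whatever convention your team actually enforces:

```python
#!/usr/bin/env python
# SVN pre-commit hook sketch: reject commits that touch a Python
# module without also touching a matching test file.
import os
import subprocess
import sys

repo, txn = sys.argv[1], sys.argv[2]

# Ask svnlook which paths this transaction changes.
out = subprocess.check_output(
    ["svnlook", "changed", "-t", txn, repo]).decode()

changed = []
for line in out.splitlines():
    if not line.strip():
        continue
    status, path = line.split(None, 1)  # e.g. "U   trunk/foo.py"
    if not status.startswith("D"):      # ignore deletions
        changed.append(path.strip())

sources = [p for p in changed
           if p.endswith(".py") and not os.path.basename(p).startswith("test_")]
tests = {os.path.basename(p) for p in changed}

missing = [p for p in sources
           if "test_" + os.path.basename(p) not in tests]
if missing:
    sys.stderr.write("Commit rejected, no tests for: %s\n" % ", ".join(missing))
    sys.exit(1)  # a non-zero exit aborts the commit
```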
On an integration server, compile everything and regularly check the test coverage using a test coverage tool. If a developer's code is not at 100% test coverage, block his commits until he sends you the test cases that cover 100% of the code.
Only automatic checks can scale well on a project. You cannot check everything by hand.
The developer should have a means to check whether his test cases cover 100% of the code. That way, if he doesn't commit 100%-tested code, it is his own fault, not an "oops, sorry, I forgot" fault.
Remember: people never do what you expect, they always do what you inspect.
First off, like most respondents here point out, if the guy doesn't see the value in testing, there's not much you can do about it, and you've already pointed out that you can't fire the guy. However, failure is not an option here, so what about the few things you can do?
If your organization is large enough to have over 6 developers, I strongly recommend having a Quality Assurance department (even if it's just one person to start). Ideally, you should have a ratio of 1 tester to 3-5 developers. The thing about programmers is... they are programmers, not testers. I have yet to interview a programmer who has been formally taught proper QA techniques.
Most organizations make the fatal flaw of assigning the testing roles to the new-hire, the person with the LEAST amount of exposure to your code -- ideally, the senior developers should be moved to the QA role as they have the experience in the code, and (hopefully) have developed a sixth sense for code smells and failure points that can crop up.
Furthermore, the programmer who made the mistake is probably not going to find the defect, because it's usually not a syntax error (those get picked up in the compile) but a logic error -- and the same logic is at work when they write the test as when they wrote the code. Don't have the person who developed the code test that code -- they'll find fewer bugs than anyone else would.
In your case, if you can afford the redirected work effort, make this new guy the first member of your QA team. Get him to read "Software Testing In The Real World: Improving The Process", because he obviously will need some training in his new role. If he doesn't like it, he'll quit and your problem is still solved.
A slightly less vengeful approach would be to let this person do what they are good at (I'm assuming this person got hired because they are actually competent at the programming part of the job), and hire a tester or two to do the testing (university students often have practicum or "co-op" terms, would love the exposure, and are cheap).
Side note: eventually, you'll want the QA team reporting to a QA director, or at least not to a software development manager, because having the QA team report to the manager whose primary goal is to get the product done is a conflict of interest.
If your organization is smaller than 6, or you can't get away with creating a new team, I recommend paired programming (PP). I'm not a total convert to all the extreme programming techniques, but I'm definitely a believer in paired programming. However, both members of the pair have to be dedicated, or it simply doesn't work. They have to follow two rules: the inspector has to fully understand what is being coded on the screen, or he has to ask the coder to explain it; and the coder can only code what he can explain -- no "you'll see" or "trust me" or hand-waving will be tolerated.
I only recommend PP if your team is capable of it, because, like testing, no amount of cheering or threatening will persuade a couple of ego-filled introverts to work together if they don't feel comfortable doing so. However, I find that between the choice of writing detailed functional specs and doing code reviews vs. paired programming, the PP usually wins out.
If PP is not for you, then TDD is your best bet, but only if it's taken literally. Test-Driven Development means you write the tests FIRST, run the tests to prove they actually do fail, then write the simplest code to make them pass. The trade-off is that you now (should) have a collection of thousands of tests, which is also code, and is just as likely as production code to contain bugs. I'll be honest, I'm not a big fan of TDD, mainly for this reason, but it works for many developers who would rather write test scripts than test case documents -- some testing is better than none. Couple TDD with PP for a better likelihood of test coverage and fewer bugs in the script.
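Taken literally, that red-green loop looks something like the following Python sketch; the leap-year example is invented for illustration:

```python
import unittest

# Step 2: the simplest code that makes the tests pass -- written only
# AFTER the tests below had been run and seen to fail.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class TestLeapYear(unittest.TestCase):
    # Step 1: these tests were written first, and they failed (red)
    # until the implementation above existed (green).
    def test_century_years(self):
        self.assertFalse(is_leap_year(1900))
        self.assertTrue(is_leap_year(2000))

    def test_ordinary_years(self):
        self.assertTrue(is_leap_year(1996))
        self.assertFalse(is_leap_year(1997))

if __name__ == "__main__":
    unittest.main()
```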
If all else fails, have the programmer's equivalent of a swear jar -- each time a programmer breaks the build, they have to put $20, $50, or $100 (whatever is moderately painful for your staff) into a jar that goes to your favorite (registered!) charity. Until they pay up, shun them :)
All joking aside, the best way to get your programmer to write tests is don't let him program. If you want a programmer, hire a programmer -- If you want tests, hire a tester. I started as a junior programmer 12 years ago doing testing, and it turned into my career path, and I wouldn't trade it for anything. A solid QA department that is properly nurtured and given the power and mandate to improve the software is just as valuable as the developers writing the software in the first place.
This may be a bit heartless, but the way you describe the situation it sounds like you need to fire this guy. Or at least make it clear: refusing to follow house development practices (including writing tests) and checking in buggy code that other people have to clean up will eventually get you fired.
The main reason junior engineers/programmers don't take a lot of time to design and perform test scripts is that most CS programs don't heavily require it; college curricula give much more coverage to other areas of engineering, such as design patterns.
In my experience, the best way to get junior professionals into the habit is to make it an explicit part of the process. That is, when estimating the time an iteration should take, the time to design, write, and/or execute the test cases should be incorporated into that estimate.
Finally, reviewing the test script design should be part of the design review, and the actual test code should be reviewed in the code review. This makes the programmer accountable for properly testing each line of code he/she writes, and the senior engineer and peers accountable for providing feedback and guidance on the code and tests written.
Based on your comment "showing that the design becomes simpler", I'm assuming you guys practice TDD. Doing a code review after the fact is not going to work. The whole thing about TDD is that it's a design philosophy, not a testing philosophy. If he didn't write the tests as part of the design, you aren't going to get a lot of benefit from writing tests after the fact -- especially from a junior developer. He'll end up missing a whole lot of corner cases, and his code will still be crappy.
Your best bet is to have a very patient senior developer to sit with him and do some pair programming. And just keep at it until he learns. Or doesn't learn, in which case you need to reassign him to a task he is better suited to because you will just end up frustrating your real developers.
Not everyone has the same level of talent and/or motivation. Development teams, even agile ones, are made up of people on the "A-Team" and people on the "B-Team". A-Team members are the ones who architect the solution, write all the non-trivial production code, and communicate with the business owners -- all the work that requires thinking outside the box. The B-Team handles things like configuration management, writing scripts, fixing lame bugs, and doing maintenance work -- all the work that follows strict procedures and carries small consequences for failure.