As it currently stands, this question is not a good fit for our Q&A format. We expect answers to be supported by facts, references, or expertise, but this question will likely solicit debate, arguments, polling, or extended discussion. If you feel that this question can be improved and possibly reopened, visit the help center for guidance.
Closed 11 years ago.
A few weeks ago, I was assigned to evaluate all our programmers. I'm very uncomfortable with this since I was the one who taught everyone the shop's programming language (they all came out of college not knowing it and, as luck would have it, I'm very proficient with it). On the evaluation, I was very biased about their performance (I gave everyone perfect scores).
I'm glad that our programming shop doesn't require an average performance level but I heard horror stories of shops which do require an average level.
My questions are as follows:
As a programmer, what evaluation questions would you like to see?
As a manager, what evaluation questions would you like to see?
As the evaluator, how can you prevent bias in your evaluation?
I would love to remove the evaluation test. Are there any advantages to having an evaluation test? Any disadvantages?
Gets things done is really all you need to evaluate a developer. After that, you look at the quality the developer generates. Do they write unit tests and believe in testing and being responsible for the code they generate? Do they take the initiative to fix bugs without being assigned them? Are they passionate about coding? Are they constantly learning, trying to find better ways to accomplish a task or make a process better? These questions are pretty much how I judge developers directly under me. If they are not directly under you and you are not a direct report for them, then you really shouldn't be evaluating them. If you are assigned to evaluate programmers who aren't under you, then you need to be proactive to answer the above questions about them, which can be hard.
You can't remove the evaluation test. I know it can become tedious sometimes, but I actually enjoy doing it and it's invaluable for the developer you are evaluating. You need to be a manager who cares about how your developers do. You are a direct reflection of them, as they are of you. One question I always leave up to the developer is for them to evaluate me. The evaluation needs to be a two-way street.
I also have to evaluate off a cookie-cutter list of questions, which I do, but I always add the above and try to make the evaluation fun and a learning exercise during the one-on-one time I have with the developer; it is all about the developer you are reviewing.
I would first consider not the number of lines of code, but the value of the code the person adds, relative of course to what they are assigned to do. Someone told to maintain code versus building a new app is in a very different position. Also consider: how does the person use new techniques to keep the code relevant and updated? How maintainable is the code the person creates? Do they do things in a manner that is logical and understandable to the rest of the team? Does their coding improve the app or just wreck it? Last but not least, does their coding improve over time?
What about getting everyone's input? Everyone that a person is working with will have a unique insight into that person. One person might think someone is a slacker, while another person sees that they are spending a lot of time planning before they start coding, etc.
What about getting everyone's input? Everyone that a person is working with will have a unique insight into that person.
That would work if (1) the evaluation is conducted with open doors and (2) you've worked with that person on the same project or even on the same module. As the person evaluating them, I couldn't judge the programmers I didn't directly work with.
One person might think someone is a slacker, while another person sees that they are spending a lot of time planning before they start coding
Unfortunately, this is debatable. Someone who looks like a slacker might be deep in thought, or maybe not. And is someone who spends a long time planning necessarily a bad programmer?
I believe a good evaluation question would be able to answer this.
Closed 9 years ago.
I am running a programming club at a high-school, and I have introduced my students to OOP using simplistic classes in C++. I believe at least theoretically they get the idea. I would like to be able to offer them a specific project they can work on together. The question I have is which approach to take. When I took programming classes in college, I saw two different approaches, but in my opinion they both had serious shortcomings. I ended up sleeping through most of them and learning the stuff on my own from books and examples.
Now that I am in the teacher's shoes, I would like to get your opinion on which approach is preferable or if there is a third option.
Approach 1 was to write a program on the board (or on a computer with a projection screen). The class definitions were always written first. Usually students would look really bewildered at this point, because the purpose of the variables and methods would seem entirely obscure to them. The only time they learned what each variable and method was for, and how they interacted, was when the instructor finally wrote the implementation (I call this the outside-in method).
Approach 2 was to explain what we are trying to achieve, and create classes and members as needed. This has the opposite problem: the instructor would be writing a method that used imaginary classes that would have to be implemented later, but the students had no idea how these other classes would work.
As it happens, I worked my way through University by working as a teacher. I am now a software engineer.
In my experience, it is paramount that the students be emotionally invested in a programming project. I'll get to your question in a minute, this is a necessary preamble.
To get there, I made the topic of the program something that really interested them, regardless of how silly it seemed, as long as it was something that connected with them in their world.
So, it could be (depending on the age of your students) about ranking singing stars by their talent level, including Justin Bieber. You can imagine the uproar at that one.
Like, Load their lyrics and count the number of times they say the word "baby". Something creative, something fun.
This will make "dry" questions come alive. Like, what should the "singer" class look like. Why it should have properties like "octave range" will be immediately intuitive.
Should the singer class have a method called 'barfOnStage'? (The Biebs barfed on stage a while ago). Sure, why not!? They will easily see the difference between methods and properties.
I mean, I'm just talking off the top of my head, I'm sure you can apply your own inventiveness and creativity to whatever's appropriate for your kids.
I would love to hear what you went with, and how the kids' project turned out.
For the beginner level I would go with a modified number 2 approach, where you start with an easy problem and then build on it. Experience is the best teacher... like the time my high-school teacher had us 'iterate' through twenty discrete variables as a 'list' and then taught us about arrays.
You need to select the right problem. It needs to be a problem that exercises the 'object oriented muscles' not the 'algorithm muscles'. It needs to be something that you can build requirements on it that exercise object orientation. A simple CRUD program should be adequate. You'll just have to constrain them to using objects and not arrays as I assume they'll be comfortable doing. I'll leave the exact specification up to you.
First, have them write a program that just lets them add records to the 'database'. Just by Creating a 'row' in their 'database' they'll be forced to learn how to create an object and instantiate an object.
Next, have them modify their program to display the contents of their 'database'. When they Read their 'database' they'll exercise the .show function or however you implement that capability.
Third, have them make it so they can change the contents of a 'record'. Updating will reinforce how to tell an object to modify itself.
Finally, they should modify the program to allow for 'record' removal. This will reinforce proper object destruction.
Taking it to the next level (and since this is c++) you could:
require they implement their 'database' as a linked list
write the specification so you can do something that requires objects being added together
add to the scope of the data so the program structure would be better facilitated by a template or inheritance (a vehicle 'database' that has both cars and motorcycles, for example)
From my experience doing is the best teacher. Having someone show me how to do something (or doing it as a group on a board) short circuits the learning. Wrestling with it and having some Socratic guidance teaches a deeper understanding and yields a better programming brain.
If they aren't ready to do the 'create' function then coding it out as a group on the whiteboard will work, but once they have a template of how everything fits together they need to be behind a computer figuring it out.
If it's a beginner programming course I'd say the OOP aspect is of minor issue. Focus on expressions, statements and control flow.
If the focus is on OOP I'd say begin with the history of OOP and what OOP focus is. From that one can look at how one describe these concepts in different languages. (i.e. ADT, Simula etc http://retis.sssup.it/~lipari/courses/oosd2010-1/02.oop.pdf )
...and then experiment.
"If we look at the whole history, we see that the proto-OOP stuff started with ADT, ..."
-- Alan Kay (http://userpage.fu-berlin.de/~ram/pub/pub_jf47ht81Ht/doc_kay_oop_en)
Closed 12 years ago.
I know good programming practices always help in the "long run" of a project, but sometimes they just seem to cost a lot of time. For instance, it's suggested that I maintain a header file and a cpp file for each class that I make, keeping only the declarations in the headers and the definitions in the cpp files. Even with 10-12 classes, this process becomes very cumbersome. Updating the makefile each time a new class is added, with dependencies and everything, takes a lot of time.
While I am busy doing all this, others would just write everything in a single file, issue a single compile command and run their programs... why should I not do the same? It's fast and it works.
Even trying to come up with short, meaningful names for variables and functions takes a lot of time; otherwise you end up typing 30-character-long names, completely unmanageable without auto-complete.
Edit :
Okay, let me put it a little differently: let's say I am working on a small-to-medium-size project that is never going to require any maintenance by a different developer (or even by me). It's basically a one-time development. Are programming practices worth following in such a case? I guess my question is, do good programming practices actually help during development, or do they just pay off during maintenance?
I haven't been working in the field for long, but not slacking off and documenting as I go, defining variables with useful names, etc...definitely saves time in the long run. It saves time when the other developers/myself go back to the code for maintenance.
Not stuck wondering what this variable means, why did I do that, etc! :)
Laziness may pay off right now, but it will only pay off once. Taking the time to do it right doesn't pay off immediately, but it will do so multiple times and for a longer period of time.
Also, there is nothing wrong with really long variable and method names, unless you subscribe to the naive view that most of the time you spend programming is used on typing and not solving problems.
Addendum: If it is hard to name something succinctly, it probably needs to be broken down into more modular units. Methods or variables that are hard to name are a definite code smell.
It's all about long-term supportability. Clearly you have either not been coding on a team or not had to look at code you wrote years ago. I can open code I wrote 15 years ago and modify it with a very small relearning curve if I gave meaningful variable names, while if I did not, it will take some time to figure out what I was doing with that X and that H, and why T should not be more than 4.
Try sharing code with 10 people on a team and have each of them just put code in any place they like... I have worked with people like that. If lynchings still had public support, I would have led many. Picture this: I know I need to modify the signature on Foo.SetFoos(int FoosInFooVille), but I looked for Foo.h and it was not found. Well, now I just look for Foo.cpp, right? Oops, to save... time?... they jammed Foo.cpp into Chew.cpp... so I look there... it's not at the top of the file! Do I find Foo in that file and see if it's above that... sure... nope, not found... it's in Chew.h. Now I am ready to check the SVN log and target my USB-powered missile launcher at that jerk next time he passes by.
The downside of the ad-hoc is in the long-run, when it comes to maintenance (especially when the maintenance coders are people other than yourself). Such techniques might be OK for quickie proof-of-concepts, but will cause more problems in the future if you don't rebuild properly.
Yes, it's worth doing it "right", i.e. well, because basically it's pay me now or pay me later, and you're not the only person who will ever see the code.
If it takes you 15 minutes now to do it well, how long will it take you 6 months (or more) from now to figure out what was meant - in your own code?
Now, you could use Martin Fowler's 3 strikes idea for refactoring.
First time in the code to fix something, you notice it could be refactored, but you're too busy and let it go. Second time back in the same code, same thing. Third time: refactor the code.
The effectiveness of programming practices doesn't seem to be your problem, here. What you should be concerned about are the tools you're using to develop. There are plenty of IDE's and other options for keeping your make files automatically up-to-date, for example.
Closed 11 years ago.
I have switched to a new company and I am working on a product that has a huge code base without documentation. I want to quickly get acquainted with the design and the code flow of the product so that I may become a productive member ASAP.
Slowly and steadily one does get to understand the code, but what is the best, smart way to approach the code base so that one understands the code quickly and can start delivering?
Note: I tried my hands on Star UML and tried to reverse engineer the class diagrams so that I may have a rough idea of the product internal designs but failed miserably.
EDIT: The question is not about gaining knowledge about what the product does but how the internals are designed.
Fixing bugs and Debugging using breakpoints does provide one way of achieving this but I was looking if there is even a faster way we could achieve this
In Keith's Words:
This may work for some code bases, but in general I think it's a bad idea. You tend to be too focused on the details, while at first you want to get the big picture: what the classes are, what the communication patterns are, etc. Plus, if you have a distributed application (client-server, n-tier, etc.), or code that takes a long time to run, it may not be practical to run it through a debugger.
I'm a contract engineer, and this situation is routine several times per year—for the last few decades.
I find it quite helpful to first run the application and play with it—before looking at any code:
What the heck does it do? If necessary, read the user documentation.
What happens with extreme values?
What if I leave out some values?
What happens if I click on a control rapidly?
Is there any way to misuse the program?
Explore the edges of the application: are there seldom used or hard-to-find sub-menus? Is there a configuration facility which exposes more functionality?
While I'm doing that, I'm constructing a mental model of how I would have implemented it. Surprisingly, this user-oriented first encounter with the product usually causes my understanding of the application to be head and shoulders above the developers who have worked on it for a long time. A side effect of this approach is that I tend to find quite a few bugs (often quite an avalanche of them), and think of quite a few improvements which should be made.
After that, I look at the general structure of the program, whether it be modules, classes, files, or schema. Not looking at individual lines of code, except those showing the program's architecture. Once I think I understand over half of the structure, I try to make a small bug fix or improvement—something which takes a few minutes to write, but may take hours to properly understand. If it works, I make a slightly bigger change somewhere, preferably in another section of the code.
In this way, I've found it possible to understand approximately 50,000 to 100,000 lines of code per day well enough.
If you have a development environment in which to run the code, the best way I've found is to use a debugger and watch the flow of the code while executing it. You can set up breakpoints and step through it to see how the code interacts.
The way I have always learned, besides just reading through the code / data model is to start fixing some bugs. That gives me exposure to various parts of the system, and having the 'purpose' while reading the code makes it a bit more meaningful.
Ask everyone you can find for help and ask them to ask anyone else they think could be helpful.
There are tools which suck up the source code and draw pictures. Try Enterprise Architect from Sparx. It's under $200 per seat and will show you the object layout very effectively.
Closed 10 years ago.
I am aware of this question: https://stackoverflow.com/questions/428691/how-to-encourage-implementation-of-tdd
In my team, we write a lot of unit tests, but in general the programmers tend to write the unit tests after writing the code. So we first finish the module functionality and then write tests. Our coverage is around 70% for most modules. I have tried convincing my technical manager and my team members to do pure TDD, wherein we first write tests and then the code, but in vain. I think writing tests first allows us to discover the design better. Am I just being finicky, especially when our coverage is quite high? If the answer to this question is no, then how do I talk to people about taking a test-first approach?
EDIT: I think writing tests after writing code is the easier thing to do. People in my team have grown accustomed to doing this and are opposing any change.
I don't know that there is a whole lot you can tell people to convince them of the value of TDD. You can cite what the experts have told us about it, and your own personal experiences, but if folks are not willing to give it a try, chances are low that you sharing this information with them will help.
My experience with TDD was basically that it sounded like a really good idea, but it never really worked out the way it was supposed to. Then one day I tried it again on a new task and ended up with a solution to the problem that was simpler than what I would have thought possible, due entirely to the fact that I had used TDD. I think when developers have this sort of experience it changes the way they look at things, and makes them more willing to try it in other situations.
The challenge is being able to demonstrate this to the other developers. One way you may be able to do this is with the use of a TDD Kata like this one from Roy Osherove (he uses it in his TDD Master Course). It is designed specifically to demonstrate the value in working in small steps, implementing only the code that is needed to make each test pass. This may show folks how the process works, and make them more comfortable with giving it a try.
There was also a coding exercise I heard about where you gave two groups/teams of developers a reasonably simple task, and asked one of the groups to use TDD, and make sure they followed the "simplest thing that could possibly work" rules, while the other team did things however they wanted. Then, once that is done, you have the teams switch tasks, but throw out the code written by each team, leaving only the tests. The teams are then supposed to recreate the code for the task. Typically you will find that the team who inherits the TDD code has a much easier time doing this.
Given all that, though, I think the best thing you can do personally is to start doing TDD yourself for as much of your work as possible. This has the potential to give you some very specific references for where and how TDD has proved to be beneficial within the context of the current project. In particular if you do code reviews your peers may notice the code that you are writing TDD is more concise, and easier to maintain than the code that has been writing without TDD. Your QA team may also notice a difference in the quality of the code, which is one of the things that you hear a lot about companies who move to TDD.
A couple suggestions. Your practicality may vary:
Win one or two people (your boss, an intern, etc.) over to your side first. Your first follower will make you a leader.
Start pair programming or mentoring. Even if it's just with an intern or two, working closely with someone can be a good way to influence their style. If you are willing, you could try becoming a manager.
Give a technical presentation on the subject. Make the focus on the why and the problem you are solving, instead of TDD. You want people to buy into the problem rather than your specific solution. Include a couple other alternatives so it doesn't seem like you are just trying to push what works for you.
Get some outside training from Object Mentor or the like. Works best if you can convince your boss and the team isn't a bunch of hardened soulless cynics.
To be honest, you should always just use a development/test cycle that works.
A lot of people like TDD, and a lot of big players like Google have embraced it; because of the high test coverage.
However, it seems that you and your team tend to be doing pretty well without it, and remember, any change in development style decreases productivity at least temporarily. So remember the old adage: don't change what works.
However, if you and your customers are finding that there are still a lot of bugs that the tests don't cover, TDD is an ideal way to close that gap, so you should tell management that TDD is a way to increase customer satisfaction and thus make money. (That's management-speak for you!)
Perhaps Leading by example can help:
Start working like this yourself
Perhaps create a tutorial/script to set up the environment (the IDE) so that it does not add overhead to the TDD process:
Run the tests in a single keyboard shortcut
The GUI of the test system should be present in the development view (not just in the testing view, so you don't have to move between them)
I am guessing that after a while, people will be curious and ask you if this TDD thing really works, you should have a prepared answer for that question :-)
Have you come across BDD at all? There's an associated change in vocabulary which I find really helps newcomers to TDD pick it up. Here's the vocab change:
http://lizkeogh.com/2009/11/06/translating-tdd-to-bdd/
I've found that using this language helps people focus on why it's useful to write the tests (or examples) first. I translated another example in the comments.
Even then, sometimes it's helpful to learn how tests are structured. If people have trouble learning how to write them first, writing them afterwards is a good learning step. You're right about the design benefits. It can take a while to grok.
In the past I've found that the best way to get TDD is to have a safe environment to practice in. Having my own toy app or running / attending workshops based on a toy app have both helped me a lot.
Closed 9 years ago.
I'm a software test engineer embedded in a development team. A large part of my job involves checking over the state of the project's automated tests (mainly unit/integration tests).
I'm not a short-sighted zealot who wants to force testing down everyone's throats, but I do want to help everyone to get the best out of the time they spend writing tests. A lot of time is spent every week writing tests, so it is important to maximise the returns.
Right now, I do a few things to try and help. Firstly, I always make myself available to talk about testability concerns. E.g. try to identity a testing strategy, whether a particular design is testable and so forth.
In addition to explaining things to people and generally trying to help them out, I also review the finished code and the tests that they write (I have to sign off on stories, meaning that I am somewhat adversarial, too).
My current process is to sit down alone, work through their code and bookmark & comment all problem areas, places that things can be improved and the reason for it. I then get the developer around to my PC and talk through all of the review points. I then send them a decent write up so they have a record of it and they have easy reference.
I do not fix their code and tests for them, but I will add more test cases etc. if I spot gaps. The reason I have decided not to fix up the tests for them is that it's too easy for developers to say "thanks" but to tune out. My reasoning is that if they have to fix the problems I identified before I will sign off, it will lead to a better standard of testing on the project (i.e. more self-sufficient developer testing).
My question is: When it comes to aiding the team, could I be doing anything better? What approaches have you found that can be beneficial?
I'd particularly like to hear from people holding similar positions who have faced the same challenges (e.g. helping improve the quality of the testing, demonstrating the value testing can bring in relevant situations and also striking a good balance between being supportive and adversarial.)
*edit:
Thanks for the answers; all of them contained useful suggestions. I marked the top one as the best answer as I guess it comes down to developer support, and pair programming is something I have not yet tried (short of a few impromptu 'here's how I'd do this' demonstrations after the tests had been written). I'll give that a go with anyone who struggles with testing something :) Cheers.
If you have certain people that tend to be weak at testing, then sit down with them, pair programming, sort of, and as they work on their code, you can help them see how they may test it.
After a while these people should get better at unit testing, and your work load on this should decrease.
The other thing is that everyone should be looking at tests. If I touch a function, make any change, then I should be checking on the tests to make certain they are complete. If there is a problem I can discuss it with the developer.
You should also enlist the work of the team lead, as that is part of his responsibility, or should be, to ensure that everyone understands how to write tests well.
A few things I'd do:
Get them to run coverage, spot any missed areas of code, and highlight how, although they think they've got all the cases covered, they might not have. I've done this with a few people and they always seem quite surprised at the areas they've missed when they thought they'd written watertight tests.
Start a "recipe" page on your local Wiki. Every time someone comes up with a testing scenario that they can't figure out, or need your help with, stick it on the Wiki and make it easy to find. Get other people to contribute as well
It sounds like you're already doing this anyway, but ensure when anyone has a testing related question, make yourself available even if it's to the detriment of your normal workload. If you're passionate about it, it should inspire those who are interested to do the right thing too.
When I'm introducing someone to testing (or a new testing technique), I'll often spend a lot of my time randomly wandering over to their workstation just to see how they're getting on and nudge them in the right direction. This can be fitted in quite nicely when going for tea/smoke breaks or when you're doing a build. I've had quite good feedback about this but YMMV.
Depending on the size of the team, I wonder if it may make sense after an initial review of the code, to pull someone else to be another set of eyes that can look through what changes you'd propose and act as a way to show that this isn't just your opinion on it. This could work as a way to highlight where there may be some tension in terms of what changes you'd like to see that a developer may reply, "Oh, that'll take weeks and likely isn't worth it..." or something similar if what you'd like to change isn't that simple.
In a similar vein, how does most of the team view testing? Are there leaders or those highly respected that have a positive view on it and help foster a positive attitude towards it? Is there general documentation about the testing guidelines that may help those new to the team to get up to speed quickly? These are just a few other areas I'd examine since sometimes tests can be a great thing and sometimes they can be a pain. Much like the glass that is half-empty or half-full depending on how you want to see it.
Not that I've had the same position, but as someone that has been a developer for a while, this is just what I'd like to see to help make testing be a good thing, as Martha Stewart would say.
One way to gently ease the team into getting tests started is to initiate the practice of writing tests when bugs are being fixed. So when a bug comes in, the first thing to do is write a test that will fail because of the bug, then fix the bug and get the test to pass.
This approach can also be applied when code gets modified internally (no public API changes): write tests to cover the area being modified to ensure that it doesn't get broken by the code changes. Writing tests this way is a lot less work and clearly demonstrates the benefits once the developer catches their first regression bug.