We're designing an app that will generate lots of different types of text output, e.g. email, HTML, SMS, etc. The output will be generated using some kind of template, with data coming from a database. Our requirements include:
Basic logic / calculated fields within the template, e.g. "if"s and "for" loops, plus things like adding percentages for tax.
Runtime editing. Our users need to be able to tweak the templates to their needs, such as changing boilerplate text, adding new logic, etc.
Multilingual. We need to choose the correct template for the current culture.
Culture sensitive. E.g. dates and currencies will be output according to the current UI culture.
Flexibility. We need the templates to be able to handle multiple repeating groups, hierarchies, etc.
Cannot use commercial software as a solution (e.g. InfoPath). We need to be able to modify the source code at any time.
The app is C#/.NET. We are considering using T4, XML + XSLT, or hosting the Razor engine. Given that the syntax can't be too overwhelming for non-techie users, we'd like to get your opinion on which you feel is the right templating engine for us. We're happy to consider ones not already mentioned too.
Thanks.
I'm very hesitant to try and answer this question on a forum, because technology choices depend on far more factors than are conveyed in the question, including things such as attitude to risk, attitude to open source, previous good and bad experiences, politics and leadership on the project etc. The big advantage of XSLT over Razor is that it's a standard and has multiple implementations on multiple platforms (including at least three implementations on .NET!) so there's no lock-in; but that doesn't seem to be a factor in your statement of requirements. And the fact that you're using .NET suggests that supplier lock-in isn't something that worries you anyway.
One thing to bear in mind is that non-programmers often take to XSLT a lot more quickly than programmers do. Its rule-based declarative approach, and its XML syntax, sometimes make programmers uncomfortable (it's not like anything they have seen before) but end-users often take to it like ducks to water.
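For readers weighing this up, here is a minimal sketch of what the XSLT route looks like on .NET, using the built-in System.Xml.Xsl classes; the file names are placeholders invented for illustration, not part of the question.

using System.Xml.Xsl;

class XsltTemplateDemo
{
    static void Main()
    {
        // Compile the stylesheet once; the compiled transform can be cached and reused.
        var transform = new XslCompiledTransform();
        transform.Load("EmailTemplate.xslt");              // hypothetical template file

        // Apply the template to data pulled from the database and exported as XML.
        transform.Transform("CustomerData.xml", "EmailBody.html");
    }
}

Note that the built-in processor only supports XSLT 1.0; for XSLT 2.0 or 3.0 you would need one of the third-party .NET processors (e.g. Saxon).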
We've decided to go with Razor Hosting. The reason I've posted this as an answer is that I thought it would help others if I included the following article link:
http://www.west-wind.com/weblog/posts/2010/Dec/27/Hosting-the-Razor-Engine-for-Templating-in-NonWeb-Applications
This excellent piece of work by Rick Strahl makes it really easy to host Razor.
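For anyone evaluating the same option, the sketch below gives a rough idea of the hosted-Razor flavour. It deliberately uses the open-source RazorEngine NuGet package rather than the library from the linked article, purely to keep the example short; the template string and model are made up.

using System;
using RazorEngine;
using RazorEngine.Templating;

class RazorTemplateDemo
{
    static void Main()
    {
        // The boilerplate text and any simple logic live in the template string,
        // which is what non-technical users would edit at runtime.
        string template = "Hello @Model.Name, welcome to @Model.AppName!";

        string result = Engine.Razor.RunCompile(
            template, "welcome-email", null,
            new { Name = "Jane", AppName = "MyApp" });

        Console.WriteLine(result);
    }
}

Whichever hosting wrapper you pick, the appeal is the same: the template syntax stays close to plain text with @-prefixed islands of logic, which is usually gentler on non-techie users than XSLT or T4.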
Are there any official C++ recommendations concerning the amount of information that should be disclosed in a method name? I am asking because I can find plenty of references on the Internet, but none that really explains this.
I'm working on a C++ class with a method called calculateIBANAndBICAndSaveRecordChainIfChanged, which pretty well explains what the method does. A shorter name would be easier to remember and would need no intellisense or copy & paste to type. It would be less descriptive, true, but functionality is supposed to be documented.
calculateIBANAndBICAndSaveRecordChainIfChanged is considered to be a bad function name; it breaks the one-function-does-one-thing rule.
Reduce complexity
The single most important reason to create a routine is to reduce a program's complexity. Create a routine to hide information so that you won't need to think about it. Sure, you'll need to think about it when you write the routine. But after it's written, you should be able to forget the details and use the routine without any knowledge of its internal workings. Other reasons to create routines—minimizing code size, improving maintainability, and improving correctness—are also good reasons, but without the abstractive power of routines, complex programs would be impossible to manage intellectually.
You could simply break this function into the functions below (a short sketch of how they fit back together follows the list):
CalculateIBAN
CalculateBIC
SaveRecordChain
IsRecordChainChanged
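As a rough illustration of how the original behaviour is then reassembled from the smaller pieces, here is a sketch (written in C# to match the other code examples in this thread, though the structure is identical in C++; all member and type names here are hypothetical):

public class BankRecordService
{
    // Each piece now has a strong verb-plus-object name and does one thing.
    public void UpdateRecordChain(RecordChain chain)
    {
        chain.Iban = CalculateIBAN(chain);
        chain.Bic = CalculateBIC(chain);

        if (IsRecordChainChanged(chain))
        {
            SaveRecordChain(chain);
        }
    }

    private string CalculateIBAN(RecordChain chain) { /* ... */ return string.Empty; }
    private string CalculateBIC(RecordChain chain) { /* ... */ return string.Empty; }
    private bool IsRecordChainChanged(RecordChain chain) { /* ... */ return false; }
    private void SaveRecordChain(RecordChain chain) { /* ... */ }
}

public class RecordChain
{
    public string Iban { get; set; }
    public string Bic { get; set; }
}

The composite method keeps a short, honest name, and each helper can be read, tested and reused on its own.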
To name a procedure, use a strong verb followed by an object
A procedure with functional cohesion usually performs an operation on an object. The name should reflect what the procedure does, and an operation on an object implies a verb-plus-object name. PrintDocument(), CalcMonthlyRevenues(), CheckOrderInfo(), and RepaginateDocument() are samples of good procedure names.
Describe everything the routine does
In the routine's name, describe all the outputs and side effects. If a routine computes report totals and opens an output file, ComputeReportTotals() is not an adequate name for the routine. ComputeReportTotalsAndOpenOutputFile() is an adequate name but is too long and silly. If you have routines with side effects, you'll have many long, silly names. The cure is not to use less-descriptive routine names; the cure is to program so that you cause things to happen directly rather than with side effects.
Avoid meaningless, vague, or wishy-washy verbs
Some verbs are elastic, stretched to cover just about any meaning. Routine names like HandleCalculation(), PerformServices(), OutputUser(), ProcessInput(), and DealWithOutput() don't tell you what the routines do. At the most, these names tell you that the routines have something to do with calculations, services, users, input, and output. The exception would be when the verb "handle" was used in the specific technical sense of handling an event.
Most of the above points are taken from Code Complete, 2nd Edition. Other good books are Clean Code and The Clean Coder by Robert C. Martin.
To answer the direct question, I don't think function names need to be memorable. It's nice if they are, but like you say this stuff is supposed to be documented. I can look it up.
calculateIBANAndBICAndSaveRecordChainIfChanged is too long for my taste. Aside from the inconvenience of having to c/p or auto-complete to even use them, my fear with long function names is that I don't read them properly either, so names with similar "shapes" start to look confusingly similar to one another.
So I would advise looking for a shorter name. There must be some reason why these operations (calculating two things, and conditionally saving a record chain) have been grouped together. That reason isn't described in the question, it lies somewhere in the specification or the history of your project. You should identify that reason and look to it for a more succinct function name.
When naming a function you can also consider the reasons[*] the function might change in future. Why are there two things (IBAN and BIC) that are calculated at the same time? What is the relationship between them? Can you identify the reason for doing both at once and then saving?
For example: they are the "acronyms" for this object, it's common to want to recalculate the acronyms all at once, and if you recalculate then naturally the changes need saving. Then call the function refreshAcronyms. Maybe there will be a third acronym in future.
For another example: what callers really want is to save the object if changed, and it's an additional chore that, to preserve the integrity of the stored data, I must always recalculate the IBAN and the BIC before saving. In that case, all the rest is a necessary precursor to saving, so I can call the function saveRecordChain. Users of the public interface just need to know that the save function does what needs to be done. There might be a serializeToFile() function in the private interface that saves if changed without doing the extra stuff.
[*] I say "reasons" plural, but Robert C Martin defines the "single responsibility principle" to be that there is only one possible reason to change a well-designed function.
Ideally one method should do only one thing, and your method name should reflect what it does (that one thing); only then does your program become readable.
It's a matter of personal preference, although I would think that calculateIBANAndBICAndSaveRecordChainIfChanged is too long and therefore difficult to read and code with (unless you're using a smart editor that can auto-complete).
Two further points:
The function needs to be broken down into smaller parts, as other posters have suggested.
There's no law against commenting your headers to give a more detailed description of the function there, so you don't have to build every aspect of its functionality into the name.
You read and write too many methods over the course of your career to remember their names. Most programmers would need to look up the name of a function from their language's standard library, let alone the names of functions that they or their team developed! The most memorable function name would be of no use to someone maintaining your code and seeing the call for the first time. Moreover, chances are good that in six months you wouldn't remember it either!
That is why I recommend going for descriptive names first, and not worrying about the ease of memorization: after all, IDEs with intellisense are not going away any time soon (and they were introduced for a good reason - to address our memory limitations).
For personal use that might be enough, but in any case, after completing the app you have to refactor every function name to say exactly what the function is intended to do. And if you're working in a group or in a company, make sure that each function name reflects what its functionality is.
And for your example function, I might name it something like: saveRecordWithRespectToIBANandBIC()
I am a complete newbie to BDD and I would like to understand where it comes into play in a development cycle. In TDD approaches, we would commonly write unit tests for libraries or APIs, mocking objects, and that was great because it could even drive our design. These tests would be written before the actual code, which is nice.
I understand that BDD is more about spec/scenario testing and I can see that it is a perfect fit for testing a business requirement against the actual code. But what is the best practice for writing these tests? Do we still keep writing individual tests (as in TDD), mocking out dependencies and writing unit tests for every single thing that could go wrong, and then write our BDD tests? Do we write the BDD tests first? Do we write only BDD tests, even against individual components?
I use .NET and usually write asp.net mvc apps but this is more of a theory question and independent of the underlying programming language.
Thanks a lot.
I don't know about the right way, but this is my experience. After analyzing the specification document I try to extract as many different 'stories' as possible and describe them using BDD story files. As you already know, every sentence should start with one of three keywords: Given, When, or Then.
After translating the whole specification into BDD test stories, I write a class that implements the steps, i.e. executes each of the sentences used in the stories.
The next step is to start writing the implementation that will be called by the methods executing the story sentences: setting the initial state (Given), performing state transitions (When), and checking the resulting state of the app (Then).
Whenever I need to implement an actual class method, I use TDD to do thorough testing in isolation.
Of course, in order to run the tests, any part of the code that is not yet implemented may be temporarily mocked.
BDD is a set of patterns around describing and building models of the behaviour of a system (and, at a higher level, a project vision) which can help us to have conversations about that behaviour, for various levels of granularity.
We use conversational patterns like "Given a context, when this event happens then this outcome should occur", and encourage questions like, "Should it? Always? Are there any other contexts we're missing?"
We can do that at a unit level, a scenario level or even into the analysis space.
I tend to work from the highest level inwards. Here's an article I wrote which describes what that looks like, right the way from project vision to unit tests.
The first bit of code I write will be the scenarios. Here are some scenarios written without any BDD frameworks, just plain old NUnit, showing how you can make these English-readable with domain language, etc.
Then I start with the User Interface. This could be a GUI, web-page, or an interface for another system to use my system. When this is done I can get feedback on whether my users like it. I frequently hard-code data, etc., just so that I can get that feedback.
Once I know roughly what my GUI will look like, I can start creating the behaviour behind it. I usually start with the behaviour of the controller. I will write class-level examples which describe the class's behaviour and responsibilities. I use mocks in place of collaborating classes I haven't written yet. This is the equivalent of TDD, but rather than writing tests which pin the code down so that nobody breaks it, I'm writing examples of how you can use my code, which show how it behaves and why it's valuable so that you can change it safely. I also use Given / When / Then for this! But I tend to use more technical language and don't worry about it being English-readable. Frequently my Given / When / Then are just in comments. Here's an example of class behaviour from the same codebase as the scenarios, so you can see what the difference is.
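To make that concrete, here is a minimal sketch of a class-level example using plain NUnit and Moq, with Given / When / Then kept in comments; the controller and its collaborator are hypothetical names invented for this illustration, not taken from the codebase mentioned above.

using Moq;
using NUnit.Framework;

// Hypothetical collaborator the controller depends on.
public interface ITemplateStore
{
    string Load(string key);
}

public class GreetingController
{
    private readonly ITemplateStore _store;
    public GreetingController(ITemplateStore store) { _store = store; }

    public string Render(string name) => _store.Load("greeting").Replace("{name}", name);
}

[TestFixture]
public class GreetingControllerExamples
{
    [Test]
    public void Renders_greeting_using_the_stored_template()
    {
        // Given a template store containing a greeting template
        var store = new Mock<ITemplateStore>();
        store.Setup(s => s.Load("greeting")).Returns("Hello, {name}!");
        var controller = new GreetingController(store.Object);

        // When the controller renders a greeting for a user
        var result = controller.Render("Anna");

        // Then the user's name should be substituted into the template
        Assert.That(result, Is.EqualTo("Hello, Anna!"));
    }
}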
Hope this helps, and good luck with the BDD!
You can use it similarly to TDD, unit testing or otherwise. The difference is going to come from testing "business behavior". SpecFlow and feature files are one option; the step definitions (which look a lot like unit-test code) would look something like this:
[Given(@"a user exists with username ""(.*)"" and password ""(.*)""")]
public void GivenAUserExistsWithUsernameAdminAndPasswordPassword(string userName, string password)
{
    // Arrange: create the user in whatever store the auth service reads from.
}

[Then(@"that user should be able to login using ""(.*)"" and ""(.*)""")]
public void ThenThatUserShouldBeAbleToLoginUsingAdminAndPassword(string userName, string password)
{
    Assert.IsTrue(_authService.IsValidLogin(userName, password));
}
I would like to know your opinion on whether it is a good idea to have developers put their name or signature on top of every test they write and why (not)?
I would say no. If you really need to know who wrote the test, you should be able to look back in your version control to see who's to blame. When people start putting their name on things, you lose the sense of collective ownership/blame, and people start to get more concerned with "their" code rather than the system as a whole.
I upvoted Andy's answer, but I'd also add that putting the name in the code is then something else that must be maintained. E.g. Joe creates the test, but Jane changes it: is it Joe's test or Jane's test? And if Jane doesn't change the comment, you'll now go and talk to Joe about code that Jane wrote... All too confusing. Use Blame and be done with it.
What would you do with the information?
There's no use case for having the author's name.
Generally, the information has one of two meanings.
The person's gone (gone from the company, gone from the project, or a contractor who'll never be found again).
The person's still around.
In the second case, you already knew that. Having their name in a source code file doesn't clarify the fact that they worked on this code, are still with the company and still on the project.
So, the author's name has no use cases.
I favour self-explanatory test cases rather than signed tests.
Even if you know who wrote the test, and he's still working here, and he's available, you cannot be certain he remembers the reasons why he wrote this test.
Make sure the names of the test case are explicit enough. Add comments if necessary, reference bug ID, User Story, Customer ...
I think it depends on the attitude that already exists. If there are many conflicts, then removing all the names is useful, because the code speaks for itself. However, if names are put on the tests (as with code), then the developer is taking ownership.
Taking ownership is always a good thing because it encourages the developer to make it as perfect as possible. It also helps when you need to ask a question about the test, or if the test is failing, and you can't figure out why, you'll be able to ask the expert on the subject.
However, if there is a darker atmosphere, more about developers who are defensive, and are trying to undermine each other, then the names will cause them to focus on 'who made this code wrong' or 'this test failed because X coded it badly' rather than focusing on the error that the test might be detecting.
So there's always a balance when explicitly attaching names to tests like that.
And as Andy mentioned, there's always source control if you REALLY need to know who wrote something.
I think it really depends on what the rest of your culture is around code ownership. If your team's culture is that all code is owned by someone, and only that person can touch that code, then labeling whose code is whose might make sense.
However, I prefer to work on teams where there's collective ownership of code. Sometimes it's nice to have an original author on a file, just to see whose original design it was, but beyond that, I don't think tagging specific tests is useful.
As other people mentioned, if you really need to figure out who made a particular change, you can figure that out from version control. In general though, I think tests should be owned and maintained by the whole team.
What are the best practices for naming unit test classes and test methods?
This was discussed on SO before, at What are some popular naming conventions for Unit Tests?
I don't know if this is a very good approach, but currently in my testing projects, I have one-to-one mappings between each production class and a test class, e.g. Product and ProductTest.
In my test classes I then have methods with the names of the methods I am testing, an underscore, and then the situation and what I expect to happen, e.g. Save_ShouldThrowExceptionWithNullName().
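For example, with that convention a test for a hypothetical Product class (the type and its Save method are invented here purely to show the naming shape) might look like this in NUnit:

using System;
using NUnit.Framework;

[TestFixture]
public class ProductTest
{
    [Test]
    public void Save_ShouldThrowExceptionWithNullName()
    {
        var product = new Product { Name = null };

        // Method under test, then the situation and the expected outcome.
        Assert.Throws<ArgumentNullException>(() => product.Save());
    }
}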
Update (Jul 2021)
It's been quite a while since my original answer (almost 12 years) and best practices have been changing a lot during this time. So I feel inclined to update my own answer and offer different naming strategies to the readers.
Many comments and answers point out that the naming strategy I propose in my original answer is not resistant to refactorings and ends up with difficult-to-understand names, and I fully agree.
In recent years, I have ended up using a more human-readable naming schema where the test name describes what we want to test, along the lines described by Vladimir Khorikov.
Some examples would be:
Add_credit_updates_customer_balance
Purchase_without_funds_is_not_possible
Add_affiliate_discount
As you can see it's quite a flexible schema, but the most important thing is that just by reading the name you know what the test is about, without including technical details that may change over time.
To name the projects and test classes I still adhere to the original answer schema.
Original answer (Oct 2009)
I like Roy Osherove's naming strategy. It's the following:
[UnitOfWork_StateUnderTest_ExpectedBehavior]
It has all the information needed in the method name, in a structured manner.
The unit of work can be as small as a single method, a class, or as large as multiple classes. It should represent all the things that are to be tested in this test case and are under control.
For assemblies, I use the typical .Tests ending, which I think is quite widespread and the same for classes (ending with Tests):
[NameOfTheClassUnderTestTests]
Previously, I used Fixture as the suffix instead of Tests, but I think the latter is more common, so I changed the naming strategy.
I like to follow the "Should" naming standard for tests while naming the test fixture after the unit under test (i.e. the class).
To illustrate (using C# and NUnit):
[TestFixture]
public class BankAccountTests
{
    [Test]
    public void Should_Increase_Balance_When_Deposit_Is_Made()
    {
        var bankAccount = new BankAccount();
        bankAccount.Deposit(100);
        Assert.That(bankAccount.Balance, Is.EqualTo(100));
    }
}
Why "Should"?
I find that it forces the test writers to name the test with a sentence along the lines of "Should [be in some state] [after/before/when] [action takes place]"
Yes, writing "Should" everywhere does get a bit repetitive, but as I said it forces writers to think in the correct way (so can be good for novices). Plus it generally results in a readable English test name.
Update:
I've noticed that Jimmy Bogard is also a fan of 'should' and even has a unit test library called Should.
Update (4 years later...)
For those interested, my approach to naming tests has evolved over the years. One of the issues with the Should pattern I describe above is that it's not easy to know at a glance which method is under test. For OOP I think it makes more sense to start the test name with the method under test. For a well designed class this should result in readable test method names. I now use a format similar to <method>_Should<expected>_When<condition>. Obviously, depending on the context you may want to substitute the Should/When verbs for something more appropriate. Example:
Deposit_ShouldIncreaseBalance_WhenGivenPositiveValue()
I like this naming style:
OrdersShouldBeCreated();
OrdersWithNoProductsShouldFail();
and so on.
It makes it really clear to a non-tester what the problem is.
Kent Beck suggests:
One test fixture per 'unit' (class of your program). Test fixtures are classes themselves. The test fixture name should be:
[name of your 'unit']Tests
Test cases (the test fixture methods) have names like:
test[feature being tested]
For example, having the following class:
class Person {
    int calculateAge() { ... }
    // other methods and properties
}
A test fixture would be:
class PersonTests {
    testAgeCalculationWithNoBirthDate() { ... }
    // or
    testCalculateAge() { ... }
}
Class Names. For test fixture names, I find that "Test" is quite common in the ubiquitous language of many domains. For example, in an engineering domain: StressTest, and in a cosmetics domain: SkinTest. Sorry to disagree with Kent, but using "Test" in my test fixtures (StressTestTest?) is confusing.
"Unit" is also used a lot in domains. E.g. MeasurementUnit. Is a class called MeasurementUnitTest a test of "Measurement" or "MeasurementUnit"?
Therefore I like to use the "Qa" prefix for all my test classes. E.g. QaSkinTest and QaMeasurementUnit. It is never confused with domain objects, and using a prefix rather than a suffix means that all the test fixtures live together visually (useful if you have fakes or other support classes in your test project)
Namespaces. I work in C# and I keep my test classes in the same namespace as the class they are testing. It is more convenient than having separate test namespaces. Of course, the test classes are in a different project.
Test method names. I like to name my methods WhenXXX_ExpectYYY. It makes the precondition clear, and helps with automated documentation (a la TestDox). This is similar to the advice on the Google testing blog, but with more separation of preconditions and expectations. For example:
WhenDivisorIsNonZero_ExpectDivisionResult
WhenDivisorIsZero_ExpectError
WhenInventoryIsBelowOrderQty_ExpectBackOrder
WhenInventoryIsAboveOrderQty_ExpectReducedInventory
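As a quick sketch of how a couple of those names read in code (the Calculator class is hypothetical, invented just to illustrate the WhenXXX_ExpectYYY shape in NUnit):

using System;
using NUnit.Framework;

[TestFixture]
public class CalculatorTests
{
    [Test]
    public void WhenDivisorIsNonZero_ExpectDivisionResult()
    {
        var calculator = new Calculator();
        Assert.That(calculator.Divide(10, 2), Is.EqualTo(5));
    }

    [Test]
    public void WhenDivisorIsZero_ExpectError()
    {
        var calculator = new Calculator();

        // Integer division by zero throws, which is the "error" the name promises.
        Assert.Throws<DivideByZeroException>(() => calculator.Divide(10, 0));
    }
}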
I use the Given-When-Then concept.
Take a look at this short article: http://cakebaker.42dh.com/2009/05/28/given-when-then/. The article describes the concept in terms of BDD, but you can use it in TDD as well without any changes.
I recently came up with the following convention for naming my tests, their classes and their containing projects, in order to maximize their descriptiveness:
Let's say I am testing the Settings class in a project in the MyApp.Serialization namespace.
First I will create a test project with the MyApp.Serialization.Tests namespace.
Within this project (and of course the namespace) I will create a class called IfSettings (saved as IfSettings.cs).
Let's say I am testing the SaveStrings() method -> I will name the test CanSaveStrings().
When I run this test it will show the following heading:
MyApp.Serialization.Tests.IfSettings.CanSaveStrings
I think this tells me very well, what it is testing.
Of course it is useful that in English the noun "Tests" is the same as the verb "tests".
There is no limit to your creativity in naming the tests, so that we get full-sentence headings for them.
Usually the test names will have to start with a verb.
Examples include:
Detects (e.g. DetectsInvalidUserInput)
Throws (e.g. ThrowsOnNotFound)
Will (e.g. WillCloseTheDatabaseAfterTheTransaction)
etc.
Another option is to use "that" instead of "if".
The latter saves me keystrokes, though, and describes more exactly what I am doing, since I don't know that the tested behavior is present, but am testing whether it is.
[Edit]
After using the above naming convention for a little longer now, I have found that the If prefix can be confusing when working with interfaces. It just so happens that the test class IfSerializer.cs looks very similar to the interface ISerializer.cs in the "Open Files" tab.
This can get very annoying when switching back and forth between the tests, the class being tested and its interface. As a result I would now choose That over If as a prefix.
Additionally, I now use the "_" to separate words in my test method names (only for methods in my test classes, as it is not considered best practice anywhere else), as in:
[Test] public void detects_invalid_User_Input()
I find this to be easier to read.
[End Edit]
I hope this spawns some more ideas, since I consider naming tests of great importance as it can save you a lot of time that would otherwise have been spent trying to understand what the tests are doing (e.g. after resuming a project after an extended hiatus).
See:
http://googletesting.blogspot.com/2007/02/tott-naming-unit-tests-responsibly.html
For test method names, I personally find using verbose and self-documented names very useful (alongside Javadoc comments that further explain what the test is doing).
I think one of the most important things is to be consistent in your naming convention (and to agree on it with the other members of your team). Too many times I see loads of different conventions used in the same project.
In VS + NUnit I usually create folders in my project to group functional tests together. Then I create unit test fixture classes and name them after the type of functionality I'm testing. The [Test] methods are named along the lines of Can_add_user_to_domain:
- MyUnitTestProject
+ FTPServerTests <- Folder
+ UserManagerTests <- Test Fixture Class
- Can_add_user_to_domain <- Test methods
- Can_delete_user_from_domain
- Can_reset_password
I should add that keeping your tests in the same package but in a parallel directory to the source being tested eliminates code bloat once you're ready to deploy, without having to do a bunch of exclude patterns.
I personally like the best practices described in "JUnit Pocket Guide" ... it's hard to beat a book written by the co-author of JUnit!