What is the competent programmer hypothesis in mutation testing? - unit-testing

While learning about mutation testing, I read this on Wikipedia:
The first is the competent programmer hypothesis. This hypothesis
states that most software faults introduced by experienced programmers
are due to small syntactic errors.
I didn't quite understand the competent programmer hypothesis. What do they mean by syntactic errors?
I know that syntactic errors are caught by the compiler, not by mutation testing. How does this relate to mutation testing?

First of all, the source for this quote dates back to 1978, when compilers were much less powerful and could probably catch only the dumbest errors :)
I'm not sure there is a single definition of "syntactic error" that applies to all the popular programming languages, partly because some of them are interpreted rather than compiled.
So you would probably need to look at the quote in context. Or just don't bother. That Wiki article is quite academic. As long as you understand how mutation testing complements unit testing, you are fine :)

The Wikipedia article is wrong. If you read the seminal paper (which the article references and which my PhD advisor wrote), the competent programmer hypothesis is about behavior, not syntax. That is, a competent programmer's program is close to correct in its behavior.
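To make the connection to mutation testing concrete, here is a minimal sketch (the function, the mutant, and the tests are made-up illustrations, not the output of any particular tool). A mutation tool applies a small, syntactically valid edit such as swapping an operator; if no test fails, the mutant "survives" and points at a gap in the test suite.

    #include <cassert>

    // Original code under test.
    bool is_adult(int age) {
        return age >= 18;
    }

    // A typical mutant: one small, syntactically valid change (>= becomes >).
    // It still compiles and behaves almost identically -- it differs only at age == 18.
    bool is_adult_mutant(int age) {
        return age > 18;
    }

    int main() {
        // A test suite that never checks the boundary passes against both versions,
        // so the mutant survives and reveals the missing boundary test.
        assert(is_adult(30));
        assert(!is_adult(10));
        // assert(is_adult(18));  // adding this test would "kill" the mutant
        return 0;
    }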

Related

What did Joe Armstrong refer to by "there are more advanced techniques" for unit testing

What did Joe Armstrong refer to by "you shouldn't write test cases at all, there are more advanced techniques you can use than silly old test cases" ?
What are those more advanced techniques?
source: A Guide for the Perplexed
Additionally, he even suggests not reusing code but rewriting it instead. That goes against everything my school taught me, so I'm wondering what the reasoning behind this statement is.
For the test cases: there's a tendency for people who do functional programming to despise unit tests. Rich Hickey has a quote where he compares using unit tests to a car relying on bouncing off the guard rails to stay on the road. And he has a point: OO enterprisey code is so complicated and messy that that's pretty much what we do. The theory is that if you are doing FP right, you should be writing mostly small pure functions where it is self-evident what they do. Testing at the unit level doesn't add much in that case.
(I can't say for certain but I don't know of anything to make me think Armstrong had a particularly strong focus on proving correctness of programs. Erlang doesn't have compile-time typechecking, which would seem to me like it would make proving correctness harder.)
For rewrite-not-reuse, I think the idea is that the value isn't in the software, it's in your understanding. If you reuse a lot, you may find yourself cobbling together pieces, each of which was your first try at figuring something out. Whenever we build something for the first time, we don't know what we're doing; if we're lucky it may mostly work, but it is pretty much crap. Rewriting gives opportunities for making connections and learning: it may be better to revisit the problems in a new light and have a chance to create something cleaner and better.

Testing C++17 in safety critical systems

I'm currently thinking about C++ in safety-critical software (DO-178C DAL-D) and the definition of a coding standard. I was looking at MISRA C++, which is by now 10 years old and misses all the C++11…17 features.
While being conservative regarding safety is often not a bad idea, the new language features might be beneficial to safety.
During reviews you have to argue for why you made certain decisions. One can always argue that the new language features make the code clearer and thus lead to fewer errors caused by misunderstandings, especially if the compiler is able to check and verify your assumptions.
But it is hard to find language features that carry the safety aspects more prominently than "make things clearer". What aspects of modern C++ really help regarding safety?
I'm setting up a small exercise project to test these ideas and am currently focused entirely on "let the compiler check your assumptions". For example, we have just started to use [[nodiscard]] and found at least two bugs this way within the first hour. But what aspects of modern C++ were designed with safety in mind and should be used that way?
These come to mind first:
std::atomic and the memory model: they allow writing portable code in concurrent/lock-free contexts.
unique_ptr: helps simplify memory handling.
override: lets you find bugs at compile time.
if constexpr: lets code be written closer to where it is used, which helps you write fewer bugs (sometimes, to specialize behaviour according to a template parameter, you would write a class with n specializations; now you can use if constexpr with n branches instead).
etc. In a way, considering the benefits for code clarity and portability, I think every feature of C++11/14/17 helps (see the sketch below).
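Here is a minimal sketch of the "let the compiler check your assumptions" idea (the function and class names are made up for illustration; the exact diagnostics depend on your compiler and flags):

    #include <cstdio>
    #include <memory>

    // [[nodiscard]]: the compiler warns when a caller silently drops the result.
    [[nodiscard]] bool write_record(int id) {
        std::printf("writing record %d\n", id);
        return true;
    }

    struct Sensor {
        virtual ~Sensor() = default;
        virtual double read() const { return 0.0; }
    };

    struct TemperatureSensor : Sensor {
        // override: forgetting 'const' here would be a compile-time error
        // instead of silently declaring a new, unrelated virtual function.
        double read() const override { return 21.5; }
    };

    int main() {
        // unique_ptr: ownership and lifetime are explicit; nothing leaks on early return.
        auto sensor = std::make_unique<TemperatureSensor>();

        if (!write_record(static_cast<int>(sensor->read()))) {
            return 1;  // result checked here, so no warning
        }
        write_record(0);  // warning: [[nodiscard]] value is discarded
        return 0;
    }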
One can always argue that the new language features make the code clearer and thus lead to fewer errors caused by misunderstandings, especially if the compiler is able to check and verify your assumptions.
In my not-so-humble opinion, there are few language features, that is, standard general-purpose programming language features, that both fall outside the allowed standards AND are worth the time and energy to argue your way through in an assessment. If you are aiming for a higher level of abstraction (which is a good thing for safety too, although you'll hardly find anyone openly admitting this, because it would render half of the safety industry unemployed and the other half severely outdated), then you'd be better off resorting to a domain-specific language and putting the effort into a flawless compilation (to source) for a standard-conforming platform. If you don't work in an engineering culture that allows this, then you can resort to some of the patches the other answer here proposes, but it is always difficult to convincingly convey the intention and meaning of non-specific measures to other safety engineers (a dedicated domain-specific language is much easier both to support and to object to).
That said, I think the advances in parallel programming in modern C++ will find their way into the standards relatively quickly.

Should every C++ programmer read the ISO standard to become professional?

Should every C++ programmer read the ISO standard to become professional?
No. The C++ standard is more like a dictionary: something where you look up specific things that concern you at any given moment. It doesn't make good (or useful) reading if you treat it as a book to read from beginning to end.
If the question were whether every professional C++ programmer should have an ISO standard at hand, and use it for reference as needed, then I'd say "yes".
I think that every professional C++ programmer should have a copy of the standard to refer to. But sitting down and slogging through it cover to cover would be pretty numbing. It's mostly written for implementers (compiler writers), and it has next to no explanation of the rationale for why the standard requires certain things.
So, I'd say it's more important for a professional C++ programmer to have and read:
Stroustrup's "The C++ Programming Language"
Meyers's "Effective..." series and/or Sutter's "Exceptional..." series
Lippman's "Inside the C++ Object Model"
Stroustrup's "Design and Evolution of C++"
Or at least some decent subset of them. If you have a chunk of those books under your belt, you'll only be going to the Standard for minutiae or to settle arguments.
By the way, see this answer for pointers on how to get the standard documents:
Where do I find the current C or C++ standard documents?
I think a lot of questions like "is this OK to do?" are only really answered by looking at the standard.
You can learn a lot by reading the standard, because it includes all the tiny details people tend to skip over.
Having the standard on hand also helps you back up your statements, because if someone says, "This is okay to do," you can say, "Actually, according to the standard, it's not okay because..."
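A concrete (made-up) example of the kind of question that only the standard settles: code that compiles cleanly and "works" in every test run can still be undefined behavior.

    #include <climits>

    int next_id(int id) {
        // Looks harmless and usually "works", but if id == INT_MAX the addition
        // overflows a signed int, which the standard makes undefined behavior,
        // so the optimizer is allowed to assume it never happens.
        return id + 1;
    }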
I think in conclusion, I'll repeat what I've said before:
Knowing it can't hurt you, but you don't need to have it memorized to be a good C++ programmer.
If they're getting paid to write C++, then they are already a professional :)
But I don't think reading the standard should be required for any language in order to earn respect. I'm sure there are plenty of other uses of that time that might benefit your skill set more.
Should every driver memorize the DMV laws to become a professional? It might help, but it would also be a ton of work that they probably don't have time for. Maybe reading a book like Code Complete would be more beneficial.
I'd answer with a few questions:
How much value does reading the ISO standard provide to the developer?
Are employers demanding this attribute of their developers?
How will it make a developer's code more maintainable and readable?
Will reading the ISO standard help the developer make the developers around them any better?
(This sounds like a wiki question.)
No. That's not how people learn effectively, and most people won't retain information after reading a spec. You need practice for information to sink in, and there's no way to get that without implementing things. Also, a lot of programmers only need to use a subset of C++, and can page-fault new information in as-needed.
Rather than reading specs, your time is better spent learning generally how to program, how to write documentation, and how to implement algorithms, picking up the more detailed facets of C++ as you go.
Understanding how and when to read the standard is more important than just reading it to cure insomnia. Of course this applies to all the standards related to whatever you are doing. There is definitely a skill to reading and understanding a standard, and knowing when to read one rather than just throwing questions at Stack Overflow or some other random web site is a skill that is missing in many programmers (new and old). But I agree with Michael Burr and asperous.us... there are other books that should be read first.

Is it legally negligent to not use unit testing [closed]

Is anyone aware of any legal precedents where the lack of unit testing in an application has lost someone a case?
Or where someone has not been deemed negligent despite a lack of unit testing in an application?
Is there any highly regarded alternative to unit testing that enables programmers to objectively demonstrate a commitment to software quality?
For example in medicine you can use in your defence that your approach is one that is regarded as acceptable by a substantial and well regarded group of other doctors.
Legal liability is between you and whoever pays for your software.
If the contract says that you will do unit testing and you don't, then you're liable. It depends on the particular piece of software and on the agreement you make. When I buy Windows for critical applications (laugh, laugh, it DOES happen), no one makes sure they unit tested everything.
Unit testing is only one (very valid) way to test your code; it isn't and couldn't be a legal requirement.
For example, you could instead prove some properties of a piece of code using denotational semantics and techniques grounded in well-founded relations, complete partial orders, and fixed-point theorems.
A lawyer could probably find out for you, but it would probably be expensive.
In general, there's no liability in software. How good or bad a thing this is is debatable, but I haven't found a piece of software yet that doesn't disclaim liability. (There was a case where a tax preparation program several years ago had a problem, and the company actually recompensed people to some extent. I know of no other exceptions.)
Liability would normally only come about in embedded software, since a manufacturer is frequently liable for the behavior of a device, and the software is part of the device. In that case, demonstrating that sound software engineering practices were used might be useful, but I would be astonished to learn that failing to use unit tests would be considered negligence. (There is also, at least in the US, a concept of "strict liability", which means somebody's completely to blame, no excuses possible. It's been applied to navigational maps, but if you want to know for what else you need to consult a lawyer or do your own research.)
So, what I'm saying is that I don't know of any cases, it sounds dubious to me, I am not a lawyer, and this is not legal advice.
Unit testing, in a formal sense, is a relatively new concept in software (I'd guess less than 10 years old). Prior to that, some components and modules were tested, but it was more important that the overall system be tested.
Generally, the law lags behind contemporary practices by quite a while. It takes a long time for laws, codes, and cases to establish a precedent. It would be very surprising to me if there were any consensus in law about a relatively new approach like unit testing.
Unit testing can even be illegal in some cases (stackoverflow.com: what-legal-issues-surround-unit-testing-database-code-closed), because you aren't allowed to do certain things with personal data, for example in a database you want to include in your tests.
In defense against a negligence lawsuit, an accused programmer might point to extensive unit testing. If a contract specified unit testing but none was conducted, then there would be cause for breach of contract.
Unit testing is not enough to reveal every possible fault, and there is no ISO standard for it yet. A naive court might be convinced that its absence indicated negligence, but such a finding would surely not be founded upon any great body of legal precedent.
Unit testing is a good idea in general, but the field of software engineering has a long way to go until it's a standard practice that people can be sued for not doing. There are many cases where unit tests are simply not appropriate. Unless it's explicitly mentioned in a contract, there shouldn't be an expectation that unit testing will be used for a product.
Unless you are IBM, you should not be in a position to guarantee your code is defect-free. As strange as it sounds, this is the client's responsibility.
I have worked on software for critical applications that is installed only when the seller gives the buyer a complete list of the testing done on the software, physically signed by the testers and QA. This applies even when a small unit undergoes a minor modification.
Mate, if you're a real developer getting paid, you should already know why unit tests are valuable for maintaining quality, among their other merits. I certainly might not sue you, but I would fire you. Asking such a question demonstrates that you are a cowboy without a professional attitude toward delivering a quality product.

Understanding unit test constraints and NUnit syntax helpers

A colleague and I are starting a new project and attempting to take full advantage of TDD. We're still figuring out all the concepts around unit testing and so far are basing our tests primarily on other examples.
My colleague recently brought into question the point of the NUnit syntax helpers and I'm struggling to explain their benefit (since I don't really understand it myself other than my gut says they're good!). Here is an example assertion:
Assert.That(product.IsValid(), Is.False);
To me this makes complete sense, we're saying we expect the value of product.IsValid() to be false. My colleague on the other hand would prefer us to simply write:
Assert.That(!product.IsValid());
He says to him this makes more sense and he can read it easier.
So far the only thing we can agree on is that the former is likely to give more helpful output when a test fails, but I think there must be a better explanation. I've looked up some information on the syntax helpers (http://nunit.com/blogs/?p=44) and they make sense, but I don't fully understand the concept of constraints other than that they 'feel' right.
I wonder if someone could explain why we use the concept of constraints, and why they improve the unit test examples above?
Thanks.
I think it's mostly to do with the pure English reading of the statement.
The first reads
Assert that product is valid is false
The second reads
Assert that not product is valid
I personally find the first easier to process. I think it's all down to preference, really. Some of the extension methods out there are interesting, though; they let you write your assertions like this:
product.IsValid().IsFalse();
I can see your version being better than your colleague's. However, I'd still be at least as comfortable with:
Assert.IsFalse(product.IsValid());
If you can convince me that the Assert.That syntax has an objective benefit over the above, I'd be very interested :) It may well just be force of habit, but I can very easily read the "What kind of assertion are we making? Now what are we asserting it about?" style.
It's all sugar. Internally they're converted to constraints.
From Pragmatic Unit Testing, pg 37:
"NUnit 2.4 introduced a new style of assertions that are a little less procedural and allow for a more object oriented underlying implementation.
...
For instance:
Assert.That(actual, Is.EqualTo(expected));
Converts to:
Assert.That(actual, new EqualConstraint(expected));"
Using constraints also allows you to inherit from Constraint and create your own custom constraint(s) while keeping a consistent syntax.
I don't like Assert.That, specifically the fact that its most common scenario (comparing two objects' equality) is measurably worse than the 'classic' Assert.AreEqual() syntax.
On the other hand, I super love the MSpec NUnit extensions. I recommend you check them out (or look at the SpecUnit extensions, or NBehave extensions, or NBehaveSpec*Unit extensions, I think they're all the same).