Is there such a thing as a bad unit test? - unit-testing

Given you can't write tests for dynamic content, outside of this, is there ever a reason you should not add a unit test? Maybe a test of project integrity might be considered unnecessary, but I could argue this both ways.
Example:
Objective-C/Xcode example: write a test to make sure fonts listed in your constants are also listed in your project's Info.plist UIAppFonts array.

Technically, tests are supposed to adhere to a few key qualities: they should be fast, easy to read and interpret, give consistent results, etc.
If any of these (and more) qualities of a good unit test are not met, you end up with a cost. If the unit tests are slow then you spend time twiddling your thumbs; if they are too hard to read then you spend time interpreting tests instead of writing new tests or code; the same goes for tests that give inconsistent results.
Therefore we can say that bad unit tests exist.
However if we look into your concrete example of "should we test X" then that is a lot more subjective.
If something is easy to test, like a getter/setter (aka trivial code), then some might not find it worth their time while others consider it no problem: by adding these quick, small tests you will never be caught out because someone added logic to a getter/setter and there were no tests to catch the mistake.
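For illustration, a minimal sketch of such a trivial test in JUnit (the Person class and its getter/setter are hypothetical):
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical value class with a plain getter/setter.
    class Person {
        private String name;
        public String getName() { return name; }
        public void setName(String name) { this.name = name; }
    }

    public class PersonTest {
        @Test
        public void setNameThenGetNameReturnsTheSameValue() {
            Person person = new Person();
            person.setName("Ada");
            // If someone later adds trimming or validation to the setter,
            // this cheap little test is the first thing to notice.
            assertEquals("Ada", person.getName());
        }
    }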
I have no knowledge about Objective-C but at first glance that seems like a reasonable concept to test.
General rule: unless you have an explicit reason not to test something, test it.

Unit tests are really just a tool to create a lower watermark for quality of your code.
If you're 100% confident that your code works as intended, then you have enough unit tests. Adding more tests in this case is just a waste of time.
Think "hello world". How many unit tests would you write for that? 1 or 0?
If you're unsure about something, then you need more unit tests. Reasons for this feeling can be:
You or someone else just found a bug. Always write unit tests for bugs.
Someone asked for a new feature and you're not confident how to implement it -> write tests to design the API and to be sure the final result will meet expectations (and to make sure that everyone knows and agrees on those expectations).
You are using a new technology and want to document a) how it works and b) how you use it. These tests work as a kind of template when you wonder later "how did I do this?"
You just found a bug in a library that you use. When you fix the bug, you should also add a test case that tells you "this bug has now been fixed!" so you don't hunt in the wrong place later.
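As a minimal sketch of the "write a test for every bug" items above, assuming a hypothetical off-by-one bug and JUnit-style tests:
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical code that once contained bug #123: the 10% bulk discount
    // used to be applied twice for orders of exactly 10 items.
    class PriceCalculator {
        int totalCents(int unitCents, int quantity) {
            int total = unitCents * quantity;
            return quantity >= 10 ? total - total / 10 : total;
        }
    }

    public class PriceCalculatorRegressionTest {
        // Named after the bug so a future failure points straight back at it.
        @Test
        public void bug123_tenItemsGetTheDiscountExactlyOnce() {
            assertEquals(900, new PriceCalculator().totalCents(100, 10));
        }
    }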
Examples of bad unit tests:
Integration test hiding inside of a unit test
Testing setters and getters
Disabled unit tests
Commented out unit tests
Unit tests that break once per day or week (they erode your confidence and your willingness to write unit tests)
Any test that takes more than 10s to execute
Unit tests that are longer than 50 lines (incl. all the setup code)

My answer would be yes, writing tests is still writing code and the best way to avoid bugs is to not write the code in the first place.
IMHO, writing good tests is generally harder than writing good code. You can't write a useable test until you understand the problem, both in how it should work and how it can fail. The former is generally much easier to understand than the latter.
Having said all that, you have to start somewhere, and sometimes it's easiest to write the simplest tests first, even if they don't really test anything useful.
However, you should winnow those tests out as you work through the TDD process. Work towards having a test set that documents just the external interfaces of an object. This is important, as when you come back to the object for later refactoring, you want a set of tests that defines the responsibilities of the object to the rest of the program, not the responsibilities of the object to itself.
(i.e. you want to test the inputs and outputs of the object as a "black box", not the internal wiring of the object. This gives you as much freedom as possible to change the internals without causing damage outside of the object.)

Related

What is unit testing, and does it require code being written?

I've joined a new team, and I've had a problem understanding how they are doing unit tests. When I asked where the unit tests are written, they explained they don't do their unit tests that way.
They explained that what they're calling unit tests is when they actually check the code they wrote locally, and that all of the points are being connected. To me, this is integration testing and just testing your code locally.
I was under the impression that unit tests are code written to verify behavior in a small section of code. For example, you may write a unit test to make sure a method returns the right value and makes the appropriate calls to the database, using a framework like NUnit or MbUnit to help you out with your assertions.
Unit testing to me is supposed to be fast and quick. To me, you want these so you can automate it, and have a huge suite of tests for your application to make sure that it behaves AS YOU EXPECT.
Can someone provide clarification in my or their misunderstandings?
I have worked places that did testing that way and called it unit testing. It reminded me of a quote attributed to Abe Lincoln:
Lincoln: How many legs does a dog have?
Other Guy: 4.
Lincoln: What if we called the tail a leg?
Other Guy: Well, then it would have 5.
Lincoln: No, the answer is still 4. Calling a tail a leg doesn't make it so.
They explained that what they're calling unit tests is when they actually check the code they wrote locally, and that all of the points are being connected.
That is not a unit test. That is a code review. Code reviews are good, but without actual unit tests things will break.
Unit tests involve writing code. Specifically, a unit test operates on one unit, which is just a class or component of your software.
If a class under test depends on another class, and you test both classes together, you have an integration test. Integration tests are good. Depending on the language/framework you might use the same testing framework (e.g. JUnit for Java) for both unit and integration tests. If you have a dependency but mock or stub that dependency, then you have a pure unit test.
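A sketch of that distinction in Java, using JUnit with Mockito as the mocking library (the classes here are hypothetical; the .NET equivalents the question mentions would be NUnit/MbUnit plus a mocking framework):
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    // Hypothetical dependency and class under test.
    interface RateProvider { double rateFor(String currency); }

    class Converter {
        private final RateProvider rates;
        Converter(RateProvider rates) { this.rates = rates; }
        double toEuro(double amount, String currency) {
            return amount * rates.rateFor(currency);
        }
    }

    public class ConverterTest {
        @Test
        public void convertsUsingTheProvidedRate() {
            // Stubbing the dependency keeps this a pure unit test; wiring in a
            // real RateProvider here would turn it into an integration test.
            RateProvider rates = mock(RateProvider.class);
            when(rates.rateFor("USD")).thenReturn(0.5);
            assertEquals(5.0, new Converter(rates).toEuro(10.0, "USD"), 1e-9);
        }
    }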
Unit testing to me is supposed to be fast and quick. To me, you want these so you can automate it, and have a huge suite of tests for your application to make sure that it behaves AS YOU EXPECT.
That is essentially correct. How 'fast and quick' developing unit tests is depends on the complexity of what is being tested and the skill of the developer writing the test. You definitely want to build up a suite of tests over time, so you know when something breaks as a codebase becomes more complex. That is how testing makes your codebase more maintainable, by telling you what ceases to function as you make changes.
Your team-mates are not doing unit testing. They are doing "fly by the seat of your pants" development.
Your assumptions are correct.
Doing a project without unit-tests (as they do, don't be fooled) might seem nice for the first few weeks: less code to write, less architecture to think about, less problems to worry about. And you can see the code is working correctly, right?
But as soon as someone (someone else, or even the original coder) comes back to an existing piece of code to modify it, add a feature, or simply understand how it worked and what exactly it did, things will become a lot more problematic. And before you realize it, you'll spend your nights browsing through log files and debugging what seemed like a small feature just because it needs to integrate with other code that nobody knows exactly how it works. And you'll hate your job.
If it's not worth testing (with actual unit tests), then it's not worth writing the code in the first place. Everyone who has tried coding both with and without unit tests knows that. Please, please, make them change their mind. Every time a piece of untested code is checked in somewhere, a puppy dies horribly.
Also, I should say, it's a lot (A LOT) harder to add tests later to a project that was done without testing in mind than to build the test and production code side by side from the very start. Testing not only helps you make sure your code works fine, it improves your code quality by forcing you to make good decisions (i.e. coding to interfaces, loose coupling, inversion of control, etc.)
"Unit testing" != "unit tests".
Writing unit tests is one specific method of performing unit testing. It is a very good one, and if your unit tests are written well, it can give you good value over a long time. But what they're doing is indeed unit testing. It's just the kind of unit testing that doesn't help you at all the next time you need to carve on the same code. And that's wasteful.
To add my two cents, yes, that is indeed not unit testing. IMHO, the main features of unit tests are that they should be fast, automated and isolated. You can use a mocking framework such as RhinoMocks to isolate external dependencies.
Unit tests also have to be very simple and short. Ideally no more than a screen length. It is also one of the few places in software engineering where copy and pasting code might be a better solution than creating highly reusable and highly abstract functions. The reason simplicity is given such a high priority is to avoid the "Who watches the Watchers" problem. You really don't want to be in a situation where you have complex bugs in your unit tests, because they themselves aren't being tested. Here you are relying on the extreme simplicity and tiny size of the tests to avoid bugs.
The names of the unit tests also should be very descriptive, again following the simplicity and self documenting paradigm. I should be able to read the name of the test method and know exactly what it is doing. A quick glance at the code should show me exactly what functionality is being tested and if any external dependencies are being mocked.
The descriptive test names also make you think about the application as a whole. If I look at the entire test run, ideally just by looking at the names of all the tests that were run, I should have a fairly good idea of what the application does.
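As a sketch of what such self-documenting names might look like (the Account class is hypothetical, JUnit-style):
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical account class, present only so the test names have something to describe.
    class Account {
        private int balance;
        void deposit(int amount) {
            if (amount <= 0) throw new IllegalArgumentException("amount must be positive");
            balance += amount;
        }
        int balance() { return balance; }
    }

    // Reading the method names alone already tells you what the class is supposed to do.
    public class AccountTest {
        @Test
        public void depositIncreasesTheBalanceByTheDepositedAmount() {
            Account account = new Account();
            account.deposit(50);
            assertEquals(50, account.balance());
        }

        @Test(expected = IllegalArgumentException.class)
        public void depositOfZeroOrANegativeAmountIsRejected() {
            new Account().deposit(0);
        }
    }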

Are unit tests useful for real? [duplicate]

Possible Duplicate:
Is Unit Testing worth the effort?
I know what is the purpose of unit tests generally speaking, but I found some things that bother me.
1)
Purpose of tests: early bug discovery. So, in some later iteration, if I make some changes in the code, an automated test has to alarm me and tell me that I've screwed up some long-forgotten piece of my software.
But, say that I have class A, and say that it interacts with an instance of some other class, call it class B.
When one writes a unit test for class A, one has to mock class B.
So, in some future, if one makes some changes in class B, and that causes some bugs, they will show up only in class B's unit tests, not in A's (because A's test doesn't use the real class B, but a mock of it with fixed inputs and outputs).
So I can't see how a unit test could give early notice of bugs that one isn't aware of. I'm aware of possible bugs in the class that I'm changing; I don't need a unit test for that. I need a test to alarm me about consequences of my changes that cause a bug in some "forgotten" class, and that isn't possible with unit tests. Or am I wrong?
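A sketch of the situation described above (hypothetical classes, JUnit + Mockito): A's test keeps passing even if the real B acquires a bug, because A only ever sees the mock; B's own unit tests and higher-level integration tests are what should catch the regression.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.mockito.Mockito.mock;
    import static org.mockito.Mockito.when;

    interface B { int lookup(String key); }   // hypothetical collaborator

    class A {
        private final B b;
        A(B b) { this.b = b; }
        int doubled(String key) { return 2 * b.lookup(key); }
    }

    public class ATest {
        @Test
        public void doublesWhateverBReturns() {
            // The canned answer is fixed here, so later changes (or bugs) in the
            // real B are invisible to this test; B's own tests and the
            // integration tests are what should catch them.
            B fakeB = mock(B.class);
            when(fakeB.lookup("x")).thenReturn(21);
            assertEquals(42, new A(fakeB).doubled("x"));
        }
    }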
2)
When writing mocks and the right expectations about called methods, their inputs and return values, one has to know how the class under test is going to be implemented.
And I think that contradicts test-driven development. In TDD one writes tests first and, driven by them, one writes the code.
But I can't write the right expectations (and tests in general) unless I write the code that has to be tested. That contradicts TDD, right?
Both of those problems could be solved if I used real objects instead of mocks. But that isn't unit testing then, right? In a unit test the class under test has to be isolated from the rest of the system, not using real classes but mocks.
I'm sure that I'm wrong somewhere, but I cannot find where. I've been reading and reading and I can't find what I have understood wrong.
I have just implemented TDD and unit testing for the first time. I think it's great, and I have never used it in a continuous integration environment, where I imagine it's even more valuable.
My process:
Create the skeleton of the class where the business logic will go (be it a service layer, a domain object, etc.).
Write the test, and define the method names and required functionality in the test, which then prompts my IDE to write the skeleton method (a small but nice plus).
Then write the actual class methods being tested (a minimal sketch of these two steps follows below).
Run the tests, debug.
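A minimal sketch of that test-first flow, assuming a hypothetical GreetingService rather than the poster's Spring service layer (JUnit-style):
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Step 2: the test is written first and pins down the method name and behaviour.
    public class GreetingServiceTest {
        @Test
        public void greetsTheUserByName() {
            assertEquals("Hello, Ada!", new GreetingService().greet("Ada"));
        }
    }

    // Step 3: the skeleton the IDE generated is filled in until the test passes.
    class GreetingService {
        String greet(String name) {
            return "Hello, " + name + "!";
        }
    }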
I can now no longer write code without first writing its test; it really does aid development. You catch bugs straight away, and it really helps you mentally order your code and development in a structured, simple manner. And of course, any code that breaks existing functionality is caught straight away.
I have not needed to use mock objects yet (using spring I can get away with calling service layer, and controller methods directly).
Strictly speaking, you spend more time writing unit tests than concentrating on the actual code.
There is no way you can achieve 100% code coverage. I do unit test my application, but I feel it's not worth it. Not everyone agrees with that, though.
If you change a piece of code, your unit tests fail and you figure out why they are failing, but for me it's not worth it.
I am not a big fan of unit testing... You mock everything, but mocking is a big task.
But nowadays companies are trying the TDD approach.
Simple answer: Yes, they are useful for real.
TDD is nice and all in theory but I've hardly ever seen it done in practice. Unit tests are not bound to TDD, however. You can always write unit tests for code that works to make sure it still works after changes. This use of unit tests has saved me countless hours of bug fixing because the tests immediately pointed out what went wrong.
I tend to avoid mocks. In most cases, they're not worth it in my opinion. Not using mocks might make my unit test a little more of an integration test, but it gets the job done: it makes sure your classes work as designed.
Unit tests themselves aren't stupid; it depends on your project.
I think unit tests are good, not just in TDD development but in any kind of project. The real pro for me is that if you wrap crucial code in your project with unit tests, there's a way to know whether it still does what you intended or not.
For me, the real problem is that if someone is messing with your code, there should be a way for them to know whether the tests still succeed, without having to run them manually. For instance, let's take an Eclipse + Java + Maven example. If you use unit tests in your Maven Java project, the tests run automatically every time anyone builds it. So, if someone messed up your code, the next time they build it they'll get a "BUILD FAILED" console error, pointing out that some unit tests failed.
So my point is: unit tests are good, but people should be able to find out that they're screwing things up without having to run the tests by hand each time they change code.
0) No, it doesn't. Sound stupid, I mean. It is a perfectly valid question.
The sad fact is that ours is an industry riddled with silver bullets and slick with snake oil, and if something doesn't make sense then you are right to question it. Any technique, any approach to any part of engineering is only ever applicable in certain circumstances. These are typically poorly documented at best, and so it all tends to deteriorate into utterly pointless my-method-is-better-than-yours arguments.
I have had successes and failures with unit testing, and in my experience it can be very useful indeed, when correctly applied.
It is not a good technique when you're doing a massive from-scratch development effort, because everything is changing too rapidly for the unit tests to keep up. It is an excellent technique when in a maintenance phase, because you're changing a small part of a larger whole which still needs to work the way it's supposed to.
To me, unit testing is not about test-driven development but design-by-contract: the unit test checks that the interface has not been violated. From this, it's pretty easy to see that the same person should not be writing a unit and its corresponding unit test, so for very small-scale stuff, like my own pet projects, I never use it. In larger projects, I advocate it - but only if design-by-contract is employed and the project recognizes interface specifications as important artefacts.
Unit testing as a technique is also sensitive to the unit size. It is not necessarily the case that a class is testable on its own, and in my experience the class is generally too small for unit testing. You don't want to test a class per se, you want to test a unit of functionality. Individually deployable (installable) binaries, eg jars or DLLs, make better candidates in my book, or packages if you're in a Java-style language.

Does it make sense to unit test throwaway code that runs in production?

One of the primary benefits of unit testing is to provide confidence that when one needs to later alter the code one is not breaking it. However, what benefits does unit testing provide for code that is literally used as one-off throwaway code? This throwaway code is most certainly used in production, but is never actually altered once it's deployed. Does unit testing still make sense in this situation and if so how specifically?
UPDATE: The throwaway code actually is functional-tested before hitting production. Normally, for non-throwaway code, it still makes sense to have unit tests despite functional testing occurring. The question here is whether or not it also makes sense to have unit tests in the case of throwaway code.
UPDATE 2: The reason why throwaway code is in production in the first place is that this code is literally used for one client, one time only. It's never subject to revision. It is used by a single client, a single time, for a few days. It's very specific to a single client. It's not ever used after that for any other purpose, including by the same client. Is there still value in writing unit tests in this case, despite functional tests occurring?
I'm not sure how code can be both production code and throwaway code. The fact that it is being used in production means that it is not throwaway. What makes you so sure that just because you don't plan on changing it now, someone else might not come around and reuse it or alter it at a later date?
Regardless, there are many benefits to unit testing beyond protection when refactoring. Unit tests (like all tests) help prove that the code is doing what it is supposed to do. You will need to test at some level before you go into production to prove that, and unit tests are an easy way to automate some of that.
The process of writing unit tests often exposes bugs. It forces you to think about cases that you may not have thought of when you wrote the happy day case.
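For instance, a small sketch of how writing the tests pulls the non-happy-path cases into view (the PercentParser is hypothetical, JUnit-style):
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    // Hypothetical helper inside the one-off job.
    class PercentParser {
        int parse(String text) {
            String trimmed = text.trim();
            if (trimmed.endsWith("%")) trimmed = trimmed.substring(0, trimmed.length() - 1);
            int value = Integer.parseInt(trimmed);
            if (value < 0 || value > 100) throw new IllegalArgumentException("out of range: " + value);
            return value;
        }
    }

    public class PercentParserTest {
        @Test
        public void parsesAPlainNumber() {                 // the happy-day case
            assertEquals(42, new PercentParser().parse("42"));
        }

        @Test
        public void copesWithWhitespaceAndATrailingPercentSign() {  // cases you only think of while writing tests
            assertEquals(7, new PercentParser().parse(" 7% "));
        }

        @Test(expected = IllegalArgumentException.class)
        public void rejectsValuesOverOneHundred() {
            new PercentParser().parse("250");
        }
    }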
I'd be willing to bet the time you spent asking the question and coming back to read the answers would have been more than enough time to write a few unit tests. Which was the better use of your time?
It's simple: if you care whether the code is correct or not, you need to test it. Unit test, acceptance test, service test, whatever you call it, it needs to be tested one way or another.
Tests provide benefits in three ways:
When code is written test-first, tests help drive design and ensure you write testable code. If this is truly throwaway code, then this doesn't matter to you.
Tests provide a safety net for refactoring. Again, in your situation you may not care.
Tests prove the code does what it is supposed to do. This DOES apply to throwaway code, especially if the code is written in a test-first manner. I'd much rather write tests as I go, and be confident that I'm shipping something solid to QA, than skip the tests and wait for a QA person to tell me about defects. What if a defect is reported and you have to make a significant change to the code? With automated tests, it's easy to tell if your "fix" broke anything else. Without automated tests you have to repeat the full suite of manual tests, which might be expensive.
To summarize, if the code is truly "throwaway" AND is small or very simple, then you may not get a lot of benefit. If the code is complex, or is relatively large, then tests may be worth something. (Although, I'd focus on writing high value tests only, targeting features that are hard or costly to test by hand)
You should be testing this code somehow before putting it into production. Unit testing can be a valuable part here, regardless of the need to do automated regression tests later.
Unit testing can also be used in test-driven development, where you sketch out what the code is supposed to do before actually writing the code. In this scenario, the unit tests help speed up the development process (precisely because you have automated tests at a time when the code still does change).
If I only had a dollar for every piece of "throwaway" code I ever needed to maintain...

Are brittle unit tests always a bad thing?

At times I find a very brittle test to be a good thing because when I change the intent of the code under test I want to make sure my unit test breaks so that I'm forced to refactor ... is this approach not recommended when building a large suite of regression tests?
Unit tests must be brittle -- it must be easy to break them. If they don't break, then they're not unit tests at all; they're code comments.
...
or am I missing the point of the question?
Edit: I should clarify my earlier answer.
I was being a bit pedantic about the language. "brittle" just means "easy to break". A unit test should be easy to break. The term "brittle tests" should really be "overly-brittle tests"; tests that break when they shouldn't. Even so, it's much, much easier to fix an over-brittle test than to fix a bug that slipped through an under-brittle test, so go ahead and write your brittle tests!
The general statement admonishing brittle unit tests applies mostly to shops which haven't fully embraced unit testing. For instance, when trying to convert from having no tests to having a full suite of unit tests, or when your project is the unit testing pilot project. In these cases developers get used to false positives from unit tests and begin to ignore them. Then the unit tests fall behind the production code and either get left behind or require a major effort to update.
I would say you should always aim for the least brittle tests you can that fully test your function/module, but if you have 1 or 2 that are brittle you should be okay in most cases.
IMO, as long as your tests make sure that your app code does what it should do, and if changed, the tests fail, then your tests are fine. Could you define what exactly you mean by "brittle"?
Just make sure that your tests really cover every aspect of your app code. (Within reason).
Yes, brittleness of tests is always a bad thing. But it seems to be one of those things that we have to live with in order to fully test our classes. Many classes can't be tested like black boxes that take some input and return some output, like you might see with Math.cos(). Most of them have side effects on other classes or entities in the system, and you have to test that those entities were manipulated properly by the class. That means the test has to know implementation details about the class being tested, which creates brittle tests.
Brittle tests are like proctology exams. They are definitely bad, unpleasant things, but we must put up with them because we have no better choice.
As dysfunctor points out, unit tests should be brittle in that they are easy to break. However, I would add that they should not be brittle in that they pass or fail randomly.
This happens a lot in tests that involve threads and sockets. Tests should make use of mutexes and other "wait" devices to avoid the tests failing under uncontrollable circumstances, such as high processor load.
A definite "smell" of a randomly-brittle test is the use of a sleep() function in a test.
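A sketch of the difference, assuming a hypothetical asynchronous worker: the first test sleeps and hopes, so it can fail just because the machine is busy; the second waits on an explicit signal with a generous timeout.
    import static org.junit.Assert.assertTrue;

    import java.util.concurrent.CountDownLatch;
    import java.util.concurrent.TimeUnit;
    import java.util.concurrent.atomic.AtomicBoolean;

    import org.junit.Test;

    public class AsyncCompletionTest {

        // Smell: Thread.sleep() ties the outcome to scheduler load.
        @Test
        public void sleepBasedVersionCanFailOnABusyMachine() throws Exception {
            AtomicBoolean done = new AtomicBoolean(false);
            new Thread(() -> done.set(true)).start();
            Thread.sleep(50);                 // hope 50 ms was enough...
            assertTrue(done.get());
        }

        // Better: wait on an explicit signal, with a generous timeout.
        @Test
        public void latchBasedVersionWaitsForAnExplicitSignal() throws Exception {
            CountDownLatch finished = new CountDownLatch(1);
            new Thread(finished::countDown).start();
            assertTrue("worker never signalled completion",
                       finished.await(5, TimeUnit.SECONDS));
        }
    }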
When a code change meant to be "internal" to a class (i.e. not changing the API), causes a test to fail, there are two possibilities:
The code has a new bug, OR
The code is correct, the test needs fixing
Try to reduce #2. Those are the 'brittle' tests.
Unit tests are by definition brittle. They break when related code (the system under test) changes. That's by design. That's how unit tests provide value.
What we want to avoid is tests which break when the code changes but the logic does not. For example, having the majority of existing tests break when adding a new requirement is undesirable.
Unfortunately, avoiding such brittleness is not easy. In all but the simplest of cases, the unit test will have some knowledge about the implementation of the system under test (e.g. mocked objects). As long as that is true, tests will be brittle.
The best way to avoid that problem is to avoid writing classes that need to change. This is actually easier than it sounds when adhering to SOLID principles.

How do you tell that your unit tests are correct?

I've only done minor unit testing at various points in my career. Whenever I start diving into it again, it always troubles me how to prove that my tests are correct. How can I tell that there isn't a bug in my unit test? Usually I end up running the app, proving it works, then using the unit test as a sort of regression test. What is the recommended approach and/or what is the approach you take to this problem?
Edit: I also realize that you could write small, granular unit tests that would be easy to understand. However, if you assume that small, granular code is flawless and bulletproof, you could just write small, granular programs and not need unit testing.
Edit2: For the arguments "unit testing is for making sure your changes don't break anything" and "this will only happen if the test has the exact same flaw as the code", what if the test overfits? It's possible to pass both good and bad code with a bad test. My main question is what good is unit testing since if your tests can be flawed you can't really improve your confidence in your code, can't really prove your refactoring worked, and can't really prove that you met the specification?
The unit test should express the "contract" of whatever you are testing. It's more or less the specification of the unit put into code. As such, given the specs, it should be more or less obvious whether the unit tests are "correct".
But I would not worry too much about the "correctness" of the unit tests. They are part of the software, and as such, they could well be incorrect as well. The point of unit tests - from my POV - is that they ensure the "contract" of your software is not broken by accident. That is what makes unit tests so valuable: You can dig around in the software, refactor some parts, change the algorithms in others, and your unit tests will tell you if you broke anything. Even incorrect unit tests will tell you that.
If there is a bug in your unit tests, you will find out - because the unit test fails while the tested code turns out to be correct. Well then, fix the unit test. No big deal.
Well, Dijkstra famously said:
"Testing shows the presence, not the
absence of bugs"
IOW, how would you write a unit test for the function add(int, int)?
IOW, it's a tough one.
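To make that concrete: a handful of spot checks is about the best a unit test can do even for add(int, int); it samples the behaviour, it cannot exhaust it. A sketch (the Calculator class is hypothetical, JUnit-style):
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;

    class Calculator {
        static int add(int a, int b) { return a + b; }
    }

    public class CalculatorAddTest {
        @Test
        public void addHandlesRepresentativeCases() {
            // A few sampled points: typical values, identity, negatives, a boundary.
            assertEquals(5, Calculator.add(2, 3));
            assertEquals(7, Calculator.add(7, 0));
            assertEquals(-1, Calculator.add(2, -3));
            assertEquals(Integer.MAX_VALUE, Calculator.add(Integer.MAX_VALUE - 1, 1));
            // Dijkstra's point: passing these shows no bug was found *here*, not
            // that no bug exists anywhere else in the input space.
        }
    }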
There are two ways to help ensure the correctness of your unit tests:
TDD: Write the test first, then write the code it's meant to test. That means you get to see them fail. If you know that it detects at least some classes of bugs (such as "I haven't implemented any functionality in the function I want to test yet"), then you know that it's not completely useless. It may still let some other bugs slip past, but we know that the test is not completely incorrect.
Have lots of tests. If one test lets some bugs slip past, they'll most likely cause errors further down the line, causing other tests to fail. As you notice that, and fix the offending code, you get a chance to examine why the first test didn't catch the error as expected.
And finally, of course, keep the unit tests so simple that they're unlikely to contain bugs.
For this to be a problem your code would have to be buggy in a way that coincidentally causes your tests to pass. This happened to me recently, where I was checking that a given condition (a) caused a method to fail. The test passed (i.e. the method failed), but it passed because another condition (b) caused a failure. Write your tests carefully, and make sure that unit tests test ONE thing.
Generally though, tests cannot be written to prove code is bug free. They're a step in the right direction.
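A sketch of that trap and one way out of it (hypothetical validator, JUnit-style): the weak test only checks that the method failed, the tighter one checks why it failed.
    import org.junit.Test;
    import static org.junit.Assert.assertEquals;
    import static org.junit.Assert.fail;

    // Hypothetical validator with two independent failure conditions.
    class SignupValidator {
        void validate(String email, int age) {
            if (email == null || !email.contains("@"))
                throw new IllegalArgumentException("invalid email");
            if (age < 18)
                throw new IllegalArgumentException("too young");
        }
    }

    public class SignupValidatorTest {

        // Weak: passes as long as *something* fails; here the bad email, not the age, trips it.
        @Test(expected = IllegalArgumentException.class)
        public void underageSignupFails_forWhateverReason() {
            new SignupValidator().validate("no-at-sign", 15);
        }

        // Tighter: a valid email isolates the age rule, and the reason is asserted.
        @Test
        public void underageSignupFailsBecauseOfTheAgeRule() {
            try {
                new SignupValidator().validate("ada@example.com", 15);
                fail("expected the underage signup to be rejected");
            } catch (IllegalArgumentException e) {
                assertEquals("too young", e.getMessage());
            }
        }
    }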
The complexity of the unit test code is (or should be) less, often orders of magnitude less, than that of the real code.
The chance of coding a bug in your unit test that exactly matches a bug in your real code is much less than the chance of just coding the bug in your real code (if you code a bug in your unit test that doesn't match a bug in your real code, the test should fail). Of course, if you have made incorrect assumptions in your real code you are likely to make the same assumption again, although the mindset of unit testing should still reduce even that case.
As already alluded to, when you write a unit test you have (or should have) a different mindset. When writing real code you're thinking "how do I solve this problem?". When writing a unit test you're thinking "how do I test every possible way this could break?".
As others have already said, it's not about whether you can prove that the unit tests are correct and complete (although that's almost certainly much easier with test code); it's about reducing the bug count to a very low number, and pushing it lower and lower.
Of course there has to come a point where you're confident enough in your unit tests to rely on them, for example when doing refactorings. Reaching this point is usually just a case of experience and intuition (although there are code coverage tools that help).
I had this same question, and having read the comments, here's what I now think (with due credit to the previous answers):
I think the problem may be that we both took the ostensible purpose of unit tests -- to prove the code is correct -- and applied that purpose to the tests themselves. That's fine as far as it goes, except the purpose of unit tests is not to prove that the code is correct.
As with all nontrivial endeavors, you can never be 100% sure. The correct purpose of unit tests is to reduce bugs, not eliminate them. Most specifically, as others have noted, when you make changes later on that might accidentally break something. Unit tests are just one tool to reduce bugs, and certainly should not be the only one. Ideally you combine unit testing with code review and solid QA in order to reduce bugs to a tolerable level.
Unit tests are much simpler than your code; it's not possible to make your code as simple as a unit test if your code does anything significant. If you write "small, granular" code that's easy to prove correct, then your code will consist of a huge number of small functions, and you're still going to have to determine whether they all work correctly in concert.
Since unit tests are inevitably simpler than the code they're testing, they're less likely to have bugs. Even if some of your unit tests are buggy, overall they're still going to improve the quality of your main codebase. (If your unit tests are so buggy that this isn't true, then likely your main codebase is a steaming pile as well, and you're completely screwed. I think we're all assuming a basic level of competence.)
If you DID want to apply a second level of unit testing to prove your unit tests correct, you could do so, but it's subject to diminishing returns. To look at it faux-numerically:
Assume that unit testing reduces the number of production bugs by 50%. You then write meta-unit tests (unit tests to find bugs in the unit tests). Say that this finds problems with your unit tests, reducing the production bug rate to 40%. But it took 80% as long to write the meta-unit tests as it did to write the unit tests. For 80% of the effort you only got another 20% of the gain. Maybe writing meta-meta-unit tests gives you another 5 percentage points, but now again that took 80% of the time it took to write the meta-unit tests, so for 64% of the effort of writing the unit tests (which gave you 50%) you got another 5%. Even with substantially more liberal numbers, it's not an efficient way to spend your time.
In this scenario it's clear that going past the point of writing unit tests isn't worth the effort.
I guess writing the test first (before writing the code) is a pretty good way of being sure your test is valid.
Or you could write tests for your unit tests... :P
You don't tell. Generally, the tests will be simpler than the code they're testing, so the idea is simply that they'll be less likely to have bugs than the real code will.
First let me start by saying that unit testing is NOT only about testing. It is more about the design of the application. To see this in action you should record your screen while you code and write unit tests. You will realize that you are making a lot of design decisions when writing unit tests.
How to know if my unit tests are good?
You cannot test the logic itself, period! If your code says that 2+2 = 5 and your test makes sure that 2+2 = 5, then for you 2+2 is 5. To write good unit tests you MUST have a good understanding of the domain you are working with. When you know what you are trying to accomplish, you will write good tests and good code to accomplish it. If you have many unit tests and your assumptions are wrong, then sooner or later you will find out your mistakes.
This is one of the advantages of TDD: the code acts as a test for the tests.
It is possible that you'll make equivalent errors, but it is uncommon in my experience.
But I have certainly had the case where I write a test that should fail only to have it pass, which told me my test was wrong.
When I was first learning unit testing, and before I was doing TDD, I would also deliberately break the code after writing the test to ensure that it failed as I expected. When I didn't I knew the test was broken.
I really like Bob Martin's description of this as being equivalent to double entry bookkeeping.
As above, the best way is to write the test before the actual code. Also find real-life examples of the code you're testing, if applicable (a mathematical formula or similar), and compare the unit test and expected output to that.
This is something that bugs everyone that uses unit tests. If I had to give you a short answer, I'd tell you to always trust your unit tests. But I would say that this should be backed up with your previous experience:
Did you have any defects that were reported from manual testing that the unit tests didn't catch (although they were supposed to) because there was a bug in your test?
Did you have false negatives in the past?
Are your unit tests simple enough?
Do you write them before new code or at least in parallel?
You can't prove tests are correct, and if you're trying to, you're Doing It Wrong.
Unit tests are a first screen - a smoke test - like all automated testing. They are primarily there to tell you if a change you make later on breaks stuff. They are not designed to be a proof of quality, even at 100% coverage.
The metric does make management feel better, though, and that is useful in itself sometimes!
Dominic mentioned that "For this to be a problem your code would have to be buggy in a way that coincidentally causes your tests to pass.". One technique you can use to see if this is a problem is mutation testing. It makes changes to your code, and see if it causes the unit tests to fail. If it doesn't, then it may indicate areas where the testing isn't 100% thorough.
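A hand-made sketch of what a mutation-testing tool does automatically (the Discount class is hypothetical): the tool flips operators and constants in the production code and reports any mutant that no test kills.
    import org.junit.Test;
    import static org.junit.Assert.assertTrue;

    class Discount {
        // Original: free shipping at 100 or more.
        static boolean freeShipping(int orderTotal) { return orderTotal >= 100; }

        // A mutant the tool might generate: ">=" flipped to ">".
        static boolean freeShippingMutant(int orderTotal) { return orderTotal > 100; }
    }

    public class DiscountTest {
        // This test "kills" the mutant: run against freeShippingMutant it would
        // fail, which tells us the boundary really is covered.
        @Test
        public void anOrderOfExactlyOneHundredQualifiesForFreeShipping() {
            assertTrue(Discount.freeShipping(100));
        }

        // A suite containing only tests like this one would let the mutant
        // survive, revealing that the >= boundary is effectively untested.
        @Test
        public void largeOrdersQualifyForFreeShipping() {
            assertTrue(Discount.freeShipping(500));
        }
    }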
Unit tests are your requirements made concrete. I don't know about you, but I like having the requirements specified before starting to code (TDD). By writing them and treating them like any other piece of your code you'll start to feel confident introducing new features without breaking old functionality. To ensure that all your code is needed and that the tests actually test the code, I use pitest (other mutation testing tools exist for other languages). For me, untested code is buggy code, however clean it may be.
If the test tests complex code and is complex itself I often write tests for my tests (example).
Edit: I also realize that you could write small, granular unit tests that would be easy to understand. However, if you assume that small, granular code is flawless and bulletproof, you could just write small, granular programs and not need unit testing.
The idea of unit testing is to test the most granular things, then stack together tests to prove the larger case. If you're writing large tests, you lose a bit of the benefits there, although it's probably quicker to write larger tests.