How to use VUnit at the system level - unit testing

I have a project with multiple structural entities that each have sub-entities like the picture below.
Now I am trying to plan for the testing phase. I have checked UVVM, OSVVM, and VUnit and I found that VUnit is the easiest and fastest way to start.
I have read the documentation and played a little bit with the provided examples, and I started to write some tests for some sub-entities such as A1, A2, etc.
My question is how to use VUnit for testing at a system level and how should I structure my testbench?
Shall I write tests for all sub-entities and then for the structural entities, and then for the whole system?
Shall I write tests for the structural entities as transactions?
Shall I write tests for all sub-entities, each in a separate architecture file, and run them in sequence in the test suite of the top-level testbench?
Any other suggestions?

This is the type of question that risks being closed as opinion-based so I will just give a few general remarks that can guide you in your decisions.
1. VUnit doesn't care about the size of the thing you're testing. It can be a small function or a very large system. The basic test strategy is not a tool question but something you're free to decide (a run script sketch showing unit-level and system-level testbenches in one project follows these points).
2. Regardless of test philosophy, I think most, if not all, developers agree that it's better/cheaper to find bugs early rather than late. This means that you should test often and start on small things. How often and how small depends on the test overhead and bug frequency, which in turn depend on the tools being used, the experience of the designer, the complexity of the problem, etc. There's no universally true number here.
3. A transaction is a coding abstraction and as such it can improve the readability and maintainability of the code. However, using too much abstraction for a simple problem adds no value. If the thing you're testing has a single simple bus interface, it makes sense to create simple read and write procedures that you call one after another. If you have several interfaces and want to perform concurrent transactions on those interfaces, you need something more. The VUnit solution for this is to take your simple transaction procedures and add message passing on top of them. https://www.linkedin.com/pulse/vunit-bfms-simple-emailing-lars-asplund/ explains how it's done.
4. I'm not sure I understand your last question, but if you're thinking about only testing the small things from the top level, you would need to develop the top before testing can start. That would go against number 2.
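As a concrete illustration of point 1, here is a minimal VUnit run script sketch in Python (the library name and directory layout are assumptions for the example, not taken from the question): sub-entity testbenches and a system-level testbench can live in the same project, so the same run script exercises both levels and you can filter down to a single unit while developing.

```python
# run.py -- minimal VUnit run script (sketch; the paths and names below are assumptions)
from vunit import VUnit

vu = VUnit.from_argv()      # picks up command-line options such as --list and test name patterns
vu.add_vhdl_builtins()      # needed explicitly on recent VUnit versions

lib = vu.add_library("lib")
lib.add_source_files("src/*.vhd")          # design: structural entities and sub-entities A1, A2, ...
lib.add_source_files("test/unit/*.vhd")    # e.g. tb_a1.vhd, tb_a2.vhd -- one testbench per sub-entity
lib.add_source_files("test/system/*.vhd")  # e.g. tb_top.vhd -- system-level testbench using transactions/BFMs

vu.main()  # "python run.py" runs everything; "python run.py 'lib.tb_a1.*'" runs just one unit's tests
```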
Disclaimer: I'm one of the VUnit authors but I think I managed to avoid personal opinions here.

Related

Is it really reasonable to write tests at the early stage? [closed]

I used to write tests while developing my software, but I stopped because I noticed that, almost always, the first API and structures I thought were great turned out to be clumsy after some progress. I would need to rewrite the entire main program and the entire test suite every time.
I believe this situation is common in reality. So my questions are:
Is it really common to write tests first, as TDD prescribes? I'm just an amateur programmer, so I don't know the real development world.
If so, do people rewrite the tests again (and again) when they revamp the software API/structure? (Unless they're smart enough to think up the best one at first, unlike me.)
I don't know of anyone who recommends TDD when you don't know what you're building yet. Unless you've created a very similar system before, you prototype first, without TDD. There is a very real danger, however, of ending up putting the prototype into production without ever bringing the TDD process into play.
Some common ways of doin' it right are…
A. Throw the prototype away, and start over using TDD (can still borrow some code almost verbatim from the prototype, just re-implement following the actual TDD cycle).
B. Retrofit unit tests into the prototype, and then proceed with red, green, refactor from there.
but I stopped it because I noticed that, almost always, the first api and structures I thought great turn out to be clumsy after some progress
Test driven development should help you with the design. An API that is "clumsy" will seem clumsy as you write your tests for it.
Is it really common to write tests at first, like so said in TDD?
Depends on the developers. I use Test driven development for 99% of what I write. It aids in the design of the APIs and applications I write.
If so, do people rewrite the tests again (and again) when they revamp the software api/structure?
Depends on the level of the tests. Hopefully during a big refactor (that is, when you rewrite a chunk of code) you have some tests in place to cover the work you are about to do. Some unit tests will be thrown away, but integration and functional tests will be very important. They are what tells you that nothing has been broken.
You may have noticed I've made a point of writing test driven development and not "TDD". Test driven development is not simply "writing tests first", it is allowing the tests to drive the development cycle. The design of your API will be strongly affected by the tests that you write (contrived example: that singleton or service locator will be replaced with IoC). Writing good APIs takes practice and learning to listen to the tools you have at your disposal.
Purists say yes but in practice it works out a little different. Sometimes I write a half dozen tests and then write the code that passes them. Other times I will write several functions before writing the tests because those functions are not to be used in isolation or testing will be hard.
And yes, you may find you need to rewrite tests as APIs change.
And to the purists, even they will admit that some tests are better than none.
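As a small, hypothetical illustration of that rhythm in Python with pytest (the `slugify` function is invented for the example): the test is written first and fails, then just enough code is written to make it pass.

```python
# test_slugify.py -- written first; slugify() does not exist yet, so this run is "red"
from slugify import slugify

def test_lowercases_and_joins_words_with_dashes():
    assert slugify("Hello World") == "hello-world"

def test_strips_punctuation():
    assert slugify("C++ rocks!") == "c-rocks"
```

```python
# slugify.py -- just enough implementation to turn the tests "green"; refactor afterwards
import re

def slugify(text: str) -> str:
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)
```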
Is it really reasonable to write tests at the early stage?
No, if you are writing top-down-design, high-level integration tests that require a real database or an internet connection to another website to work.
Yes, if you are implementing bottom-up with unit testing (= testing a module in isolation).
The higher the "level", the more difficult the unit testing becomes, because you have to introduce more mocking/abstraction.
In my opinion the architectural benefits of TDD only apply when combined with unit testing, because this drives the separation of concerns.
When I started TDD I had to rewrite many tests when changing the API/architecture. With more experience, today there are only a few cases where this is necessary.
You should have a first layer of tests that verifies the externally visible behavior of your API regardless of its internals.
Updating this kind of tests when a new functional requirement emerges is not a problem. In the example you mention, it would be easy to adjust to new websites being scraped - you would just add new assertions to the tests to account for the new data fetched.
The fact that "the scraping code had to be revamped entirely" shouldn't affect the structure of these higher level tests, because from the outside, the API should be consumed exactly the same way as before.
If such a low-level technical detail does affect your high level tests, you're probably missing an abstraction that describes what data you get but hides the details of how it is retrieved.
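A minimal Python sketch of that missing abstraction (all names here are hypothetical): the high-level test talks to a small interface describing the data, so the scraping code can be revamped entirely without touching the test.

```python
from typing import Protocol

class DataSource(Protocol):
    """Describes WHAT data we get; hides HOW it is retrieved (scraping, API, cache, ...)."""
    def latest_prices(self) -> dict[str, float]: ...

class PriceReport:
    def __init__(self, source: DataSource):
        self._source = source

    def cheapest(self) -> str:
        prices = self._source.latest_prices()
        return min(prices, key=prices.get)

# High-level test: verifies externally visible behaviour, independent of how data is fetched.
class FakeSource:
    def latest_prices(self):
        return {"siteA": 10.0, "siteB": 7.5}

def test_report_picks_cheapest_site():
    assert PriceReport(FakeSource()).cheapest() == "siteB"
```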
Writing tests before you write the actual code would mean you know how your application will be designed. This is rarely the case.
As a matter of fact, I for example start by writing everything in a single file. It might have a few hundred lines or more. This way I can easily and quickly redesign the API. Later, when I decide I like it and that it's good, I start refactoring it by putting everything in meaningful namespaces and separate files.
When this is done I start writing tests to verify everything works fine and to find bugs.
TDD is just a myth. It is not possible to write tests first and the code later, especially if you are at the beginning.
You always have to keep the KISS rule in mind. If you need some crazy stuff to test your own code, like fakes or mocks, you have already failed it.

Is a class that is hard to unit test badly designed? [closed]

I am now doing unit testing on an application which was written over the year, before I started to do unit-testing diligently. I realized that the classes I wrote are hard to unit test, for the following reasons:
Relies on loading data from database. Which means I have to setup a row in the table just to run the unit test (and I am not testing database capabilities).
Requires a lot of other external classes just to get the class I am testing to its initial state.
On the whole, there doesn't seem to be anything wrong with the design except that it is too tightly coupled (which by itself is a bad thing). I figure that if I had written automated test cases along with each of the classes, hence ensuring that I don't heap extra dependencies or coupling onto a class just to make it work, the classes might be better designed.
Does this reasoning hold water? What are your experiences?
Yes, you are right. A class which is hard to unit test is (almost always) not well designed (there are exceptions, as always, but they are rare - IMHO one should better not try to explain the problem away this way). Lack of unit tests means that it is harder to maintain - you have no way of knowing whether you have broken existing functionality whenever you modify anything in it.
Moreover, if it is (co)dependent with the rest of the program, any changes in it may break things even in seemingly unrelated, far away parts of the code.
TDD is not simply a way to test your code - it is also a different way of design. Effectively using - and thinking about using - your own classes and interfaces from the very first moment may result in a very different design than the traditional way of "code and pray". One concrete result is that typically most of your critical code is insulated from the boundaries of your system, i.e. there are wrappers/adapters in place to hide e.g. the concrete DB from the rest of the system, and the "interesting" (i.e. testable) code is not within these wrappers - these are as simple as possible - but in the rest of the system.
Now, if you have a bunch of code without unit tests and want to cover it, you have a challenge. Mocking frameworks may help a lot, but still it is a pain in the ass to write unit tests for such code. A good source of techniques to deal with such issues (commonly known as legacy code) is Working Effectively with Legacy Code, by Michael Feathers.
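One technique from that book, subclass and override, sketched here in Python with invented names: pull the database access into a single overridable method (a seam), then substitute it in a testing subclass so the interesting logic runs without a real database.

```python
# Legacy-style class with the DB access extracted into one overridable method (a "seam").
class InvoiceCalculator:
    def outstanding_total(self, customer_id):
        rows = self._fetch_invoices(customer_id)  # environment-dependent part, isolated here
        return sum(row["amount"] for row in rows if not row["paid"])

    def _fetch_invoices(self, customer_id):
        raise NotImplementedError("in production this would query the real database")

# In the test, subclass and override the seam -- no database row needs to be set up.
class InvoiceCalculatorWithCannedData(InvoiceCalculator):
    def _fetch_invoices(self, customer_id):
        return [{"amount": 100, "paid": True}, {"amount": 40, "paid": False}]

def test_outstanding_total_ignores_paid_invoices():
    assert InvoiceCalculatorWithCannedData().outstanding_total(42) == 40
```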
Yes, the design could be better with looser coupling, but ultimately you need data to test against.
You should look into Mocking frameworks to simulate the database and the other classes that this class relies on so you can have better control over the testing.
I've found that dependency injection is the design pattern that most helps make my code testable (and, often, also reusable and adaptable to contexts that are different from the original one that I had designed it for). My Java-using colleagues adore Guice; I mostly program in Python, so I generally do my dependency injection "manually", since duck typing makes it easy; but it's the right fundamental DP for either static or dynamic languages (don't let me get started on "monkey patching"... let's just say it's not my favorite;-).
Once your class is ready to accept its dependencies "from the outside", instead of having them hard-coded, you can of course use fake or mock versions of the dependencies to make testing easier and faster to run -- but this also opens up other possibilities. For example, if the state of the class as currently designed is complex and hard to set up, consider the State design pattern: you can refactor the design so that the state lives in a separate dependency (which you can set up and inject as desired) and the class itself is mostly responsible for behavior (updating the state).
Of course, by refactoring in this way, you'll be introducing more and more interfaces (abstract classes, if you're using C++) -- but that's perfectly all right: it's an excellent principle to "program to an interface, not an implementation".
So, to address your question directly, you're right: the difficulty in testing is definitely the design equivalent of what extreme programming calls a "code smell". On the plus side, though, there's a pretty clear path to refactor this problem away -- you don't have to have a perfect design to start with (fortunately!-), but can enhance it as you go. I'd recommend the book Refactoring to Patterns as good guidance to this purpose.
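A short Python sketch of that refactoring (the thermostat and its collaborators are made up for illustration): the class takes its state and its dependency from the outside, so a test constructs exactly the situation it needs and substitutes a fake.

```python
class Thermostat:
    """Behaviour lives here; the state object and the heater dependency are injected."""
    def __init__(self, state, heater):
        self.state = state    # holds current/target temperature, set up freely by tests
        self.heater = heater  # anything with .turn_on() / .turn_off()

    def regulate(self):
        if self.state.current < self.state.target:
            self.heater.turn_on()
        else:
            self.heater.turn_off()

class State:
    def __init__(self, current, target):
        self.current, self.target = current, target

class FakeHeater:
    def __init__(self): self.on = False
    def turn_on(self): self.on = True
    def turn_off(self): self.on = False

def test_heater_turned_on_when_below_target():
    heater = FakeHeater()
    Thermostat(State(current=18, target=21), heater).regulate()
    assert heater.on
```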
For me, code should be designed for testability. The other way around, I consider non-testable or hard to test code as badly designed (regardless of its beauty).
In your case, maybe you can mock external dependencies to run real unit tests (in isolation).
I'll take a different tack: the code just isn't designed for testability, but that does not mean it's necessarily badly designed. A design is the product of competing *ilities, of which testability is only one. Every coding decision increases some of the *ilities while decreasing others. For example, designing for testability generally harms simplicity/readability/understandability (because it adds complexity). A good design favors the most important *ilities of your situation.
Your code isn't bad, it just maximizes the other *ilities other than testability. :-)
Update: Let me add this before I get accused of saying designing for testability isn't important.
The trick of course is to design and code to maximize the good *ilities, particularly the important ones. Which ones are the important ones depends on your situation. In my experience in my situations, designing for testability has been one of the more important *ilities.
Ideally, the large majority of your classes will be unit-testable, but you'll never get to 100% since you're bound to have at least one bit that is dedicated to binding the other classes together to an overall program. (It's best if that can be exactly one place that is obviously and trivially correct, but not all code is as pleasant to work with.)
While there isn't a way to establish if a class is "well designed" or not, at least the first thing you mention is usually a sign of a questionable design.
Instead of relying on the database, the class you are testing should have a dependency on an object whose only responsibility is getting that data, maybe using a pattern like Repository or DAO.
As for the second reason, it doesn't necessarily highlight a bad design of the class; it can be a problem with the design of the tests (not having a fixture supertype or helpers where you can set up the dependencies used in several tests) and/or the overall architecture (e.g. not using factories or inversion of control to inject the corresponding dependencies).
Also, you probably shouldn't be using "real" objects for your dependencies, but test doubles. This helps you make sure you are testing the behavior of that one class, and not that of its dependencies. I suggest you look into mocking frameworks.
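For example, with a hypothetical repository dependency and Python's built-in `unittest.mock` standing in as the mocking framework:

```python
from unittest.mock import Mock

class CustomerService:
    def __init__(self, repository):  # the data-access object is injected, not created internally
        self._repository = repository

    def is_vip(self, customer_id):
        customer = self._repository.find_by_id(customer_id)
        return customer["lifetime_spend"] > 10_000

def test_is_vip_uses_only_the_repository():
    repository = Mock()  # test double standing in for the Repository/DAO
    repository.find_by_id.return_value = {"lifetime_spend": 25_000}

    assert CustomerService(repository).is_vip(7) is True
    repository.find_by_id.assert_called_once_with(7)  # checks the collaboration, not the database
```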
I might have an ideal solution for you to consider... the Private Accessor
Recently I served in a role where we used them prolifically to avoid the very situation you're describing -- reliance upon artificial data maintained within the primary data-store.
While not the simplest technology to implement, after doing so you'll be able to easily set private members of the class under test to whatever conditions you feel they should have, right from the unit-test code (so no loading from the database). Also you'll have accomplished the goal without violating class protection levels.
Then it's basic assert & verification for desired conditions as normal.

What is test-driven development (TDD)? Is an initial design required?

I am very new to test-driven development (TDD), not yet started using it.
But I know that we have to write tests first and then the actual code to pass the test and refactor it till the design is good.
My concern over TDD is where it fits in our systems development life cycle (SDLC).
Suppose I get a requirement of making an order processing system.
Now, without having any model or design for this system, how can I start writing tests?
Shouldn't we need to define the entities and their attributes in order to proceed?
If not, is it possible to develop a big system without any design?
There are two levels of TDD: ATDD, or acceptance test driven development, and normal TDD, which is driven by unit tests.
I guess the relationship between TDD and design is influenced by the somewhat "agile" concept that source code IS the design of a software product. A lot of people reinforce this by translating TDD as Test Driven Design rather than development. This makes a lot of sense as TDD should be seen as having a lot more to do with driving the design than testing. Having acceptance and unit tests at the end of it is a nice side effect.
I cannot really say too much about where it fits into your SDLC without knowing more about it, but one nice workflow is:
For every user story:
Write acceptance tests using a tool like FitNesse or Cucumber; these specify what the desired outputs are for the given inputs, from a perspective that the user understands. This level automates the specifications, or can even replace specification documentation in ideal situations (a sketch of such a story-level test follows this workflow).
Now you will probably have a vague idea of the sort of software design you might need as far as classes / behaviour etc goes.
For each behaviour:
Write a failing test that shows how calling code would like to use the class.
Implement the behaviour that makes the test pass
Refactor both the test and actual code to reflect good design.
Go onto the next behaviour.
Go onto the next user story.
Of course, the whole time you will be thinking of the evolving high-level design of the system. Ideally TDD will lead to a flexible design at the lower levels that permits the appropriate high-level design to evolve as you go rather than trying to guess it at the beginning.
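For the order-processing system from the question, the first story-level test might start out like the sketch below (`OrderSystem` and its methods are invented here as an assumption about the API to be designed, not an existing one); it fails until the behaviour exists and stays valid while the internals evolve.

```python
# Acceptance-style test for the story "As a customer, I can start a new order and add items to it".
from order_system import OrderSystem  # hypothetical top-level entry point, yet to be written

def test_customer_can_start_an_order_and_add_items():
    system = OrderSystem()
    order_id = system.start_order(customer="alice")

    system.add_item(order_id, sku="BOOK-1", quantity=2)

    order = system.get_order(order_id)
    assert order.customer == "alice"
    assert order.items == [("BOOK-1", 2)]
```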
It should be called Test Driven Design, because that is what it is.
There is no practical reason to separate the design into a specific phase of the project. Design happens all the time. From the initial discussion with the stakeholder, through user story creation, estimation, and then of course during your TDD sessions.
If you want to formalize the design using UML or whatever, that is fine, just keep in mind that the code is the design. Everything else is just an approximation.
And remember that You Aren't Gonna Need It (YAGNI) applies to everything, including design documents.
Writing tests first forces you to think about the problem domain first, and acts as a kind of specification. Then, in a second step, you move to the solution domain and implement the functionality.
TDD works well iteratively:
Define your initial problem domain (can be small, evolutionary prototype)
Implement it
Grow the problem domain (add features, grow the prototype)
Refactor and implement it
Repeat from step 3.
Of course you need to have a vague architectural vision upfront (technologies, layers, non-functional requirements, etc.). But the features that bring added value to your application can be introduced nicely with TDD.
See related question TDD: good for a starter?
With TDD, you don't care much about design. The idea is that you must first learn what you need before you can start with a useful design. The tests make sure that you can easily and reliably change your application when the time comes that you need to decide on your design.
Without TDD, this happens: You make a design (which is probably too complex in some areas, plus you forgot to take some important facts into account since you didn't know about them). Then you start implementing the design. With time, you realize all the shortcomings of your design, so you change it. But changing the design doesn't change your program. Now, you try to change your code to fit the new design. Since the code wasn't written to be changed easily, this will eventually fail, leaving you with two designs (one broken and the other in an unknown state) and code which doesn't fit either.
To start with TDD, turn your requirements into tests. To do this, ask "How would I know that this requirement is fulfilled?" When you can answer this question, write a test that implements the answer to it. This gives you the API which your (to be written) code must adhere to. It's a very simple design but one that a) always works and b) is flexible (because you can't test inflexible code).
Also starting with the test will turn you into your own customer. Since you try hard to make the test as simple as possible, you will create a simple API that makes the test work.
And over time, you'll learn enough about your problem domain to be able to make a real design. Since you have plenty of tests, you can then change your code to fit the design. Without terminally breaking anything on the way.
That's the theory :-) In practice, you will encounter a couple of problems but it works pretty well. Or rather, it works better than anything else I've encountered so far.
Well, of course you need a solid functional analysis first, including a domain model; without knowing what you'll have to create in the first place, it's impossible to write your unit tests.
I use test-driven development to program and I can say from experience it helps create more robust, focused and simpler code. My recipe for TDD goes something like this:
Using a unit-test framework (I've written my own) write code as you wish to use it and tests to ensure return values etc. are correct. This ensures you only write the code you're actually going to use. I also add a few more tests to check for edge cases.
Compile - you will get compiler errors!!!
For each error add declarations until you get no compiler errors. This ensures you have the minimum declarations for your code.
Link - you will get linker errors!!!
Write enough implementation code to remove the linker errors.
Run - your unit tests will fail. Write enough code to make the tests succeed.
You've finished at this point. You have written the minimum code you need to implement your feature, and you know it is robust because of your tests. You will also be able to detect if you break things in the future. If you find any bugs, add a unit test to test for that bug (you may not have thought of an edge case, for example). And you know that if you add more features to your code you won't make it incompatible with existing code that uses your feature.
I love this method. Makes me feel warm and fuzzy inside.
TDD implies that there is some existing design (external interface) to start with. You have to have some kind of design in mind in order to start writing a test. Some people will say that TDD itself requires less detailed design, since the act of writing tests provides feedback to the design process, but these concepts are generally orthogonal.
You need some form of specification, rather than a form of design -- design is about how you go about implementing something, specification is about what you're going to implement.
Most common form of specs I've seen used with TDD (and other agile processes) are user stories -- an informal kind of "use case" which tends to be expressed in somewhat stereotyped English sentences like "As a <role>, I can <do something>" (the form of user stories is more or less rigid depending on the exact style/process in use).
For example, "As a customer, I can start a new order", "As a customer, I can add an entry to an existing order of mine", and so forth, might be typical if that's what your "order entry" system is about (the user stories would be pretty different if the system wasn't "self-service" for users but rather intended to be used by sales reps entering orders on behalf of users, of course -- without knowing what kind of order-entry system is meant, it's impossible to proceed sensibly, which is why I say you do need some kind of specification about what the system's going to do, though typically not yet a complete idea about how it's going to do it).
Let me share my view:
If you want to build an application, along the way you need to test it: e.g. check the values of variables you create by code inspection, or quickly drop in a button that you can click to execute a part of the code and pop up a dialog showing the result of the operation, etc. TDD, on the other hand, changes your mindset.
Commonly, you just rely on the development environment, like Visual Studio, to detect errors as you code and compile; somewhere in your head you know the requirement, and you just code and test via buttons, pop-ups or code inspection. This is "syntax-debugging-driven development". TDD, by contrast, is "semantic-debugging-driven development", because you first write down your thoughts/goals for your application as tests (a more dynamic and repeatable version of a whiteboard) that exercise the logic (the "semantics") of your application and fail whenever you have a semantic error, even if the application compiles without syntax errors.
In practice you may not know, or have, all the information required to build the application. Since TDD kind of forces you to write tests first, you are compelled to ask more questions about the functioning of the application at a very early stage of development, rather than building a lot only to find out that much of what you have written is not required (or at least not at the moment). You can really avoid wasting your precious time with TDD (even though it may not feel like that initially).

Preparing unit tests: What's important to keep in mind when working on a software architecture? [closed]

Let's say I'm starting a new project, quality is a top priority.
I plan on doing extensive unit testing, what's important to keep in mind when I'm working on the architecture to ease and empower further unit testing?
Edit: I read an article some time ago (I can't find it now) about how decoupling instantiation code from class behavior can be helpful when unit testing. That's the kind of design tip I'm seeking here.
Ease of testing comes from being able to replace as many as possible of the dependencies your method has with test code (mocks, fakes, etc.). The currently recommended way to accomplish this is through dependency inversion, aka the Hollywood Principle: "Don't call us, we'll call you." In other words, your code should "ask for things, don't look for things."
Once you start thinking this way you'll find code can easily have dependencies on many things. Not only do you have dependencies on other objects, but databases, files, environment variables, OS APIs, globals, singletons, etc. By adhering to a good architecture, you minimize most of these dependencies by providing them via the appropriate layers. So when it comes time to test, you don't need a working database full of test data, you can simply replace the data object with a mock data object.
This also means you have to carefully sort out your object construction from your object execution. The "new" statement placed in a constructor generates a dependency that is very hard to replace with a test mock. It's better to pass those dependencies in via constructor arguments.
Also, keep the Law of Demeter in mind. Don't dig more than one layer deep into an object, or else you create hidden dependencies. Calling Flintstones.Wilma.addChild(pebbles); means what you thought was a dependence on "Flintstones" really is a dependence on both "Flintstones" and "Wilma".
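Continuing the Flintstones example in Python (names invented, a sketch rather than a prescription), the before/after difference and its effect on testing look like this:

```python
# Before -- hidden dependency: exercising one line of behaviour requires building a whole
# Flintstones household, because the code reaches through it to get at Wilma.
def register_birth(flintstones, child):
    flintstones.wilma.add_child(child)

# After -- "ask for things": depend directly on the parent you actually need.
def register_birth(parent, child):
    parent.add_child(child)

# Testing the "after" version needs nothing but a tiny stand-in object.
class FakeParent:
    def __init__(self): self.children = []
    def add_child(self, child): self.children.append(child)

def test_register_birth_records_the_child():
    wilma = FakeParent()
    register_birth(wilma, "pebbles")
    assert wilma.children == ["pebbles"]
```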
Make sure that your code is testable by making it highly cohesive and loosely coupled. And make sure you know how to use mocking tools to mock out the dependencies during unit tests.
I recommend you get familiar with the SOLID principles, so that you can write more testable code.
You might also want to check out these two SO questions:
Unit Test Adoption
What Should Be A Unit
Some random thoughts:
Define your interfaces: decouple the functional modules from each other, and decide how they will communicate with each other. The interface is the “contract” between the developers of different modules. Then, if your tests operate on the interfaces, you're ensuring that the teams can treat each other's modules as black boxes, and therefore work independently.
Build and test at least the basic functionality of the UI first. Once your project can “talk” to you, it can tell you what's working and what's not ... but only if it's not lying to you. (Bonus: if your developers have no choice but to use the UI, you'll quickly identify any shortcomings in ease-of-use, work flow, etc.)
Test at the lowest practical level: the more confident you are that the little pieces work, the easier it will be to combine them into a working whole.
Write at least one test for each feature, based on the specifications, before you start coding. After all, the features are the reason your customers will buy your product. Be sure it's designed to do what it's supposed to do!
Don't be satisfied when it does what it's supposed to do; ensure it doesn't do what it's not supposed to do! Feed it bad data, use it in an illogical way, disconnect the network cable during data transfer, run it alongside conflicting applications. Your customers will.
Good luck!
Your tests will only ever be as good as your requirements. They can be requirements that you come up with up front all at once, they can be requirements that you come up with one at a time as you add features, or they can be requirements that you come up with after you ship it and people start reporting a boat load of bugs, but you can't write a good test if no one can or will document exactly what the thing is supposed to do.

How to write a dynamically self-balancing system for testability?

I am about to embark on writing a system that needs to re-balance its load distribution amongst the remaining nodes once one or more of the nodes involved fail. Does anyone have any good references on what to avoid and what works?
In particular I'm curious how one should start in order to build such a system to be able to unit-test it.
This question smells like my distributed systems class. So I feel I should point out the textbook we used.
It covers many aspects of distributed systems at an abstract level, so a lot of its content would apply to what you're going to do.
It does a pretty good job of pointing out pitfalls and common mistakes, as well as giving possible solutions.
The first edition is available for free download from the authors.
The book doesn't really cover unit-testing of distributed systems though. I could see an entire book written on just that.
This sounds like a task that involves a considerable degree of out-of-process communication and other environment-dependent code.
To make your code testable, it is important to abstract such code away from your main logic so that you can unit test the core engine without having to depend on any of these environment-specific things.
The recommended approach is to hide such components behind an interface that you can then replace with so-called Test Doubles in unit tests.
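A Python sketch of that separation (the interface and names are invented for illustration): the rebalancing engine only sees an abstract view of the cluster, so a unit test can simulate node failures without any real network.

```python
from typing import Protocol

class ClusterView(Protocol):
    """Abstracts the environment-dependent part: node discovery, heartbeats, RPC, ..."""
    def alive_nodes(self) -> list[str]: ...

def rebalance(shards: list[str], cluster: ClusterView) -> dict[str, list[str]]:
    """Core engine: pure, deterministic logic that is easy to unit test."""
    nodes = sorted(cluster.alive_nodes())
    assignment = {node: [] for node in nodes}
    for i, shard in enumerate(sorted(shards)):
        assignment[nodes[i % len(nodes)]].append(shard)
    return assignment

# Unit test with a test double -- "fail" a node by simply leaving it out of the fake's answer.
class FakeCluster:
    def __init__(self, nodes): self._nodes = nodes
    def alive_nodes(self): return self._nodes

def test_load_is_redistributed_to_remaining_nodes():
    shards = ["s1", "s2", "s3", "s4"]
    before = rebalance(shards, FakeCluster(["n1", "n2"]))
    after = rebalance(shards, FakeCluster(["n1"]))  # n2 has failed
    assert sum(len(s) for s in before.values()) == 4
    assert after == {"n1": ["s1", "s2", "s3", "s4"]}
```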
The book xUnit Test Patterns covers many of these things, and much more, very well.