How to properly Unit Test with a large number of dependencies [closed] - c++

So I'm adding some new features to an older project. I'm able to unit test a few classes without relying on any of the features from the legacy code. However, I've gotten to a point where the next phase of functionality is so dependent on the legacy code that it seems I will basically have to run the project's main (or at least most of its setup) in order to unit test my newest class. Is there some sort of approach for dealing with ridiculous dependencies when trying to unit test?

However, I've gotten to a point where the next phase of functionality is so dependent on the legacy code that it seems I will basically have to run the project's main (or at least most of its setup) in order to unit test my newest class.
I have come across this type of problem. You are asked to write a small class with four methods, but unfortunately your code needs to create an object of a legacy class. So you need to build libraries of legacy code, link your code with them, run three dozen processes, bring up a database, load sample data into it, set up configuration for the processes, schedule events to kick in, and so on.
You can avoid some of that pain by mimicking your input (I assume you already did that).
You can also stub out legacy classes. If you don't have the source code of the legacy classes under your control, you can even stub out methods of legacy classes selectively, by putting your stub libraries ahead of the actual legacy libraries on the linker command line.
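For example, a sketch of a selective stub; the class name and canned data here are made up for illustration:

    // legacy_database.h -- the legacy header (LegacyDatabase is a made-up
    // name standing in for whatever legacy type your new code has to construct)
    #pragma once
    #include <string>

    class LegacyDatabase {
    public:
        LegacyDatabase();                        // real ctor connects to a live system
        std::string lookup(const std::string& key);
    };

    // legacy_database_stub.cpp -- compiled into a stub library for tests only.
    // When this archive appears before the real legacy library on the link
    // line, the linker resolves these symbols from the stub instead.
    #include "legacy_database.h"

    LegacyDatabase::LegacyDatabase() {}          // no connection, no setup

    std::string LegacyDatabase::lookup(const std::string& key) {
        return key == "answer" ? "42" : "";      // canned test data
    }

The test link line would then be something like g++ my_test.o -L. -lstubs -llegacy (the exact flags depend on your toolchain), and this trick only works cleanly when the stubbed symbols come from a library archive rather than from object files that are always linked in.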
There are different tricks to deal with the different types of problems that arise in unit testing. If you have a specific problem in mind, you can add it to your question so that people can help you more effectively.

Related

Should I create an interface for every class to make my code testable (unit testing) [closed]

I'm trying to learn how to create valuable unit tests.
In every tutorial I've seen, people create interfaces for every dependency in order to create mocks.
Does that mean I should always create an interface for every class I have in my project? I don't know whether it's a good or a bad idea, but every time I see a rule with "always" in it I get suspicious.
Should I always create an interface for every class I have in my project?
No.
There's no one single rule you can follow or thing you can do which would make all of your code automatically unit-testable. What you can do is write code with abstractable dependencies. If you want to test whether or not the code you've written is easily unit testable, try to write a unit test for it. If a dependency gets in your way, you have a coupled dependency. Abstract it.
How you abstract it is up to you. You have a variety of tools at your disposal:
Interfaces
Abstract classes
Concrete classes with lots of virtual members
Mockable pass-through wrapper classes (very useful for non-unit-testable 3rd party dependencies; see the sketch after this list)
etc.
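For example, a wrapper plus interface might look like this (IClock and the wrapper are made-up names, with std::chrono standing in for whatever non-mockable third-party dependency you actually have):

    // Hand-rolled abstraction over a time source.
    #include <chrono>
    #include <cstdint>

    class IClock {
    public:
        virtual ~IClock() = default;
        virtual std::int64_t nowMillis() = 0;
    };

    // Thin pass-through wrapper used by production code; nothing to test here,
    // it only forwards to the real dependency.
    class SystemClockWrapper : public IClock {
    public:
        std::int64_t nowMillis() override {
            using namespace std::chrono;
            return duration_cast<milliseconds>(
                system_clock::now().time_since_epoch()).count();
        }
    };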
You also have a variety of ways to get the dependency into the code that uses it:
Constructor injection (see the sketch after this list)
Property injection
Passing it to the method as an argument
Factories
In some circumstances, service locators (useful when introducing dependency abstraction to a legacy codebase, for example)
etc.
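Sticking with the hypothetical IClock above, constructor injection plus a hand-rolled fake in a test might look like this (SessionTimer is made up for illustration):

    #include <cassert>
    #include <cstdint>

    // Production class receives the IClock abstraction through its constructor.
    class SessionTimer {
    public:
        explicit SessionTimer(IClock& clock)
            : clock_(clock), start_(clock.nowMillis()) {}

        std::int64_t elapsedMillis() { return clock_.nowMillis() - start_; }

    private:
        IClock& clock_;
        std::int64_t start_;
    };

    // In a test, a hand-rolled fake replaces the real clock entirely.
    class FakeClock : public IClock {
    public:
        std::int64_t now = 0;
        std::int64_t nowMillis() override { return now; }
    };

    int main() {
        FakeClock clock;                 // no real time source involved
        SessionTimer timer(clock);       // constructor injection
        clock.now = 250;
        assert(timer.elapsedMillis() == 250);
    }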
How you structure your code really depends on what you're building and what makes sense for those objects. For a service class which integrates with an external system, an interface makes a lot of sense. For a domain model which has a variety of potential implementations that share functionality, an abstract class may make a lot of sense. There are many possibilities for many potential uses.
The real litmus test of whether or not your code is unit-testable isn't "do I use interfaces?", it's "can I write a meaningful unit test for this?" If the functionality is isolatable without relying on dependencies (either by not having them or by allowing them to be mocked for testing), then it seems pretty unit-testable to me.

What is the best way to test DLL functions? [closed]

I want to test my functions and flows.
Lua scripts are one way of testing: I'm able to load the C library from Lua and invoke the functions. The greatest advantage of using Lua is that if you want to change the calling scenario, you just have to change your Lua script file and execute it.
But I want to know: is there a better way to test DLL code?
There is no such thing in the universe as "the best way". [citation needed]
However, the generally accepted practice today is to write automated unit tests. That is, you use a unit testing framework that lets you express different scenarios of consuming the library code. You can think of tests as a huge bunch of little programs that use the functions and classes in the library to verify their correctness, except that you don't actually write main functions or makefiles; printing and all the other boring stuff is handled for you. Tests can pass or fail individually. Usually you can give them names and organize them into blocks. When a test fails, the framework typically explains where and why, reducing the time and effort required for debugging.
Unit tests are frequently built and run automatically, e.g. by the IDE or a watch script after rebuilding your library, and/or by a continuous integration system after a commit to a version control system.
Generally you write tests in the same language as your library (it's just simpler), but if your library is designed to interface with other languages you can of course use any or multiple of these languages as well.
There is an entire branch of programming methodology based on unit testing, called test-driven development (TDD). In short, TDD instructs you to first write a unit test for a given scenario and only then write the simplest library code that allows the test to pass.
A few examples of unit testing frameworks for C++, in no particular order (a minimal example follows the list):
Google Test
CppUnit
Boost.Test
Catch
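To give a flavour, a minimal test with Google Test might look like this (add() is a made-up function standing in for whatever your DLL exports):

    // add_test.cpp -- a minimal Google Test case.
    #include <gtest/gtest.h>

    int add(int a, int b) { return a + b; }   // would normally come from your library header

    TEST(AddTest, HandlesNegativeNumbers) {
        EXPECT_EQ(add(-2, 3), 1);
        EXPECT_EQ(add(-2, -3), -5);
    }

    // Link against gtest and gtest_main (or write your own main that calls
    // RUN_ALL_TESTS()); the resulting binary reports each pass/fail with the
    // file and line of any failing expectation.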

Testing workflow for small (i.e. one person) design in SystemVerilog [closed]

I started implementing design in SystemVerilog but I'm a bit lost as far as testing is concerned. I tried to use straightforward SystemVerilog for verification but it seems limited:
Errors are spotted by going through the log (even $error and assert don't stop the simulation), so they can easily be missed.
I cannot (?) run all the tests, as Vivado allows only one to be active.
I could put everything into a single test simulation, but the waveform for debugging would be too long, as it mixes various tests.
I could try to create my own framework, but that sounds like reinventing the wheel, which is a bad idea.
I know of SVUnit, but it seems to work with expensive simulators, not the xsim I have a license for. I'm trying to look at UVM, but I'm not sure the investment of time is worth it.
What would be a good test workflow in SV for a person coming from software (drivers), for a personal, one-person FPGA project?
The free and open source VUnit provides a single click (= single command) solution that will find your test suites and test cases, (re)compile what's needed (no recompiles between tests), run the simulations and then present the pass/fail result.
VUnit started as a VHDL unit testing framework but since much of the top-level automation is language agnostic it was updated to also support SystemVerilog. The difference between the VHDL and SV support is that VUnit provides a number of testbench support packages for VHDL which you don't find for SV. On the other hand, some of that functionality is already part of SV.
The very basics are covered in the VUnit documentation, and a UART example can be found in the examples directory.
VUnit supports simulators from Mentor, Aldec and Cadence and also the open source GHDL. It doesn't support Vivado today but it's being discussed. However, you can use the free ModelSim Altera Edition.
Disclaimer: I'm one of the authors for VUnit.
Running all tests isn't usually done in one simulator invocation. This is handled as multiple invocations by a different tool, which usually does more (distribute jobs across a compute farm, centralizes status, etc.).
Determining whether a test passed or failed is usually done by inspecting the log file. If an error was detected, it should show up in the log and you can grep for it. The simulator's exit code isn't used for this, since non-zero exit codes mean that something was wrong with the tool invocation, not with the simulation itself.
In your case, since you only have the simulator available you have to build a lot of the infrastructure. You'll need a script that can run a single test and can determine if it was a PASS or a FAIL (via grep, Perl, etc.). You can then define another script that loops over all of your tests, calls the previous script and computes a summary.
Have you tried VUnit? If you are interested in running UVM, we have a port of the UVM base class that runs on the free ModelSim (with some limitations, such as no randomisation, coverage, or SVA) as part of Go2UVM (www.go2uvm.org).
Regards,
Srini

How verbose/granular should your tests be? [closed]

I recently started a new project where I decided to adopt writing unit tests for most functions. Prior to this, my testing was limited to sporadically writing test "functions" to ensure something worked as expected, and then never bothering to update them, which was clearly not good.
Now that I've written a fair amount of code and tests, I'm noticing that I'm writing a lot of tests for my code. My code is generally quite modular, in the sense that I try to write small functions that do something simple and then chain them together in a larger function as required; again, accepted best practice.
But I now end up writing tests both for the individual "building block" functions (quite small tests) and for the function that chains them together, testing the result there as well. Obviously the result will be different, but since the inputs are similar, I'm duplicating a lot of test code (the setting-up-the-input portions, which are slightly different in each case but not by much; since they're not identical, I can't just use a test fixture).
Another concern is that I try to adhere quite strictly to testing one thing per test, so I write a single test for every different feature of the function. For instance, if there's some extra input that can be passed to the function but is optional, I write one version that adds the input and one that doesn't, and test them separately. The setup is again mostly identical except for the input I added; it's not exactly the same, so using a fixture doesn't feel "right".
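For concreteness, here is a rough sketch of the kind of test pair I mean (the names and the Google Test usage are just for illustration): the setup differs only by the optional input, and the closest I can get to sharing it is a fixture helper that takes that input as a parameter.

    // Sketch only: Input and process() are placeholders for my own types.
    #include <gtest/gtest.h>
    #include <string>

    struct Input { std::string data; bool optionalFlag; };

    // Stand-in for the real function under test.
    std::string process(const Input& in) {
        return in.optionalFlag ? in.data + "!" : in.data;
    }

    class ProcessTest : public ::testing::Test {
    protected:
        // Shared setup lives in one place; each test tweaks only what differs.
        Input makeInput(bool optionalFlag = false) {
            return Input{"common sample data", optionalFlag};
        }
    };

    TEST_F(ProcessTest, WorksWithoutOptionalInput) {
        EXPECT_FALSE(process(makeInput()).empty());
    }

    TEST_F(ProcessTest, WorksWithOptionalInput) {
        EXPECT_FALSE(process(makeInput(true)).empty());
    }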
Since this is my first project with everything being fully unit tested, I just wanted to make sure I'm doing things correctly and that the code duplication in tests is to be expected. So, my question is: am I doing things correctly? If not, what should I change?
I code in C and C++.
On a side note, I love the testing itself, I'm far more confident of my code now.
Thanks!
Your question tries to address many things, and I can try to answer only some of them.
Try to get as high coverage as possible (ideally 100%)
Do not use real resources for your unit tests, or at least try to avoid it. You can use mocks and stubs for that.
Do not unit test 3rd party libraries.
You can break dependencies using dependency injection or functors. That way the size of your tests can decrease.
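As a small illustration of that last point, the code under test can accept a std::function so a test supplies a lambda instead of the real resource (the names below are made up):

    #include <cassert>
    #include <functional>
    #include <string>

    // Instead of calling a real HTTP client directly, the code under test
    // accepts a callable, so tests can inject a canned response.
    using Fetcher = std::function<std::string(const std::string& url)>;

    bool serviceIsHealthy(const Fetcher& fetch) {
        return fetch("https://example.com/health") == "OK";
    }

    int main() {
        // Unit test: no network involved, the "fetcher" is a lambda stub.
        Fetcher fakeFetch = [](const std::string&) { return std::string("OK"); };
        assert(serviceIsHealthy(fakeFetch));
    }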

Advice for keeping large C++ project modular? [closed]

Our team is moving into much larger projects, many of which use several open source projects within them.
Any advice or best practices for keeping libraries and dependencies relatively modular and easily upgradable when new releases of them come out?
To put it another way, let's say you make a program that is a fork of an open source project. As both projects grow, what is the easiest way to maintain and share updates to the core?
Advice regarding what I'm asking only, please... I don't need "well, you should do this instead" or "why are you...". Thanks.
With clones of open source projects, one of your biggest headaches will be keeping them in sync with, and patched according to, the upstream sources. You might not care about new features, but you will surely need critical bug fixes applied.
My suggestion would be to carefully wrap such inner projects into shared libraries, so you can more or less painlessly upgrade just those parts if the ABI is not broken by the changes.
One more thing - if you find and fix bugs in an open source project - don't keep the fixes to yourself. Push the patches upstream. That will make the project better and will save you days of merging with a new version.
In order of preference
Make as few changes as possible to the third-party libraries and try to get around their limitations within your own code. Document your changes and then submit a patch.
If you can't get around their limitations, submit your change as a patch (this may be idealistic with the glacial pace of some projects).
If you can't do either of those things, document what you've changed in a centralized location so that the poor person doing the integration for new versions can figure out what the heck you were doing, and if the changes made are still needed.
Options 1 and 2 are greatly preferred (though fast and very slow, respectively), while the third option will only lead to headaches and bugs as your code base diverges from the dependencies' code base. In my code, I don't even have the third-party code loaded in the IDE unless I have to peruse a header file. This removes the temptation to change things that aren't mine.
As far as modularity goes (and this assumes you are using relatively stable third-party libraries), only program to the public-facing interface. Just because you have the source doesn't mean you have to use it all over your code. This should allow updates to be essentially drag and drop. Now, this is completely idealistic, but it's what I strive for in the code I work on.