what is the best way to test DLL functions ? [closed] - c++

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 5 years ago.
I want to test my functions and flows.
Lua scripts are one way to test: I'm able to load a C library from Lua and invoke its functions. The great advantage of using Lua is that if you want to change the calling scenario, you just change your Lua script file and re-run it.
But I want to know: is there a better way to test DLL code?

There is no such thing in the universe as "the best way".
However, the generally accepted practice today is to write automated unit tests. That is, you use a unit testing framework that lets you express different scenarios for consuming the library code. You can think of the tests as a large collection of little programs that use the functions and classes in the library to verify their correctness, except that you don't actually write main functions or makefiles; printing and all the other boring plumbing is handled for you. Tests can pass or fail individually. Usually you can give them names and organize them into groups. When a test fails, the framework typically explains where and why, reducing the time and effort required for debugging.
Unit tests are frequently built and run automatically, e.g. by the IDE or a watch script after rebuilding your library, and/or by a continuous integration system after a commit to version control.
Generally you write tests in the same language as your library (it's just simpler), but if your library is designed to interface with other languages, you can of course use any (or several) of those languages as well.
There is an entire branch of programming methodology that is based on unit testing, called Test-driven development (TDD). In short, TDD instructs you to write a unit test first for a given scenario and only then the simplest library code that allows the test to pass.
A few examples of unit testing frameworks for C++ (in no particular order):
Google Test
CppUnit
Boost.Test
Catch

Related

Testing workflow for small (i.e. one person) design in SystemVerilog [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 5 years ago.
I started implementing a design in SystemVerilog, but I'm a bit lost as far as testing is concerned. I tried to use straightforward SystemVerilog for verification, but it seems limited:
Errors are spotted by reading through the log (even $error and assert don't stop the simulation), so they can easily be missed.
I cannot (?) run all the tests, as Vivado allows only one to be active.
I could put everything in a single test simulation, but the waveform for debugging becomes too long as it mixes various tests.
I could try to create my own framework, but that sounds like reinventing the wheel, which is a bad idea.
I know of SVUnit, but it seems to work only with expensive simulators, not the xsim I have a license for. I'm looking at UVM, but I'm not sure the investment of time is worth it.
What would be a good test workflow in SystemVerilog for a person coming from software (drivers), for a personal, one-person FPGA project?
The free and open source VUnit provides a single click (= single command) solution that will find your test suites and test cases, (re)compile what's needed (no recompiles between tests), run the simulations and then present the pass/fail result.
VUnit started as a VHDL unit testing framework but since much of the top-level automation is language agnostic it was updated to also support SystemVerilog. The difference between the VHDL and SV support is that VUnit provides a number of testbench support packages for VHDL which you don't find for SV. On the other hand, some of that functionality is already part of SV.
The very basics are covered in the VUnit documentation; a UART example can be found in the examples directory.
VUnit supports simulators from Mentor, Aldec and Cadence and also the open source GHDL. It doesn't support Vivado today but it's being discussed. However, you can use the free ModelSim Altera Edition.
Disclaimer: I'm one of the authors for VUnit.
Running all tests isn't usually done in one simulator invocation. This is handled as multiple invocations by a different tool, which usually does more (distribute jobs across a compute farm, centralizes status, etc.).
Determining whether a test passed or failed is usually done by inspecting the log file. If an error was detected, it should show up in the log and you can grep for it. The simulator's exit code isn't used for this, since non-zero exit codes mean that something was wrong with the tool invocation, not with the simulation itself.
In your case, since you only have the simulator available you have to build a lot of the infrastructure. You'll need a script that can run a single test and can determine if it was a PASS or a FAIL (via grep, Perl, etc.). You can then define another script that loops over all of your tests, calls the previous script and computes a summary.
Have you tried VUnit? If you are interested in running UVM, we have a port of the UVM base classes that runs on the free ModelSim (with some limitations, such as no randomisation, coverage, or SVA) as part of Go2UVM (www.go2uvm.org).
Regards,
Srini

Confusion about unit testing frameworks? [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 5 years ago.
I get the concept of unit testing and TDD on a whole.
However, I'm still a little confused about what exactly unit testing frameworks are. Whenever I read about unit testing, it's usually an explanation of what it is, followed by "oh, here are the frameworks for this language, e.g. JUnit".
But what does that really mean? Are frameworks just a sort of testing library that allows programmers to write simpler/more efficient unit tests?
Also, what are the benefits of using a framework? As I understand it, unit testing is done on small chunks of code at a time, e.g. a method. However, I could write a test for a method individually without using a unit testing framework. Is it maybe for standardization of testing practices?
I'm just very new to testing and unit-testing, clarification on some basic concepts would be great.
A bit of a broad question, but I think there are certain thoughts that could count as facts for an answer:
When 5, 10, 100, ... people set out to work with the same idea or concept (for example, unit testing), then most likely certain patterns or best practices will evolve. People have ideas, and by trial and error they find out which of those ideas are helpful and which are not.
Then people start to communicate their ideas, and those "commonly used" patterns undergo discussions and get further refined.
And sooner or later, people start thinking "I am doing the same task over and over again; I should write a program for me to do that".
And that is how frameworks come into existence: they are tools to support certain aspects of a specific activity.
Let's give an example: using a framework like JUnit, I can completely focus on writing test cases. I don't need to worry about accumulation of failure statistics; I don't need to worry how to make sure that really all my tests are executed when I want that to happen.
I simply understand how to use the JUnit framework; and I know how to further utilize JUnit test cases in conjunction with build systems such as gradle or maven - in order to have all my unit tests executed automatically; each time I push a commit into my source code management system for example.
Of course you can reinvent the wheel here and implement all of that yourself. But that is just a waste of time. It is like saying: "I want to move my crop to the market; let's start by building the truck myself." No. You rent or buy a pre-built truck, and you use that to do what you actually want to do (move things around).

How to properly Unit Test with a large number of dependencies [closed]

Closed. This question needs to be more focused. It is not currently accepting answers. Closed 6 years ago.
So I'm adding some new features to an older project. I'm able to unit test a few classes without relying on any features from the legacy code. However, I've gotten to a point where the next phase of functionality is so dependent on the legacy code that it seems I will basically have to run the project's main (or at least most of its setup) in order to unit test my newest class. Is there some sort of approach for dealing with heavy dependencies when trying to unit test?
However I've gotten to a point where the next phase in functionality is just so dependent on the legacy code that it seems like I will basically have to run the main from the project (or at least most of the set up) in order to be able to unit test my newest class.
I have come across this type of problem. You are asked to write a small class with four methods.
But, unfortunately, your code needs to create an object of a legacy class. So you need to build libraries of legacy code, link your code with them, run three dozen processes, bring up a database, fill the database with sample data, set up configuration for the processes, schedule events to kick in, etc.
You can avoid some of that pain by mimicking your input (I assume you have already done that).
You can also stub out the legacy classes. If the source code of the legacy classes is not under your control, you can even stub out their methods selectively (by putting your stub libraries ahead of the actual legacy libraries on the compiler's command line).
There are different tricks for the different types of problems that arise in unit testing. If you have a specific problem in mind, you can add it to your question so that people can help you in a better way.

Guidelines for writing a test suite [closed]

Closed. This question is opinion-based. It is not currently accepting answers. Closed 6 years ago.
What are the best practices/guidelines for writing test suite for C++ projects?
This is a very broad question. For unit testing and Test-Driven Development (TDD), there is some useful (if somewhat platitudinous in parts) guidance from Microsoft; you can overlook the Visual Studio-specific advice if it does not apply.
If you are looking for guidance on system or performance testing, I would clarify your question. There is a decent broader rationale in the docs for Boost.Test.
There are several unit testing best practices to review.
Firstly, TDD is an invaluable practice. Of all the development methodologies available, TDD is probably the one that will most radically improve development for many years to come, and the investment is minimal. Any QA engineer will tell you that developers can't write successful software without corresponding tests. With TDD, the practice is to write those tests before even writing the implementation and, ideally, to write the tests so that they can run as part of an unattended build script. It takes discipline to begin this habit, but once it is established, coding without the TDD approach feels like driving without a seatbelt.
For the tests themselves, there are some additional principles that will help with successful testing:
Avoid creating dependencies between tests such that tests need to run in a particular order. Each test should be autonomous.
Use test initialization code to verify that test cleanup executed successfully, and re-run the cleanup before executing a test if it did not.
Write tests before writing any production code implementation.
Create one test class corresponding to each class within the production code. This simplifies the test organization and makes it easy to choose where to place each test.
Use Visual Studio to generate the initial test project. This significantly reduces the number of steps needed when manually setting up a test project and associating it with the production project.
Avoid creating machine-dependent tests, such as tests that depend on a particular directory path.
Create mock objects to test interfaces. Mock objects are implemented within a test project to verify that the API matches the required functionality.
Verify that all tests run successfully before moving on to creating a new test. That way you ensure that you fix code immediately upon breaking it.
Maximize the number of tests that can be run unattended. Make absolutely certain that there is no reasonable unattended testing solution before relying solely on manual testing.
TDD is certainly one set of best practices. When retrofitting tests, I aim for two things: code coverage and boundary-condition coverage. Basically, you should pick inputs to functions such that (A) all code paths are tested, and better yet all permutations of code paths (sometimes that can be a large number of cases and not really necessary if the path differences are superficial), and (B) all boundary conditions are tested (those are the conditions that cause variation in code-path selection). If your code has an if (x > 5) in it, you test with x = 5 and x = 6 to get both sides of the boundary.

How do you implement unit-testing in large scale C++ projects? [closed]

Closed. This question does not meet Stack Overflow guidelines, as it seeks recommendations for books, tools, or software libraries. It is not currently accepting answers. Closed 5 years ago.
Locked. This question and its answers are locked because the question is off-topic but has historical significance. It is not currently accepting new answers or interactions.
I believe strongly in using unit-tests as part of building large multi-platform applications. We currently are planning on having our unit-tests within a separate project. This has the benefit of keeping our code base clean. I think, however, that this would separate the test code from the implementation of the unit. What do you think of this approach and are there any tools like JUnit for c++ applications?
There are many unit test frameworks for C++.
CppUnit is certainly not the one I would choose (at least in its stable version 1.x), as it lacks many features and requires a lot of redundant lines of code.
So far, my preferred framework is CxxTest, and I plan on evaluating Fructose some day.
Anyway, there are a few "papers" that evaluate C++ unit testing frameworks:
Exploring the C++ Unit Testing Framework Jungle, By Noel Llopis
an article in Overload Journal #78
That's a reasonable approach.
I've had very good results both with UnitTest++ and Boost.Test
I've looked at CppUnit, but to me, it felt more like a translation of the JUnit stuff than something aimed at C++.
Update: These days I prefer using Catch. I found it to be effective and simple to use.
You should separate your base code into a shared (dynamic) library and then write the major part of your unit tests against this library.
In 2008 I was involved in the large LSB Infrastructure project run by The Linux Foundation. One of the aims of this project was to write unit tests for 40,000 functions from the Linux core libraries. In the context of this project we created the AZOV technology and a basic tool named API Sanity Autotest in order to generate all the tests automatically. You may try to use this tool to generate unit tests for your base library (or libraries).
I think you're on the right path with unit testing, and it's a great plan for improving the reliability of your product.
However, unit testing is not going to solve all your problems when porting your application to different platforms or even different operating systems. The reason is the process unit testing goes through to uncover bugs in your application: it simply throws as many inputs as imaginable into your system and waits for a result on the other end. It's like getting a monkey to constantly pound at the keyboard and observing the results (beta testers).
To take it to the next step, with good unit testing you need to focus on the internal design of your application. The best approach I found was to use a design pattern or design process called "contract programming" or "design by contract". Another book that is very helpful for building reliability into your core design is:
Debugging the Development Process: Practical Strategies for Staying Focused, Hitting Ship Dates, and Building Solid Teams.
In our development team, we looked very closely at what we consider to be programmer errors, developer errors, and design errors, and at how we could use unit testing while also building reliability into our software package through design by contract and by following the advice of Debugging the Development Process.
I use UnitTest++. The tests are in a separate project, but the actual tests are intertwined with the actual code: they live in a folder under the section under test.
ie:
MyProject\src\ <- source of the actual app
MyProject\src\tests <- the source of the tests
If you have nested folders (and who doesn't) then they too will have their own \tests subdirectory.
CppUnit is a direct equivalent of JUnit for C++ applications:
http://cppunit.sourceforge.net/cppunit-wiki
Personally, I created the unit tests in a different project and created a separate build configuration which built all the unit tests and the dependent source code. In some cases I wanted to test private member functions of a class, so I made the test class a friend of the class under test, but hid the friend declarations when building in "non-test" configurations through preprocessor declarations.
I ended up doing these coding gymnastics because I was integrating tests into legacy code, however. If you are starting out with unit testing in mind, a cleaner design may come more easily.
You can create a unit test project for each library in your source tree in a subdirectory of that library. You end up with a test driver application for each library, which makes it easier to run a single suite of tests. By putting them in a subdirectory, it keeps your code base clean, but also keeps the tests close to the code.
Scripts can easily be written to run all of the test suites in your source tree and collect the results.
I've been using a customized version of the original CppUnit for years with great success, but there are other alternatives now. GoogleTest looks interesting.
Using TUT: http://tut-framework.sourceforge.net/
Very simple: header-only, no macros. It can generate XML results.
CxxTest is also worth a look: a lightweight, easy-to-use, cross-platform, JUnit/CppUnit/xUnit-like framework for C++. We find it very straightforward to add and develop tests.
Aeryn is another C++ testing framework worth looking at.