TDD approaches for Development DBAs? [closed]

The software development team at my company develops using TDD and BDD practices. Consequently, we have lots of unit, integration and acceptance tests to let us know whether our code base is working as expected. Needless to say, we could no longer live without the constant feedback these tests give us.
The development DBA on our team writes table views with complex logic. He develops these without unit tests, and they invariably break when he does subsequent development, causing frustration in the software development team.
My question is: are DBAs encouraged to use TDD practices when working in an agile environment? Do DBAs have test frameworks that allow them to work this way? We use IBM's DB2 database; are there any test frameworks for this database that allow database views to be developed in a TDD manner?

In the past I've used two approaches:
Having a very thin Data Access layer in the application and writing tests around that. In other words (assuming your DBA uses stored procedures), for each new sproc a method to access it is written, and a test is created which exercises it appropriately (or better, the test comes first). This is nice because it integrates easily with test runners, and you can use transactions to roll back tests so they leave no side effects (see the sketch after these two approaches).
Another option is to use native SQL testing frameworks. I've evaluated tSQLt, which is a SQL Server framework and so not applicable in your case, but the approach is solid and there may be equivalent frameworks for DB2.
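A minimal sketch of the first approach, adapted to the asker's view problem: a plain JUnit test over JDBC that inserts fixture rows inside a transaction, queries the view, and rolls everything back. The connection URL, credentials, table and view names here are all hypothetical, and the exact driver setup will depend on your environment:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import org.junit.jupiter.api.Test;

class ActiveUsersViewTest {
    @Test
    void viewFiltersInactiveUsers() throws SQLException {
        // Hypothetical DB2 connection; in practice this would point
        // at a dedicated test database or schema.
        try (Connection c = DriverManager.getConnection(
                "jdbc:db2://localhost:50000/TESTDB", "tester", "secret")) {
            c.setAutoCommit(false); // fixture data is rolled back below
            try (Statement s = c.createStatement()) {
                s.execute("INSERT INTO users (id, name, active) VALUES (1, 'a', 'Y')");
                s.execute("INSERT INTO users (id, name, active) VALUES (2, 'b', 'N')");
                try (ResultSet rs = s.executeQuery(
                        "SELECT COUNT(*) FROM active_users_v")) {
                    rs.next();
                    assertEquals(1, rs.getInt(1));
                }
            } finally {
                c.rollback(); // no side effects left behind
            }
        }
    }
}
```

Because nothing is committed, a test like this can run repeatedly against a shared test database without polluting it.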

There are several frameworks for testing routines in different kinds of databases. Some of them follow the xUnit conventions, which allows you to have JUnit-like tests at the database level.
For DB2, there is a framework called db2unit: https://github.com/angoca/db2unit
With this framework you can compare objects (numbers, dates, booleans, strings, etc.) just as you do in JUnit.
You can include the results of your database-level tests in your global test run by capturing the error code, and this can be wired into a Continuous Integration system. db2unit uses Travis CI to test itself.


Why should I write unit tests if I already have E2E tests [closed]

If we're already doing E2E (end-to-end) testing, do we also need to write unit tests?
Given that all of our functional testing is done in end-to-end tests, what are the pros and cons of doing both?
Because E2E tests are not a perfect substitute for unit tests.
In particular:
They are slow to run
E2E tests use actual services, not mocks. A real database is vastly slower than an in-memory mock of a database. You also have to build your whole project, set up seed data, and so on.
If I have to wait a long time for a test run to finish, I'm probably going to start skipping it more often.
They don't isolate failure
E2E tests tell you that a whole scenario is broken, e.g. 'User login failed'. They don't tell you which part of which component in that scenario is broken, which makes it harder to pinpoint the code that caused the failure.
They hurt reusability
You can't pull a component out of your system and drop it into another system with confidence, because there are no unit tests to run for that component in the new environment.
You can't do per-unit TDD
If you want to carve out a new component (unit) for your system and use TDD while you're at it, you're at a dead end without unit tests. Unit tests and TDD go hand in hand.
That being said:
Having both E2E tests and unit tests is what you should be aiming for, if your resources allow it.
Having only E2E tests is always better than having no tests at all.
A common illustration of the ideal test suite is the Testing Pyramid: a broad base of unit tests, fewer integration tests above them, and only a handful of E2E tests at the top.
This article from the Google Testing Blog goes into more detail: Just Say No to More End-to-End Tests. While I don't agree that you shouldn't write E2E tests, it lays out the pros and cons of each approach in depth.
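To make the failure-isolation point concrete, here is a minimal sketch of the kind of fast, isolated unit test described above, using JUnit 5 and Mockito. LoginService and UserRepository are hypothetical names invented purely for illustration:

```java
import static org.junit.jupiter.api.Assertions.assertFalse;
import static org.mockito.Mockito.*;
import org.junit.jupiter.api.Test;

// Hypothetical collaborators, named for illustration only.
interface UserRepository {
    String passwordHashFor(String username);
}

class LoginService {
    private final UserRepository users;
    LoginService(UserRepository users) { this.users = users; }
    boolean authenticate(String username, String passwordHash) {
        String stored = users.passwordHashFor(username);
        return stored != null && stored.equals(passwordHash);
    }
}

class LoginServiceTest {
    @Test
    void rejectsUnknownUser() {
        // The real database is replaced by an in-memory mock, so the
        // test runs in milliseconds, and a failure points directly at
        // LoginService rather than at the whole login scenario.
        UserRepository repo = mock(UserRepository.class);
        when(repo.passwordHashFor("alice")).thenReturn(null);

        assertFalse(new LoginService(repo).authenticate("alice", "any-hash"));
    }
}
```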
Unit testing is an important aspect of the software life cycle and improves software quality. If test cases have been written, any new developer can understand the functionality and check the different outputs based on conditions. It should be a mandatory part of the process.

Confusion about unit testing frameworks? [closed]

I get the concept of unit testing and TDD on a whole.
However, I'm still a little confused about what exactly unit testing frameworks are. Whenever I read about unit testing, it's usually an explanation of what it is, followed by "oh, here are the frameworks for this language, e.g. JUnit".
But what does that really mean? Are frameworks just a sort of testing library that allows programmers to write simpler/more efficient unit tests?
Also, what are the benefits of using a framework? As I understand it, unit testing is done on small chunks of code at a time, e.g. a method. However, I could individually write a test for a method without using a unit testing framework. Is it maybe for standardization of testing practices?
I'm just very new to testing and unit testing; clarification on some basic concepts would be great.
A bit of a broad question, but I think there are certain thoughts that could count as facts for an answer:
When 5, 10, 100, ... people set out to work with the same idea or concept (for example, unit testing) then, most likely, certain patterns and best practices will evolve. People have ideas, and by trial and error they find out which of those ideas are helpful and which are not.
Then people start to communicate their ideas, and those "commonly used" patterns undergo discussions and get further refined.
And sooner or later, people start thinking "I am doing the same task over and over again; I should write a program for me to do that".
And that is how frameworks come into existence: they are tools to support certain aspects of a specific activity.
Let's give an example: using a framework like JUnit, I can completely focus on writing test cases. I don't need to worry about accumulating failure statistics; I don't need to worry about how to make sure that all my tests really are executed when I want that to happen.
I simply learn how to use the JUnit framework, and I know how to further utilize JUnit test cases in conjunction with build systems such as Gradle or Maven in order to have all my unit tests executed automatically, for example each time I push a commit into my source code management system.
Of course you can re-invent the wheel here and implement all of that yourself. But that is just a waste of time. It is like saying: "I want to move my crop to the market - let's start by building the truck myself." No. You rent or buy a pre-built truck, and you use it to do what you actually want to do (move things around).
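As a small illustration of what the framework buys you, here is a complete JUnit 5 test; everything around it (discovering the method, running it, collecting pass/fail statistics, reporting) is handled by the framework and the build tool:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class CalculatorTest {
    // JUnit discovers this method via the @Test annotation; there is
    // no hand-written main() that calls each test one by one.
    @Test
    void addsTwoNumbers() {
        assertEquals(5, 2 + 3);
    }
}
```

Run it with "mvn test" or "gradle test" and the build fails automatically if any assertion fails - which is exactly the pre-built truck the analogy is about.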

Unit test - best practice for a multilayer project [closed]

I have a project with several layers of abstraction, which can be split into the following groups:
Internal API:
- Data Access Layer (DAL)
- Business Access Layer (BAL)
- ...
Public API:
- Publicly accessible classes that have access to the internal data
- REST endpoints
- ...
Inside the Public API services I use the Internal APIs.
Is it required to write unit tests for all these layers, or only for the Internal API? Are there any best practices? Should I start writing my tests from the Internal API and move to the next layer, bottom-up?
The first thing I would say is "Yes." In other words, test everything.
For the internal API, you can write true unit tests, with mock objects for the DAL and each class tested in isolation. This isn't just good for verification's sake; it also gives you confidence that your code works and serves as documentation of the code. That confidence comes in handy too when, for example, a REST API call fails later and you need to narrow down where the problem is.
You can test your DAL with an in-memory database for speed. I would call that an integration test, while others would call it a unit test; that's just semantics. But you should do that too.
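As a rough sketch of that in-memory approach, here is a JUnit 5 test running real SQL against an H2 in-memory database (H2 is my assumption; any embedded database works, and the users table is hypothetical):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import org.junit.jupiter.api.Test;

class UserDalTest {
    @Test
    void insertsAndReadsBack() throws SQLException {
        // H2 in-memory database: the same JDBC code paths as production,
        // but created fresh for the test and gone once the connection closes.
        try (Connection c = DriverManager.getConnection("jdbc:h2:mem:daltest")) {
            try (Statement s = c.createStatement()) {
                s.execute("CREATE TABLE users (id INT PRIMARY KEY, name VARCHAR(50))");
                s.execute("INSERT INTO users VALUES (1, 'alice')");
            }
            try (PreparedStatement p =
                     c.prepareStatement("SELECT name FROM users WHERE id = ?")) {
                p.setInt(1, 1);
                try (ResultSet rs = p.executeQuery()) {
                    rs.next();
                    assertEquals("alice", rs.getString("name"));
                }
            }
        }
    }
}
```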
The Internal API tests are by developers for developers.
The testers should help with anything public-facing. You simply write integration tests for the API services, and REST client tests to verify the common cases and the obvious exceptional cases.
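At its simplest, a REST client test can just be the JDK's HttpClient pointed at a running test instance of the service; the URL and endpoint here are hypothetical:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

class UsersEndpointIT {
    @Test
    void listUsersRespondsOk() throws Exception {
        // Assumes a test instance of the service is already running locally.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest
                .newBuilder(URI.create("http://localhost:8080/api/users"))
                .GET()
                .build();
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        assertEquals(200, response.statusCode());
    }
}
```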
It sounds like a lot, and it kind of is. But if you take the time to get to know your tools and set up automation everywhere you can, you will be amazed how much you can accomplish pretty fast.
Hope this helps.

Is BDD mainly used in integration test? [closed]

A common story:
Story: User logging in
As a user
I want to login with my details
So that I can get access to the site
Given such broad coverage, the test would be useless if I mocked out system components such as the DB, so can I say that people mainly use BDD for integration tests?
Here's my terminology.
Scenario: an example of the user using the system, with all relevant components in place rather than mocked out. May be automated and used as an acceptance test, but the conversations between business, testers and devs are the most important aspect of BDD. Often created using the Given / When / Then template, sometimes in tools which allow for natural language capture such as Cucumber or JBehave.
Integration test: Crosses the boundary of two components, and is usually used to check the integrity of the integration of those components. For instance, it may be used to send messages back and forth between the client and server layers of a web interface, or to check database bindings with Hibernate, etc. It does not necessarily involve the full stack. A scenario could be considered a particular kind of integration test. BDD doesn't really apply to most non-scenario integration tests, though you could still conceivably use the Given / When / Then template.
Unit test: An example of a consuming class using another class, usually with collaborators mocked out. May also be an example of how a consuming class delegates work to its collaborators. That's how we talk about it in BDD, anyway (you can do BDD at both levels). Can also use the Given / When / Then syntax.
Story: A slice through a feature to allow us to get faster feedback. The behavior of a feature may be illustrated with several scenarios, and these can also be used to help slice up the feature. Often illustrated with the As a... I want... So that... template, or the In order to... as a... I want... template of Feature Injection.
Feature: Features represent the way in which users will use the capabilities we’re giving them. This is the stage in which we start defining the concrete implementation and UI. A feature may be a web page, part of a web page, a module in a windows UI, part of an app, etc.
Capability: Something a user can achieve with the system, or which the system can achieve. E.g.: a user can book a trade; the system is secure enough to withstand hackers. Phrasing scenarios at this level helps keep them independent of the UI and in the language of the business.
Hope this helps.
Your example is a user story, which describes an acceptance test. Acceptance tests can have end-to-end scope, but not necessarily. The core difference between acceptance and integration tests is what they focus on. An acceptance test is business-focused and can be written/read by a non-technical person (the customer). On the other hand, we have development-focused integration tests, which simply verify that two or more components work together.
Back to BDD. It can be used in acceptance testing (feature level) and unit testing (code level). There are even different tools for the different levels of BDD:
SpecFlow (acceptance testing)
NSpec, NBehave (unit testing)
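For the code level, here is a hedged sketch of how a Given / When / Then scenario can bind to executable steps with Cucumber-JVM (one option alongside the tools above; LoginService here is a tiny in-file stand-in, not a real API):

```java
import static org.junit.jupiter.api.Assertions.assertTrue;
import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import java.util.HashMap;
import java.util.Map;

public class LoginSteps {
    // Minimal in-memory stand-in for the real system under test.
    static class LoginService {
        private final Map<String, String> users = new HashMap<>();
        void register(String name, String password) { users.put(name, password); }
        boolean login(String name, String password) {
            return password.equals(users.get(name));
        }
    }

    private final LoginService service = new LoginService();
    private String user;
    private boolean loggedIn;

    @Given("a registered user {string}")
    public void aRegisteredUser(String name) {
        user = name;
        service.register(name, "secret");
    }

    @When("they log in with their details")
    public void theyLogIn() {
        loggedIn = service.login(user, "secret");
    }

    @Then("they get access to the site")
    public void theyGetAccess() {
        assertTrue(loggedIn);
    }
}
```

The same plain-text scenario from the question drives these steps, whether the step bodies call a mocked unit (as here) or a fully deployed system.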
Behaviour Driven Development is thinking about the behaviour of a product in a given scenario. It extends both Test Driven Development and Domain Driven Design. BDD also goes beyond integration testing: it is about maximizing the communication between the users, developers, testers, managers and analysts.
Integration testing is considered a step of BDD, but it can also exist outside the context of BDD, since it can be used to cover the high-level behaviour of your application without dropping down to unit tests.
Behaviour is about the interactions between components of the system and so the use of mocking is fundamental to advanced TDD. Expertise in TDD begins to dawn at the point where the developer realizes that TDD is about defining behaviour rather than testing.
A user story may have a broad scope, since developing human-friendly software is always a priority. It combines the pragmatic approach of Extreme Programming with enough up-front thinking, based on macro-level analysis, to enable macro-level planning.
Integration testing is what we mainly use BDD for - UI tests with Selenium. We are not actually mocking anything in these tests: the BDD scenarios drive SpecFlow, which in turn drives Selenium WebDriver to perform user journeys such as logging in, clicking menu links and creating records. In fact, I'm trying my hardest to do everything through the UI where possible.
I have been working with the Business Analysts to write their user stories in a BDD fashion (in fact, it is now in our contract with clients), and it has been very refreshing and useful to find that, while writing stories in a BDD fashion and extrapolating the requirements into atomic steps (Given, When, Then), we discover edge cases that might not otherwise have been thought of. It truly is a win-win for both the business and the developers when we have a more common language for expressing requirements.

Guidelines for writing a test suite [closed]

What are the best practices/guidelines for writing test suite for C++ projects?
This is a very broad question. For unit testing and Test Driven Development (TDD), there is some useful (if somewhat platitudinous in parts) guidance on this from Microsoft; you can overlook the Visual Studio-specific advice if it does not apply.
If you are looking for guidance on system or performance testing, I would clarify your question. There is a decent broader rationale in the docs for Boost.Test.
There are several unit testing best practices to review before we close. Firstly, TDD is an invaluable practice. Of all the development methodologies available, TDD is probably the one that will most radically improve development for many years to come, and the investment is minimal. Any QA engineer will tell you that developers can't write successful software without corresponding tests. With TDD, the practice is to write those tests before even writing the implementation and, ideally, to write the tests so that they can run as part of an unattended build script. It takes discipline to begin this habit, but once it is established, coding without the TDD approach feels like driving without a seatbelt.
For the tests themselves, there are some additional principles that will help with successful testing:
- Avoid creating dependencies between tests such that tests need to run in a particular order. Each test should be autonomous.
- Use test initialization code to verify that test cleanup executed successfully, and re-run the cleanup before executing a test if it did not run.
- Write tests before writing any production code implementation.
- Create one test class corresponding to each class within the production code. This simplifies the test organization and makes it easy to choose where to place each test.
- Use Visual Studio to generate the initial test project. This significantly reduces the number of steps needed when manually setting up a test project and associating it with the production project.
- Avoid creating machine-dependent tests, such as tests dependent on a particular directory path.
- Create mock objects to test interfaces. Mock objects are implemented within a test project to verify that the API matches the required functionality.
- Verify that all tests run successfully before moving on to creating a new test. That way you ensure that you fix code immediately upon breaking it.
- Maximize the number of tests that can be run unattended. Make absolutely certain that there is no reasonable unattended testing solution before relying solely on manual testing.
TDD is certainly one set of best practices. When retrofitting tests, I aim for two things: code coverage and boundary-condition coverage. Basically, you should pick inputs to functions such that A) all code paths are tested - better yet, all permutations of all code paths, though that can be a large number of cases and may not be necessary if the path differences are superficial - and B) all boundary conditions (the conditions that cause variation in code-path selection) are tested. If your code has an "if x > 5" in it, you test with x = 5 and x = 6 to get both sides of the boundary.
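As a quick sketch of that boundary rule, here is a JUnit version (classify is a hypothetical function; the question is about C++, but the same idea applies directly with any xUnit-style framework there):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class BoundaryTest {
    // Hypothetical function with a branch at x > 5.
    static String classify(int x) {
        return x > 5 ? "big" : "small";
    }

    @Test
    void coversBothSidesOfTheBoundary() {
        // x = 5 exercises the false branch and x = 6 the true branch,
        // pinning the boundary exactly where it belongs.
        assertEquals("small", classify(5));
        assertEquals("big", classify(6));
    }
}
```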