How to write unit tests for a Slack Bolt application? - unit-testing

I'm attempting to transition an app from the hubot library to the official @slack/bolt library. The app uses hubot-test-helper for unit testing, which mocks out a chatroom. Is there anything equivalent for the Bolt library?
Yes; most of the logic can be (and in many cases is) split away from the actual chat implementation and tested on its own. The exception is middleware, which I believe still benefits from testing closer to the integration level.
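A minimal sketch of that split, shown with Bolt for Python (slack_bolt) purely to illustrate the pattern; the function and module names are hypothetical, and the same structure applies to @slack/bolt in JavaScript. The listener stays a thin shim, and the logic it delegates to is tested without any Slack objects:

```python
# Sketch: keep reply-building logic in a plain function so it can be unit
# tested with no Slack connection at all. Names here are hypothetical.

def build_deploy_reply(user_id, branch):
    """Pure function: no Bolt or Slack objects involved."""
    return f"<@{user_id}> deploying `{branch}`..."

# The Bolt layer then only wires events to that logic (sketch, not executed):
#
#   from slack_bolt import App
#   app = App(token=..., signing_secret=...)
#
#   @app.message("deploy")
#   def handle_deploy(message, say):
#       say(build_deploy_reply(message["user"], "main"))

import unittest

class BuildDeployReplyTest(unittest.TestCase):
    def test_mentions_user_and_branch(self):
        reply = build_deploy_reply("U123", "main")
        self.assertIn("<@U123>", reply)
        self.assertIn("main", reply)

if __name__ == "__main__":
    unittest.main()
```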

Related

Grails unit/integration testing

I'm trying to add unit tests and integration tests to my Grails application, but I'm having trouble distinguishing between the two and deciding when to use unit tests and when to use integration tests for my controller actions and services.
The tutorials I found online are not very clear, and I can't find a complete example to follow.
Can you please share some helpful resources?
I follow these guidelines:
Try writing as many unit tests as you can. They can be written for controllers, services, domain classes or any other Groovy classes. The idea is that unit tests are the developer's friends: writing enough unit tests makes sure the developer makes fewer mistakes, and because they execute quickly, verification is quick too. But unit tests cannot test the following:
Criteria queries, HQL queries
Actual database interactions (queries, transactional behaviour, updates, DB constraints, etc.)
Inter-module interactions
So we write integration tests as well.
Integration tests take longer to execute, and writing them often requires bootstrapping data, but they are really helpful for testing functionality end to end (excluding the actual user interactions through the UI, for which functional tests are written). So integration tests can be written for:
Testing all database interactions, since unit tests do not actually test database interactions. This also includes testing criteria queries, HQL, etc.
Testing transactional behaviour (which depends on the DB)
Testing implementations end to end, which also tests how two independently created modules interact with each other and makes sure we have created them correctly.
One problem with integration tests is their speed. For me, integration tests take 15+ seconds to start up, and in that time things do slip out of focus.
I prefer to go with unit tests that start in no more than 2 seconds and can be run several times in those 15 seconds.
One more argument for unit tests is that they force you to decouple your code. Integration tests always tempt you to just rely on some other component existing and being initialized.
Important links:
http://spockframework.org/spock/docs/1.0/interaction_based_testing.html
http://docs.grails.org/latest/guide/testing.html
Unfortunately it is not just a matter of preference or speed. It is a huge subject, but I can give you some advice based on my experience.
If you expect to be covering your database access code (queries, transactional behaviour) by using unit tests, you are deluding yourself. You are testing how your queries comply with the in-memory implementation of GORM. Not hibernate, not your database.
I usually have two types of tests: unit tests and functional tests. The functional tests perform a full test, running against a real database and stimulating the system like a user would (via Geb if it is a web site, via a REST client if it is a REST API).
The functional tests will set up a startup state by executing some kind of fixture code first. This can be registering a user and logging them in, for example. Then the test will run, and then the postconditions are checked. Here, you can check the postconditions either by accessing the database through the GORM API, or by using production API calls (danger of covering a bug with another bug).
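As a sketch of that fixture / stimulate / check-postconditions shape (the base URL, endpoints and payloads below are hypothetical; in a Grails project this would normally be written with Spock and Geb or a REST client rather than Python):

```python
import unittest
import requests

BASE_URL = "http://localhost:8080"  # assumed local test instance

class RegistrationFunctionalTest(unittest.TestCase):
    def setUp(self):
        # Fixture: put the system into a known startup state.
        self.session = requests.Session()
        self.session.post(
            f"{BASE_URL}/api/users",
            json={"email": "test@example.com", "password": "secret"},
        )

    def test_registered_user_can_log_in(self):
        # Stimulate the system the way a real client would.
        resp = self.session.post(
            f"{BASE_URL}/api/login",
            json={"email": "test@example.com", "password": "secret"},
        )
        # Check postconditions through the public API
        # (or, alternatively, by querying the database directly).
        self.assertEqual(resp.status_code, 200)
        self.assertIn("token", resp.json())

if __name__ == "__main__":
    unittest.main()
```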
Sometimes your system will interact with a third system. Here, if you can, mock the implementation of the third system by injecting a mock implementation into the system under test.
There are also tools like Spring Cloud Contract, which let you create a mock server for your system under test and a specification for your third-party system. See https://cloud.spring.io/spring-cloud-contract/spring-cloud-contract.html
Unit tests I use to thoroughly test all execution paths of a given class. I will try to trigger all exception states and all secondary scenarios, to make sure that everything is covered. I don't think it is realistic to reach 100% coverage using functional or integration tests.

Can I use Protractor for TDD in the MEAN stack

I'm very new to unit testing and TDD. I'm clear on the TDD concepts in theory, but I'm hitting lots of obstacles putting them into practice. Most of the examples explain how to unit test multiplication, adding two numbers, etc., which is not what we really need in real applications.
For Angular it is much better: we can check the values of an array, the existence of a controller, the use of a service, mocking the backend, etc. So now I have a couple of questions:
How can I do unit testing for backend processes, for example how a request is handled?
My application mostly involves UI components. Can I use Protractor in my TDD process? For example, with a drawing tool, how can I test it without actually drawing on (interacting with) it?
There are many ORM frameworks available in Node.js for generating test data, if you want to populate your traditional DB and use your backend as it is.
jugglingdb and sequelizejs are the most popular ones.
To make it more manageable, you can also use the Cucumber or Jasmine framework with Protractor, so you can manage before and after hooks for individual test scenarios.

What should be mocked for an integration test?

Upon reading Growing Object-Oriented Software, Guided by Tests, I learnt about test isolation and test fragility: the idea that each test should be very specific to a piece of code or functionality, and that the overlap of code coverage by tests should be kept to a minimum.
The implied ideal is that each change in the code should break only one test,
avoiding time spent going through multiple broken tests to confirm that one change is the cause and that the test modifications fix it.
Now this seems easy enough for unit tests; they are very isolated by their nature.
However, with integration tests, it seems hard to avoid having multiple tests exercising the same code paths, particularly when they are run in addition to the unit tests.
So my question is: what dependencies should be mocked when doing integration testing? Should anything be mocked at all? Should a single execution path be tested, with all side effects not directly relevant to this code path mocked?
I'm toying with the idea of doing pairwise integration testing. Test one relationship between two objects, and mock everything else. Then changes in either one of these objects should have minimal impact on other integration tests, in addition to forming a complete chain of end-to-end tests by means of pairs.
Thanks for any info..
Edit: Just to clarify, I'm basically asking "How do I avoid large numbers of failing integration tests during the normal course of development?", which I assume is achieved by using mocks, and why I asked about what to mock.
Update: I've found a very interesting talk about integration tests by J.B. Rainsberger, which I think answers this fairly well, if perhaps a bit controversially. The title is "Integration Tests are a Scam", so as you can guess, he does not advocate integration tests (end-to-end type tests) at all. The argument is that integration tests will always fall far short of the number needed to thoroughly test the possible interactions (due to combinatorial explosion), and may give false confidence.
Instead he recommends what he calls Collaboration Tests and Contract Tests. It's a 90-minute talk and unfortunately the whiteboard is not very clear and there aren't code examples, so I'm still getting my head around it. When I have a clear explanation I'll write it here! Unless someone else beats me to it...
Here's a brief summary of Contract Tests. Sounds like Design by Contract type assertions, which I believe could/would be implemented in a Non-Virtual Interface pattern in C++.
http://thecodewhisperer.tumblr.com/post/1325859246/in-brief-contract-tests
Integration Tests are a Scam video talk:
http://www.infoq.com/presentations/integration-tests-scam
Summary:
Integration tests are a scam. You're probably writing 2-5% of the integration tests you need to test thoroughly. You're probably duplicating unit tests all over the place. Your integration tests probably duplicate each other all over the place. When an integration test fails, who knows what's broken? Learn the two-pronged attack that solves the problem: collaboration tests and contract tests.
For integration tests you should mock the minimum amount of dependencies to get the test working, but not less :-)
Since the integration of the components in your system is obviously the thing you want to test during integration testing, you should use real implementations as much as possible. However, there are some components you obviously want to mock, since you don't want your integration tests to start mailing your users, for instance. When you don't mock these dependencies, you are obviously mocking too little.
That doesn't mean, by the way, that you shouldn't allow an integration test to send mail at all, but at the least you want to replace the mail component with one that will only send mail to some internal test mailbox.
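A sketch of that idea: only the mail dependency is replaced, and the replacement simply records what would have been sent. The class names are hypothetical, and the in-memory repository is used here only so the snippet runs standalone; in a real integration test the repository would be a real, database-backed implementation.

```python
import unittest

class RecordingMailer:
    """Stands in for the real SMTP-backed mailer; just records what was sent."""
    def __init__(self):
        self.sent = []
    def send(self, to, subject, body):
        self.sent.append((to, subject, body))

class InMemoryRepository:
    """Used here only so the sketch runs standalone."""
    def __init__(self):
        self.rows = []
    def save(self, row):
        self.rows.append(row)
        return row

class OrderService:
    """Hypothetical system under test; only the mailer is a test double."""
    def __init__(self, repository, mailer):
        self.repository = repository
        self.mailer = mailer
    def place_order(self, user, item):
        order = self.repository.save({"user": user, "item": item})
        self.mailer.send(user, "Order received", f"Thanks for ordering {item}")
        return order

class OrderIntegrationTest(unittest.TestCase):
    def test_placing_an_order_sends_exactly_one_mail(self):
        mailer = RecordingMailer()
        service = OrderService(InMemoryRepository(), mailer)
        service.place_order("alice@example.com", "book")
        self.assertEqual(len(mailer.sent), 1)
        self.assertEqual(mailer.sent[0][0], "alice@example.com")

if __name__ == "__main__":
    unittest.main()
```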
For integration tests I lean towards mocking the service rather than the representation: for example, using Mirage instead of a 3rd-party REST API, and Dumpster rather than a real SMTP server.
This means that all layers of your code are tested, but none of the 3rd parties are tested so you are free to refactor without worrying that the tests will fail.
Unit tests should have mock objects, but integration tests should have few if any mocks (otherwise, what is being integrated?). I think it's overkill to do pairwise mocking; it will lead to an explosion of tests that might each take a long time, plus lots of copy-and-paste code which will be a pain to change if requirements change or new features are added later.
I think it's fine not to have any mocks in the integration tests. You should have everything mocked in the unit tests, to know that each individual unit works as expected in isolation. The integration test then checks that everything works wired together.
Regarding the Contract/Collaborator test pattern (described by J.B. Rainsberger in "Integration Tests are a Scam", mentioned in the question above) and relating to the question here: I interpreted his talk to mean that when you own the code for both the server side and the client side, you should not need any integration tests at all. Instead you should be able to rely on mocks which implement a contract.
The talk is a good reference for high level description of the pattern but doesn't go into detail (for me at least) about how to define or reference a contract from a collaborator.
One common example of the need for Contract/Collaborator pattern is between an API's server / client (for which you own the code of both). Here's how I've implemented it:
Define the contract:
First define the API schema; if your API uses JSON you might consider JSONSchema. The schema definition can be considered the "Contract" of the API. (As a side note, if you're about to do that, make sure you know about RAML or Swagger, since they essentially make writing JSONSchema APIs a lot easier.)
Create fixtures which implement the contract:
On the server side, mock out the client requests to allow unit testing of the requests/responses. To do this you will create client request fixtures (aka mocks). Once you have your API defined, validate the fixtures against the JSONSchema to ensure that they comply. There are a host of schema validators - I currently use AJV (Javascript) and jsonschema (Python), but most languages should have an implementation.
On the client(s) side you will likely mock out the server responses to allow unit testing of the requests. Follow the same pattern as the server, validating the request and response fixtures via JSONSchema.
If both the Client and the Server are validating their fixtures against the contract, then whenever the API Contract changes, the out of date implementations on both sides will fail JSONSchema validation and you'll know it's time to update your fixtures and possibly the code which relies on those fixtures.
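As a sketch of that validation step, using the Python jsonschema package mentioned above (the schema and fixture here are hypothetical):

```python
import unittest
from jsonschema import validate, ValidationError

# The shared "contract": what a user response is allowed to look like.
USER_RESPONSE_SCHEMA = {
    "type": "object",
    "required": ["id", "email"],
    "properties": {
        "id": {"type": "integer"},
        "email": {"type": "string"},
    },
}

# Fixture the client's unit tests use to fake the server's response.
USER_RESPONSE_FIXTURE = {"id": 42, "email": "alice@example.com"}

class ContractTest(unittest.TestCase):
    def test_fixture_still_matches_the_contract(self):
        # If the API schema changes, this test fails and tells us the
        # fixtures (and the code relying on them) are out of date.
        try:
            validate(instance=USER_RESPONSE_FIXTURE, schema=USER_RESPONSE_SCHEMA)
        except ValidationError as err:
            self.fail(f"fixture violates the API contract: {err.message}")

if __name__ == "__main__":
    unittest.main()
```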

How to unit test wxPython?

I've heard of unit testing and have written a few tests myself, but I've never used any testing frameworks. Now I'm writing a wxPython GUI for some in-house data analysis/visualisation libraries. I've read some of the obvious Google results, like http://wiki.wxpython.org/Unit%20Testing%20with%20wxPython and its link http://pywinauto.openqa.org/, but am still uncertain where to start.
Does anyone have experience or good references for someone who sort of knows the theory but has never used any of the frameworks and has no idea how it works with GUIs?
I am on a Windows machine developing a theoretically cross-platform application that uses NumPy, Matplotlib, Newville's MPlot package, and wxPython 2.8.11. Python 2.6 with plans for 3.1. I work for a bunch of scientists, so there is no in-house unit-testing policy.
If you want to unit-test your application, you don't have to focus on GUI testing techniques. It is much better to write the application using MVC, MVP, or another meta-pattern like these, so that the business logic and the presentation layer are separated.
It is much more important to cover the business layer with tests, since that is your code; the presentation layer is already tested by the wxWidgets developers. To test the business layer, basic tools like the standard unittest module and maybe nose will be enough.
To make sure the whole application behaves correctly, you should add a few acceptance tests that exercise functionality end to end. These will deal with the GUI, but there will be few of them compared to the number of unit tests.
If you limit yourself to acceptance tests only, you'll end up with a test code base that has low coverage, is fragile, and is very slow.
To unit test your application without requiring lots of mock objects/stubs, your GUI's event handlers should basically delegate to other method calls, passing in values from the Event object as parameters to the delegated method.
Otherwise you'll be unable to test your application without having to mock wx's objects.
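A minimal sketch of that delegation, with hypothetical names: the handler only unpacks the event and forwards plain values, so the interesting logic can be unit tested without mocking any wx objects.

```python
import unittest

def scale_readings(readings, factor):
    """All of the interesting logic lives in plain functions like this one."""
    return [r * factor for r in readings]

# In the GUI layer the handler would only delegate (sketch, not executed here):
#
#   def OnScaleButton(self, event):
#       factor = float(self.factor_field.GetValue())
#       self.plot.update(scale_readings(self.readings, factor))

class ScaleReadingsTest(unittest.TestCase):
    def test_scaling_doubles_each_reading(self):
        self.assertEqual(scale_readings([1.0, 2.5], 2), [2.0, 5.0])

if __name__ == "__main__":
    unittest.main()
```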
Take a look at the PyPubSub project for a great module to help with MVC.
In one early project of mine I really did test a wxPython application through the GUI layer. The tests spun up a live wxApp object, popped up real windows, and then started messing with a real MainLoop(). I soon realised it was the wrong way to do testing: my tests ran very slowly and were unreliable. A much better way is to separate the GUI stuff out and test only the "model" level of your application. Note that you can actually create a model for presentation-level logic (a model that represents some visual part of your application) and test it, but this model should not involve any "real" GUI objects (windows, dialogs, widgets).

What is the best framework for Unit Testing in JavaME?

What is currently the best tool for JavaME unit testing? I've never really used unit testing before (shame on me!), so learning curve is important. I would appreciate some pros and cons with your answer. :)
I think it will depend on what kind of tests you are planning to do. Will you be using continuous integration? Is running tests on handsets a must?
If the tests are mostly logic/data-processing tests, then you can do fine with JUnit. But if you need to use classes from javax.microedition.*, things become a bit trickier, though not impossible.
For example, a test for text wrapping on screen would need javax.microedition.lcdui.Font. You can't just create a Font object using the JARs shipped with the WTK, because at initialization it will call some native methods that are not available.
For these kinds of tests I have created a stub implementation of J2ME: basically my own interpretation of the J2ME classes. I can set preconditions there (for example, every character is 5 pixels wide). It works really well, because my tests only need to know how the J2ME classes respond, not how they are internally implemented.
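The real stubs described here are Java reimplementations of J2ME classes standing in for javax.microedition.lcdui.Font; the Python sketch below (with hypothetical names) only illustrates the shape of the idea: bake the precondition into the stub, then test the wrapping logic against it.

```python
import unittest

class StubFont:
    """Stub font with the precondition baked in: every character is 5 px wide."""
    CHAR_WIDTH = 5
    def string_width(self, text):
        return len(text) * self.CHAR_WIDTH

def wrap_text(text, font, max_width):
    """Greedy word wrap that only relies on the font's width query."""
    lines, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if font.string_width(candidate) <= max_width:
            current = candidate
        else:
            if current:
                lines.append(current)
            current = word
    if current:
        lines.append(current)
    return lines

class WrapTextTest(unittest.TestCase):
    def test_wraps_at_the_stubbed_width(self):
        # 25 px = five of the stub's 5 px characters per line
        self.assertEqual(wrap_text("ab cd ef", StubFont(), 25), ["ab cd", "ef"])

if __name__ == "__main__":
    unittest.main()
```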
For networking tests I have used MicroEmulator's networking implementation, and it has also worked out well.
One more issue with unit tests: it is better to set up your mobile project as a Java project using Java 4, 5, or 6, because writing tests in 1.3 is, at least for me, a pain in the...
I believe that starting with JUnit will be just fine to get up and running, and if other requirements come up (such as running tests on handsets), then you can explore alternatives.
I'll be honest, the only unit testers I've used in Java are JUnit and a child project of it named DBUnit for database testing... which I'm assuming you won't need under J2ME.
JUnit is pretty easy to use for unit testing, as long as your IDE supports it (Eclipse has support for JUnit built in). You just mark tests with the @Test annotation (org.junit.Test, I think). You can also specify methods that should be run @Before or @After each test, as well as before or after the entire class (with @BeforeClass and @AfterClass).
I've never used JUnit under J2ME, though... I work with J2EE at work.
Never found an outstanding one. You can try reading this document:
how to use it
and here is the link to: download it
It was made by Sony Ericsson but works for any J2ME development.
I would also recommend you spend some time learning unit testing in plain Java before attacking unit testing on the mobile platform. That may be too much to swallow in one shot.
Good luck!
There's a framework called J2MEUnit that you could give a try, but it doesn't look like it's still being actively developed:
http://j2meunit.sourceforge.net