Testing third-party APIs - unit-testing

I would like to test a third-party API such as forecast.io, but I am not quite sure how to go about it.
I have read all over the internet that I should use mock objects. However, mock objects are not what I need: I do not want to test my parsing implementation, I want to test the network call itself.
I want to test, for example, whether the URL is still working, whether my API key is still valid, and whether the response is still in the expected format so GSON does not crash, and other things directly related to the network call itself.
Is there any good way to do this?
Many thanks
TL;DR: I don't need mock objects!

I am going to try to answer this question as generally as it was asked. I understand the OP wants to avoid testing each type of response, but if you are relying on this data for continuity of users and/or revenue, you may want to consider creating an API caller so you can look at each part of the API response(s) as well as test the URL, API key, etc. I would use an OO language, but I'm sure there are other ways.
In general:
create a process/library/software that can call the api
serialize/store the data you are expecting (GSON in OP's case)
unit test it with xUnit or NUnit
automate it to run every x time period and generate an email with success/change/fail message
This is no small project if you are counting on it; you need to tighten all the screws and bolts. Each step deserves its own question (and may already have one).
[I can add some sample code in C# here if that will help]
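In the meantime, here is a rough sketch of the idea in Java instead (since the OP mentions GSON); the endpoint URL, API key, coordinates, and Forecast model class below are all hypothetical placeholders:

// Rough sketch: a tiny API caller plus a "live" check that the URL, key and
// response shape still work. Endpoint, key and model class are hypothetical.
import com.google.gson.Gson;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

class Forecast {                 // hypothetical model the response is expected to match
    double latitude;
    double longitude;
}

class ForecastClient {
    private static final String BASE_URL = "https://api.example-forecast.io/forecast/";

    Forecast fetch(String apiKey, double lat, double lon) throws Exception {
        URL url = new URL(BASE_URL + apiKey + "/" + lat + "," + lon);
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        if (conn.getResponseCode() != 200) {                // URL or API key no longer working
            throw new IllegalStateException("HTTP " + conn.getResponseCode());
        }
        try (InputStreamReader in = new InputStreamReader(conn.getInputStream())) {
            return new Gson().fromJson(in, Forecast.class); // format change -> parse failure
        }
    }
}

public class ForecastContractTest {
    @org.junit.Test
    public void liveEndpointStillMatchesExpectedShape() throws Exception {
        Forecast f = new ForecastClient().fetch("YOUR_TEST_KEY", 52.37, 4.89);
        org.junit.Assert.assertNotNull(f);                  // parsing succeeded; add field checks as needed
    }
}

A live check like this belongs in a separate, scheduled suite rather than the fast unit-test run, since it depends on the network and on the provider being up.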
How to automate this to run and email you is a completely different question, but hopefully this gives you an idea of how an object-oriented library can help you test every piece of data your own software is planning to use. Not every API host will let you know in a timely manner when/if changes are taking place, and you may even know before they do if something breaks.

Related

Should I test the whole return of a function or only a sample?

I want to test the return value of a script that queries a list of URLs (more than 1000), extracts some data from each response and returns it as an array of objects (dicts) with certain attributes.
Is it safe to test only a sample from the returned list?
My concern is mainly that exhaustive testing would be time consuming.
P.S. I am hoping that random sampling would help catch errors, knowing that the response bodies of the URLs my script queries may be inconsistent.
Thanks,
I understand your question to mean that you actually access the URLs in the list. In unit-testing, you would normally take a different approach (but not in integration-testing, see the bottom of my answer). You would not actually access those URLs, but instead find some way to "simulate" the URL access. As part of this simulated URL access, your tests can also define what the responses look like.
This way, you can test all aspects of your code that handle the responses. You can simulate all kinds of valid, but also, as you mention, inconsistent responses - because you have full control from the tests.
There are several ways to make it possible for your tests to "simulate" that URL access: one option is to separate, within your code, the part that does the URL access from the part that processes the response. In pseudo-code:
response = accessUrl(url);
handleResponse(response);
Then, in unit-testing you would focus on testing the function handleResponse, and test the rest of the code in integration-testing.
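For illustration, here is a minimal sketch of that separation (in Java; the Scraper class, its extract() logic and the attribute names are invented for the example):

// handleResponse-style logic lives in its own class; no URL access happens here.
import java.util.Map;

class Scraper {
    // pull attributes out of one response body
    static Map<String, String> extract(String responseBody) {
        boolean empty = (responseBody == null || responseBody.isEmpty());
        return Map.of("empty", String.valueOf(empty),
                      "length", String.valueOf(empty ? 0 : responseBody.length()));
    }
}

public class ScraperTest {
    @org.junit.Test
    public void handlesAnInconsistentEmptyResponse() {
        // no URL is accessed; the test fully controls the "response"
        Map<String, String> result = Scraper.extract("");
        org.junit.Assert.assertEquals("true", result.get("empty"));
    }
}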
A second option is to mock the function/method that performs the URL access. This makes sense if it is difficult to change the code to achieve the separation I have shown in the pseudo-code. There is a lot of information about mocking available on the web.
In any case, this way of testing allows you to test the functionality of your code more systematically. You can test all scenarios you are aware of and be sure that they are really covered, because you have full control.
The testing approach you have described is more on the level of integration testing, and it also makes sense after you have fully unit-tested your code: after all, you may still have missed some real-world scenarios that your code should handle.

Can automated tests reuse application code?

Unit tests are strongly connected to the existing code of the application, written by developers. But what about automated UI and API tests (integration tests)? Does anybody think that it's acceptable to re-use application code in a separate automation solution?
The answer would be no. UI tests follow the UI: go to that page, input that value in that textbox, press that button, and check that you see this text. You don't need anything code-related for this. All of this should be done against some acceptance criteria, so you should already know what to expect without looking at any code.
For the API integration tests you would call the endpoints with some payloads and then check the results. You don't need references to any code for this. The API should be documented and explain very well what endpoints are available, what the payloads look like and what you can expect to get back.
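As a rough illustration (sketched in Java; the base URL, the /users endpoint, the payload and the expected fields are all assumptions), such a test might look like:

// Hypothetical API-level test: call an endpoint with a payload and check the result.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class CreateUserApiTest {
    @org.junit.Test
    public void createUserReturns201AndEchoesName() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://api.example.com/users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"name\":\"Ada\"}"))
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        org.junit.Assert.assertEquals(201, response.statusCode());
        org.junit.Assert.assertTrue(response.body().contains("\"name\":\"Ada\""));
    }
}

Note that nothing here references the application's own code; only the documented contract of the API is used.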
I am not sure why you'd think about reusing application code inside an automation project.
OK, so after clarifications, you're talking about reusing models only, not actual code. This is not a bad idea; it can actually help, as long as these NuGet packages do not bring in any other dependencies.
Code reusability is a great concept, but it's very hard to get right in practice. Models typically come with annotations which require other packages, which are of course not needed in an automation project. So, if you can get NuGet packages without extra dependencies, literally data models only and nothing else, then that does work. Anything more than that will create issues, so I'd push back on it.

Big project, huge lack of test coverage, how would you approach this?

So I have this huge SF2 project, which is luckily written pretty 'OK'. Services are there, background jobs are there, no god classes, it's testable. But I have never gotten any further than just unit-testing stuff, so the question is basically: where do I start taking this further?
The project consists of SF2 and all the yada yada, Doctrine2, Beanstalkd, Gaufrette, some other abstractions; it's fine.
The one problem it has is some glue code in controllers here and there, but I don't see it as a big problem since functional tests are going to be the main focus.
The infrastructure is set up pretty OK as well; it's covered by Docker, so CI is going to work out well too.
But it has basically gotten too large to test manually any longer, so I want full functional coverage on short notice, and let the unit-testing grow over time. (I'm going to dive into the isolated objects as they need future adjustments and build tests for them in due course.)
So I've got the unit-testing covered; that's going to need to grow over time, but I want to make some steps towards functional testing to get some quick gains on the testing front. YESTERDAY.
My plan as of now is to use Behat and Mink for this. The tests are going to be huge, so I might as well have them written as stories instead of code. Behat also seems to have an extension for Symfony's BrowserKit.
There are plenty of services and external things happening, but they are all isolated behind services, so I can mock them through the test environment's service config, I guess.
Please give some advice here if there is a better way.
I'm also going to need fixtures. I'm using Alice for generating some fixtures so far, which seems nice together with the Doctrine extension; I don't think there are "better" options on this one.
How should I test external services? I'm mocking things like a Facebook service, but I also want to really test it against some test account; is this advisable? I know that this goes beyond the scope of functional testing: according to the purist, the service has to be mocked and tested in every way possible to "ensure it's working". But at the end of the day it still breaks because of some API key or other problem in the connection, which I can't really afford. So please advise here as well.
All your suggestions to use other tools are welcome of course, especially if there is a good book that covers my story.
I'm glad you brought up Behat; I was going to suggest the same thing.
I would consider starting with your most business critical pieces; unit test the extremely important business logic and use behat on the rest.
For the most part, I would create stubs for your services that have expected output for expected input. That way you can create failures based on specific input. You can override your services in your test config.
Another approach would be to do very thin functional testing where you make GET requests to all of your endpoints and look for 200's. This is a very quick way to make sure that your pages are at least loading. From there, you can start writing tests for your POST endpoints and expanding your suite further with more detailed test cases.
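A minimal sketch of such a thin smoke test (in Java for illustration; the base URL and the list of paths are placeholders for the project's real routes):

// Rough sketch of a "thin" smoke test: GET each endpoint and expect a 200.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class SmokeTest {
    @org.junit.Test
    public void allPagesLoad() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        for (String path : List.of("/", "/login", "/dashboard", "/reports")) {
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8000" + path))
                    .GET()
                    .build();
            HttpResponse<Void> response =
                    client.send(request, HttpResponse.BodyHandlers.discarding());
            org.junit.Assert.assertEquals(path + " should load", 200, response.statusCode());
        }
    }
}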

TDD in practice: how do you write a unit test for sending an email?

Client says he wants a button that when pushed, takes the form below and sends it as an email. How would you unit test this?
Assume you are:
Using an existing SMTP library
Using an existing IMAP library
From what I understand, you could use a UI mocker to click the button, then wait a minute or so, then use IMAP to count whether there is one more message received than before. But this sounds like it violates TDD goals in almost every category: it's not fast, it's not atomic, it's complex, and it requires its own IMAP dependency. What's the "TDD" way to do this?
What part of YOUR program do you want to test?
Please note what I highlighted there - you want to test your code.
You do not test things out of your control unless you absolutely need to. So, I would say, if you use SMTP/IMAP libraries, you trust that they do what they are supposed to do; you don't write tests to verify that, just as you don't check that File.write() actually writes to a file, etc.
Back to your question: please look into creating an adapter/simulator for the email behavior. Use an in-memory implementation in your tests, and use the real thing in your production code.
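A minimal sketch of that idea (in Java; the EmailSender interface, ContactForm class and addresses are made up for the example):

// Tests use an in-memory fake; production wires in a real SMTP-backed implementation.
import java.util.ArrayList;
import java.util.List;

interface EmailSender {
    void send(String to, String subject, String body);
}

class InMemoryEmailSender implements EmailSender {
    final List<String> sent = new ArrayList<>();
    public void send(String to, String subject, String body) {
        sent.add(to + "|" + subject);        // record the call instead of touching the network
    }
}

// The code under test: the button handler that submits the form as an email.
class ContactForm {
    private final EmailSender sender;
    ContactForm(EmailSender sender) { this.sender = sender; }
    void submit(String customerMessage) {
        sender.send("support@example.com", "Contact form", customerMessage);
    }
}

public class ContactFormTest {
    @org.junit.Test
    public void submittingTheFormSendsOneEmail() {
        InMemoryEmailSender fake = new InMemoryEmailSender();
        new ContactForm(fake).submit("Hello");
        org.junit.Assert.assertEquals(1, fake.sent.size());  // fast, atomic, no IMAP needed
    }
}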
Eric Gunnerson has written a good post about them, and has also published a kata on GitHub.

Architecture/design advice for a test program

I am trying to build a test program in C++ to automate testing for a specific application. The testing will involve sending requests, which have a field 'CommandType' and some other fields, to a server.
The commandType can be 'NEW', 'CHANGE' or 'DELETE'.
The tests can be
Send a bunch of random requests with no pattern
Send 100 'NEW' requests, then a huge number of 'CHANGE' requests, followed by 200 'DELETE' requests
Send 'DELETE' requests followed by 'CHANGE' requests
... and so on
How can I design my software (what kind of modules or layers) so that adding any new type of test case is easy and modular?
EDIT: To be more specific, this test will be to only test one specific application that gets requests of the type described above and handles them. This will be a client application that will send the requests to the server.
I would not write your own framework. There are many already written that follow a common pattern and can likely accommodate your needs elegantly.
The xUnit framework in all incarnations I have seen allows you to add new test cases without having to edit the code that runs the tests. For example, CppUnit provides a macro that when added to a test case will auto-register the test case with a global registry (through static initialization I assume). This allows you to add new test cases without cracking open and editing the thing that runs them.
And don't let the "unit" in xUnit and CppUnit make you think it is inappropriate. I've used the xUnit framework for all different kinds of testing.
I would separate out each individual test into its own procedure or, if it requires code beyond a function or two, its own source file. Then in my main routine I'd do something like:
void run_test_1();   // forward declarations for the individual test procedures
void run_test_2();
void run_test_N();

int main()           // main must return int in standard C++
{
    run_test_1();
    run_test_2();
    //...
    run_test_N();
    return 0;
}
Alternatively, I'd recommend leveraging the Boost Test Library and following their conventions.
I'm assuming you're not talking about creating unit tests.
IMHO, your question is too vague to provide useful answers. Is this to test one specific application, or are you trying to make something generic enough to test as many different applications as possible? Where do these applications live? Are they client-server apps, web apps, etc.?
If it's more than one application that you want your tool to test, you'll need an architecture that creates a protocol between the testing tool and the applications, such that you can convert the instructions your tool and the consumers of your tool understand into instructions that the application being tested can understand. I've done similar things in the past, but I've only ever had to worry about maybe 5 different "applications", so it was a pretty simple matter of summing up all the unique functionality of the apps and then creating an interface that supports them all.
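As a rough sketch of that kind of protocol (in Java here just for illustration; the interface and adapter names are made up):

// The tool speaks one interface, and each application gets its own adapter.
import java.util.Map;

interface AppUnderTest {
    void send(String commandType, Map<String, String> fields);  // NEW / CHANGE / DELETE, etc.
    String lastResponse();
}

class OrderServerAdapter implements AppUnderTest {              // one adapter per application
    private String last = "";

    public void send(String commandType, Map<String, String> fields) {
        // translate the generic instruction into this server's wire format here
        last = "ACK:" + commandType;
    }

    public String lastResponse() {
        return last;
    }
}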
I wouldn't presume that NEW, CHANGE, and DELETE will be your only command types either. A lot of testing involves data cleanup, test reporting, etc., and applications all handle these in their own special ways.
Use a C++ unit-testing framework; read this for details and examples.