Given-When-Then scenarios for webservice calls

We are using SpecFlow to automate our regression suite, and now we would like to take it to the next level by automating our webservices.
Using the Gherkin "Given-When-Then" style, how can I write scenarios for webservice calls?
For example: how do I write my Given-When-Then for the request below?
<ns:request>
<ns1:ServiceAuthenticationRequest>
<ns1:Password>?</ns1:Password>
<ns1:Station>?</ns1:Station>
<ns1:UserName>?</ns1:UserName>
</ns1:ServiceAuthenticationRequest>
…
</ns:request>

Unless you're writing webservice code itself, the behaviour of a generic webservice probably isn't that interesting.
What does your webservice do?
1) Can you give me an example?
2) Is there any situation in which your webservice would behave differently to that?
If you can answer those questions, you'll probably have a better idea of what to use as your first, and then second, scenario. The situations in which the webservice behaves differently are the contexts (givens); what you get as a result of those contexts will be the outcomes (thens). Calling the webservice will be your event. If your webservice has different capabilities, you'll probably have different events for each one.
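For the authentication request above, a first pass might look something like the sketch below. The step wording is invented, and AuthenticationServiceClient, its Authenticate method, and the IsAuthenticated field stand in for whatever your WSDL-generated proxy actually exposes.

// Feature file sketch (SpecFlow .feature):
//
//   Scenario: Valid credentials are authenticated
//     Given a registered user "jsmith" with password "secret" at station "001"
//     When the client calls the ServiceAuthentication operation
//     Then the response should indicate a successful authentication

using TechTalk.SpecFlow;
using Xunit;

[Binding]
public class ServiceAuthenticationSteps
{
    private ServiceAuthenticationRequest _request;     // generated from the WSDL
    private ServiceAuthenticationResponse _response;

    [Given(@"a registered user ""(.*)"" with password ""(.*)"" at station ""(.*)""")]
    public void GivenARegisteredUser(string userName, string password, string station)
    {
        _request = new ServiceAuthenticationRequest
        {
            UserName = userName,
            Password = password,
            Station = station
        };
    }

    [When(@"the client calls the ServiceAuthentication operation")]
    public void WhenTheClientCallsTheOperation()
    {
        var client = new AuthenticationServiceClient();  // hypothetical SOAP proxy
        _response = client.Authenticate(_request);
    }

    [Then(@"the response should indicate a successful authentication")]
    public void ThenTheResponseShouldIndicateSuccess()
    {
        Assert.True(_response.IsAuthenticated);          // field name assumed
    }
}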
As with anything related to BDD, have a conversation with someone and see if you can give them some examples of what you're thinking / ask them for some examples if they're more knowledgeable than you are.

Related

Testing third-party APIs

I would like to test a third-party API such as forecast.io, but I am not quite sure how to accomplish what I want to achieve.
I have read all over the internet that I should use mock objects. However, mock objects are not what I need, as I do not want to test my implementation of the parsing but rather the network call itself.
I want to test, for example, whether the URL is still working, whether my API key is still valid, and whether the response is still in the expected format so that GSON does not crash, or other things directly related to the network call itself.
Is there any good way to do this?
Many thanks
TLDR; I don't need mock objects!
I am going to try to answer this question as generally as it was asked. I understand the OP wants to avoid testing each type of response, but if you are relying on this data for continuity of users and/or revenue, you may consider creating an API caller so you can look at each part of the API response(s) as well as test the URL, the API key, etc. I would use an OO language, but I'm sure there are other ways.
In general:
create a process/library/software that can call the API
serialize/store the data you are expecting (GSON in the OP's case)
unit test it with xUnit or NUnit
automate it to run every x time period and generate an email with a success/change/fail message
This is no small project if you are counting on it - you need to tighten all the screws and bolts. Each step deserves its own question (and may already have one).
Below is a rough C# sketch of what such an API caller and its tests might look like.
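Everything named in the sketch is a placeholder: the endpoint URL, the way the API key is passed, and the JSON fields checked are stand-ins for whatever the real service defines.

using System;
using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;
using Xunit;

// Thin wrapper around the third-party API so tests exercise the real network call.
public class ForecastApiCaller
{
    private readonly HttpClient _http = new HttpClient();
    private readonly string _apiKey;

    public ForecastApiCaller(string apiKey) => _apiKey = apiKey;

    public async Task<HttpResponseMessage> GetRawAsync(double lat, double lon)
    {
        // Endpoint shape is illustrative only.
        var url = $"https://api.example-forecast.com/forecast/{_apiKey}/{lat},{lon}";
        return await _http.GetAsync(url);
    }
}

public class ForecastApiTests
{
    private readonly ForecastApiCaller _caller = new ForecastApiCaller("YOUR-API-KEY");

    [Fact]
    public async Task Url_and_api_key_still_work()
    {
        var response = await _caller.GetRawAsync(40.0, -70.0);
        Assert.True(response.IsSuccessStatusCode);   // a 401/403 would mean the key broke
    }

    [Fact]
    public async Task Response_still_has_the_fields_we_parse()
    {
        var response = await _caller.GetRawAsync(40.0, -70.0);
        using var doc = JsonDocument.Parse(await response.Content.ReadAsStringAsync());
        // The field below is a placeholder for whatever your deserializer expects.
        Assert.True(doc.RootElement.TryGetProperty("currently", out _));
    }
}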
How to automate this to run and email you is a completely different question, but hopefully this gives you an idea of how an object-oriented library can help you test every piece of data your own software is planning to use. Not every API host will let you know in a timely manner when/if changes are taking place, and you may even know before they do if something breaks.

Is there a way to detect from which source an API is being called?

Is there any method to identify from which source an API is called? Source here refers to an iOS application or a web application (a page or button click, Ajax calls, etc.).
Saving a flag like (?source=ios or ?source=webapp) when calling the API can be done, but I just wanted to know whether there is any better option to accomplish this.
I also feel this requirement is odd, because in general an app or a web application is used by n number of users, so it is difficult to monitor that many API calls.
Please give your valuable suggestions.
There is no perfect way to solve this. Designating a special flag won't solve your problem, because the consumer can put in whatever she wants and you cannot be sure if it is legit or not. The same holds true if you issue different API keys for different consumers - you never know if they decide to switch them up.
The only option that comes to my mind is to analyze the HTTP header and see what you can deduce from it. As you probably know a typical HTTP header looks something like this:
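(The exact fields vary by client; this is an illustrative request from a desktop browser.)

GET /api/v1/data HTTP/1.1
Host: api.example.com
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36
Accept: application/json
Accept-Language: en-US,en;q=0.9
Referer: https://www.example.com/dashboard
Connection: keep-alive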
You can try and see how the requests from all sources differ in your case and decide if you can reliably differentiate between them. If you have the luxury of developing the client (i.e. this is not a public API), you can set your custom User-Agent strings for different sources.
But keep in mind that Referrer is not mandatory and thus it is not very reliable, and the user agent can also be spoofed. So it is a solution that is better than nothing, but it's not 100% reliable.
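To illustrate, a server-side check might look roughly like this (ASP.NET Core is used here as an example; the X-Client-Source header and the "MyIosApp" token are things you would define yourself for clients you control):

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class DataController : ControllerBase
{
    [HttpGet]
    public IActionResult Get()
    {
        // All of these headers can be spoofed, so treat the result as a hint, not authentication.
        var userAgent = Request.Headers["User-Agent"].ToString();
        var referer   = Request.Headers["Referer"].ToString();         // may be empty
        var source    = Request.Headers["X-Client-Source"].ToString(); // custom header set by your own clients

        var guess = !string.IsNullOrEmpty(source) ? source
                  : userAgent.Contains("MyIosApp") ? "ios"
                  : !string.IsNullOrEmpty(referer) ? "web"
                  : "unknown";

        return Ok(new { source = guess });
    }
}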
Hope this helps, also here is a similar question. Good luck!

Design pattern for working with different versions of webservices?

I'm looking for a design pattern to solve an architectural issue I'm having.
I use some webservices that are kind of the same, but not exactly. For each new version of the webservices there might be a few more methods available, but for the most part they are basically the same.
I want to write an abstraction layer that works regardless of which version of the webservices I'm communicating with. Obviously, if I'm using a method that only exists in the newer versions of the webservices I will get some sort of error, but that is OK; I can handle those.
The reason I want this abstraction layer is to avoid a tight coupling between my application and the version of the webservices it is communicating with.
What are my options when it comes to design patterns for my abstraction layer? I see there is one pattern called Adapter, and another one called Bridge. Will any of those do in this situation? Any help is appreciated!
Edit - for clarity:
Sometimes I want my application to talk to webservices version 1, and other times I want it to use webservices version 2. It depends on who is using the client application.
The client application shouldn't really know or care which version it is talking to. The only exception is that if it uses a method that is only available in some of the versions, I need to handle that gracefully (tell the user that they have installed an old version of the webservices).
That would be a factory. You could even use the built-in ChannelFactory or come up with your own. Either way, a factory lets you change the implementation without changing the client's contract.
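Roughly, the idea looks like the sketch below. The interface and class names are invented; in WCF you could hand this job to ChannelFactory<T> instead of rolling your own.

// Common contract exposed to the client application.
public interface IMyService
{
    string GetData(int id);
    // Methods that only exist in newer versions can throw NotSupportedException
    // in older implementations, which the client handles gracefully.
    void NewV2Operation();
}

public class ServiceV1Client : IMyService
{
    public string GetData(int id) => /* call the version 1 endpoint */ "v1:" + id;
    public void NewV2Operation() => throw new NotSupportedException("Requires webservices v2.");
}

public class ServiceV2Client : IMyService
{
    public string GetData(int id) => /* call the version 2 endpoint */ "v2:" + id;
    public void NewV2Operation() { /* call the new v2 method */ }
}

public static class MyServiceFactory
{
    // The version could come from configuration or service discovery.
    public static IMyService Create(int version) =>
        version >= 2 ? new ServiceV2Client() : (IMyService)new ServiceV1Client();
}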
I suggest using the FACADE pattern. You may go through the following link to understand more about it:
http://javapapers.com/design-patterns/facade-design-pattern/
A facade provides an abstraction and a seamless layer for clients to interact with. It hides all the internal complexities; in your case, the client needs to find the correct version of the web service it can interact with. Let's assume you have different versions of the webservice, and the input JSON/XML structures have changed between versions. The facade will accept the client call, validate the input against the different webservice versions, and then call the correct version. If you don't have a facade layer, the client has to struggle to find the correct webservice version and may have to send multiple calls before reaching the correct one.
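A sketch of what that facade could look like (the proxy classes and the JSON-vs-XML detection rule are invented for illustration):

// Two generated proxies, one per webservice version (names are illustrative).
public class ServiceV1Proxy { public string GetData(string xmlRequest) => /* call v1 */ "v1 result"; }
public class ServiceV2Proxy { public string GetData(string jsonRequest) => /* call v2 */ "v2 result"; }

// The facade is the only thing the client talks to.
public class WebServiceFacade
{
    private readonly ServiceV1Proxy _v1 = new ServiceV1Proxy();
    private readonly ServiceV2Proxy _v2 = new ServiceV2Proxy();

    public string GetData(string request)
    {
        // Stand-in detection rule: JSON payloads go to v2, XML payloads to v1.
        return request.TrimStart().StartsWith("{")
            ? _v2.GetData(request)
            : _v1.GetData(request);
    }
}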

web services api: to wrap or not to wrap?

When providing a web services API (well, let's say SOAP), do you provide a library wrapper along with it to make it "easier" for people to use? Or do you just package up a WSDL and documentation for it and let people figure out what to do with it?
What are people doing usually? I've seen a bunch of examples where the wrapper is provided, but it has always seemed counter-productive to me.
The WSDL is easily discoverable (all functions and types are declared in it), so there is usually no need to offer any package with it beyond minimal documentation (apply an XSL to the WSDL and it's usually enough :) ). My theory about the appearance of libraries/wrappers is that it is directly related to security measures and the required authentication and hashes (usually: concatenate some fields with a secret and hash the result), about which the provider simply doesn't want to answer every single question anymore.
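For example, that kind of signing scheme often looks roughly like this (the field names, their order, and the hash algorithm are stand-ins; every provider defines its own):

using System;
using System.Security.Cryptography;
using System.Text;

public static class RequestSigner
{
    // Concatenate the request fields with the shared secret and hash the result,
    // as many providers require; field order and algorithm are provider-specific.
    public static string Sign(string apiKey, string timestamp, string payload, string secret)
    {
        var toSign = apiKey + timestamp + payload + secret;
        using var sha = SHA256.Create();
        var hash = sha.ComputeHash(Encoding.UTF8.GetBytes(toSign));
        return BitConverter.ToString(hash).Replace("-", "");   // hex digest sent along with the request
    }
}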
Audience matters, I think: if you want your run-of-the-mill hobby coder to be able to use your service, providing a package can get you that many more users. If you're more into business-to-business services, the webservice usually has to be integrated into some larger package and most libraries would be futile.
That being said, of the webservices I came across, about 60% of the libraries provided were hopeless spaghetti code fit for the bin, 30% were not code I'd use but could clear up some questions not answered by the documentation, and only about 10% were fit enough to integrate into a project (or the project was small and/or bad enough to be no worse for it).
How are you going to support multiple web-service stacks - JAX-WS, AXIS2, CXF, etc.? My choice: WSDL/XSD. In practice I have had a service built with JAX-WS and a client built with AXIS2. And I don't want to build the client which you are going to use; I don't even know your preferred web-service stack or your JVM version limitations. For example, I might be calling the web service from Java 1.4, where there are no annotations and it is not possible to use a client library built with annotations for Java 1.5. So publishing the WSDL is the right way to let consumers build a ws-client, instead of providing a generated client library.

Architecture/design advice for a test program

I am trying to build a test program in C++ to automate testing for a specific application. The testing will involve sending requests, which have a field 'CommandType' and some other fields, to a server.
The CommandType can be 'NEW', 'CHANGE' or 'DELETE'.
The tests can be
Send a bunch of random requests with no pattern
Send 100 'NEW' requests, then a huge number of 'CHANGE' requests, followed by 200 'DELETE' requests
Send 'DELETE' requests followed by 'CHANGE' requests
... and so on
How can I design my software (what kind of modules or layers) so that adding any new type of test case is easy and modular?
EDIT: To be more specific, this test will be to only test one specific application that gets requests of the type described above and handles them. This will be a client application that will send the requests to the server.
I would not create your own framework. There are many already written that follow a common pattern and can likely accommodate your needs elegantly.
The xUnit framework in all incarnations I have seen allows you to add new test cases without having to edit the code that runs the tests. For example, CppUnit provides a macro that when added to a test case will auto-register the test case with a global registry (through static initialization I assume). This allows you to add new test cases without cracking open and editing the thing that runs them.
And don't let the "unit" in xUnit and CppUnit make you think it is inappropriate. I've used the xUnit framework for all different kinds of testing.
I would separate out each individual test into its own procedure or, if it requires code beyond a function or two, its own source file. Then in my main routine I'd do something like:
// Each test has its own function, typically in its own source file.
void run_test_1();
void run_test_2();
//...
void run_test_N();   // placeholder for the Nth test

int main()
{
    run_test_1();
    run_test_2();
    //...
    run_test_N();
    return 0;
}
Alternatively, I'd recommend leveraging the Boost Test Library and following their conventions.
I'm assuming you're not talking about creating unit tests.
IMHO, your question is too vague to provide useful answers. Is this to test a specific application, or are you trying to make something generic enough to test as many different applications as possible? Where do these applications live? Are they client-server apps, web apps, etc.?
If it's more than one application that you want your tool to test, you'll need an architecture that creates a protocol between the testing tool and the applications, so that you can convert the instructions your tool and its consumers understand into instructions that the application being tested can understand. I've done similar things in the past, but I've only ever had to worry about maybe 5 different "applications", so it was a pretty simple matter of summing up all the unique functionality of the apps and then creating an interface that supports them all.
I wouldn't presume that NEW, CHANGE, and DELETE will be your only command types either. A lot of testing involves data cleanup, test reporting, etc., and applications all handle these in their own special ways.
Use a C++ unit-testing framework; read this for details and examples.