I am an experienced C++ developer, but I have not spent much time on application design. Question:
I am working on an existing application that consists of the following layers:
HTML5 front-end
Database that communicates with the front-end via stored procedures
Back-end C++ application running on Linux servers that communicates with the database via a job daemon
We want to introduce more automated testing into the application. We want full end-to-end tests, as well as individual tests at each level, i.e. tests of the front-end, the SQL stored procedures, and the back-end.
Is there some kind of unifying framework in which I can set up tests across all layers? And, for example, a dashboard that provides an overview of the test results?
Thanks in advance!
Related
I'm currently designing software that should run in the cloud (Azure, Amazon, Google). The software performs several time- and resource-consuming tasks. Because of this, to reduce costs and leverage some existing software we have, we're considering developing the server side of the application in C++.
So far, our architecture calls for writing unmanaged libraries with C API entry points, which in turn use the C++ code. Then we'll write a C# ASP.NET Core application with WebApi controllers that simply use P/Invoke to call the unmanaged libraries. The return values are JSON strings. Finally, the client apps are Android and iOS, plus an SPA for web access. We're leaning towards Azure with Azure SQL as well, using Web Applications.
However, we are now wondering whether the ASP.NET Core app makes sense at all, since it only passes control through to the unmanaged libraries, which do all the heavy lifting. I'm looking for a way to make an Azure Web Application invoke a C++ program that, in turn, returns a JSON string depending on its parameters. That would save us from having to write the ASP.NET Core app.
How could I achieve this with Azure, or with any other cloud provider?
You should take a look at Azure Functions. You can call Windows native console apps inside your Azure Function and then return their output.
https://azure.microsoft.com/en-us/resources/samples/functions-dotnet-migrating-console-apps/
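Whatever the hosting runtime, the core of that pattern is just: spawn the console binary, capture its stdout, and hand that back as the HTTP response. A minimal sketch of that step in Java (the binary path and its argument are placeholders for your own C++ program):

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class NativeJsonRunner {

    /**
     * Runs a native console executable and returns whatever it wrote to stdout,
     * which the C++ program is assumed to emit as a JSON string.
     */
    public static String runNativeJob(String exePath, List<String> args)
            throws IOException, InterruptedException {
        List<String> command = new ArrayList<>();
        command.add(exePath);
        command.addAll(args);

        ProcessBuilder pb = new ProcessBuilder(command);
        pb.redirectErrorStream(true);   // merge stderr into stdout for simpler capture
        Process process = pb.start();

        String output = new String(process.getInputStream().readAllBytes(),
                                   StandardCharsets.UTF_8);
        int exitCode = process.waitFor();
        if (exitCode != 0) {
            throw new IOException("Native job failed (exit " + exitCode + "): " + output);
        }
        return output;                  // the JSON produced by the C++ program
    }

    public static void main(String[] args) throws Exception {
        // "compute.exe" and its flag are placeholders for your own binary.
        System.out.println(runNativeJob("compute.exe", List.of("--input=42")));
    }
}
```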
We have had some success using Selenium WebDriver along with JBehave. I just wanted to know what others are using for unit testing the web tier of a web application.
The reason I am asking is that writing WebDriver test cases along with JBehave makes the unit testing very complicated, and in many cases it takes more time than writing the actual JSP page.
A few ideas for unit testing a web tier:
Use MVC for doing web development. It is pretty easy to unit test controllers assuming you extract all your dependencies.
Make liberal use of interfaces to extract the dependencies in your JSP pages. For example, does your JSP make a database call? Consider introducing a Repository interface and then writing implementations like MySQLRepositoryImpl.java. This way you can also "mock" the interface and create a fake database that runs fast in your unit tests (see the sketch after this list).
For very difficult problems where you absolutely need a real dependency, you can use embedded versions of things like web servers (Grizzly, Jetty) or even databases (H2, SQLite).
Make sure you write your code such that each function does one thing and one thing only. This will take some refactoring, but it makes testing so much easier.
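As a rough illustration of the second point, here is a hedged sketch of what that extraction can look like with JUnit; the interface, class, and method names are invented for the example:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import java.util.HashMap;
import java.util.Map;
import org.junit.jupiter.api.Test;

// The abstraction the page/controller depends on, instead of a direct database call.
interface UserRepository {
    String findDisplayName(String userId);
}

// The production implementation (e.g. MySQLUserRepositoryImpl backed by JDBC)
// is irrelevant to the unit test and therefore omitted here.

// Hand-rolled fake used only in tests: fast, deterministic, no database required.
class InMemoryUserRepository implements UserRepository {
    private final Map<String, String> users = new HashMap<>();

    void add(String userId, String displayName) {
        users.put(userId, displayName);
    }

    @Override
    public String findDisplayName(String userId) {
        return users.get(userId);
    }
}

// A controller in the MVC sense: the dependency is passed in, so it is trivially testable.
class ProfileController {
    private final UserRepository repository;

    ProfileController(UserRepository repository) {
        this.repository = repository;
    }

    String greetingFor(String userId) {
        String name = repository.findDisplayName(userId);
        return name == null ? "Hello, guest" : "Hello, " + name;
    }
}

class ProfileControllerTest {
    @Test
    void greetsKnownUserByName() {
        InMemoryUserRepository repository = new InMemoryUserRepository();
        repository.add("42", "Ada");

        ProfileController controller = new ProfileController(repository);

        assertEquals("Hello, Ada", controller.greetingFor("42"));
    }
}
```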
We have a considerable code base with relatively high test coverage for pages/forms, all via vanilla POST/GET.
Now we find ourselves moving more into the 'Ajaxy' space, and it's no longer really possible to test complete scenarios like user registration or item creation with plain GET/POST, as they involve lots of JavaScript/Ajax calls.
While things like that are the most obvious candidates for testing with Selenium, I wonder whether we should adopt Selenium testing across the board and drop the old-school POST/GET tests altogether?
The advantages of adopting Selenium seem almost too good: the ability to run pretty much the same GET/POST tests, but across a range of browsers.
Or am I missing something in my pursuit of cool and trendy stuff by ditching the old, proven POST/GET tests?
There are advantages and disadvantages to both approaches, so my recommendation would be to use both.
Selenium launches an actual browser and simulates a user interacting with your web application, which can be great if you're testing Ajax features. It can verify that elements are visible and interact with them just as a user would. Another killer feature is the ability to take screenshots through Selenium, which can be incredibly useful when investigating failures.
Unfortunately launching a browser and navigating to a specific page/state in your application can be slow, and you'd need a lot of hardware if you wanted to test concurrent users (load testing) with Selenium.
If you just want to test that your server responds with an HTTP 200 for certain actions, that the response contains certain values, or to load test your application, then basic POST/GET tests are more suitable.
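At the HTTP level such a check can stay very small. A hedged sketch using JUnit and Java's built-in HttpClient (the URL and expected text are placeholders):

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

class RegistrationPageSmokeTest {

    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void registrationPageRespondsWith200AndExpectedContent() throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/register"))   // placeholder URL
                .GET()
                .build();

        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());

        assertEquals(200, response.statusCode());
        assertTrue(response.body().contains("Create your account")); // placeholder text
    }
}
```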
If you do decide to go with a pure Selenium approach to testing, I would recommend looking into Selenium Grid or a cloud-based service, as running lots of tests through Selenium can be quite time-consuming.
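And for the browser side, a rough sketch of the kind of Selenium WebDriver check described above, including a screenshot when something goes wrong; the URL, element ids, and page flow are invented for illustration:

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import org.openqa.selenium.By;
import org.openqa.selenium.OutputType;
import org.openqa.selenium.TakesScreenshot;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.chrome.ChromeDriver;

public class RegistrationBrowserTest {
    public static void main(String[] args) throws Exception {
        WebDriver driver = new ChromeDriver();
        try {
            // Drive the page exactly like a user would; ids and messages are placeholders.
            driver.get("http://localhost:8080/register");
            driver.findElement(By.id("email")).sendKeys("user@example.com");
            driver.findElement(By.id("password")).sendKeys("s3cret");
            driver.findElement(By.id("submit")).click();

            WebElement confirmation = driver.findElement(By.id("confirmation"));
            if (!confirmation.isDisplayed()) {
                // Screenshots are invaluable when investigating a browser-level failure.
                File shot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
                Files.copy(shot.toPath(), Path.of("registration-failure.png"));
                throw new AssertionError("Confirmation message was not shown");
            }
        } finally {
            driver.quit();
        }
    }
}
```

Pointed at a Grid or cloud provider, essentially the same test can be run across different browsers by swapping the driver.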
I think you should definitely use Selenium and POST/GET (unit) tests together, because the aim of a unit test is to test the functionality of a specific section of code, whereas Selenium is doing integration testing on your web app.
My test team currently uses QTP to test through the GUI, but like any automated test suite that relies on the interface, it is more fragile than automating tests that directly interact with the code. I am attempting to learn more about Siebel and Siebel Tools to better understand how we might be able to test below the GUI, but would like to hear from someone with more expertise to find out if this is feasible.
It really depends on what you want to test, I guess.
I'm using the Siebel Java Data Bean (JDB) a lot to access Siebel. You basically connect to the Siebel Server and execute code very similar to eScript. That means you can create records, invoke workflows and so on; basically everything you could do in eScript. That might be helpful. This approach applies all the usual validations, runtime events, and so on.
As soon as some of your scripts in BusComps or in Business Services or elsewhere access data that needs a UI context (TheApplication().ActiveBusObject() or TheApplication().ActiveApplet() for instance) this approach will fail, though, because the Siebel Data Bean doesn't have a UI context.
Another drawback is that you have to connect to a Siebel Server. That means you have to deploy your SRF to the development server, and only then can you run your tests. It would certainly be much better if the JDB could connect to your local instance, but as far as I know this is not possible. Have a look at the Object Interfaces guide on the Bookshelf, though; there are different ways to connect to Siebel, not just Java.
Let me know if you have any questions about this. I could maybe post some sample code of how to connect to the Siebel Server etc.
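For what it's worth, here is a rough sketch of what connecting and querying through the JDB can look like; the connect string, credentials, and object manager are placeholders, and the business object/component names are just the standard Account example:

```java
import com.siebel.data.SiebelBusComp;
import com.siebel.data.SiebelBusObject;
import com.siebel.data.SiebelDataBean;
import com.siebel.data.SiebelException;

public class SiebelConnectExample {
    public static void main(String[] args) throws SiebelException {
        SiebelDataBean dataBean = new SiebelDataBean();
        // Connect string, credentials and object manager are placeholders;
        // substitute the values for your own enterprise and server.
        dataBean.login("siebel://siebelhost:2321/SBA_80/SCCObjMgr_enu",
                       "SADMIN", "password", "enu");
        try {
            SiebelBusObject busObject = dataBean.getBusObject("Account");
            SiebelBusComp accounts = busObject.getBusComp("Account");

            // Query for accounts starting with "Acme", much like eScript would.
            accounts.activateField("Name");
            accounts.clearToQuery();
            accounts.setSearchSpec("Name", "Acme*");
            accounts.executeQuery(true);

            if (accounts.firstRecord()) {
                System.out.println("Found account: " + accounts.getFieldValue("Name"));
            }
        } finally {
            dataBean.logoff();
        }
    }
}
```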
Right now QTP is the best way to go - it's still a PITA, but there really is nothing else out there to test the full Siebel Web Client. This is because the Siebel UI is delivered through Internet Explorer with proprietary ActiveX and Java controls, so you really need a bespoke pack to test it.
Because the UI is a re-interpretation, not just an abstraction, of the Business Object layer (that one accesses with Data Beans / COM etc.) it is not useful to test at that layer except in a small number of unit test cases (such as when you have complex scripting in Siebel).
If you change the end of the URL for the client (log in to Siebel first, of course) to something like "SWEcmd=GotoPageTab&SWEScreen=Accounts+Screen&SWESetMarkup=XML", you'll see lots of XML markup which is then consumed by the proprietary controls - you might think this would be a cool way to build an automation tool, but it is not (I've tried).
If you really want to use a proper UI testing tool, like Selenium, you'll have to test the HTML Siebel Web Client - this is a 'skinny' 'Standard Interactivity' UI that doesn't use ActiveX or Java ... it has a lot fewer cool UI controls, but it works essentially the same as the full Siebel Web Client (aka the High Interactivity Siebel Web Client, or HI for short), and it works in Firefox!
Have you looked at Oracle Application Testing Suite? It comes with pre-built accelerators for testing Siebel, which makes it all the easier to test Siebel.
Since Siebel version 7.7, QTP has used Siebel Test Automation (STA), which needs to be purchased separately from Oracle. A quick search found this explanation of how to set up testing with STA (it is written from a QTP perspective but holds for all STA usage).
If you really want to avoid GUI testing, you can hunt down the API documentation and try to use STA directly, but I would not recommend it; QTP has already done all the heavy lifting for you, so why reproduce that effort (especially since your company already owns QTP licenses)?
Should we unit test the web service, or should we really be looking to unit test the code that the web service invokes for us and leave the web service alone, at least until integration testing, etc.?
EDIT: Further clarification / thought
My thought is that testing the web service is really integration testing, not unit testing. I ask because our web service, at this point (still under development), is coded in such a way that there is no way to unit test the code it is invoking. So I am wondering whether it is worthwhile/smart to refactor it now in order to be able to unit test the code free of the web service. I would like to know the general consensus on whether it's that important to separate the two, or whether it's really OK to unit test the web service and call it good/wise.
If I separate them, I would look to test both, but I am just not sure whether the separation is worth it. My hunch is that I should.
Unit testing the code that the web service is invoking is definitely a good idea, since it ensures the "inside" of your code is stable (and well designed). However, it's also a good idea to test the web service calls themselves, especially if a few of them are called in succession to accomplish a certain task. This ensures that the web services you've provided are usable, and that they work properly when called along with other web service calls.
(Not sure if you're writing these tests before or after writing your code, but you should really consider writing your web service tests before implementing the actual calls so that you ensure that they are usable in advance of writing the code behind them.)
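To make the "called in succession" point concrete, here is a hedged sketch of a scenario-style test with JUnit and Java's HttpClient; the endpoints, payload, and the assumption that the service returns an absolute Location header are all invented for the example:

```java
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertTrue;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import org.junit.jupiter.api.Test;

class OrderServiceScenarioTest {

    private static final String BASE = "http://localhost:8080/api"; // placeholder base URL
    private final HttpClient client = HttpClient.newHttpClient();

    @Test
    void createdOrderCanBeFetchedAgain() throws Exception {
        // Step 1: create an order through the service.
        HttpRequest create = HttpRequest.newBuilder()
                .uri(URI.create(BASE + "/orders"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"item\":\"widget\",\"qty\":2}"))
                .build();
        HttpResponse<String> created = client.send(create, HttpResponse.BodyHandlers.ofString());
        assertEquals(201, created.statusCode());

        // Step 2: the Location returned by the first call feeds the second call.
        String location = created.headers().firstValue("Location").orElseThrow();
        HttpResponse<String> fetched = client.send(
                HttpRequest.newBuilder().uri(URI.create(location)).GET().build(),
                HttpResponse.BodyHandlers.ofString());

        assertEquals(200, fetched.statusCode());
        assertTrue(fetched.body().contains("widget"));
    }
}
```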
Why not do both? You can unit test the web service code, as well as unit test it from the point of view of a client of the web service.
In my view, the web service is merely an encapsulation of a method on a central business-layer object; in other words, the web method is just a "gate" for accessing methods deeper in the model.
With that said, I do both:
Inside the server, I create a WinForms app that load tests the business-layer method.
Outside the server (namely a machine outside the LAN where the web app "lives"), I create a tester (WinForms or web) that consumes the web service, doing the load testing that way.
That way I can evaluate the performance of my solution both including and excluding the "web effect" (i.e. the time for the data to travel to the web service, the web service object creation, etc.).
All of the above is of course IMHO. At least, it has worked well for me!
Haj.-
We do both.
We unit test the various code elements.
Plus, we use the unit test framework to execute tests against the web service as a whole. This is rather complex because we have to create (and load) a database, start a server, and then execute requests against that server.
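That orchestration usually lives in the test fixture's setup and teardown. A hedged sketch of such a harness with JUnit and an in-memory H2 database; the schema, seed data, and the startApplicationServer/stopApplicationServer hooks are placeholders for however your own service is bootstrapped:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;

abstract class WebServiceTestHarness {

    protected static Connection database;

    @BeforeAll
    static void startSystemUnderTest() throws Exception {
        // An in-memory H2 database stands in for the real one; the schema is a toy example.
        database = DriverManager.getConnection("jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1");
        try (Statement ddl = database.createStatement()) {
            ddl.execute("CREATE TABLE accounts(id INT PRIMARY KEY, name VARCHAR(100))");
            ddl.execute("INSERT INTO accounts VALUES (1, 'Acme')");
        }
        // Hypothetical hook: start the service pointing at the test database
        // (embedded Jetty, Spring Boot, a spawned process, ...). Tests then issue
        // HTTP requests against it exactly as a client would.
        startApplicationServer("jdbc:h2:mem:testdb");
    }

    @AfterAll
    static void stopSystemUnderTest() throws Exception {
        stopApplicationServer();
        database.close();
    }

    // Placeholder hooks to be filled in for the application under test.
    static void startApplicationServer(String jdbcUrl) { /* ... */ }
    static void stopApplicationServer() { /* ... */ }
}
```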
Testing the web service API is easy (it's got an API) and valuable. It's not a unit test, though - it's an "integration", "sub-system", or "system" test (depending on who you ask).
There's no need to delay the testing until some magical period called "integration testing", though; just get some simple tests in place now and reap the benefits early.
I like the idea of writing unit tests which call your web service through one of its public interfaces. For instance, a given WCF web service may expose HTTP, TCP, and "web" bindings. Such a unit test proves that the web service can be called through a binding.
Integration testing would involve testing all of the bindings of the service, testing with particular client scenarios, and with particular client tools. For instance, it would be important to show that a Java client can be created with IBM's Rational Web Developer that can access the service when using WS-Security.
If you can, try consuming your web service using some of the development tools your customers will use (Delphi, C#, VB.Net, ColdFusion, hand crafted XML, etc...). Within reason, of course.
1) Different tools may have problems consuming your web service. Better for you to run into this before your customers do.
2) If a customer runs into a problem, you can easily prove that your web service is working as expected. In the past year or so, this has stopped finger-pointing in its tracks at least a dozen times.
The worst was the developer in a different time zone who was hand-crafting the XML for SOAP calls and parsing the responses. Every time he ran into a problem, he would insist that it was on our end and demand (seriously) that we prove otherwise. I made a dead-simple Delphi app to consume the web service, demonstrate that the methods worked as expected, and even display the XML for each request and response.
Re: your updated question.
Integration testing and unit testing are only superficially similar, so yes, they should be done, and thought about, separately.
Refactoring existing code to make it testable can be risky. It's up to you to decide if the benefits outweigh the time and effort it will take. In my case, I'd certainly try, even if you do it a little bit at a time.
On the bright side, a web service has a defined interface, so you don't really have to change anything to add integration testing. Go crazy. If you can, try to do this before you roll the web service out to customers. There's a good chance that using the web service will lead to changes in the interface, and you don't want that to mess up customers too much.