As the title of this post says, I want to test a BizTalk orchestration. In the past I did something like that using BizUnit, but I was only able to set up the input, let the BizTalk solution run, and test the output. BizUnit helped me by automating this process.
So the question is:
How can I test every step in an orchestration (for example, the output of a Transform shape)? How can I read the messages in the MessageBox while I'm testing?
Does anyone know a great tutorial?
I agree with Johns-305 and would suggest that it's probably not necessary to test every step of the orchestration.
Still, if you want to, you could put temporary Send shapes after every transformation and send the messages out to a FILE location so that you can see what messages are being generated.
Also, you can make use of the Orchestration Debugger if you want to understand the flow of the orchestration.
You won't be able to see messages that are created within the orchestration but are not published to the MessageBox.
You can debug the orchestration in the BizTalk Administration Console: just stop your orchestration, then in the BizTalk group attach the debugger, and then resume it in debug mode.
The first question you need to ask yourself is: do I really need to do this? Hint: the answer is no.
The unit test fad has long passed and was never a useful thing with BizTalk apps.
Instead, focus on developing, with the business owners, a set of test cases that validate the entire app. You then use these to test everything, not just Orchestrations.
During DEV, 97% of the time, testing in Visual Studio is all you need for Maps.
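If you do want Map tests you can run outside Visual Studio, BizTalk also has a built-in map unit-testing hook: enable "Unit Testing" in the BizTalk project settings and the generated map classes derive from TestableMapBase, which exposes a TestMap method. A rough sketch, if I remember the Microsoft.BizTalk.TestTools API correctly (the map class name and file paths here are hypothetical):

using Microsoft.BizTalk.TestTools.Schema;
using NUnit.Framework;

[TestFixture]
public class OrderMapTests
{
    [Test]
    public void OrderToInvoiceMap_ProducesValidOutput()
    {
        var map = new MyProject.Maps.OrderToInvoice(); // hypothetical map class
        map.ValidateInput = true;   // validate instances against the schemas
        map.ValidateOutput = true;

        // Runs the map over a sample input file and writes the result out.
        map.TestMap(@"C:\TestData\Order_Input.xml", InputInstanceType.Xml,
                    @"C:\TestData\Invoice_Output.xml", OutputInstanceType.XML);

        Assert.IsTrue(System.IO.File.Exists(@"C:\TestData\Invoice_Output.xml"));
    }
}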
Unit tests are strongly connected to the application's existing code, written by developers. But what about automated UI and API tests (integration tests)? Does anybody think it's acceptable to reuse code of the application in a separate automation solution?
The answer would be no. UI tests follow the UI: go to that page, input that value in that textbox, press that button, and I should see this text. You don't need anything code-related for this. All of this should be done against some acceptance criteria, so you should already know what to expect without looking at any code.
For the API integration tests you would call the endpoints with some payloads and then check the results. You don't need references to any code for this either. The API should be documented and explain very well what endpoints are available, what the payloads look like, and what you can expect to get back.
I am not sure why you'd think about reusing application code inside an automation project.
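To illustrate why no application code is needed: an API test works entirely from the documented contract. A minimal sketch with NUnit and HttpClient (the URL, payload, and response field are hypothetical):

using System.Net;
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class OrdersApiTests
{
    private static readonly HttpClient Client = new HttpClient();

    [Test]
    public async Task CreateOrder_ReturnsCreatedWithOrderId()
    {
        // Payload shape comes from the API documentation, not from app code.
        var payload = new StringContent("{\"productId\": 42, \"quantity\": 1}",
                                        Encoding.UTF8, "application/json");

        var response = await Client.PostAsync("https://api.example.com/orders", payload);

        Assert.AreEqual(HttpStatusCode.Created, response.StatusCode);
        var body = await response.Content.ReadAsStringAsync();
        StringAssert.Contains("orderId", body); // field named in the API docs
    }
}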
OK, so after the clarifications: you're talking about reusing models only, not actual code. This is not a bad idea; it can actually help, as long as these NuGet packages do not bring in any other dependencies.
Code reusability is a great concept, but it's very hard to get right in practice. Models typically come with annotations which require other packages, which are of course not needed in an automation project. So if you can get NuGet packages without extra dependencies (literally data models only and nothing else), then that does work. Anything more than that will create issues, so I'd push back on it.
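To make the distinction concrete, here's the kind of model that is safe to share (type and properties hypothetical):

// Safe to share: plain data, no dependencies beyond the base library.
public class CustomerDto
{
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}

The moment a model carries ORM or mapping attributes from a third-party package, referencing it drags that package (and its dependency tree) into the automation project too, which is exactly the situation to avoid.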
So I have this huge SF2 project, which is luckily written pretty 'OK'. Services are there, background jobs are there, no god classes, it's testable. But I never got any further than just unit-testing stuff, so the question is basically: where do I start taking this further?
The project consists of SF2 and all the yada yada: Doctrine2, Beanstalkd, Gaufrette, some other abstractions. It's fine.
The one problem it has is some glue code in controllers here and there, but I don't see that as a big problem, since functional tests are going to be the main focus.
The infrastructure is set up pretty OK as well; it's covered by Docker, so CI is going to work out well too.
But it has basically gotten too large to test manually any longer, so I want full functional coverage on short notice and will let the unit testing grow over time. (I'm going to dive into the isolated objects as they need adjustments in the future and build tests for them in due course.)
So I've got the unit testing covered; that's going to need to grow over time. But I want to make some steps towards functional testing, to get some quick gains in the testing department. YESTERDAY.
My plan as of now is to use Behat and Mink for this. The tests are going to be huge, so I might as well have them written as stories instead of code. Behat also seems to have an extension for Symfony's BrowserKit.
There are plenty of services and external things happening, but they are all isolated behind services, so I guess I can mock them through the test environment's service config.
Please give me some advice here if there is a better way.
I'm also going to need fixtures. I'm using Alice for generating some fixtures so far; it seems nice together with the Doctrine extension, and I don't think there are "better" options on this one.
How should I test external services? I'm mocking things like a Facebook service, but I also want to really test it against some test account. Is this advisable? I know this goes beyond the test's scope; according to the purists, the service has to be mocked and tested in every way possible to "ensure it's working". But at the end of the day it still breaks because of some API key or other problem in the connection, which I really can't afford. So please advise here as well.
All your suggestions to use other tools are of course welcome, especially if there is a good book that covers my story.
I'm glad you brought up Behat; I was going to suggest the same thing.
I would consider starting with your most business-critical pieces: unit test the extremely important business logic and use Behat on the rest.
For the most part, I would create stubs for your services that have expected output for expected input. That way you can create failures based on specific input. You can override your services in your test config.
Another approach would be to do very thin functional testing, where you make GET requests to all of your endpoints and look for 200s. This is a very quick way to make sure that your pages are at least loading. From there, you can start writing tests for your POST endpoints and expanding your suite further with more detailed test cases.
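As a sketch of that thin smoke-test idea (shown with NUnit and HttpClient to match this post's other examples, though the same shape works in any stack; the paths are hypothetical):

using System.Net;
using System.Net.Http;
using System.Threading.Tasks;
using NUnit.Framework;

[TestFixture]
public class SmokeTests
{
    private static readonly HttpClient Client = new HttpClient();

    [TestCase("/")]
    [TestCase("/products")]
    [TestCase("/account/login")]
    public async Task Get_ReturnsOk(string path)
    {
        var response = await Client.GetAsync("http://localhost:8000" + path);
        Assert.AreEqual(HttpStatusCode.OK, response.StatusCode,
            "Page failed to load: " + path);
    }
}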
I have been writing a lot of unit tests for the code I write. I've just started to work on a web project and I have read that WatiN is a good test framework for the web.
However, I'm not exactly sure what I should be testing. Since most of the web pages I'm working on are dynamic user generated reports, do I just check to see if a specific phrase is on the page?
Besides just checking if text exists on a page, what else should I be testing for?
First think of what business cases you’re trying to validate. Ashley’s thoughts are a good starting point.
You mentioned most pages are dynamically generated user reports. I've done testing around these sorts of things and always start by figuring out what sort of baseline dataset I need to create and load. That way I know exactly which records I should get back in the reports if everything's working properly. From there I'll write automation tests to check that I get the right number of records, the right starting and ending records, records containing the right data, etc.
If the reports are dynamic then I’ll also check whether filtering works properly, that sorting behaves as expected, etc.
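Concretely, one of those report checks might look like this with WatiN and NUnit (the URL, element ID, and baseline numbers are all hypothetical):

using NUnit.Framework;
using WatiN.Core;

[TestFixture]
public class SalesReportTests
{
    [Test, RequiresSTA] // WatiN drives IE over COM and needs an STA thread
    public void SalesReport_ShowsBaselineRecords()
    {
        using (var browser = new IE("http://localhost/reports/sales?year=2010"))
        {
            // Right number of records for the known baseline dataset.
            var report = browser.Table(Find.ById("reportTable"));
            Assert.AreEqual(25, report.TableRows.Count,
                "The baseline dataset should produce 25 report rows.");

            // Right starting and ending records.
            Assert.IsTrue(browser.ContainsText("Order #1001"));
            Assert.IsTrue(browser.ContainsText("Order #1025"));
        }
    }
}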
Something to keep in mind is to keep a close eye on the value of these tests. It may be that automating a few tests around the main business use cases is good enough for you; handle the rest manually via exploratory testing.
You essentially want to be testing as if you are a user entering your site for the first time. You want to make sure that every aspect of your page is running exactly the way you want it to. For example, if there is a signup/login screen, automate those flows to ensure that they are both working properly. Automate navigation through the various pages, using assertions just to ensure each page loaded. If there are generated reports, automate all the generations and check the text in them to ensure it is what the "user" (you) specified. If you have any logic saying, for example, that when you check this box all other boxes should be checked as well, automate that too. There are many assertions that can be applied; I am not sure what unit-testing software you are using, but most have a very rich assortment.
I searched for a unit testing tool and found NUnit suitable, and I think it's good. But my problem is that this tool only shows the test method's result (pass or fail), and I need to show not only pass or fail but also the output. How can I show the output using NUnit? If there's another unit testing tool that can do it, that's also good. If it's not supported, please suggest how I can solve this.
All ideas are welcome.
Piping the output of System.Console will work for NUnit, but it's not your best option.
For passing tests, you shouldn't need to review the console output to verify that the tests have passed. If you are, you're doing it wrong. Tests should be automated and repeatable without human intervention. Verifying by hand doesn't scale and creates false positives.
On the other hand, having console output for failing tests is helpful, but it will only provide information that could otherwise be inferred from attaching a debugger. That's a lot of extra effort to add console logging to your application for little benefit.
Instead, make sure that your error messages are meaningful. When writing your tests, make sure your assertions are explicit. Always try to use the assertion that most closely fits what you are asserting, and provide a failure message that explains why the test is important.
For example:
// very bad
Assert.IsTrue( collection.Count == 23 );
The above assertion doesn't really provide much help when the test fails. Because NUnit formats the assertion output for you, the failure will only state something like "expecting <True> but was <False>".
A more appropriate assert will provide more meaningful test failures.
// much better
Assert.AreEqual(23, collection.Count,
    "There should be 23 items by default.");
This provides a much more meaningful failure message: "Expecting <23> but was <0>: There should be 23 items by default."
On the bottom bar of NUnit you can click Text Output and that shows all debug and console output.
It depends on where you want to output the data from a test.
I believe you mean something other than the File, Log, Console, and Debug outputs.
As an alternative, NUnit lets you write a message into the regular test output stream; just use the following utility methods:
For a successful test:
Assert.Pass(string message, params object[] args);
For a failed test:
Assert.Fail(string message, params object[] args);
See the NUnit documentation for more details.
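For example, a passing test can push a computed value into that output stream (the importer object here is hypothetical):

[Test]
public void ImportFile_ProcessesAllRows()
{
    int rows = importer.Run("sample.csv"); // hypothetical code under test
    Assert.Pass("Import succeeded, {0} rows processed.", rows);
}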
This post is loooooong after the question was asked, but I wanted to chime in. Yes, you can accomplish a lot in unit/integration tests and probably do most of what you need. So I agree: do as much as you can in your test methods.
But sometimes, providing some output is useful. Especially if you need to further verify the results and that verification cannot be accomplished via your unit test. Think of an external system that your dev/test environment has no or limited access to.
As an example, let's say you are hitting a webapi to CREATE a claim and the response is the new claim number. But the api does not expose methods to GET a claim, and you need to verify some other data that was created when you made the webapi call. In this case, you could use the outputted claim numbers to manually check the remote system.
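With NUnit, one way to capture that is TestContext: write the claim number to the test output so you can check the remote system by hand afterwards. A sketch, with a hypothetical API client:

[Test]
public void CreateClaim_ReturnsClaimNumber()
{
    string claimNumber = claimsApi.CreateClaim(newClaim); // hypothetical client

    Assert.IsNotNull(claimNumber);

    // Lands in the test's output stream, available after the run
    // for manually verifying the data in the remote system.
    TestContext.WriteLine("Created claim: {0}", claimNumber);
}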
FWIW
We develop custom survey web sites and I am looking for a way to automate the pattern testing of these sites. Surveys often contain many complex rules and branches which are triggered by how items are responded to. All surveys are rigorously tested before being released to clients, and this testing results in a lot of manual work. I would like to learn of some options I could use to automate these tests by responding to questions and verifying the results in the database. The survey sites are produced by an engine which creates and writes ASP pages and receives the responses to process into a database. So the only way I can determine to test the site is to interact with the web pages themselves. I guess in a way I need to build some type of bot; I really don't know much about the design behind them.
Could someone please provide some suggestions on how to achieve this? Thank you for your time.
Brett
Check out Selenium: http://selenium.openqa.org/
Also, check out the answers to this other question: https://stackoverflow.com/questions/484/how-do-you-test-layout-design-across-multiple-browsersoss
You could also check out WatiN.
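To sketch what such a bot could look like for one survey branch, here's WatiN driving the pages plus a plain ADO.NET check of the responses table (all names, IDs, and the schema are hypothetical):

using System.Data.SqlClient;
using NUnit.Framework;
using WatiN.Core;

[TestFixture]
public class SurveyBranchTests
{
    [Test, RequiresSTA] // WatiN needs a single-threaded apartment
    public void AnsweringYes_TakesFollowUpBranchAndStoresResponse()
    {
        using (var browser = new IE("http://localhost/survey/start.asp"))
        {
            browser.RadioButton(Find.ById("q1_yes")).Click();
            browser.Button(Find.ByValue("Next")).Click();

            // Branch rule: a Yes on question 1 must show the follow-up question.
            Assert.IsTrue(browser.ContainsText("How often"));

            browser.TextField(Find.ByName("q2")).TypeText("Weekly");
            browser.Button(Find.ByValue("Submit")).Click();
        }

        // Verify the response actually landed in the database.
        using (var conn = new SqlConnection("<test database connection string>"))
        {
            conn.Open();
            var cmd = new SqlCommand(
                "SELECT COUNT(*) FROM Responses WHERE QuestionId = 'q2' AND Answer = 'Weekly'",
                conn);
            Assert.AreEqual(1, (int)cmd.ExecuteScalar());
        }
    }
}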
Sounds like your engine could generate a test script using something like Test::WWW::Mechanize.
The usual test methodologies apply: white box and black box.
White box testing, for you, may mean instrumenting your application so you can make it go into a particular state, and then you can predict the result you expect.
Black box may mean that you hit a page, then consider which of the possible outcomes are valid. Rinse and repeat till you get sufficient coverage.
Another thing we use is monitoring statistics for our service: did we get the expected number of hits on this page? We routinely run A/B tests, and I have run A/B tests against refactored code to verify that nothing changed before rolling things out.
/Allan
I can think of a couple of good web application testing suites that should get the job done - one free/open source and one commercial:
Selenium (open source/cross platform)
TestComplete (commercial/Windows-based)
Both will let you create test suites that verify database records based on your interactions with the web app.
The fact that you're Windows/ASP based might mean that TestComplete will get you up and running faster, as it's native to Windows and .NET. You can download a free trial to see if it'll work for you before making the investment.
Check out the unit testing framework 'lime' that comes with the Symfony framework: http://www.symfony-project.org/book/1_0/15-Unit-and-Functional-Testing. You didn't mention your language; lime is PHP.
I would suggest the mechanize gem, available for Ruby. It's pretty intuitive to use.
I use QEngine (commercial) for the same purpose. I need to add data and check the same in the UI. I write one script which does this and call it in a loop. The data can be passed via either CSV or Excel.
Check it out at www.qengine.com; you can also try Watir.
My proposal is QA Agent (http://qaagent.com). It seems this is a new approach, because you do not need to install anything; you just develop your web tests in the browser-based IDE. By the way, you can develop your tests using jQuery and JavaScript. Really cool!