Mocking a web service consumed by a BizTalk request-response port - web-services

I'm using BizUnit to unit-test my BizTalk orchestrations, but some orchestrations consume a web service, and testing these feels more like integration testing than unit testing.
I'm familiar with using a mocking framework to mock the generated proxy objects in order to test a web service from a Windows Forms application, but I would like to do something similar in a more integrated way for a request-response port.
How would you approach this problem?

This goes to the heart of one of my main irritations as a BizTalk developer - BizTalk does not lend itself to unit testing. From the fact that 99% of your interfaces into BizTalk applications are message based and have a huge number of possible inputs, through to the opaque nature of orchestrations, BizTalk offers no real way of testing units of functionality as... well... units.
For BizTalk, integration tests are sadly often the only game in town.
As a result, through no fault of Kevin Smith, BizUnit is (IMO) a misnomer; a better name would perhaps be BizIntegrationIt. BizUnit offers a range of tools that assist in integration testing, and the majority of its test steps, like checking whether a file has been written to a given directory or sending an HTTP request to a BizTalk HTTPReceive location, are, strictly speaking, testing integration.
Now that I've gotten that rant out, what you are asking for is something I've been thinking about for a long time: the ability to create automated unit tests that give some real confidence that making a small change to a map won't suddenly break something downstream, as well as a way to remove dependence on external services.
I've never found a really nice way of doing this, but below is a solution that should work. I've done variations of each part in isolation but have never tried to put them all together in this specific form.
So, given the desire to mock a call to some external service (one that may not even exist yet) without actually making any external call, and wanting the ability to set expectations for that service call and to specify the nature of the response, the only method I can think of is to develop a custom adapter.
Mock webservice using custom adapter
If you build a custom request-response adapter you can plug it into your send port in place of the SOAP adapter. You can then specify properties for the adapter that allow it to behave as a mock of your web service. The adapter would be similar in concept to a loopback adapter but would allow internal mocking logic.
Things that you might want to include as adapter properties:
Expected document (perhaps a disk location that specifies an example of what you expect your BizTalk application to send to the web service).
Response document - the document that the adapter will send back to the messaging engine.
Specific expectations for the test such as lookup values in document elements.
You could also have the custom adapter write to disk and set up a BizUnit step to validate the file that was written out.
Building a custom adapter is non-trivial but possible; you can get a good start from the BizTalk Adapter Wizard, and there is an article on deploying custom adapters here.
There is a bug in the code generated by the wizard: you will need to change new Guid("") to new Guid().
There are also some examples of building custom adapters in the BizTalk SDK.
Another option is to use a plain HTTP page and the HTTP solicit-response adapter, as discussed here; all your logic goes in the HTTP page. This is probably simpler if you are happy making an HTTP call and setting up an IIS port to listen for your test.
Initialising unit tests
You can import binding files into a BizTalk application using a .bat file.
If you make a new binding file for each test you run, as well as for your standard application setup, you can then run the appropriate batch file to apply the right binding.
Each binding file would change your webservice sendport to use the mock custom adapter and set the specific properties for that test.
You could then even make a custom BizUnit step that (perhaps) generated binding settings based on settings in the test step and then ran the shell commands to update the bindings.
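As a rough illustration of that last idea, here is a minimal sketch of a helper that such a custom step could call to apply a per-test binding file by shelling out to a batch script. The batch file name is purely illustrative; the script itself would wrap the binding import command used in the standard setup above (e.g. BTSTask ImportBindings).

```
// Sketch: apply a per-test binding file before a test run by invoking its .bat file.
// The batch file path is illustrative; the .bat would wrap the binding import command.
using System;
using System.Diagnostics;

public static class BindingSwitcher
{
    public static void ApplyBindings(string batchFilePath)
    {
        var startInfo = new ProcessStartInfo("cmd.exe", "/c \"" + batchFilePath + "\"")
        {
            UseShellExecute = false,
            RedirectStandardOutput = true,
            RedirectStandardError = true
        };

        using (var process = Process.Start(startInfo))
        {
            string output = process.StandardOutput.ReadToEnd();
            string errors = process.StandardError.ReadToEnd();
            process.WaitForExit();

            if (process.ExitCode != 0)
            {
                throw new InvalidOperationException(
                    "Binding import failed: " + errors + Environment.NewLine + output);
            }
        }
    }
}

// Usage from a test's setup, e.g.:
// BindingSwitcher.ApplyBindings(@"C:\Tests\Bindings\ApplyMockAdapterBindings.bat");
```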
Testing Message Contents
A final thing that you might want to consider, to really tie all this together, is some way of testing the contents of messages. You could do this in your mock adapter, but that would get tedious very quickly for large messages, or for a large range of possible input messages.
One option is to make a custom pipeline that calls Schematron to validate the files that it receives. Schematron is a schema language that allows a much richer level of file inspection than XSD, so you can check things like "if element x contains this content, I expect element y to be present".
If you built a custom pipeline that took a Schematron schema as a parameter, you could then swap in a test file for a specific unit test, validating that for this test, when you call the web service, you get a file that actually matches what you want (and doesn't just match the XSD).
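The pipeline plumbing itself is beyond the scope of this answer, but the validation core might look something like the sketch below. It assumes the Schematron schema has already been compiled to an XSLT (e.g. with the ISO Schematron skeleton), so the transform output is an SVRL report in which violations appear as svrl:failed-assert elements; the names here are illustrative.

```
// Sketch of the Schematron validation core only (not the pipeline component).
using System.IO;
using System.Xml;
using System.Xml.Xsl;

public static class SchematronChecker
{
    // Returns true when the document raises no failed assertions.
    public static bool IsValid(string compiledSchematronXsltPath, string documentPath, out string svrlReport)
    {
        // The Schematron schema is assumed to be pre-compiled to XSLT.
        var transform = new XslCompiledTransform();
        transform.Load(compiledSchematronXsltPath);

        using (var input = XmlReader.Create(documentPath))
        using (var output = new StringWriter())
        {
            using (var writer = XmlWriter.Create(output))
            {
                transform.Transform(input, writer);
            }
            svrlReport = output.ToString();
        }

        var report = new XmlDocument();
        report.LoadXml(svrlReport);
        var ns = new XmlNamespaceManager(report.NameTable);
        ns.AddNamespace("svrl", "http://purl.oclc.org/dsdl/svrl");

        // Any svrl:failed-assert element means a Schematron rule was violated.
        return report.SelectNodes("//svrl:failed-assert", ns).Count == 0;
    }
}
```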

As a co-author of BizUnitExtensions (www.codeplex.com/bizunitextensions) I agree that the name "unit" in BizUnit can be confusing, but for BizTalk, the 'integration test' is the unit test. Some BizTalk folk have successfully used mocks to test pipeline components, and other test harnesses (plus BizUnit/Extensions) to test schemas and maps.
Orchestrations, unfortunately, are opaque. But there are good reasons for that.
(a) Because of the huge subscription system in the message box that orchestrations use when being activated etc., it is not possible to fire up some "virtual" process to host the orchestration (which can be done for pipelines; Tomas Restrepo has done something along these lines).
(b) Also, how would this virtual process handle persistence and dehydration? I'd wager that people using WF would have the same problem in trying to test the workflow fully.
(c) We don't work with the C# directly, so there is no way we can "inject" a mock interface into the orchestration code.
(d) An orchestration is not really a "unit"; it's a composite element. The units are the messages going to and from the message box and the external components called through expression shapes. So even if you could inject a mock web service interface, you cannot inject mock message boxes, correlation sets and other things.
One thing that can be done for orchestrations (and I've been considering an addition to the BizUnitExtensions library to do this) is to link in with the OrchestrationProfiler tool, as that tool gives a pretty detailed report of all the shapes, and somehow check that individual steps were executed (and perhaps the time it took for execution). This could go quite far in making the orchestration a bit more of a white box. Also, considering that the orchestration debugger shows a lot of the variable values, surely it must be possible to get that info via an API to show what the values of variables were at a given point for a given instance.
Back to Richard's question though, my previous dev team had a solution. Basically what we did was to write a generic configurable HttpHandler that parsed incoming service requests and returned pre-set responses. The response sent back was configurable based on conditions such as XPath. In the BUILD and DEV binding files, the web service endpoint was the mock. This worked brilliantly in isolating the BUILD and DEV environments from the actual third-party web services. It also helped in a "contract first" approach where we built the mock and the orchestration developer used it while the web service author went ahead and built the actual service.
[Update 17-FEB-09: this tool is now on CodePlex: http://www.codeplex.com/mockingbird. If this approach sounds interesting, check it out and let me know what you think of the tool.]
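For what it's worth, the core of such a handler is not much code. Below is a minimal sketch of the idea, not the actual mockingbird implementation; the file names and the XPath condition are purely illustrative.

```
// Minimal sketch of a configurable mock service handler (illustrative only).
using System.IO;
using System.Web;
using System.Xml;

public class MockServiceHandler : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // Read the incoming SOAP/XML request.
        var request = new XmlDocument();
        request.Load(context.Request.InputStream);

        // Pick a canned response based on an XPath condition over the request.
        // In a real tool these condition/response pairs would come from configuration.
        string responseFile =
            request.SelectSingleNode("//*[local-name()='CustomerId' and text()='42']") != null
                ? context.Server.MapPath("~/Responses/KnownCustomer.xml")
                : context.Server.MapPath("~/Responses/Default.xml");

        context.Response.ContentType = "text/xml";
        context.Response.Write(File.ReadAllText(responseFile));
    }
}
```

You would register the handler against a path such as MockService.ashx in web.config and point the BUILD/DEV send-port URL at that address.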
Now, before someone throws in the old "WHAT ABOUT MOCK OBJECT FRAMEWORKS" chestnut, let me say that the utility above was used for both BizTalk consumers as well as non-BizTalk consumers, but I have also worked with NMock2 and found that to be an excellent way to mock interfaces and set expectations when writing CLR consumers. (I'm going to be looking into Moq and TypeMock etc. soon.) However, it won't work with orchestrations for the reasons described above.
Hope this helps.
Regards,
Benjy

Don't.
Don't test against arbitrary interfaces, and don't create mocks for them.
Most people seem to see developer (unit) testing as intended for testing nontrivial, individual units of functionality such as a single class. On the other hand, it is also important to perform customer (acceptance/integration) testing of major subsystems or the entire system.
For a web service, the nontrivial unit of functionality is hidden in the classes that actually perform the meaningful service, behind the communication wiring. Those classes should have individual developer test classes that verify their functionality, but completely without any of the web-service-oriented communication wiring. Naturally, but maybe not obviously, that means that your implementation of the functionality must be separate from your implementation of the wiring. So, your developer (unit) tests should never ever see any of that special communication wiring; that is part of integration and it can be viewed (appropriately) as a "presentation" issue rather than "business logic".
The customer (acceptance/integration) tests should address a much bigger scale of functionality, but still not focused on "presentation" issues. This is where the use of the Facade pattern is common--exposing a subsystem with a unified, coarse-grained, testable interface. Again, the web service communication integration is irrelevant and is implemented separately.
However, it is very useful to implement a separate set of tests that actually do include the web service integration. But I strongly recommend against testing only one side of that integration: test it end-to-end. That means building tests that are web service clients just like the real production code; they should consume the web services exactly the way that the real application(s) do(es), which means that those tests then serve as examples to anyone who must implement such applications (like your customers if you are selling a library).
So, why go to all that trouble?
Your developer tests verify that your functionality works in-the-small, regardless of how it is accessed (independent of presentation tier since it is all inside the business logic tier).
Your customer tests verify that your functionality works in-the-large, again regardless of how it is accessed, at the interface boundary of your business logic tier.
Your integration tests verify that your presentation tier works with your business logic tier, which is now manageable since you can ignore the underlying functionality (because you separately tested it above). In other words, these tests are focused on a thin layer of a pretty face (GUI?) and a communication interface (web services?).
When you add another method of accessing your functionality, you only have to add integration tests for that new form of access (presentation tier). Your developer and customer tests ensure that your core functionality is unchanged and unbroken.
You do not need any special tools, such as a test tool specifically for web services. You use the tools/components/libraries/techniques that you would use in production code, exactly as you would use them in such production code. This makes your tests more meaningful, since you are not testing someone else's tools. It saves you lots of time and money, since you are not buying, deploying, developing for, and maintaining for a special tool. However, if you are testing through a GUI (don't do that!), you might need one special tool for that part (e.g., HttpUnit?).
So, let's get concrete. Assume that we want to provide some functionality for keeping track of the cafeteria's daily menu ('cause we work in a mega-corp with its own cafe in the building, like mine). Let's say that we are targeting C#.
We build some C# classes for menus, menu items, and other fine-grained pieces of functionality and its related data. We establish an automated build (you do that, right?) using nAnt that executes developer tests using nUnit, and we confirm that we can build a daily menu and look at it via all these little pieces.
We have some idea of where we are going, so we apply the Facade pattern by creating a single class that exposes a handful of methods while hiding most of the fine-grained pieces. We add a separate set of customer tests that operate only through that new facade, just as a client would.
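To make that concrete, here is a sketch of the kind of facade and customer test involved; the class, method, and menu-item names are invented for this example, and the tests use nUnit.

```
// Illustrative facade over the fine-grained menu classes, plus a customer test.
using System.Collections.Generic;
using NUnit.Framework;

public class MenuFacade
{
    private readonly List<string> items = new List<string>();

    public void AddMenuItem(string name)
    {
        items.Add(name);
    }

    public IList<string> GetDailyMenu()
    {
        return items.AsReadOnly();
    }
}

[TestFixture]
public class MenuFacadeCustomerTests
{
    [Test]
    public void DailyMenuContainsTheItemsThatWereAdded()
    {
        var facade = new MenuFacade();
        facade.AddMenuItem("Tomato soup");
        facade.AddMenuItem("Grilled cheese");

        IList<string> menu = facade.GetDailyMenu();

        Assert.AreEqual(2, menu.Count);
        Assert.IsTrue(menu.Contains("Tomato soup"));
    }
}
```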
Now we decide that we want to provide a web page for our mega-corp knowledge workers to check today's cafeteria menu. We write an ASP.NET page, have it invoke our facade class (which becomes our model if we are doing MVC), and deploy it. Since we have already thoroughly tested the facade class via our customer tests, and since our single web page is so simple, we forego writing automated tests against the web page--a manual test using a few fellow knowledge workers will do the trick.
Later, we start adding some major new functionality, like being able to preorder our lunch for the day. We extend our fine-grained classes and the corresponding developer tests, knowing that our pre-existing tests guard us against breaking existing functionality. Likewise, we extend our facade class, perhaps even splitting off a new class (e.g., MenuFacade and OrderFacade) as the interface grows, with similar additions to our customer tests.
Now, perhaps, the changes to the website (two pages is a website, right?) make manual testing unsatisfactory. So, we bring in a simple tool comparable to HttpUnit that allows nUnit to test web pages. We implement a battery of integration/presentation tests, but against a mock version of our facade classes, because the point here is simply that the web pages work--we already know that the facade classes work. The tests push and pull data through the mock facades, only to test that the data successfully made it to the other side. Nothing more.
Of course, our grand success prompts the CEO to request (demand) that we expose the web application to mega-corp's BlackBerrys. So we implement some new pages and a new battery of integration tests. We don't have to touch the developer or customer tests, because we have added no new core functionality.
Finally, the CTO requests (demands) that we extend our cafeteria application to all of mega-corp's robotic workers--you did notice them over the last few days? So, now we add a web services layer that communicates through our facade. Again, no changes to our core functionality, our developer tests, or our customer tests. We apply the Adapter/Wrapper pattern by creating classes that expose the facade with an equivalent web service API, and we create client-side classes to consume that API. We add a new battery of integration tests, but they use plain nUnit to create client-side API classes, which communicate over the web service wiring to the service-side API classes, which invoke mock facade classes, which confirm that our wiring works.
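To sketch what that Adapter/Wrapper layer might look like with classic ASMX (the facade type and method names are the invented ones from the example above, and the namespace URL is a placeholder):

```
// Thin web service adapter over the facade (classic ASMX style, illustrative only).
using System.Web.Services;

[WebService(Namespace = "http://megacorp.example/cafeteria/")]
public class CafeteriaService : WebService
{
    // In a real system the facade (or a mock of it, for integration tests)
    // would be injected rather than constructed here.
    private readonly MenuFacade facade = new MenuFacade();

    [WebMethod]
    public string[] GetDailyMenu()
    {
        // Translate the facade's types into something easily serialized over SOAP.
        var menu = facade.GetDailyMenu();
        var result = new string[menu.Count];
        menu.CopyTo(result, 0);
        return result;
    }
}
```

The client-side classes are then generated from this service's WSDL (e.g. with wsdl.exe) and exercised by the integration tests, with the facade swapped for a mock on the service side.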
Note that throughout this whole process, we did not need anything significant beyond our production platform and code, our chosen development platform, a few open-source components for automated building and testing, and a few well-defined batteries of tests. Also note that we didn't test anything that we don't use in production, and we didn't test anything twice.
We ended up with a solid core of functionality (business logic tier) that has proven itself mature (hypothetically). We have three separate presentation tier implementations: a website targeted to desktops, a website targeted to BlackBerrys, and a web service API.
Now, please forgive me for the long answer--I tire of inadequate answers and I did not want to provide one. And please note that I have actually done this (though not for a cafeteria menu).

This is a very interesting question that I still haven't seen a good generic answer to. Some people suggest using SoapUI but I haven't had time to actually test that yet. This page might be interesting on that.
Another way might be to somehow wrap WebDev.WebHost.dll and use that... Phil Haack discusses that in this post.
It's also been discussed before on SO here.
Please let us know if you find another solution to this!

This is the way to do it:
Back to Richard's question though, my previous dev team had a solution. Basically what we did was to write a generic configurable HttpHandler that parsed incoming service requests and returned pre-set responses. The response sent back was configurable based on conditions such as XPath.

I haven't had to do this in a while, but when I tested my BizTalk apps I always used either SoapUI or Web Service Studio. I was able to test different input values without effort.

Related

How can I keep unit testing in Unity after introducing security rules to Firestore?

I am using Cloud Firestore + Cloud Functions + Firestore auth to support my game.
I developed the main part of the app with unit tests in the app, plus TypeScript tests for the Cloud Functions. Now I want to add security rules to secure the data.
When I do so, requiring the calls to be authenticated, all my unit tests in Unity (naturally) fail, as I do not authenticate a user but instead mock them as a data representation of the user in the DB.
I want to keep using my unit tests in Unity while still requiring the real DB to demand authentication.
I have looked around for mock auth or an auth test environment, but found nothing except the rules-unit-testing library.
I see that it contains specialized logic for mocking a user, which makes me think I am approaching this the wrong way by trying to do it in Unity. My question is: how can I continue to run game tests in Unity, which require interacting with the Firestore server, while keeping security rules?
I am answering my own question after more time.
My analysis was that I ran into issues because I had coupled my code too tightly: server logic was on the client side and broke when I introduced security rules. I decided to move the logic to Cloud Functions and have only simple client-side calls (sets, gets, updates, HTTP functions).
So if someone runs into similar problems (the architecture hampers use of best practices), I suggest re-thinking the architecture. It feels obvious now that I write it down...
Have fun coding all ^_^

How to implement BDD on very complex business rules?

I am learning the art of unit testing and BDD, and in my company there is no one who follows this approach. I try hard to learn it by myself but get stuck somewhere and give up after trying for a few days. After some time I get inspiration from someone again and try to learn to swim in these waters once more.
I have recently developed a Windows service that started out small but ended up as a big ball of messy business rules. Here is a brief overview of what the service does.
Log to database “Service starting…”
Get data from the database that needs to be posted to another web service
If there is no data to post, log to database “No data to process…” and exit the service
If the data contains duplicate values, log to database “Duplicate data found, this record will be skipped.” and update the status of the record for which duplicate data was found to something, e.g. 302
If the data is null, log to database “Record contains null value and cannot be processed.” and update the status of the record appropriately, e.g. 310
If the database is unavailable or down for some reason, log to a file “Database is down…”
If the service we have to post the data to is down, log to database “Receiver service is down.”
Log to database “Exiting service…”
So my service basically retrieves some data from the database, creates a JSON request from it, and posts it to another service.
It also parses the response from that service and logs whether the data was posted successfully. I have listed just some of the business rules currently implemented in the service to give you an idea of what lies under the hood. I am learning BDD and unit testing and would really love to know how an expert would write test cases for these complex business rules.
From my understanding, BDD does not need to focus internally on how the service is written; instead it tests scenarios that the service should fulfill. For example:
When executing the Windows service with duplicate data
It should log to database “Duplicate data found, this record will be skipped.”
It should update the status of the record to 302
I can write multiple scenarios that test some functionality of the service. Is this the right approach, or should I write very large sets of scenarios that test each business rule in every test?
Secondly, as the service communicates with the DB as well as a web service, how can I test the HttpRequest and HttpResponse that are sent and received by the service?
Finally, how do I actually test something as complex as the business rules I have written above? If I simply assert that the service called some specific method of some class, is that enough? How do we know that by simply calling some method it will perform the right task?
A few simple thoughts to help keep it in perspective:
You say learning...
Keep that in mind and don't get hung up on what is perfect, right, or proper. You're learning, feel free to make mistakes and know that you'll improve as you go. Keep at it, keep practicing, you'll get better and it will feel more natural the more you do and the more you think about it.
BDD tests behaviors.
You use this to say the system should behave in a specific way, and that implies that it must be the system. You may still stub in a couple dummy services, on occasion (like a fake credit-card processing service) but for the most part, you want this to prove the system works as needed. Think of them as more of integration tests.
Your BDD tests should drive your Unit Tests.
Write the BDD test to fail, and then let that dictate what unit tests should be written in order to get your system to behave as you expect. This essentially means that each BDD test of yours will introduce a set of Unit Tests as well.
In summary, let BDD drive TDD
and you'll have the right balance of tests. The starting point is your first BDD test.
And in your scenario, if your system is supposed to alert the user that they are attempting to add a duplicate, that's a valid test.
The annoying thing with testing HTTP requests and responses is that you end up doing string comparisons, but that is doable. The BDD tests should just care that the system responded as you expect.
The unit tests should isolate with respect to what you are testing, so you would have unit tests inside your web service to make sure it responds correctly, but you would not have a unit test on the outside that calls the web service; extract it out instead.
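To illustrate that "extract it out" idea against the duplicate-data scenario in the question, here is a sketch in which the logging and data access sit behind interfaces so the rule can be unit tested in isolation. All type names are invented, the mocking syntax assumes Moq, and the assertions use NUnit.

```
// Sketch: the duplicate-handling rule tested in isolation behind interfaces.
using Moq;
using NUnit.Framework;

public interface IDbLogger
{
    void Log(string message);
}

public interface IRecordRepository
{
    void UpdateStatus(int recordId, int status);
}

public class RecordProcessor
{
    private readonly IDbLogger logger;
    private readonly IRecordRepository repository;

    public RecordProcessor(IDbLogger logger, IRecordRepository repository)
    {
        this.logger = logger;
        this.repository = repository;
    }

    public void HandleDuplicate(int recordId)
    {
        logger.Log("Duplicate data found, this record will be skipped.");
        repository.UpdateStatus(recordId, 302);
    }
}

[TestFixture]
public class RecordProcessorTests
{
    [Test]
    public void DuplicateRecordIsLoggedAndMarkedAs302()
    {
        var logger = new Mock<IDbLogger>();
        var repository = new Mock<IRecordRepository>();
        var processor = new RecordProcessor(logger.Object, repository.Object);

        processor.HandleDuplicate(17);

        logger.Verify(l => l.Log("Duplicate data found, this record will be skipped."), Times.Once());
        repository.Verify(r => r.UpdateStatus(17, 302), Times.Once());
    }
}
```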
It all can become pretty philosophical, and that probably gets into what makes a good unit test versus what makes a good behavioral test, but hopefully this helps you get started down the road.

How to make sure web services are kept stable from one release to the next?

The company where I work is a software vendor with a suite of applications. There are also a number of web services, and of course they have to be kept stable even if the applications change. We haven't always succeeded with this, and sometimes a customer finds that a service is not behaving as before after upgrading.
We now want to handle this better. In general, web services shouldn't change, and if they have to, at least we will know about it and document the change.
But how do we ensure this? One idea is to compare the WSDL files with the previous versions at every release. That will make sure the interfaces don't change, but it won't detect that the behavior changes, for example if a bug is introduced in some common library.
Another idea is to build up a suite of service tests, for example using soapUI. But then we'll never know if we have covered enough cases.
What are some best practices regarding this?
I think you can definitely be confident of the stability of the services if you keep updating your service tests with the latest changes in the service; I think this is one of the best practices people follow before they deploy.
Also, in general, I think what would probably matter is how well the unit testing is being done by the developers who are writing the components (libraries) used by the services. Are those unit tests being updated with the changes in the components used by the service?
There are two kinds of changes for a web service: breaking changes and non-breaking changes. A breaking change is something like changing the signature of a web method or changing a data contract schema. A non-breaking change is something like adding a new web method or adding an optional member to a data contract. In general, your client should continue to work through a non-breaking change. I don't know which technology you are using, but use versioning in the service namespace and data contract namespace, following W3C recommendations. You can even continue to host different versions at different endpoints. This way your clients will break if they try to use a new version of your service without re-generating the proxy from the new version of the WSDL, or they can continue to use the old version.
Some WCF specific links are
http://msdn.microsoft.com/en-us/library/ms731060.aspx
http://msdn.microsoft.com/en-us/library/ms733832.aspx
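A minimal sketch of what that namespace versioning looks like in WCF; the contract names and the URLs used as namespaces are illustrative only.

```
// Versioned service and data contract namespaces in WCF (illustrative names).
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract(Namespace = "http://schemas.example.com/orders/2009/02")]
public class Order
{
    [DataMember(IsRequired = true)]
    public int OrderId { get; set; }

    // A new optional member: a non-breaking change for existing clients.
    [DataMember(IsRequired = false)]
    public string Notes { get; set; }
}

[ServiceContract(Namespace = "http://services.example.com/ordering/v2")]
public interface IOrderServiceV2
{
    [OperationContract]
    Order GetOrder(int orderId);
}
```

Each version's contract can then be exposed at its own endpoint in configuration, so old clients keep calling the v1 endpoint while new clients move to v2.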
I wouldn't consider a behaviour change to be a change in the SOA sense. That is more like fixing defects.
IMO, aside from monitoring the WSDL for changes (which is really only necessary if you have a willy-nilly implementation ("promote-to-production") strategy), the only way to really ensure that everything is operational and stable is to perform continuous, automated, periodic, functional testing with a test suite that provides complete coverage of both the WSDL and the underlying application functionality, including edge cases. The test cases should be version controlled just like the app and WSDL, and should be developed in parallel with new versions of the app (not afterward, as a reaction).
This can all be automated with SoapUI. Ideally, log results somewhere they can be accumulated and reported on some dashboard, so that if something breaks, you know when it broke, and can hopefully correlate that to an event such as an application update, or something more benign such as a service pack being pushed, electrical work being performed, etc.
However... do as I say, not as I do. I have been unsuccessful in pushing this strategy at work. Your votes will tell me whether I should push harder or do something else!
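As an aside, the WSDL-comparison idea from the question is cheap to automate as part of such a suite. A rough sketch, assuming NUnit and a baseline WSDL checked into source control (the service URL and file path are placeholders):

```
// Crude check: fail the build if the published WSDL drifts from a checked-in baseline.
using System.IO;
using System.Net;
using NUnit.Framework;

[TestFixture]
public class WsdlStabilityTests
{
    [Test]
    public void PublishedWsdlMatchesCheckedInBaseline()
    {
        using (var client = new WebClient())
        {
            string liveWsdl = client.DownloadString("http://localhost/OrderService.svc?wsdl");
            string baseline = File.ReadAllText(@"Baselines\OrderService.wsdl");

            // A plain string comparison is blunt (whitespace, generated comments),
            // but any difference is at least a prompt to review and re-baseline.
            Assert.AreEqual(baseline.Trim(), liveWsdl.Trim());
        }
    }
}
```

As noted above, this only guards the interface; behavioural coverage still needs the functional suite.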

Is it possible to automate Siebel testing behind the GUI?

My test team currently uses QTP to test through the GUI, but like any automated test suite that relies on the interface, it is more fragile than automating tests that directly interact with the code. I am attempting to learn more about Siebel and Siebel Tools to better understand how we might be able to test below the GUI, but would like to hear from someone with more expertise to find out if this is feasible.
It really depends on what you want to test, I guess.
I'm using the Siebel Java Data Bean (JDB) a lot to access Siebel. You basically connect to the Siebel Server and execute code very similar to eScript. That means you could create records, invoke workflows and so on; basically everything you could do in eScript. That might be helpful. This will apply all the usual validations, runtime events and events.
As soon as some of your scripts in BusComps or in Business Services or elsewhere access data that needs a UI context (TheApplication().ActiveBusObject() or TheApplication().ActiveApplet() for instance) this approach will fail, though, because the Siebel Data Bean doesn't have a UI context.
Another drawback is that you have to connect to a Siebel Server. That means you have to deploy your SRF to the development server and only then you can run your tests. It sure would be much better if the JDB could connect to your local instance, but as far as I know this is not possible. Have a look at the Object Interfaces guide in the Bookshelf, though. There are different ways to connect to Siebel, not just Java.
Let me know if you have any questions about this. I could maybe post some sample code of how to connect to the Siebel Server etc.
Right now QTP is the best way to go - it's still a PITA, but there really is nothing else out there to test the full Siebel Web Client. This is because the Siebel UI is delivered through Internet Explorer with proprietary ActiveX and Java controls, so you really need a bespoke pack to test it.
Because the UI is a re-interpretation, not just an abstraction, of the Business Object layer (that one accesses with Data Beans / COM etc.) it is not useful to test at that layer except in a small number of unit test cases (such as when you have complex scripting in Siebel).
If you change the end of the URL for the client (login to Siebel first of course) to something like "SWEcmd=GotoPageTab&SWEScreen=Accounts+Screen&SWESetMarkup=XML" then you'll see lots of XML mark-up which is then consumed by the proprietary controls - you might think this would be a cool way to build an automation tool, but it is not (I've tried).
If you want to really use a proper UI testing tool, like Selenium, you'll have to test the HTML Siebel Web Client - this is a 'skinny' 'Standard Interactivity' UI that doesn't use ActiveX or Java... it has a lot fewer cool UI controls, but it works essentially the same as the full Siebel Web Client (aka the High Interactivity Siebel Web Client, or HI for short), and it works in Firefox!
Have you looked at Oracle Application Testing Suite? It comes with pre-built accelerators for testing Siebel, which makes it all the easier.
Since Siebel version 7.7, QTP uses Siebel Test Automation (STA), which needs to be purchased separately from Oracle. A quick search found this explanation of how to set up testing with STA (it is written from a QTP perspective but holds for all STA usage).
If you really want to avoid GUI testing, you can hunt down the API documentation and try to use STA directly, but I would not recommend it; QTP has already done all the heavy lifting for you, so why reproduce the effort (especially since your company already owns QTP licenses)?

Should we unit test web service?

Should we unit test the web service, or should we really be looking to unit test the code that the web service is invoking for us and leave the web service alone, at least until integration testing?
EDIT: Further clarification / thought
My thought is that testing the web service is really integration testing, not unit testing. I ask because our web service at this point (under development) is coded in such a way that there is no way to unit test the code it is invoking. So I am wondering whether it is worthwhile / smart to refactor it now in order to be able to unit test the code free of the web service. I would like to know the general consensus on whether it's that important to separate the two, or whether it's really OK to unit test the web service and call it good/wise.
If I separate them, I would look to test both, but I am just not sure whether the separation is worth it. My hunch is that I should.
Unit testing the code that the web service is invoking is definitely a good idea since it ensures the "inside" of your code is stable (and well designed). However, it's also a good idea to test the web service calls, especially if a few of them are called in succession to accomplish a certain task. This will ensure that the web services that you've provided are usable, as well as work properly when called along with other web service calls.
(Not sure if you're writing these tests before or after writing your code, but you should really consider writing your web service tests before implementing the actual calls so that you ensure that they are usable in advance of writing the code behind them.)
Why not do both? You can unit test the web service code, as well as unit test it from the point of view of a client of the web service.
In my view, the WS is just an encapsulation of a method of a central business-layer object; in other words, the web method is just a "gate" to access methods deeper in the model.
With that said, I do both operations:
Inside the server, I create a WinForms app that does load testing on the business-layer method.
Outside the server (namely on a machine outside the LAN where the web app "lives"), I create a tester (WinForms or web) that consumes the WS, doing the load testing that way.
That way I can evaluate the performance of my solution both considering and discarding the "web effect" (i.e. the time for the data to travel and reach the WS, the WS object creation, etc.).
All of the above is of course IMHO; at least it has worked well for me!
Haj.-
We do both.
We unit test the various code elements.
Plus, we use the unit test framework to execute tests against the web service as a whole. This is rather complex because we have to create (and load) a database, start a server, and then execute requests against that server.
Testing the web service API is easy (it's got an API) and valuable. It's not a unit test, though - it's an "integration", "sub-system", or "system" test (depends on who you ask).
There's no need to delay the testing until some magical phase called "integration testing", though; just get some simple tests going now and reap the benefit early.
I like the idea of writing unit tests which call your web service through one of its public interfaces. For instance, a given WCF web service may expose HTTP, TCP, and "web" bindings. Such a unit test proves that the web service can be called through a binding.
Integration testing would involve testing all of the bindings of the service, testing with particular client scenarios, and with particular client tools. For instance, it would be important to show that a Java client can be created with IBM's Rational Web Developer that can access the service when using WS-Security.
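As an example of what such a binding-level test might look like with NUnit and WCF's ChannelFactory (the contract and address here are illustrative; a real test would share the contract assembly with the service or use a generated proxy):

```
// Sketch: call a WCF service over its HTTP binding from a unit-test framework.
using System.ServiceModel;
using NUnit.Framework;

[ServiceContract]
public interface IMenuService
{
    [OperationContract]
    string[] GetDailyMenu();
}

[TestFixture]
public class MenuServiceBindingTests
{
    [Test]
    public void ServiceRespondsOverBasicHttpBinding()
    {
        var factory = new ChannelFactory<IMenuService>(
            new BasicHttpBinding(),
            new EndpointAddress("http://localhost/MenuService.svc"));

        IMenuService proxy = factory.CreateChannel();
        try
        {
            Assert.IsNotNull(proxy.GetDailyMenu());
        }
        finally
        {
            ((IClientChannel)proxy).Close();
            factory.Close();
        }
    }
}
```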
If you can, try consuming your web service using some of the development tools your customers will use (Delphi, C#, VB.Net, ColdFusion, hand crafted XML, etc...). Within reason, of course.
1) Different tools may have problems consuming your web service. Better for you to run into this before your customers do.
2) If a customer runs into a problem, you can easily prove that your web service is working as expected. In the past year or so, this has stopped finger pointing in its tracks at least a dozen times.
The worst was the developer in a different time zone who was hand crafting the XML for SOAP calls and parsing the responses. Every time he ran into a problem, he would insist that it was on our end and demand (seriously) that we prove otherwise. I made a dead simple Delphi app to consume the web service, demonstrated that the methods worked as expected, and even displayed the XML for each request and response.
Re your updated question:
The integration testing and unit testing are only superficially similar, so yes, they should be done and thought of separately.
Refactoring existing code to make it testable can be risky. It's up to you to decide if the benefits outweigh the time and effort it will take. In my case, I'd certainly try, even if you do it a little bit at a time.
On the bright side, a web service has a defined interface, so you don't really have to change anything to add integration testing. Go crazy. If you can, try to do this before you roll the web service out to customers. There's a good chance that using the web service will lead to changes in the interface, and you don't want that to mess up customers too much.