SoapUI: ability to switch between database connections for a test suite

A bit of background:
I have an extensive number of SoapUI test cases which test web services as well as database transactions. This worked fine when there were only one or two environments: I would just clone the original test suite, update the database connections, and then update the endpoints to point to the new environment. A few changes here and there meant I would just re-clone the test cases which had been updated for the other test suites.
However, I now have 6 different environments which require these tests to be run and, as anticipated, I have been adding more test cases as well as changing original ones. This causes issues when running older test suites, as they need to be re-cloned.
I was wondering whether there was a better way to organise this. Ideally I would want one test suite and be able to switch between database connections and web service endpoints, but I have no idea where to start with this. Any help or guidance would be much appreciated.
I only have access to the free version of SoapUI.

Here is how I would go about achieving this.
There is an original test suite which contains all the tests, but it is configured to run the tests against one server. As you mentioned, you cloned the suite for a second database schema and changed the connection details, and you now realise this does not scale because more databases need to be tested.
Keep a single project with the required test suite. Wherever the database server details are provided, replace the literal values with property expansions for the connection details.
In the JDBC step, change the connection string from:
jdbc:oracle:thin:scott/tiger@//myhost:1521/myservicename
to:
jdbc:oracle:thin:${#Project#DB_USER}/${#Project#DB_PASSWORD}@//${#Project#DB_HOST}:${#Project#DB_PORT}/${#Project#DB_SERVICE}
Define the connection properties in a file and name it accordingly. Say the following properties describe the database hosted on host1; name the file host1.properties. When you want to run the tests against the host1 database, import this file into the project-level custom properties.
DB_USER=username
DB_PASSWORD=password
DB_HOST=host1
DB_PORT=1521
DB_SERVICE=servicename
Similarly, you can keep as many property files as you want and import the respective file before you run against the respective database server.
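If you prefer not to import the file by hand before each run, a Groovy test step can load it for you. This is a minimal sketch, assuming a property file at an illustrative path; SoapUI exposes testRunner and log to Groovy steps:
// Load a property file into project-level custom properties so the
// suite can be pointed at a different environment per run.
// The file path below is illustrative.
def props = new Properties()
new File("C:/soapui-envs/host1.properties").withInputStream { props.load(it) }
def project = testRunner.testCase.testSuite.project
props.each { key, value ->
    project.setPropertyValue(key.toString(), value.toString())
    log.info "Set project property ${key}"
}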
You can use such a property file not only for the database, but also for web services hosted on different servers such as staging, QA and production, without editing the endpoints each time. All you need to do is use a property expansion in the endpoint.
Update based on the comment
When you want to do the same for web services, go to the service interface -> Service Endpoints tab, add a new endpoint ${#Project#END_POINT}/context/path, then click the Assign button and select All requests and Test Requests from the drop-down. You may also remove the other endpoints.
Add a new property END_POINT to your property file, with a value such as http://host:port. This also gives you an advantage if you want to run the tests against HTTPS, say https://host:port.
And if you have multiple services/WSDLs hosted on different servers, you can use a unique property name for each service.
Hope this helps.

Related

How to run two separate django instances on same server/domain?

To elaborate, we have one server set up to run Django. The issue is that we need to establish a "public" test server that our end users can test before we push the changes to production.
Now, normally we would have production.domain.com and testing.domain.com and run them separately. However, due to conditions outside our control we only have access to one domain. We will call it program.domain.com for now.
Is there a way to set up two entirely separate Django instances (i.e. we do not want the admin of the production version to be able to access demo data, and vice versa) in such a way that we have program.domain.com/production and program.domain.com/development environments?
I tried to look over Django's "sites" framework but as far as I can see, all it can do is separate domains, not paths, and both "sites" can access the same data.
However, as I stated, we want to keep our testing data and our production data separate. Yet we want to give our end-user testers a version they can tinker with, keeping the production, public test and local development (runserver command) versions separate.
I would say you use the /production or /development path prefix to select which database to use. You can read more about multi-tenancy here: https://books.agiliq.com/projects/django-multi-tenant/en/latest/
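A minimal sketch of that idea, assuming both databases are declared under settings.DATABASES (here as "production" and "development"); the class names and aliases are illustrative:
import threading

_local = threading.local()

class PathBasedDbMiddleware:
    """Remember which path prefix the current request arrived on."""
    def __init__(self, get_response):
        self.get_response = get_response

    def __call__(self, request):
        prefix = request.path.lstrip("/").split("/", 1)[0]
        _local.db_alias = prefix if prefix in ("production", "development") else "default"
        return self.get_response(request)

class PathBasedDbRouter:
    """Database router that honours the alias chosen by the middleware."""
    def db_for_read(self, model, **hints):
        return getattr(_local, "db_alias", "default")

    def db_for_write(self, model, **hints):
        return getattr(_local, "db_alias", "default")

Register the middleware in MIDDLEWARE and the router in DATABASE_ROUTERS. Note this keeps the data in separate databases but still runs one Django process, so it does not by itself give the production and test admins fully separate instances.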

What is the difference between Puppeteer and Electron with respect to using them to test the client side of an app

I have been rushing to get my app, a mixture of a Node.js server driving an SQLite database and a LitElement-based client side providing the UI, into a usable state as a beta release. I achieved that a couple of days ago and now I am (belatedly, I know) thinking about how to put together a test framework. However, I am really struggling to understand how best to test the client side. I think it's because I am having difficulty understanding conceptually what the two main choices of framework are. Before I go into more detail, let me explain the structure of the app in top-level terms.
At the project root level there are three main directories: node_modules, which comprises all the modules I've pulled in (including lit-element and web-components-loader, which are client-side elements - but see below); server, which contains all the code for the server side of my application; and client, which consists of all the code for the client side of my app. I run rollup ONLY at module install time to "package" lit-element, the directives I use, and the web-component-loader, effectively tree-shaking and copying them to client/libs. As a result my client is coded to assume the modules are in the libs directory AND I DO NOT NEED OR HAVE any build stage. The root of the client is index.html, which pulls in a service-worker.js and main-app.js; main-app is the root of a tree of lit-element-based components that make up the entire client app. Nginx is the web server for all the static files in the client, but it also acts as a proxy, passing any URLs that start with /api to a standard node http web server (not even express, although I do use the router, body-parser and final-handler modules), and these get passed to various API handlers, each of which is a separate javascript file - although these can "require" a few common modules that I have written and those in the node_modules directory.
I plan on using jest as a test environment. For my server I think it is easy. For each api handler I want to test I can build a test script that "requires" the javascript file I want to test. I am in two minds about whether to use a sqlite database for testing or mock something - I am leaning towards the former as I am using better-sqlite3 and it is totally synchronous and very fast. I already have scripts to create empty databases, so I have no worry about test isolation.
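A minimal sketch of such a server-side test, assuming the handler accepts an injected better-sqlite3 database; the file names and the createUser handler are illustrative, not from the actual project:
const Database = require('better-sqlite3');
const fs = require('fs');
const { createUser } = require('../server/api/user'); // hypothetical handler

let db;

beforeEach(() => {
  // A fresh in-memory database keeps every test isolated.
  db = new Database(':memory:');
  db.exec(fs.readFileSync('./server/schema.sql', 'utf8')); // hypothetical schema script
});

afterEach(() => db.close());

test('createUser inserts a row', () => {
  createUser(db, { name: 'alice' });
  const row = db.prepare('SELECT name FROM users WHERE name = ?').get('alice');
  expect(row.name).toBe('alice');
});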
Client testing is where I get confused. I "think" that in essence jest can run tests the same way as for the server, one element at a time. BUT these elements and my test scripts are going to need a "web-platform" set of APIs - not least of which is the entire shadow DOM and custom components stuff that lit-element uses. This is where, I think, puppeteer or electron come in, with their associated jest plugins, which can put these platform APIs into the test environment. But, and this is the essence of my confusion, puppeteer instructions all start with something like
const browser = await puppeteer.launch({headless: true});
const page = await browser.newPage()
await page.goto(SOME URL);
What is this URL - do I also have to run a server? I cannot relate this snippet to running a test controlled by jest. All the examples seem to use webpack and typescript, neither of which I know anything about.
The other module I have seen mentioned is electron, in particular this article, which everything else seems to point to (or the same text).
https://www.ninkovic.dev/blog/2020/testing-web-components-with-jest-and-lit-element
From the code snippets in this article it "seems" like it might be what I want, BUT ...
I cannot find very many references to electron other than on its own web site, which tells you to use electron as a tool to build cross-platform applications, but nowhere can I find what it actually is - it assumes you already know. I don't want a UI for my unit testing; I want it to be headless like in puppeteer.
Hence my confusion and why I am unsure how to achieve what I want. Can someone give me some pointers as to
How can I set up puppeteer to run headless tests without needing a server? OR
What exactly is electron (and can I use it to (a) run my tests and (b) provide me with tools to examine the DOM elements I have created, to check I have created the right ones)? How is it different from puppeteer, and can I use it to conduct headless tests of my client?
UPDATE
I've done some more digging and am beginning to understand the differences. Let me summarise what I think I have found.
Puppeteer is great for end-to-end testing of your site. You run the tests by launching the bundled browser at the home page of your site (or, more likely, a development test site) and programmatically pretend to be a user who can click on buttons etc. You can use various methods, including functions such as document.querySelector(), to check your UI has behaved how you think, or you can take screenshots and compare them with standardised versions. I could possibly use it for unit tests, but I would have to run a server, create a test fixture html page for every test and navigate to it. jest-puppeteer is a package with some of that built in.
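In fact the server can be avoided: puppeteer's page.setContent() lets each test inject its own HTML fixture, so page.goto() is never needed. A minimal sketch, assuming an illustrative element name and bundle path:
const puppeteer = require('puppeteer');

let browser, page;

beforeAll(async () => {
  browser = await puppeteer.launch({ headless: true });
  page = await browser.newPage();
});

afterAll(() => browser.close());

test('my-element renders into its shadow root', async () => {
  // Inject a per-test fixture instead of navigating to a served URL.
  await page.setContent('<my-element></my-element>');
  await page.addScriptTag({ path: 'client/main-app.js', type: 'module' }); // hypothetical bundle
  const text = await page.evaluate(
    () => document.querySelector('my-element').shadowRoot.textContent
  );
  expect(text).toContain('expected content');
});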
Electron is a platform for building apps. The article I was referencing uses jest-electron, a test runner built with electron. So worrying about electron itself is a red herring; I should be worrying about jest-electron.
My main concern right now, I think, is that I need different jest configurations for my three scenarios:
unit tests on the server
unit tests on the client
end to end testing of the complete app.
Given I have only one package.json file and one set of node_modules, I need to figure out a way to have three different jest config files.
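One way to do that without multiple config files at all is jest's "projects" option, which runs several configurations from a single jest.config.js. A minimal sketch, with illustrative globs and the jest-puppeteer preset assumed for the browser-based runs:
module.exports = {
  projects: [
    {
      displayName: 'server',
      testEnvironment: 'node',
      testMatch: ['<rootDir>/server/**/*.test.js'],
    },
    {
      displayName: 'client',
      preset: 'jest-puppeteer', // or whichever DOM-providing runner is chosen
      testMatch: ['<rootDir>/client/**/*.test.js'],
    },
    {
      displayName: 'e2e',
      preset: 'jest-puppeteer',
      testMatch: ['<rootDir>/e2e/**/*.test.js'],
    },
  ],
};

A plain npx jest then runs everything, and newer jest releases add a --selectProjects flag to run just one group.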

How to mock a Zookeeper server for unit test in golang?

I am using the library gozk to interface my application with a production ZooKeeper server. I'd like to test that the application creates the correct nodes, that they contain the correct content for various cases, and that the DataWatch and NodeWatch are set properly:
i.e. that the application performs exactly what it should based on the node and data updates.
Can I have a mock ZooKeeper server that is created and destroyed during unit tests only, with the capability to artificially create new nodes and set node contents?
Is there a different way than to manually create a ZooKeeper server and use it?
A solution already exists for Java.
I would recommend putting the code of yours that calls ZooKeeper behind an interface.
Then during testing you sub in a 'mockZookeeperConn' object that just returns values as though it were really connecting to the server (but the return values are hardcoded).
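A minimal Go sketch of that idea; the interface methods here are illustrative and not gozk's actual API:
package zkclient

// Conn is the narrow surface the application needs from ZooKeeper.
type Conn interface {
	Create(path string, data []byte) error
	Get(path string) ([]byte, error)
}

// mockZookeeperConn satisfies Conn with hardcoded, in-memory behaviour.
type mockZookeeperConn struct {
	nodes map[string][]byte
}

func (m *mockZookeeperConn) Create(path string, data []byte) error {
	m.nodes[path] = data
	return nil
}

func (m *mockZookeeperConn) Get(path string) ([]byte, error) {
	return m.nodes[path], nil
}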
@Ben Echols's answer is very good.
Going further, you can try "build constraints" (build tags).
You can configure different build tags on real-zk and mock-zk code.
For example, we configure "product" for real-zk code and "mock" for mock-zk code.
Thus there are two ways to run the unit tests:
go test -tags mock if there isn't a zk env.
go test -tags product if there is an available zk env.
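A minimal sketch of the mock-tagged side; the NewConn name is illustrative, and a matching conn_real.go would carry the product tag and dial the real server through gozk:
// file: conn_mock.go (compiled only with -tags mock;
// on pre-1.17 toolchains use "// +build mock" instead)
//go:build mock

package zkclient

// NewConn returns the hardcoded fake instead of a live connection.
func NewConn(servers string) (Conn, error) {
	return &mockZookeeperConn{nodes: map[string][]byte{}}, nil
}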

Linking Classes to their relative Configuration / Setup / Data in Sql and Testability

Suppose I have several services represented by classes ServiceA, ServiceB, ServiceC etc.
These services have configuration and other data in SQL, in for example a ServiceConfiguration table.
Are there any best practices for linking ServiceA to its corresponding configuration and data in SQL, other than hard-coding the Id in the class?
And any thoughts on testability?
I can verify that, for example, ServiceA returns true for IsValidServiceForId(1) -- but if its configuration id were to change in SQL for some unknown reason, the test would still pass.
I could add to the test an assertion verifying that I can get a configuration record for a service matching the name "Service A", but that would introduce an extra dependency into my test class - I would then also be testing the ServiceConfigurationRepository (for example).
Edit
I thought this would be a platform agnostic question, but for info I'm using C#, .Net 3.5, NUnit for testing, NHibernate with SQL Server.
If these services are directly hitting the DB for their config, and you need to test them, abstract away the configuration.
When testing, you give your services a mocked configuration.
When actually running the app, you give your services the real configuration, fetched from the DB.
Sorry for being so vague, but without any code, it's hard not to be.
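To make it slightly less vague, here is a minimal C# sketch of that abstraction; all type and member names are illustrative:
public class ServiceConfiguration
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The service depends on this instead of hitting SQL directly.
public interface IServiceConfigurationProvider
{
    ServiceConfiguration GetConfiguration(string serviceName);
}

public class ServiceA
{
    private readonly ServiceConfiguration _config;

    public ServiceA(IServiceConfigurationProvider provider)
    {
        // Looked up by name, so no id is hard-coded in the class.
        _config = provider.GetConfiguration("Service A");
    }

    public bool IsValidServiceForId(int id)
    {
        return _config.Id == id;
    }
}

// In an NUnit test, substitute a stub so no database is involved:
public class StubConfigurationProvider : IServiceConfigurationProvider
{
    public ServiceConfiguration GetConfiguration(string serviceName)
    {
        return new ServiceConfiguration { Id = 1, Name = serviceName };
    }
}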

What are the best practices for the AspNetDevelopmentServerHost attribute?

We have a Web Application Project (dozens actually..) that has a testing project attached to it. In the testing project I have a simple unit test which exercises a couple of methods.
Running locally, the unit test executes and works.
However, when our TFS build server attempts to execute the test, it fails with an error about an invalid path for the AspNetDevelopmentServerHost attribute. Other team members can execute it just fine.
The problem is that the root of my TFS workspace is set to c:\projects\, while one of the team members has theirs set to c:\tfs2008\. The TFS build server, on the other hand, sets the pathToWebRoot variable to "c:\blahblah\Release_PublishedWebsites..." which results in a bad path.
Due to the number of projects we have, I can't have everyone reset an environment variable every time they switch projects.
So, what are the best practices with regard to unit testing web projects in a team environment? The MSDN article was, in true Microsoft fashion, less than helpful.
You should specify the string %pathtowebroot%\\WebSiteName in the pathToWebApp parameter of the AspNetDevelopmentServer or AspNetDevelopmentServerHost attribute.
We use:
<TestMethod(), _
HostType("ASP.NET"), _
AspNetDevelopmentServerHost("$(SolutionDir)\\MyWebProject", "/"), _
UrlToTest("http://localhost:44444/")>
Public Sub Test()
AssertStuff()
End Sub
The $(SolutionDir) means it works fine in everyone's environment, and all the tests can be checked in. We do have to change it each time we automatically create a new test, but we're writing the test anyway, so it's not too much of a hardship.
It's possible to set environment variables during the build to alter the path and still have it work in a local environment.
How to: Parameterize the URL for a Web Performance Tests Web Server
Testing Web Sites and Web Services in a Team Environment