I am currently using SoapUI (free version).
I am looking to automate the tests so a value is placed in each test, without having to enter them manually.
A basic example:
The test web service I am using is http://www.webservicex.net/mortgage.asmx?WSDL.
http://www.webservicex.net/mortgage.asmx?op=GetMortgagePayment
The service has been set up as SOAP, a test suite has been built and a SOAP request test step has been assigned.
Out of the box, you can enter data manually into the test step. However, I want to achieve three things using the free edition:
Add a randomized value to the test, so that a value within a given range is inserted each time the test is run. I am aware you can use Groovy, but I am unsure whether the Groovy script needs to be added as a test step or in the Setup Script tab (see the sketch after this list).
Pull a list of data from a .csv or similar so the test runs through all of the data.
Repeat step 2 but set the assertions for each piece of test data to state whether the response should be valid or not.
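To make points 1 and 2 concrete, here is roughly the kind of Groovy I have in mind. This is only a sketch: the property names, the step name and the CSV path are placeholders I made up, and it assumes the SOAP request body uses expansions like ${#TestCase#Years}.

```groovy
// Sketch of a Groovy Script test step (placeholder names throughout).
// Assumes the request body contains ${#TestCase#Years}, ${#TestCase#Interest}
// and ${#TestCase#LoanAmount}.

// 1. Randomized value: pick a mortgage term between 5 and 30 years each run
def years = 5 + new Random().nextInt(26)
testRunner.testCase.setPropertyValue("Years", years.toString())

// 2. Data-driven loop: run the request step once per CSV row
new File("C:/testdata/mortgage.csv").eachLine { line, lineNo ->
    if (lineNo == 1) return                      // skip the header row
    def parts = line.split(",")
    testRunner.testCase.setPropertyValue("Years", parts[0])
    testRunner.testCase.setPropertyValue("Interest", parts[1])
    testRunner.testCase.setPropertyValue("LoanAmount", parts[2])
    testRunner.runTestStepByName("GetMortgagePayment")
}
```

My understanding is that either location would run this: a Groovy Script test step executes in order with the other steps, while the Setup Script tab runs once before the test case starts. Is that right?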
If you could help it would be appreciated.
Cheers,
Tim
Related
I would like to know how it is possible to perform unit tests on ETLs developed in Talend.
My ETLs perform file reading, file generation, and connection with an SAP system (reading/writing IDocs).
Are there any tools for this? Or does it take developing a small Java test framework?
Yes, Mohcine, Talend introduced test case automation in version 6, which is part of its overall Continuous Integration framework. You right-click on a component in a job and select "Create Test Case". It will create a skeleton test case job. You can extend this test case job to perform a variety of tests, including db connectivity and results. It will take some time to learn the tool well enough to make it useful, but it is worth the effort. Also, this feature may only be available in the subscription version of Talend; I am not sure if it's available in Open Studio.
Here is an example: the diagram is a very simple job that loads a file into a db table.
Here is the test case I created by first generating the skeleton, then modifying it for my specific purposes.
Here is the assert where I match the number of rows read from the file with the number of rows inserted into the db table.
For further info check out this tutorial.
We have a SOAP web service that is in production and contains a large number of methods. As part of a project we are adding new methods to that web service, note we are not amending the existing methods.
What I am trying to determine is whether I need to regression test the existing methods to test if they have been impacted by adding new methods?
Yes, if you change your webservice the only proper way to make sure none of the changes have impacted existing operations is a regression test.
If you use a testing tool like SOAPUI you can automate this for every build you make. (Regression) testing should be a standard step after any new build to ensure software quality.
As a preface, our setup is somewhat unusual due to "legacy" reasons. It is fully possible that I am going against the grain with this. I would love to get an expert opinion on whether it is possible to get my current setup working or a suggestion on a different approach.
Environment
Java application with over 10K unit tests in JUnit. For legacy reasons the entire unit test run takes a long time (the ultimate goal is to fix the root of the problem, but this will not happen soon).
The application is broken up into multiple modules, with each module having its own unit tests. Executing tests module by module takes a reasonable amount of time, so if someone commits code to a module's repo subtree and only that module's tests get executed, they can get a result quickly.
Current Jenkins Setup
JUnit job
This is a single parameterized job that can run tests for any module. The job takes as parameters the regexes for which tests to run, and a parameter indicating which module it is running for, used for notification purposes. It checks out the whole repo tree and then does the run based on the parameters.
After the run completes, this job analyzes the JUnit results, publishes the report and sends out email notifications.
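For concreteness, the job is conceptually equivalent to the following pipeline sketch (the real job is a freestyle job; the parameter names, Maven invocation and email address here are made up):

```groovy
// Simplified sketch of the parameterized JUnit job (names are placeholders).
pipeline {
    agent any
    parameters {
        string(name: 'TEST_REGEX', defaultValue: '', description: 'Which tests to run')
        string(name: 'MODULE', defaultValue: '', description: 'Module name, for notifications')
    }
    stages {
        stage('Checkout') {
            steps { checkout scm }                     // whole repo tree
        }
        stage('Test') {
            steps { sh "mvn test -Dtest='${params.TEST_REGEX}'" }
        }
    }
    post {
        always {
            junit '**/target/surefire-reports/*.xml'   // publish the report
            mail to: 'team@example.com',
                 subject: "JUnit run for ${params.MODULE}: ${currentBuild.currentResult}",
                 body: "See ${env.BUILD_URL}"
        }
    }
}
```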
Repo watchers
One repo watcher for each module. The watcher checks out only the repo subtree that it wants to monitor. When a change is detected, it triggers the JUnit job, telling it which tests to run and which component the run is for.
Question
In general the setup works well and does exactly what I need; however, it breaks a few of the nice and expected features of Jenkins and the JUnit plugin.
Because the same job keeps executing different subsets of unit tests, the job to job comparison between unit tests does not provide any value. Without manually scanning between jobs it is not possible to tell what changed in terms of new failures or new fixes to unit tests.
Something very similar happens to change history:
Each repo watcher runs on its own schedule. Suppose we have a change to module A and a change to module B, very close in time to each other. If watcher A triggers first, the JUnit job triggered by watcher A will "claim" both changes. When the JUnit job triggered by watcher B runs, it will not detect any new changes in the repo. This plays havoc with email notifications, as the second JUnit job does not know who broke the build.
At the end of the day, I believe I am looking for a way to establish a dependency relationship between non-sequential runs of the same job in Jenkins, or alternatively a totally different approach.
Thank you!
Okay, so let's see if I get this right in basic language:
You want to track what failures are caused by what changeset?
In which case I would suggest the following. Again in simple terms; you will need to adapt this to your current setup.
Set up a job that manages results.
This job should be parameterized to take the name or change number and publish ALL the results.
This job should be triggered after each test run completes, to consolidate all results.
Now if a new test or failure is introduced, the same job can track it and email the person who caused the failure. A rough sketch follows.
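As a sketch only (the job and parameter names are made up, and it assumes the Copy Artifact plugin and that the test jobs archive their report XML):

```groovy
// Hypothetical "results manager" pipeline: triggered after each test run,
// it pulls that run's reports and publishes them in one consolidated job,
// so run-to-run comparison and blame emails work again.
pipeline {
    agent any
    parameters {
        string(name: 'MODULE', defaultValue: '', description: 'Module whose tests just ran')
        string(name: 'CHANGE_ID', defaultValue: '', description: 'Changeset that triggered the run')
        string(name: 'SOURCE_BUILD', defaultValue: '', description: 'Upstream build number')
    }
    stages {
        stage('Collect') {
            steps {
                // requires the Copy Artifact plugin
                copyArtifacts projectName: 'junit-job',
                              selector: specific(params.SOURCE_BUILD),
                              filter: '**/surefire-reports/*.xml'
            }
        }
    }
    post {
        always {
            junit '**/surefire-reports/*.xml'          // one consolidated test history
        }
        unstable {
            mail to: 'team@example.com',
                 subject: "New failures in ${params.MODULE} (change ${params.CHANGE_ID})",
                 body: "See ${env.BUILD_URL}testReport/"
        }
    }
}
```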
Jenkins is very powerful and very generic, so pretty much every scenario can be resolved, if not with plugins then with Groovy. So I would suggest taking a pen, mapping it out on a board, and creating a process from it rather than just a single job.
I have designed a job in Talend. The job fetches data from a database, converts it into JSON, and uploads that JSON to a server. I want to write test cases for my job like we write unit tests in Java projects. I have searched a lot for how to write test cases for a Talend job but did not find anything. If anyone knows how to test a Talend job, please suggest an approach.
You can simply create a job which calls your job (either with tRunJob or tSoap if your job is SOAP-exposed):
Init your database
Call your job
Check the result on the server (or mock the server call by overriding context parameters)
Use tAssert to make your check
Use tAssertCatcher -> tLogRow to print the test results
For CI on an internal project, I made a basic Java application that is a telnet wrapper around the Talend Command Line API (listJob, runJob, ...) and then generates a JUnit XML result file. Everything is called by Jenkins.
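The idea, very roughly, was along these lines (a Groovy sketch rather than the actual Java code; the host, port, command format and success check are all assumptions, only listJob/runJob come from the Command Line API):

```groovy
import groovy.xml.MarkupBuilder

// Rough sketch of the telnet-wrapper idea; adapt the command syntax and
// success check to your Talend CommandLine setup.
boolean runTalendJob(String jobName) {
    def ok = false
    new Socket("localhost", 8002).withStreams { input, output ->   // assumed port
        def writer = new PrintWriter(output, true)                 // autoflush
        writer.println("runJob ${jobName}")                        // hypothetical command format
        def response = input.newReader().readLine()                // naive single-line read
        ok = response?.contains("0")                               // assumed: "0" means success
    }
    return ok
}

// Run the jobs under test and emit a minimal JUnit XML file for Jenkins
def results = ["LoadCustomers", "ExportJson"].collectEntries { [(it): runTalendJob(it)] }
new File("TEST-talend.xml").withWriter { w ->
    new MarkupBuilder(w).testsuite(name: "talend", tests: results.size()) {
        results.each { jobName, passed ->
            testcase(name: jobName) {
                if (!passed) failure(message: "job ${jobName} failed")
            }
        }
    }
}
```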
It seems that nothing really exists to perfectly test Talend jobs :-(
Good luck.
In Talend 6.0.1 I found a tab named "Test Cases"; it seems new to me. At https://help.talend.com/display/TalendRealtimeBigDataPlatformStudioUserGuide60EN/6.10+Testing+Jobs+using+test+cases you can find an explanation of writing such test cases. I'm not sure if it's what you wanted, but I will have a look at it.
For end-to-end testing, we run two versions of the job, ask the user which version needs to be compared with which, dynamically create the table on the fly, and compare the results on the db side. This is just an attempt.
Yeah, there is no JUnit OOB (out of the box).
Let me begin by saying I'm a ColdFusion newbie.
I'm trying to research whether it's possible to do the following and what the best approach would be to achieve it.
Whenever a developer checks code into SVN, I would like to get all the new changes/files and do an automatic build to check whether the code can be deployed successfully to the production server. I guess there are two parts to it: first, syntax checking, and second, integration testing (whether the functionality works as expected). For the latter part some unit test tools would have to be used.
Can someone comment on their experience doing something similar for ColdFusion?
Sorry for being a bit vague... I know it's a very open-ended question, but any feedback would be appreciated.
Thanks
There's a project called "Cloudy With A Chance of Tests" that purports to do what you require. In particular it brings together a number of other CFML code analysis projects (VarScope & QueryParam) to check code, as well as unit testing. I am not currently using it myself but did have a look at it some time ago (more than 12 months) and it appeared to be quite good.
https://github.com/mhenke/Cloudy-With-A-Chance-Of-Tests
Personally I run MXUnit tests in Jenkins using the instructions from the MXUnit site - available here:
http://wiki.mxunit.org/display/default/Continuous+Integration+--+Running+tests+with+Jenkins
Essentially this is set up as an ant task in Jenkins, which executes the MXUnit tests and reports back the results.
We're not doing fully continuous integration, but we have a process which automates some of the drudgery of our builds (a rough script sketch follows the list):
replace the site's application.cf(m|c) with one that tells users that the app is being deployed (we had QA staff raising defects that were due to re-deployments)
read a database manifest XML which lists all the SQL scripts that make up the current release. We concatenate the scripts into a single upgrade script, suitable for shipping
execute the SQL script against the server's DB, noting any errors. The concatenation process also adds a line of SQL after each imported script that writes to a runlog table, so we can see what ran, how long it took and which build it was associated with. If you're looking to replicate this step, take a look at Liquibase
deploy the latest code
make an http call to a ?reset=true type URL to tell the app to re-initialize
execute any tests
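Scripted, the flow looks something like this Groovy sketch (the paths, URL and manifest layout are invented for illustration):

```groovy
// Sketch of the build flow above; adjust paths and manifest format to taste.
def buildId = System.getenv("BUILD_NUMBER") ?: "dev"

// 1. swap in an "app is being deployed" application file
new File("site/Application.cfm").text = new File("build/deploying.cfm").text

// 2. concatenate the SQL scripts named in the manifest into one upgrade
//    script, logging each one to a runlog table as it is imported
def manifest = new XmlSlurper().parse(new File("db/manifest.xml"))
def upgrade = new File("build/upgrade.sql")
upgrade.text = ""
manifest.script.each { s ->
    upgrade << new File("db/${s.@name}").text
    upgrade << "\nINSERT INTO runlog (script, build) VALUES ('${s.@name}', '${buildId}');\n"
}

// 3. execute build/upgrade.sql against the server DB (e.g. groovy.sql.Sql) ...
// 4. deploy the latest code ...

// 5. tell the app to re-initialize
new URL("http://myapp.example.com/index.cfm?reset=true").text

// 6. run the MXUnit tests ...
```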
The build is requested manually through the build servers we have, but you click a button, make tea and it's done.
We've just extended the above to cope with multiple servers in a cluster and it ticks along nicely. I think the above suggestion of using the Jenkins SVN plugin to automate the process sounds like the way to go.