What's the difference between a test suite and a test group? I want to organize my unit tests in phpunit.xml into groups of tests (i.e., groups of directories), e.g. the tests for a specific application module.
What should phpunit.xml look like to make use of groups and test suites?
How do I run a specific group or test suite from the command line?
Similar question:
PHPUnit - Running a particular test suite via the command line test runner
The PHPUnit manual sections on test suites and groups in the XML configuration:
http://www.phpunit.de/manual/3.4/en/appendixes.configuration.html
http://www.phpunit.de/manual/current/en/organizing-tests.html
How do I configure groups in phpunit.xml so that phpunit --list-groups shows them?
Test suites organize related test cases whereas test groups are tags applied to test methods.
Using the @group annotation you can mark individual test methods with descriptive tags such as fixes-bug-472 or facebook-api. When running tests you can specify which group(s) to run (or skip), either in phpunit.xml or on the command line.
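A minimal sketch of how that looks (the class, method, and group names are made up; older PHPUnit versions extend PHPUnit_Framework_TestCase instead of the namespaced class):

    <?php
    use PHPUnit\Framework\TestCase;

    class UserModuleTest extends TestCase
    {
        /**
         * @group facebook-api
         */
        public function testSyncsProfile()
        {
            $this->assertTrue(true); // placeholder assertion
        }
    }

And on the command line:

    phpunit --list-groups                 # list the groups found in the configured tests
    phpunit --group facebook-api          # run only tests tagged @group facebook-api
    phpunit --exclude-group facebook-api  # run everything except that group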
We don't bother with test suites here, but they can be useful if you need common setup and teardown across multiple tests. We achieve that with a few test case base classes (normal, controller, and view).
We aren't using groups yet, either, but I can already think of a great use for them. Some of our unit tests rely on external systems such as the Facebook API. We mock the service for normal testing, but for integration testing we want to run against the real service. We could attach a group to the integration test methods so they can be skipped on the continuous integration server.
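To tie this back to the original question, here is a sketch of a phpunit.xml that puts each module's directory in its own test suite and excludes the integration group by default (paths and names are placeholders):

    <phpunit bootstrap="tests/bootstrap.php">
        <testsuites>
            <testsuite name="ModuleA">
                <directory>tests/ModuleA</directory>
            </testsuite>
            <testsuite name="ModuleB">
                <directory>tests/ModuleB</directory>
            </testsuite>
        </testsuites>
        <groups>
            <exclude>
                <group>facebook-api</group>
            </exclude>
        </groups>
    </phpunit>

Newer PHPUnit versions can then run a single suite with phpunit --testsuite ModuleA. Note that phpunit --list-groups reports the groups it discovers from @group annotations in the tests the configuration points at, not from the <groups> element itself.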
Related
I know that in Azure DevOps I can create test cases for my features and I can collect those test cases in a test suite to keep track of test results for a particular release or iteration of my software.
I also know that on a code level I can integrate a test framework to run unit or integration tests. Depending on the technology stack and language I use, the frameworks differ (e.g., Mocha, JUnit, NUnit, pytest), but they usually produce a common format for test results, such as an XML test report using the JUnit schema.
Now, I run my unit tests in different tools (e.g., GitLab CI/CD or Jenkins), and I would like to link the test results from those unit and integration tests to the test cases that I am collecting in test suites on Azure DevOps.
If I understand this correctly, then the link to bring this together is the test case ID on Azure DevOps which I somehow need to correlate with the individual unit tests in my test framework.
Once my test suite has run in the external tool, how can I publish the test report (e.g., a JUnit XML file) to Azure DevOps and correlate it with my test suite?
Is there a specific Azure DevOps API I can use for this, such as https://learn.microsoft.com/en-us/rest/api/azure/devops/test/, to publish a JUnit test report?
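From that API reference, my rough guess at the call sequence is something like the following (organization, project, IDs, and the api-version are placeholders, and the exact request fields would need to be checked against the Test API docs):

    # create an automated test run tied to a test plan / test points
    curl -u :$PAT -H "Content-Type: application/json" \
      -X POST "https://dev.azure.com/{org}/{project}/_apis/test/runs?api-version=7.1" \
      -d '{ "name": "Functional run", "automated": true, "plan": { "id": "123" }, "pointIds": [ 456, 457 ] }'

    # add results (parsed from the JUnit XML) to the run returned above
    curl -u :$PAT -H "Content-Type: application/json" \
      -X POST "https://dev.azure.com/{org}/{project}/_apis/test/runs/{runId}/results?api-version=7.1" \
      -d '[ { "testCaseTitle": "LoginTest.validCredentials", "outcome": "Passed", "durationInMs": 1200 } ]'

    # mark the run as completed
    curl -u :$PAT -H "Content-Type: application/json" \
      -X PATCH "https://dev.azure.com/{org}/{project}/_apis/test/runs/{runId}?api-version=7.1" \
      -d '{ "state": "Completed" }'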
How do I run a specific set of Appium Java TestNG test cases in AWS Device Farm? I can see that Device Farm ignores all TestNG annotations, including groups and enabled. Is there any way around this?
To run only a subset of tests, the project needs to include a testng.xml file in the root of the *-tests.jar. Here is a GitHub pull request I've authored showing how to do that.
https://github.com/aws-samples/aws-device-farm-appium-tests-for-sample-app/pull/14
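As a rough sketch (the group, package, and class names here are hypothetical), a testng.xml that restricts the run to one group could look like this; putting it under src/test/resources typically lands it in the root of the test jar, but see the pull request above for the exact packaging steps:

    <!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
    <suite name="DeviceFarmSuite">
      <test name="SmokeTests">
        <groups>
          <run>
            <include name="smoke"/>
          </run>
        </groups>
        <classes>
          <class name="com.example.tests.LoginTest"/>
        </classes>
      </test>
    </suite>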
In the standard environment, the tests are parsed and executed individually. As a result, some TestNG features like priority and nested groups are not honored. Tests also run more slowly because the Appium server is restarted between tests.
https://docs.aws.amazon.com/devicefarm/latest/developerguide/test-environments.html#test-environments-standard
If these features are needed, the project will need to use a custom test environment in Device Farm.
https://docs.aws.amazon.com/devicefarm/latest/developerguide/custom-test-environments.html
This produces one set of logs and one video for all the tests, since the test package is not parsed.
Hth
-James
I have successfully begun to write SSDT unit tests for my recent stored procedure changes. One thing I've noticed is that I wind up creating similar test data for many of the tests. This suggests to me that I should create a set of test data during deployment, as a post-deploy step. This data would then be available to all subsequent tests, and there would be less need for lengthy creation of pre-test scripts. Data which is unique to a given unit test would remain in pre-test scripts.
The problem is that the post-deploy script would run not only during deployment for unit tests, but also during deployment to a real environment. Is there a way to make the post-deploy step (or parts of it) run only during the deployment for an SSDT unit test?
I have seen that the test settings in the app.config include the database project configuration to deploy. But I don't see how to cause different configurations to use different SQLCMD variables.
I also see that we can set different SQLCMD variables in the publish profiles, but I don't see how the unit test settings in app.config can reference different publish profiles.
You could use an IF statement that checks @@SERVERNAME and only runs your unit-test data setup on the unit test server(s), with the same kind of check for the other environments.
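A minimal sketch of that check in the post-deploy script (the server name and the seeded table are placeholders):

    IF @@SERVERNAME = 'UNITTEST-SQL01'  -- placeholder name of the unit test server
    BEGIN
        -- seed the shared test data only on the unit test server
        INSERT INTO dbo.Customer (CustomerId, Name)
        VALUES (1, N'Test customer');
    END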
Alternatively, you could make use of the build number in your TFS build definition. If the build number contains, for example, the substring 'test', you execute the test code; otherwise you don't. Then make sure to set an appropriate build number in all your builds.
I have several test suites based on the functionality they test, and I want to run them in parallel to finish more quickly. It turned out that within one suite I need to put several tests that run against different environment settings. I think I can do this by assigning tests to groups and then using the @BeforeGroups annotation to insert a method that sets up the environment. However, I don't know how to make the tests within each group run in parallel while the groups wait for each other; otherwise there will be tests running against the wrong environment. Any suggestions would be appreciated.
You can define the dependencies between the groups in the suite XML (testng.xml).
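A rough sketch of what that might look like (group, class, and thread-count values are made up). With parallel="methods" the tests inside a group run concurrently, and the depends-on declaration should keep the env-b tests from starting until the env-a group has finished:

    <suite name="EnvDependentSuite" parallel="methods" thread-count="5">
      <test name="AllEnvironments">
        <groups>
          <dependencies>
            <group name="env-b" depends-on="env-a"/>
          </dependencies>
          <run>
            <include name="env-a"/>
            <include name="env-b"/>
          </run>
        </groups>
        <classes>
          <class name="com.example.FeatureOneTests"/>
          <class name="com.example.FeatureTwoTests"/>
        </classes>
      </test>
    </suite>

The @BeforeGroups setup methods you mention would then run once before the first test of their group.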
I have written a REST server (in Java using RestEasy) with a unit test suite written in Scala. The test suite uses the mock server provided by RestEasy and runs with every Maven build.
I would like to create a second functional test suite that calls an actual tomcat server and exercises each REST service. I do not want this new suite to run with every build, but only on demand, perhaps controlled with a command line argument to Maven.
Is it possible to create multiple independent test suites in a Maven project and disable some from automatic running, or do I need to create a separate Maven project for this functional suite? How can I segregate the different functional suite code if these tests are in the same project with the unit tests (different directories)? How do I run a selected suite with command line arguments?
I have never used it myself, but I am aware of Maven integration tests run by the Maven Failsafe plugin.
Whereas the Surefire plugin by default includes tests named **/Test*.java, **/*Test.java, and **/*TestCase.java, the Failsafe plugin runs the **/IT*.java, **/*IT.java, and **/*ITCase.java tests.
The two approaches have different intentions, which seems to match part of your needs. It might be worth a look.
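A sketch of wiring up Failsafe for the functional suite (the plugin version is just an example): with this in place, mvn test keeps running only the unit tests, mvn verify additionally runs the *IT.java classes, and -DskipITs skips them on demand.

    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-failsafe-plugin</artifactId>
      <version>3.2.5</version>
      <executions>
        <execution>
          <goals>
            <goal>integration-test</goal>
            <goal>verify</goal>
          </goals>
        </execution>
      </executions>
    </plugin>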
Another approach would be to use Maven profiles and specify different Surefire includes for each profile.
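A sketch of the profile variant (the profile id and include pattern are made up): a plain mvn test uses the default Surefire includes, while mvn test -Pfunctional picks up the functional suite instead.

    <profiles>
      <profile>
        <id>functional</id>
        <build>
          <plugins>
            <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-surefire-plugin</artifactId>
              <configuration>
                <includes>
                  <include>**/*FunctionalTest.java</include>
                </includes>
              </configuration>
            </plugin>
          </plugins>
        </build>
      </profile>
    </profiles>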