Run TestNG groups sequentially and tests in each group in parallel - unit-testing

I have several test suites grouped by the functionality they test, and I want to run them in parallel so they complete more quickly. It turned out that within one suite I need to put several tests that run against different environment settings. I think I can do this by assigning tests to groups and then using the @BeforeGroups annotation to insert a method that sets up the environment. However, I don't know how to make the tests within each group run in parallel while the groups themselves wait for each other - otherwise some tests would run in the wrong environment. Any suggestions would be appreciated.

You can define the dependencies between groups in the TestNG suite XML. Example below.
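A rough sketch of what the suite XML could look like (the group names, class name and thread count are placeholders for illustration) - parallel="methods" runs the test methods of each group in parallel, while the group dependency makes the envB tests wait until the envA group has finished:

<suite name="EnvironmentSuite" parallel="methods" thread-count="5">
  <test name="AllTests">
    <groups>
      <dependencies>
        <!-- envB methods only start after every envA method has finished -->
        <group name="envB" depends-on="envA"/>
      </dependencies>
      <run>
        <include name="envA"/>
        <include name="envB"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.MyEnvironmentTests"/>
    </classes>
  </test>
</suite>

Your @BeforeGroups methods then set up the environment once before the first test of each group runs.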

Related

How can I properly parallelize my test suites in TeamCity?

So I have a scenario whereby I have many different test suites. They are all triggered by a Create Test Environment step. However, these test suites cannot run concurrently on the same environment, as they would interfere with each other. To alleviate this, I added a shared resource in TeamCity and configured the build definitions to block on this resource, so that only one test suite runs at a time. This works.
However, if another code change is checked in while the test suites for Environment A are running, Environment B can be created by the Create Test Environment step, and all the test suites are re-queued. Currently, because they all block on the same shared resource, those tests sit in the queue awaiting access to it. However, there is no reason the tests for Environment B cannot run (one build at a time) in parallel with the tests for Environment A. How can I best tweak my TeamCity configuration to achieve this?
It seems that you are looking for Matrix builds. This feature is not implemented in TeamCity. As a workaround, you can create separate build configurations for the different environments. You can use TeamCity templates to simplify the setup. For more details, see the related comment in the TeamCity issue tracker.

Postman - Ignore test execution based on environment

In Postman, is there a way to ignore a specific test execution based upon environment at runtime?
I have a collection of around 30 tests, and I don't want 4-5 of them to execute against the production environment at runtime, because they are meant to be executed on staging only.
I don't want to have multiple collections for different environments.
Any way I can make use of pre-request scripts in this scenario?
I agree with @joroe that a simple way to run tests conditionally is to set an environment variable and check it before each conditional test.
If you don't want the request sent to the server at all, you probably want to explore the collection runner and group your requests into collections according to the environment. For example, you may group your requests into a collection called PROD Tests that runs requests 1-10. You can have a second collection called DEV Tests that contains requests 1-15 (the ten from PROD plus the five others you don't want to run in PROD). It's very easy to copy a request from one collection to another. In the collection runner, you then run the collection for the specific environment. You can even automate this using Newman, Postman's command-line collection runner. I'm not super familiar with it, but there is documentation at the link posted. I've included a screen capture of the collection runner interface and how I have some of my tests set up to run.
Create a variable in each environment (e.g. "ENV") whose value is the name of that environment (e.g. "LOCAL").
if(environment.ENV === "LOCAL") tests["Run in local environment"] = true;
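With the newer pm.* scripting API, a similar guard could look like the sketch below; the variable name "ENV" and the value "STAGING" are just assumptions for illustration:

// Tests tab sketch: only register staging-only assertions when ENV is "STAGING"
const env = pm.environment.get("ENV");
if (env === "STAGING") {
    pm.test("Staging-only check", function () {
        pm.response.to.have.status(200);
    });
}

Tests that are not registered simply don't run, so the same collection can be executed against any environment.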

How to Deploy Data in an SSDT Unit Test but Only for the Unit Test

I have successfully begun to write SSDT unit tests for my recent stored procedure changes. One thing I've noticed is that I wind up creating similar test data for many of the tests. This suggests to me that I should create a set of test data during deployment, as a post-deploy step. This data would then be available to all subsequent tests, and there would be less need for lengthy creation of pre-test scripts. Data which is unique to a given unit test would remain in pre-test scripts.
The problem is that the post-deploy script would run not only during deployment for unit tests, but also during deployment to a real environment. Is there a way to make the post-deploy step (or parts of it) run only during the deployment for an SSDT unit test?
I have seen that the test settings in the app.config include the database project configuration to deploy. But I don't see how to cause different configurations to use different SQLCMD variables.
I also see that we can set different SQLCMD variables in the publish profiles, but I don't see how the unit test settings in app.config can reference different publish profiles.
You could use an IF statement checking @@SERVERNAME and only run your test-data code on the unit test server(s), with the same type of check for the other environments.
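For example, a post-deployment script could guard the seed data roughly like this sketch (the server name and table are placeholders for illustration):

-- Post-deployment script sketch: only seed shared test data on the unit-test server
-- 'MY-TEST-SERVER' and dbo.Customer are placeholders
IF @@SERVERNAME = 'MY-TEST-SERVER'
BEGIN
    INSERT INTO dbo.Customer (CustomerId, Name)
    VALUES (1, N'Test customer');
END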
Alternatively, you could make use of the build number in your TFS build definition. If the build number contains, for example, the substring 'test', you could execute the test code, otherwise not. Then you make sure to set an appropriate build number in all your builds.

Using multiple simultaneous tests using Selenium IDE

I have written a simple test for my website. The test simply searches for a word on my search page and waits for the results.
What I need is to run the same test 40 times simultaneously to mimic a situation where 40 users are searching for the same word at the same time.
Basically I want to know how to run them simultaneously not in a queue.
Thanks.
What you probably need is Selenium RC and Selenium Grid, as Selenium IDE is quite limited for automated testing. RC allows you to run remote Selenium tests (though RC can run locally too) and Grid simplifies access to all the running RCs.
You need 40 clients at once. If you are using Selenium RC you can start several clients simultaneously by configuring them to run on different ports. After that you have to start your test 40 times at once. That is the tricky part, depending on what framework you are using to launch the tests.
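As a rough illustration (not the only way to do it), a Java sketch that launches the sessions from a thread pool against a local RC/Grid hub; the site URL, locators and search term are placeholders:

import com.thoughtworks.selenium.DefaultSelenium;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ParallelSearchTest {
    public static void main(String[] args) throws InterruptedException {
        ExecutorService pool = Executors.newFixedThreadPool(40);
        for (int i = 0; i < 40; i++) {
            pool.submit(() -> {
                // each task opens its own browser session via the RC/Grid hub
                DefaultSelenium selenium =
                        new DefaultSelenium("localhost", 4444, "*firefox", "http://example.com/");
                selenium.start();
                selenium.open("/search");                 // placeholder search page
                selenium.type("id=query", "someword");    // placeholder field locator
                selenium.click("id=submit");              // placeholder button locator
                selenium.waitForPageToLoad("30000");
                selenium.stop();
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);      // wait for all sessions to finish
    }
}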
I would suggest JMeter for load-test situations like this. It is quite easy to set up and you can configure how many simulated users you want on your website at once. JMeter works fine for manual tests and for automated tests.
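The test plan itself (a thread group with 40 users and an HTTP sampler for the search request) is usually built in the JMeter GUI and saved as a .jmx file; a non-GUI run could then look roughly like this (file names are placeholders):

jmeter -n -t search-load.jmx -l results.jtl
# -n = non-GUI mode, -t = test plan to run, -l = file to write the sample results to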
Don't mind, but I guess you need to do that in JMeter; what you are trying to do is load/stress testing, where a number of users try to do a certain action simultaneously. Have a look at JMeter.

What's the difference between test suite and test group?

What's the difference between a test suite and a test group? I want to organize my unit tests in phpunit.xml in groups of tests (i.e. groups of directories), e.g. tests for a specific application module.
What should phpunit.xml look like to utilize groups and test suites?
How do I run a specific group/test suite from the command line?
Similar question:
PHPUnit - Running a particular test suite via the command line test runner
PHPUnit manual about test suites and groups in xml configuration:
http://www.phpunit.de/manual/3.4/en/appendixes.configuration.html
http://www.phpunit.de/manual/current/en/organizing-tests.html
How do I configure groups in phpunit.xml so that phpunit --list-groups shows them?
Test suites organize related test cases whereas test groups are tags applied to test methods.
Using the @group annotation you can mark individual test methods with descriptive tags such as fixes-bug-472 or facebook-api. When running tests, you can specify which group(s) to run (or exclude) either in phpunit.xml or on the command line.
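A rough sketch of how this might look (directory names, suite names and group names are placeholders); groups come from @group annotations on the test methods, while suites are declared in phpunit.xml:

<!-- phpunit.xml sketch -->
<phpunit>
    <testsuites>
        <testsuite name="Module1">
            <directory>tests/Module1</directory>
        </testsuite>
        <testsuite name="Module2">
            <directory>tests/Module2</directory>
        </testsuite>
    </testsuites>
    <!-- optionally exclude a group from the default run -->
    <groups>
        <exclude>
            <group>integration</group>
        </exclude>
    </groups>
</phpunit>

/**
 * @group integration
 * @group facebook-api
 */
public function testPostsToFacebookWall()
{
    // ...
}

# run a whole suite (newer PHPUnit versions) or a single group from the command line
phpunit --testsuite Module1
phpunit --group integration
phpunit --list-groups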
We don't bother with test suites here, but they can be useful if you need common setup and teardown across multiple tests. We achieve that with a few test case base classes (normal, controller, and view).
We aren't using groups yet, either, but I can already think of a great use for them. Some of our unit tests rely on external systems such as the Facebook API. We mock the service for normal testing, but for integration testing we want to run against the real service. We could attach a group to the integration test methods to be skipped on the continuous integration server.