Postman: Run a folder issue

I'm using the latest version of Postman, v5.3.2.
When I run a folder with several requests inside it, the requests are not executed sequentially in the same order they appear in the main app, unlike when I run the whole collection that contains that folder.
Is this a bug, or does Postman not support this?

When you need to run anything sequentially through the Collection Runner, you need to add logic to the "Tests" tab of your requests.
There is a built-in method called "setNextRequest" that you can use in this situation (documentation: https://www.getpostman.com/docs/postman/collection_runs/building_workflows). It's important to note that "setNextRequest" only takes effect when you use the Collection Runner (not sure if this is a bug or as intended).
Here is an example I made: https://www.getpostman.com/collections/78ffebd7823f47b26a21
In this example collection, you can see that Test checks for status 200 and, if true, calls Test3 using setNextRequest on line 3. The same is done inside Test3 on line 3, but we call Test2 instead. In Test2 we again run a test, but this time we STOP the execution by supplying null to the method.
Note: you MUST supply null as an argument to setNextRequest in Test2 to prevent the collection from running over and over.
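For reference, here is a minimal sketch of what the "Tests" tab scripts might look like (the request names Test, Test2, Test3 follow the example above; the exact status check is my assumption about what the linked collection does):

// "Tests" tab of request "Test": on success, jump to request "Test3"
tests["Status code is 200"] = responseCode.code === 200;
if (responseCode.code === 200) {
    postman.setNextRequest("Test3");
}

// "Tests" tab of request "Test2": run the check, then stop the collection run
tests["Status code is 200"] = responseCode.code === 200;
postman.setNextRequest(null);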
The results in the Collection Runner should look like this:
Hope this helps someone.

Related

How to configure the environment before running automated tests?

I need a good practice for dealing with my issue.
The issue is: I need to run automated tests against a site. The site has different configurations that completely change its design (on some pages). For example, I can configure two different login pages, and I need to test them both.
First of all, I must make sure that the correct test is run against the correct configuration, so before each test I would need to change the site's config. That is not practical if I have a thousand tests.
A solution that comes to mind is not to reconfigure the site every time, but to do it once and run all the tests corresponding to that configuration. But this solution doesn't seem easy to implement either.
For now, what I did is: I created a method that runs once before all the other tests, and in this method I configure the site with the config used by the majority of the tests. All the other tests change the config before execution and change it back afterwards. It's not good at all.
To do so I used NUnit 3's SetUpFixture and OneTimeSetUp attributes:
using NUnit.Framework;
using OpenQA.Selenium;
using OpenQA.Selenium.Chrome;

/// <summary>
/// Runs once before all the tests in order to configure the environment.
/// </summary>
[SetUpFixture]
public class ConfigTests
{
    [OneTimeSetUp]
    public void RunBeforeAnyTests()
    {
        IWebDriver driver = new ChromeDriver();
        try
        {
            // Here I configure the site
            CommonActions actions = new CommonActions(driver);
            actions.SwitchOffCombinedPaymentPage();
        }
        finally
        {
            // Dispose the driver whether or not configuration succeeded
            driver.Dispose();
        }
    }
}
What I thought after this is that I would be able to pass parameters to the SetUpFixture, but first of all that's impossible, and second of all it wouldn't solve the problem: the fixture would just run twice and the tests would run against whichever configuration was set last.
So guys, how to deal with a site testing that has a lot of configurations?
I'd use a test run parameter from the command line (or from the .runsettings file if you are using the VS adapter). Your SetUpFixture can grab that parameter and do the initialization, and any individual fixtures that need it can grab it as well.
See the --params option to nunit3-console and the TestContext.TestParameters property for accessing the values.
This answers your "first of all it's impossible" part. I didn't answer "second of all... " because I don't understand it. I'll add more if you can clarify.
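A minimal sketch of that approach, assuming a recent NUnit 3 where the values are exposed through the TestContext.Parameters property (the parameter name SiteConfig and the default value are made up for illustration):

// Run with: nunit3-console MyTests.dll --params:SiteConfig=CombinedPaymentOff
using NUnit.Framework;

[SetUpFixture]
public class ConfigTests
{
    [OneTimeSetUp]
    public void RunBeforeAnyTests()
    {
        // Read the test run parameter, falling back to a default when it is absent.
        string config = TestContext.Parameters.Get("SiteConfig", "Default");
        // ...configure the site according to 'config' before any tests run...
    }
}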

Robot Framework Test Case Generation from Within a Test Case?

I am using Robot Framework to automate onboard unit testing of a Linux based device.
The device has a directory /data/tests that contains a series of subdirectories; each subdirectory is a test module with a 'run.sh' to be executed to run the unit test. For example:
/data/tests/module1/run.sh
/data/tests/module2/run.sh
I wrote a function that collects the subdirectory names into a list; this is the list of test modules to be executed. The number of modules can vary daily.
@{modules}=    SSHLibrary.List Directories In Directory    /data/tests
Then another function (Module Test) runs a FOR loop over the list, executes the run.sh in each subdirectory, collects the log data, and logs it to the log.html file.
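Roughly, the keyword looks like this (a sketch in modern Robot Framework FOR syntax; the exact log collection is omitted):

*** Test Cases ***
Module Test
    @{modules}=    SSHLibrary.List Directories In Directory    /data/tests
    FOR    ${module}    IN    @{modules}
        ${output}=    SSHLibrary.Execute Command    /data/tests/${module}/run.sh
        Log    ${output}
    END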
The issue I am experiencing is that when the log.html file is created, there is one test case titled Module Test, with a 'var' entry under the FOR loop for each element (test module). Under each 'var' entry are the results of that module's execution.
Is it possible, from within the FOR loop, to create a test case for each element and log results against it? Right now, if one of the modules fails, I do not get accurate results; I still get a pass for the Module Test test case. I would like to log test cases Module 1, Module 2, ..., Module N, with logs and a pass/fail for each one. Given that the number of modules can vary from execution to execution, I cannot create static test cases; I need to create the test cases dynamically once the number of modules has been determined for the test run.
Any input is greatly appreciated.
Thanks,
Dan.
You can write a simple script that dynamically creates the robot test file by reading /data/tests/module*, then creates one test case for each of the modules. In each test case, simply run the operating-system command (the run.sh) and check the return code.
This way, you get one single test suite, with many test cases, each representing a module.
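For example, a minimal sketch of such a generator in Python (the file names are placeholders, and it assumes the module list is reachable from wherever the generator runs):

import os

MODULE_ROOT = "/data/tests"

# The generated suite assumes an SSH connection is opened elsewhere
# (e.g. in a Suite Setup), which is omitted from this sketch.
HEADER = """*** Settings ***
Library    SSHLibrary

*** Test Cases ***
"""

# One test case per module: run its script over SSH and require exit code 0.
CASE = """{name}
    ${{rc}}=    Execute Command    {root}/{name}/run.sh    return_rc=True    return_stdout=False
    Should Be Equal As Integers    ${{rc}}    0
"""

with open("generated_tests.robot", "w") as out:
    out.write(HEADER)
    for module in sorted(os.listdir(MODULE_ROOT)):
        if os.path.isdir(os.path.join(MODULE_ROOT, module)):
            out.write(CASE.format(name=module, root=MODULE_ROOT))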
Alternatively, consider writing a bash script that runs the robot tests for each module separately and then merges the reports into one report with the rebot tool. Use the --name parameter of pybot to differentiate the tests in the report.
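For instance, a hypothetical wrapper along these lines (module_test.robot and the MODULE variable are assumptions):

# One Robot run per module, then merge the results into a single report.
for module in /data/tests/*/ ; do
    name=$(basename "$module")
    pybot --name "$name" --variable MODULE:"$name" --output "out_${name}.xml" module_test.robot
done
rebot --name "All Modules" --output combined.xml out_*.xml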

How to simulate browser access with plone.app.testing and PAS plugin

I'm developing a PAS plugin for Plone, and as part of the test coverage I'm also testing the generated HTML (I'm not using zope.testbrowser or similar; what I need to test is really simple).
The problem is that when the test runs, the call stack is totally different from real browser usage (I replicated the test environment on a real Plone site).
Here is the piece of the test that fails:
portal = self.layer['portal']
request = self.layer['request']
request.set('ACTUAL_URL', 'http://nohost/plone/folder/subfolder')
request.set('URL', 'http://nohost/plone/folder/subfolder')
login(portal, 'user2')
output = self.folder.subfolder()
Up there, subfolder is an ATCT Folder and I'm calling its default view (folder listing).
Now, the PAS plugin implements the checkLocalRolesAllowed method:
def checkLocalRolesAllowed(self, user, object, object_roles):
...
What is happening:
using the browser, the first call to checkLocalRolesAllowed is done with object=<ATFolder at subfolder>, user='user2'
running the test, the first call to checkLocalRolesAllowed is done on the portal root, with object=<PloneSite at plone>, user='user2'
In both situations the method is called many more times afterwards (which is normal), but the call order is totally different.
In the test, the subfolder context is only the fourth in the call order (the Plone site for the first two calls, the ZCatalog for the third).
Starting from the fact that the "good" behavior is the one reproduced by the browser, how can I simulate the same behavior in tests? Why does calling portal.folder.subfolder() use the Plone site for the first call?
NB: I tested other combinations like using restrictedTraverse, portal.folder.subfolder.folder_listing(), ... nothing worked.

TFS Build servers and critical Unit Tests

When you build on a TFS build server, failed unit tests cause the build to show an orange alert state but they still "succeed". Is there any way to tag a unit test as critical such that if it fails, the whole build will fail?
I've Googled for it and didn't find anything, and I don't see any attribute in the framework, so I'm guessing the answer is no. But maybe I'm just looking in the wrong place.
There is a way to do this, but you need to create multiple test runs and then filter your tests. On your tests, set a TestCategory attribute:
[TestCategory("Critical")]
[TestMethod]
public void MyCriticalTest() {}
For NUnit you should be able to use [Category("Critical")]. There are multiple attributes of a test you can filter on, including the name.
Name = TestMethodDisplayName
FullyQualifiedName = FullyQualifiedTestMethodName
Priority = PriorityAttributeValue
TestCategory = TestCategoryAttributeValue
ClassName = ClassName
And these operators:
= (equals)
!= (not equals)
~ (contains or substring; only for string values)
& (and)
| (or)
( ) (parentheses for grouping)
XUnit .NET currently does not support TestCaseFilters.
Then in your build definition you can create two test runs, one that runs Critical tests, one that runs everything else. You can use the Filter option of the Test Run.
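With the Critical category set as above, the two filter expressions would look something like this:

TestCategory=Critical     (first run: fail the build on any test failure)
TestCategory!=Critical    (second run: continue on error)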
Open the Test Runs window using this hard to find button:
Create 2 test runs:
On your first run set the options as follows:
On your second run set the options as follows:
This way Team Build will run every test with the "Critical" category in the first run and fail the build if any of them fails. If the first run succeeds, it will kick off the non-critical tests and will Partially Succeed, even when a test fails.
Update
The same process explained for Azure DevOps Pipelines.
Yes.
Using the TFS2013 Default Template:
Under the "Process" tab, go to section 2, "Basic".
Expand the Automated Tests section.
For "Test Source", click the ellipsis ("...").
This will open a new window that has a "Fail build when tests fail" check box.

xunit programmatically add new tests/"[Facts]"?

We have a folder full of JSON text files that need to be sent to a single URI. Currently it's all done with a single xUnit "[Fact]", as below:
[Fact]
public void TestAllCases()
{
PileOfTests pot = new PileOfTests();
pot.RunAll();
}
pot.RunAll() then parses the folder and loads the JSON files (say 50 files). Each is then hammered against the URI to see if it returns HTTP 200 ("OK"). If any fail, we currently print them as failures using
System.Console.WriteLine("\n >> FAILED ! << " + testname + "\n");
This does ensure that failures catch our eye, but xUnit thinks all tests failed (understandably). Most importantly, we can't tell xUnit "here, run only this specific test"; it's all or nothing the way it's currently built.
How can I programmatically add test cases? I'd like to add them when I read the number and names of the *.json files.
The simple answer is:
No, not directly. But there exists a workaround, albeit a bit hacky, which is presented below.
Current situation (as of xUnit 1.9.1)
By specifying [RunWith(typeof(CustomRunner))] on a class, one can instruct xUnit to use the CustomRunner class - which must implement Xunit.Sdk.ITestClassCommand - to enumerate the tests available on the test class decorated with this attribute.
But unfortunately, while the invocation of test methods has been decoupled from System.Reflection and the actual methods, the way of passing the tests to run to the test runner hasn't.
Somewhere down in the xUnit framework code for invoking a specific test method, there is a call to typeof(YourTestClass).GetMethod(testName).
This means that if the class implementing the test discovery returns a test name that doesn't refer to a real method on the test class, the test is shown in the xUnit GUI - but any attempts to run / invoke it end up with a TargetInvocationException.
Workaround
If one thinks about it, the workaround itself is relatively straightforward.
A working implementation of it can be found here.
The presented solution first reads in the names of the files which should appear as different tests in the xUnit GUI.
It then uses System.Reflection.Emit to dynamically generate an assembly with a test class containing a dedicated test method for each of the input files.
The only thing that each of the generated methods does is to invoke the RunTest(string fileName) method on the class that specified the [EnumerateFilesFixture(...)] attribute. See linked gist for further explanation.
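For illustration, here is a heavily simplified sketch of that idea (this is not the linked implementation; RunTest, the class names, and the generated method names are assumptions, and it targets the .NET Framework Reflection.Emit API of that era):

using System;
using System.IO;
using System.Reflection;
using System.Reflection.Emit;

public static class DynamicFixtureBuilder
{
    // Hypothetical helper that every generated [Fact] delegates to.
    public static void RunTest(string fileName)
    {
        // ...POST the JSON file to the URI and assert the response is HTTP 200...
    }

    public static Type BuildTestClass(string jsonFolder)
    {
        AssemblyBuilder asm = AppDomain.CurrentDomain.DefineDynamicAssembly(
            new AssemblyName("GeneratedTests"), AssemblyBuilderAccess.Run);
        ModuleBuilder mod = asm.DefineDynamicModule("GeneratedTests");
        TypeBuilder type = mod.DefineType("JsonFileTests", TypeAttributes.Public);

        ConstructorInfo factCtor = typeof(Xunit.FactAttribute).GetConstructor(Type.EmptyTypes);
        MethodInfo runTest = typeof(DynamicFixtureBuilder).GetMethod("RunTest");

        foreach (string file in Directory.GetFiles(jsonFolder, "*.json"))
        {
            // One public void method per input file, marked with [Fact].
            MethodBuilder method = type.DefineMethod(
                "Test_" + Path.GetFileNameWithoutExtension(file),
                MethodAttributes.Public, typeof(void), Type.EmptyTypes);
            method.SetCustomAttribute(
                new CustomAttributeBuilder(factCtor, new object[0]));

            ILGenerator il = method.GetILGenerator();
            il.Emit(OpCodes.Ldstr, file);   // push the file path
            il.Emit(OpCodes.Call, runTest); // call RunTest(file)
            il.Emit(OpCodes.Ret);
        }

        return type.CreateType();
    }
}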
Hope this helps; feel free to use the example implementation if you like.