How to execute the same test case multiple times, once per configuration - testlink

We have approximately 7 different configurations against which to run the same test case in TestLink, but a test case can be marked Pass/Fail only once in TestLink.
Configurations like:
- Mac OS - Safari
- Windows 10 - IE, Edge, Chrome
- Windows 7 - IE, Edge, Chrome
- along with different resolutions
Now suppose one of my test cases fails on Mac Safari but passes on IE; how can I execute the same test case separately per configuration?
Is there any way to execute the same test case multiple times within the same build, where the only difference is the configuration? I think maybe a custom field could be used, but how?
My manager asked me to create different builds for all 7 configurations, i.e. 7 different builds for the same test plan, one per configuration, which does not look like a good approach to me.
Any help would be appreciated

TestLink has a feature called Platforms that can be used to achieve this.
First, create the required platforms (Test Project -- Platform Management), e.g. "Mac OS - Safari", "Windows 10 - IE".
Then add those platforms to the test plan (Test Plan Contents -- Add/Remove Platforms).
Then select which test cases need to run on which platform (Test Plan Contents -- Add/Remove Test Cases).
Now execute the test plan: it will show the test plan, platform, and build to select, and let you execute the test case once per platform.
Finally, the reports (Test Result Matrix) will show the test case's pass/fail status under each platform.
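If you also need to record those per-platform results programmatically, TestLink exposes an XML-RPC API whose tl.reportTCResult method accepts a platform name, so the same test case can be reported once per platform within the same build. Here is a rough C# sketch using the xml-rpc.net (CookComputing.XmlRpc) library; the server URL, API key, and IDs below are placeholders, and the method/parameter names are as I recall them from the TestLink 1.9 API:

using CookComputing.XmlRpc;

// Minimal proxy for the TestLink XML-RPC endpoint.
[XmlRpcUrl("http://yourserver/testlink/lib/api/xmlrpc/v1/xmlrpc.php")]
public interface ITestLinkApi : IXmlRpcProxy
{
    [XmlRpcMethod("tl.reportTCResult")]
    object ReportTCResult(XmlRpcStruct parameters);
}

public class ReportPerPlatform
{
    public static void Main()
    {
        var api = XmlRpcProxyGen.Create<ITestLinkApi>();
        // Report the same test case once per platform within the same build.
        foreach (var platform in new[] { "Mac OS - Safari", "Windows 10 - IE" })
        {
            var args = new XmlRpcStruct();
            args["devKey"] = "<your API key>";        // personal API access key
            args["testcaseexternalid"] = "PROJ-123";  // hypothetical test case ID
            args["testplanid"] = 42;                  // hypothetical test plan ID
            args["buildname"] = "Build 1";
            args["platformname"] = platform;          // decides which execution is recorded
            args["status"] = platform.StartsWith("Mac") ? "f" : "p";  // f = fail, p = pass
            api.ReportTCResult(args);
        }
    }
}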

Related

What settings need to be set to get Impacted test results in Azure DevOps for MSTest

I want to get Impacted test results in MSTest but am not getting the expected result. I have followed all the instructions written here: https://learn.microsoft.com/en-us/azure/devops/pipelines/test/test-impact-analysis?view=azure-devops
These are the VSTS log files; here you can see all the configuration done for Impact Analysis.
This is the test result image, where I cannot see Impacted results.
My main branch is "Build Development" and the child branch is "Mstest_UT". We have rebased, but I still did not get the impacted results as expected.
After doing some research I learned that impacted test results are produced only if all test cases pass, so I made them all pass, but still did not get such a result.
[TestMethod]
public void GetAboutTideContent_Passing_Valid_Data()
{
    // Arrange: the mocked repository returns an empty AboutTideEditor for any user.
    iAboutTideEditorRepository.Setup(x => x.GetAboutTideContent(It.IsAny<ApplicationUser>())).Returns(new AboutTideEditor() { });
    // Act
    ResponseData<AboutTideEditor> actual = aboutTideService.GetAboutTideContent(It.IsAny<ApplicationUser>());
    // Assert
    Assert.AreEqual(ProcessStatusEnum.Success, actual.Status);
}
I am writing a mock-based test in MSTest, and I am expecting impacted test results.
From what I understand of the link you provided, you should use this type of test from the start of your project (the "growth and maturation of the test" wording hints at the software learning over time which tests to select). If you're enabling the feature halfway in, the program might already be locked into performing particular tests in a certain way (MS tooling sometimes remains a black box). If that is the case, you should override/reset it and run from the start, without having the program or user pre-select (detailed) tests. This of course might set you back several hours of testing, but weigh that against spending and losing more time in the search for what goes wrong; that keeps consuming time, and it is of the essence to minimize it. Also check the graph provided on the linked page; it is very informative about the order of actions (e.g. step 6).
In your first console screenshot there is a difference in the parallel setup (also consider the bullets below). The screenshot states that some DLL files were not found in the "test assembly". If there is a possibility to produce a test log, you might want to check that too, to see what typos might have occurred.
From the page:
At present, TIA is not supported for:
- Multi-machine topology (where the test is exercising an app deployed to a different machine)
- Data-driven tests
- Test adapter-specific parallel test execution
- .NET Core
- UWP
In short: reset the whole test and run "fresh" to see if the errors persist.

How can I debug each scenario in my feature file separately

I have a test project that I wrote to test different services in the same solution. I used SpecFlow and I have many scenarios to test.
In order to debug my tests I have to run my services, about 3 of them.
The problem I have now is: if I go to the Test Explorer window, right-click a single scenario, and try to debug, the option is disabled.
If I right-click the feature file and select the option "Debug SpecFlow Scenarios", it debugs all my scenarios, but I don't want that.
How can I debug each scenario in my feature file separately while running my services?
Note: I am using MSTest and VS2012.
Well, you could switch to NUnit; the NUnitTestAdapter supports running individual tests.
You don't have to do it permanently, just long enough to debug this test.
Or, add a Debugger.Launch() in the method that is bound to your When. Let all the other tests finish, and then step through this way. You will of course need to connect to the other services using Debug > Attach to Process... before stepping across the process boundary.
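For illustration, here is a minimal sketch of that approach; the step text and class name are hypothetical:

using System.Diagnostics;
using TechTalk.SpecFlow;

[Binding]
public class MyServiceSteps  // hypothetical step class
{
    [When(@"the request is sent to the service")]  // hypothetical step text
    public void WhenTheRequestIsSentToTheService()
    {
        // Prompts to attach a debugger (or breaks into an attached one)
        // when this step runs, even if the run was started without debugging.
        if (!Debugger.IsAttached)
        {
            Debugger.Launch();
        }

        // ... actual step implementation ...
    }
}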

Running tests in a Windows Phone project

I am using WPToolkitTestFX in a Windows Phone 8 project, so I am running tests with a special page whose constructor contains:
this.Content = UnitTestSystem.CreateTestPage();
But if I want to run the application, I need to change the navigation page in WMAppManifest.xml. That is not good, because I often forget to change it back and end up pushing that WMAppManifest to the source control system.
Is it possible to create different run configurations for a WP8 project, one for the application and another for the tests?
Put your unit tests in a different project with its own WMAppManifest.
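In other words, the separate test project's own start page can point permanently at the test runner, roughly like this (the page and class names are hypothetical; UnitTestSystem comes from the WPToolkitTestFX package, as in the question):

using Microsoft.Phone.Controls;
using Microsoft.Silverlight.Testing;

// Start page of the dedicated test project. Its own WMAppManifest.xml
// points here permanently, so the main app's manifest never needs editing.
public partial class TestRunnerPage : PhoneApplicationPage
{
    public TestRunnerPage()
    {
        InitializeComponent();
        // Replace the page content with the test framework's runner UI.
        this.Content = UnitTestSystem.CreateTestPage();
    }
}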

How do I view the colorized output of Google Test in Xcode in the "All Output" window?

I'm new to Xcode (and Macs in general) and am trying to port some of my code base over to run on both OS X and iOS. I have a large set of unit tests written against the Google C++ Testing Framework (Google Test). I successfully compiled the framework and I can run some tests, but I'm unsure how to view the colorized output from within Xcode.
I'm used to hitting "Run" in Visual Studio and immediately seeing a console window (with colors) letting me know at a glance if the tests passed or failed.
I've managed to set up a "Run Script" build phase, but that seems to output only to the Log Navigator, which obliterates the colors and even the fixed-width formatting, making it very difficult to see at a glance whether the tests pass. I also can't find a way to display the log after running the tests. When I do this, nothing appears in the "All Output" window.
I've played around with XcodeColors but that doesn't seem to work with scripts that use the ANSI color codes.
At this point I wouldn't be surprised if this simply can't be done within Xcode. Doing it there would be ideal, but if it isn't possible, could I create a "Run Script" that runs the tests in an independent Terminal window? Colors work fine there.
Thanks for any help!
Here are links to a tool that colorizes the text in the Log window. It's free and the source is on GitHub, so you can figure out how it works. The first link says that it just uses simple ANSI codes to do the job.
http://deepitpro.com/en/articles/XcodeColors/info
https://github.com/robbiehanson/XcodeColors#readme
To kick off the execution from within Xcode, you will probably need to add a new target to your project. To add a target, click on your project; there is an Add Target button at the bottom of the screen. I don't know exactly what you're executing, but here are my best guesses based on your question:
MacOSX/Application/Cocoa-AppleScript or Command Line Tool - Create a simple script or program that will execute your units tests.
MacOSX/Other/External Build System - Allows for execution of an external "make" program with args.
Once you have a way to execute your unit tests, you just need to figure out how to route the output from the unit tests to the Log window. If you can edit the Google Test project and make it use NSLog(), that would seem to be the easiest solution. You could create your own logging method, perform the ANSI colorization, and then send the final text to NSLog().
ADDED: OK. Interesting findings... Read all before starting. Here's what to do:
Start AppleScript Editor. It is in Launchpad. Paste the following script into it:
tell application "Terminal"
    activate
    do script "<your commands>" in window 1
end tell
You can repeat the "do script" line as needed. Use this to execute your unit tests. In Script Editor, do Save As.../File Format=Script and save it to a safe location for now like your Documents directory. This will create a file like "UnitTests.scpt".
Now go to your iOS project. Select the project at the top-left. Select the Build Phases tab top-middle. Click the Add Build Phase button on the bottom right. Here's the interesting part.
Leave Shell as is ("/bin/sh"). Add one line:
osascript ~/Documents/UnitTests.scpt
That will execute the script after every build.
But here's the interesting part I found. Click on Build Settings (top-middle). Make sure All is selected (not Basic). Scroll down the list to find Unit Testing. Open Test Host. Hit the + next to Debug. You can also put the above osascript command here. You might be able to execute your unit tests here and if you can, the output will likely show up in the Log! Let me know what happens.
I am familiar with Java (JUnit + code coverage). For mobile applications (Android and iPhone) I was too lazy to develop with TDD, but if I wanted to start, then:
I would create a Hello World app with the unit testing option turned on:
Include Unit Test checked
That will create a test app / target, and you will be able to run that.
It is the same on Android: you have to create a "test project".
I did this once and have forgotten how it works, but there are other options too:
- long-press the Play button in Xcode (4.4) and you will get a dropdown menu with: Run, Test, Profile, Analyze.
I can't show those, because the menu changes when I press Shift + Cmd + 4 to screenshot it, but this is what the changed menu looks like:
IMHO: for banking, forex, other financial, or military (high-security) software I would use test-driven development with over 99% code coverage, but for those simple mobile apps making 3-4 web-service calls to display public data already available in browsers, developing and maintaining tests is just a waste of time!
Many times I need to test with an internet connection and without;
the worst case is having a Wi-Fi connection where the router doesn't hand out an IP or doesn't let the phone reach the internet, yet when I query the phone's state, it reports being connected...
The GUI flow is hard to test from unit tests. Where it is / would be useful for me: the data received from web services and its synchronization with the internal cache. As I see it, it is still cheaper to do that with manual testing.

Unit Testing in QTestLib - running single test / tests in class / all tests

I'm just starting to use QTestLib. I have gone through the manual and tutorial. Although I understand how to create tests, I'm just not getting how to make those tests convenient to run. My unit test background is NUnit and MSTest. In those environments it was trivial (using a GUI, at least) to alternate between running a single test, all tests in a single test class, and all tests in the entire project, just by clicking the right button.
All I'm seeing in QTestLib is that either you use the QTEST_MAIN macro to run the tests in a single class, then compile and test each file separately; or you use QTest::qExec() in main() to define which objects to test, and then manually change that and recompile whenever you want to add or remove test classes.
I'm sure I'm missing something. I'd like to be able to easily:
Run a single test method
Run the tests in an entire class
Run all tests
Any of those would call the appropriate setup / teardown functions.
EDIT: Bounty now available. There's got to be a better way, or a GUI test runner that handles it for you or something. If you are using QtTest in a test-driven environment, let me know what is working for you. (Scripts, test runners, etc.)
You can run only selected test cases (test methods) by passing the test names as command-line arguments:
myTests.exe myCaseOne myCaseTwo
It will run all inits/cleanups too. Unfortunately, there is no support for wildcards/pattern matching, so to run all cases beginning with a given string (I assume that's what you mean by "running the tests in an entire class"), you'd have to create a script (Windows batch/bash/Perl/whatever) that calls:
myTests.exe -functions
parses the results, and runs the selected tests using the first syntax; a rough sketch of such a helper is shown below.
To run all cases, just don't pass any parameter:
myTests.exe
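For reference, here is a rough sketch of such a wrapper as a small C# console helper. The executable name and prefix are taken from the command line, and the output format of -functions (one test function per line, ending in "()") is an assumption on my part:

using System;
using System.Diagnostics;
using System.Linq;

// Hypothetical helper: list a QTestLib executable's test functions,
// keep those starting with a given prefix, and re-run only those.
class RunMatchingTests
{
    static void Main(string[] args)
    {
        string exe = args[0];     // e.g. "myTests.exe"
        string prefix = args[1];  // e.g. "myCase"

        // Ask the test executable for its test functions.
        var psi = new ProcessStartInfo(exe, "-functions")
        {
            RedirectStandardOutput = true,
            UseShellExecute = false
        };
        string output;
        using (var list = Process.Start(psi))
        {
            output = list.StandardOutput.ReadToEnd();
            list.WaitForExit();
        }

        // Assumed format: one function name per line, ending in "()".
        var names = output
            .Split(new[] { '\r', '\n' }, StringSplitOptions.RemoveEmptyEntries)
            .Select(line => line.Trim().TrimEnd('(', ')'))
            .Where(name => name.StartsWith(prefix));

        // Re-run the executable with only the matching test names.
        int exitCode;
        using (var run = Process.Start(exe, string.Join(" ", names)))
        {
            run.WaitForExit();
            exitCode = run.ExitCode;
        }
        Environment.Exit(exitCode);
    }
}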
The three features requested by the OP are nowadays integrated into Qt Creator.
The project is automatically scanned for tests, and they appear in the Tests pane (bottom left in the screenshot):
Each test and its corresponding data can be enabled by clicking the checkbox.
The context menu allows running all tests, all tests of a class, only the selected tests, or a single test, as requested.
The test results are available from Qt Creator too. A color indicator shows pass/fail for each test, along with additional information such as debug messages.
In combination with Qt Creator, using the QTEST_MAIN macro for each test class works well, as each compiled executable is invoked by Qt Creator automatically.
For a more detailed overview, refer to the Running Autotests section of the Qt Creator Manual.