How to configure the environment before running automated tests? - unit-testing

I'm looking for a good practice to deal with the following issue.
I need to run automated tests against a site. The site has different configurations that completely change the design of some of its pages. For example, I can configure two different login pages, and I need to test both.
First of all, I must make sure that each test runs against the correct configuration. Changing the site's config before each test is not viable when there are a thousand tests.
One solution that comes to mind is to not reconfigure the site for every test, but to configure it once and then run all the tests that correspond to that configuration. That doesn't seem easy to implement, though.
What I have done for now: I created a method that runs once before all the other tests and configures the site with the settings used by the majority of the tests. Every other test changes the config before execution and changes it back afterwards. This is not good at all.
To do so I used NUnit 3's SetUpFixture and OneTimeSetUp attributes:
/// <summary>
/// Runs once before all the tests in order to configure the environment.
/// </summary>
[SetUpFixture]
public class ConfigTests
{
    [OneTimeSetUp]
    public void RunBeforeAnyTests()
    {
        IWebDriver driver = new ChromeDriver();
        try
        {
            // Here I configure the site.
            CommonActions actions = new CommonActions(driver);
            actions.SwitchOffCombinedPaymentPage();
        }
        finally
        {
            // Dispose the driver whether or not configuration succeeded.
            driver.Dispose();
        }
    }
}
My next thought was to pass parameters to the SetUpFixture, but first of all that's impossible, and second of all it wouldn't solve the problem: the fixture would just run twice and all the tests would run against the last configuration.
So, how do you deal with testing a site that has a lot of configurations?

I'd use a test run parameter from the command line (or in the .runsettings file if you are using the VS adapter). Your SetUpFixture can grab that parameter and do the initialization, and any individual fixtures that need it can grab it as well.
See the --params option to nunit3-console and the TestContext.Parameters property for accessing the values.
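For example, a minimal sketch (not from the original answer; the SiteConfig parameter name and the CommonActions helper are assumptions carried over from the question):
// Run with: nunit3-console MyTests.dll --params:SiteConfig=CombinedPaymentOff
[SetUpFixture]
public class ConfigTests
{
    [OneTimeSetUp]
    public void RunBeforeAnyTests()
    {
        // Read the test run parameter, falling back to a default when it is absent.
        string config = TestContext.Parameters.Get("SiteConfig", "CombinedPaymentOff");
        using (IWebDriver driver = new ChromeDriver())
        {
            CommonActions actions = new CommonActions(driver);
            if (config == "CombinedPaymentOff")
                actions.SwitchOffCombinedPaymentPage();
            // ...handle other configuration values here...
        }
    }
}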
This answers your "first of all it's impossible" part. I didn't answer "second of all... " because I don't understand it. I'll add more if you can clarify.
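One way to address the "second of all" part (an aside of mine, not part of the original answer): group the fixtures by the configuration they need using [Category], then launch the runner once per configuration, so the site is reconfigured only once per group:
// Tag each fixture with the configuration it requires.
[TestFixture, Category("CombinedPaymentOff")]
public class LoginPageTests
{
    // ...tests that assume the combined payment page is switched off...
}

// Then run one configuration at a time:
//   nunit3-console MyTests.dll --where "cat == CombinedPaymentOff" --params:SiteConfig=CombinedPaymentOff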

Related

Visual Test With Applitools Of a simple CSS class change

I'd like to know if there is a tutorial on how to test this type of change with this tool. What I mean is: I need to introduce a CSS class change in a component, but I don't know how to test it. I've checked YouTube but I can't quite work out what the best approach would be...
Edit: Applitools customer support is very helpful - consider contacting them for more personal assistance.
Applitools is a cloud service that smartly detects changes to your UI that are noticeable to the human eye.
It does so by comparing screenshots taken during your automatic end-to-end tests to baseline images taken during the last test that you approved.
I mention this because you tagged your question as unit-testing. This is not unit-testing as it requires launching a browser (automatically - using Selenium for example) to run the automated tests.
Per your question - you should make sure that you have a baseline image that contains the component in question before your class change. Do this by running an automated test that calls the Applitools Eyes API to create a checkpoint (eyes.checkWindow) when the test reaches a page that contains this component. Then approve that run as the baseline, apply your class change, and run the test again.
Applitools will let you know if a visual difference occurred following your change. So, for example, if this is just an under-the-hood change - you will not see a visual difference, but if this actually affects the UI then you'll see a visual difference. If you're happy with this difference you simply approve this run as your new baseline.
To show an example I'll use the Applitools tutorial for Selenium + Java. See the comments - you run this once, make your class change and run it again:
import com.applitools.eyes.Eyes;
import com.applitools.eyes.RectangleSize;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import java.net.URISyntaxException;

public class TestApplitoolsWebsite {
    public static void main(String[] args) throws URISyntaxException, InterruptedException {
        WebDriver driver = new FirefoxDriver();
        Eyes eyes = new Eyes();
        // This is your api key, make sure you use it in all your tests.
        eyes.setApiKey("YOUR_API_KEY");
        try {
            // Start visual testing with browser viewport set to 1024x768.
            // Make sure to use the returned driver from this point on.
            driver = eyes.open(driver, "Applitools", "Test Web Page", new RectangleSize(1024, 768));
            driver.get("http://applitools.com");
            // Visual validation point #1
            eyes.checkWindow("Main Page");
            driver.findElement(By.cssSelector(".features>a")).click();
            // Visual validation point #2
            // Let's assume this page contains the class you'll change
            eyes.checkWindow("Features page");
            // End visual testing. Validate visual correctness.
            eyes.close();
        } finally {
            // Abort test in case of an unexpected error.
            eyes.abortIfNotClosed();
            driver.close();
        }
    }
}
Of course - if your class change impacts several places in the application / website, then it's best to make sure your test covers all of these places and takes screenshots of all of them.

Why doesn't the log4net XmlConfigurator attribute work for my unit tests

I'm using log4net and trying to get logging in my unit tests. If I manually call
log4net.Config.XmlConfigurator.Configure();
it works, but there are a large number of test classes, so calling it in each one is not good. Since the manual call works, that seems to eliminate all of the "bad config, config location" issues.
I added
[assembly: log4net.Config.XmlConfigurator(Watch=true)]
to the AssemblyInfo of my test project, but when I run (either via native MSTest or the ReSharper test runner) I get no logging.
Help?
Source
[AssemblyInitialize()]
public static void MyTestInitialize(TestContext testContext)
{
    // Take care that the log4net.config file is added to the deployment files of the test config.
    string fullPath = Path.Combine(System.Environment.CurrentDirectory, "log4net.config");
    FileInfo fileInfo = new FileInfo(fullPath);
    // The original snippet stopped short of the actual call it builds up to:
    log4net.Config.XmlConfigurator.ConfigureAndWatch(fileInfo);
}
As it says in the documentation for assembly attributes:
Therefore if you use configuration attributes you must invoke log4net
to allow it to read the attributes. A simple call to
LogManager.GetLogger will cause the attributes on the calling assembly
to be read and processed. Therefore it is imperative to make a logging
call as early as possible during the application start-up, and
certainly before any external assemblies have been loaded and invoked.
Because the unit test runners load the test assembly in order to find and run the tests, it isn't possible to initialise log4net using an assembly attribute in unit test projects, and you will have to use the XmlConfigurator instead.
Edit: as linked in a comment by the OP, this can be done in one place for the whole test project by using the AssemblyInitialize attribute (as in the snippet above).

Ordered Selenium unit tests

I have a small problem: I have created some Selenium tests, but I can't order the test cases I have created. I know unit tests should not be ordered, but it is what my situation needs. I have to follow these steps: log in first, create a new customer, change some details of the customer, and finally log out.
Since there is no option to order unit tests in NUnit, I can't do this.
I already tried another option: creating a unit test project in Visual Studio, because Visual Studio 2012 can create an ordered unit test. But this is not working, because I can't run a unit test while I am running my ASP.NET project. A separate solution file is also not a good option, because I want to verify my data after it has been submitted by a Selenium test.
Does anyone have another solution to my problem?
If you want to test all of those steps in a specific order (and by the sounds of it, as a single session) then really it's more like an acceptance test you are talking about; in that case it's not a sin to write more complex test methods and Assert your conditions after each step.
If you want to test each step in true isolation (a pure unit test) then each unit test must be capable of running by itself without any reference to any other tests; but when you're testing the actual site UI itself this isn't really an option for you.
Of course, you could try to have every single test set up every single dependency without reference to any other actions (e.g. in the last test you would need to fake the login token, your data layer would have to pretend that you added a new customer, etc.) - a lot of work for dubious benefit.
I say this based on the assumption that you already have unit tests for the server-side controllers, layers, models, etc., which you run without any reference to the actual site running in a browser, and are therefore confident that the various back-end parts of your site do what they are supposed to do.
In your case I'd recommend more of a hybrid integration/acceptance test:
void Login(IWebDriver driver)
{
    // Use driver to open the browser, navigate to the login page,
    // type user/password into the box and press enter.
}

void CreateNewCustomer(IWebDriver driver)
{
    Login(driver);
    // ...and then use driver to click the "Create Customer" link, etc., etc.
}

void EditNewlyCreatedCustomer(IWebDriver driver)
{
    CreateNewCustomer(driver); // this already logs in first
    // Do your Selenium stuff...
}
and then your test methods:
[Test]
public void Login_DoesWhatIExpect()
{
    var driver = new InternetExplorerDriver("your Login URL here");
    Login(driver);
    Assert(Something);
}

[Test]
public void CreateNewCustomer_WorksProperly()
{
    var driver = new InternetExplorerDriver("your Login URL here");
    CreateNewCustomer(driver);
    Assert(Something);
}

[Test]
public void EditNewlyCreatedCustomer_DoesntExplodeTheServer()
{
    var driver = new InternetExplorerDriver("your Login URL here");
    EditNewlyCreatedCustomer(driver);
    Assert(Something);
}
In this way the order of the specific tests doesn't matter; certainly, if the Login test fails then the CreateNewCustomer and EditNewlyCreatedCustomer tests will also fail, but that's actually irrelevant in this case, as you are testing an entire "thread" of operation.
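As a side note beyond the original answer: NUnit 3.2 and later versions added an [Order] attribute, so if you conclude that explicit ordering really is what you need, a minimal sketch would be:
[TestFixture]
public class CustomerWorkflowTests
{
    // Lower Order values run first within this fixture.
    [Test, Order(1)]
    public void Login() { /* ... */ }

    [Test, Order(2)]
    public void CreateNewCustomer() { /* ... */ }

    [Test, Order(3)]
    public void EditNewlyCreatedCustomer() { /* ... */ }
}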

AngularJS mocking $logProvider in config block

Is there a way to inject providers when writing unit tests using Karma (Testacular) and Jasmine in Angular?
Our team recently decided to use the AngularJS $log service to write debugging details to the console. This way we can leverage the ability to disable the logging via the $logProvider.debugEnabled() method.
angular.module("App", ["prismLogin", "ui.bootstrap"])
    .config(["$routeProvider", "$logProvider",
        function ($routeProvider, $logProvider) {
            $routeProvider
            // routes here, edited for brevity

            // This is the offending line, it breaks several pre-existing tests:
            $logProvider.debugEnabled(true);
        }]);
However, after adding the $logProvider.debugEnabled(true); line, several of our tests no longer execute successfully, failing with the following message:
TypeError: Object doesn't support property or method 'debugEnabled' from App
So my question again: is it possible to mock $logProvider? Or should I provide my own configuration block for the test harness?
I attempted to search for a way to mock the app module, with no luck. It seems to me that using the concrete app module instead of a mock is very brittle. I would like to avoid reworking the tests associated with the app module every time a change is made in the app's config or run blocks.
The tests that are failing are for units of code with no relation to $logProvider. I feel as if I am missing something here and making things much harder than they should be. How should one go about writing tests that are flexible and not affected by side effects introduced elsewhere in the application?
It appears that this is a known issue with angular-mocks.
Until the issue is addressed, I was able to resolve it by adding the following method to the angular.mock.$LogProvider definition in angular-mocks.js at line 295:
this.debugEnabled = function(flag) {
    return this;
};

xUnit: programmatically add new tests/"[Fact]s"?

We have a folder full of JSON text files that need to be sent to a single URI. Currently it's all done with a single xUnit [Fact], as below:
[Fact]
public void TestAllCases()
{
    PileOfTests pot = new PileOfTests();
    pot.RunAll();
}
pot.RunAll() then parses the folder and loads the JSON files (say 50 files). Each is then hammered against the URI to see if it returns HTTP 200 ("OK"). If any fail, we currently flag the failure by printing
System.Console.WriteLine("\n >> FAILED ! << " + testname + "\n");
This does ensure that failures catch our eye, but xUnit thinks all tests failed (understandably). Most importantly, we can't tell xUnit "here, run only this specific test". It's all or nothing the way it's currently built.
How can I programmatically add test cases? I'd like to add them when I read the number and names of the *.json files.
The simple answer is: no, not directly. But there is a workaround, albeit a slightly hacky one, which is presented below.
Current situation (as of xUnit 1.9.1)
By specifying [RunWith(typeof(CustomRunner))] on a class, one can instruct xUnit to use the CustomRunner class - which must implement Xunit.Sdk.ITestClassCommand - to enumerate the tests available on the test class decorated with this attribute.
But unfortunately, while the invocation of test methods has been decoupled from System.Reflection and the actual methods,
the way of passing the tests to run to the test runner hasn't.
Somewhere down in the xUnit framework code for invoking a specific test method, there is a call to typeof(YourTestClass).GetMethod(testName).
This means that if the class implementing the test discovery returns a test name that doesn't refer to a real method on the test class, the test is shown in the xUnit GUI - but any attempts to run / invoke it end up with a TargetInvocationException.
Workaround
If one thinks about it, the workaround itself is relatively straightforward.
A working implementation of it can be found here.
The presented solution first reads in the names of the files which should appear as separate tests in the xUnit GUI.
It then uses System.Reflection.Emit to dynamically generate an assembly with a test class containing a dedicated test method for each of the input files.
The only thing each of the generated methods does is invoke the RunTest(string fileName) method on the class that specified the [EnumerateFilesFixture(...)] attribute. See the linked gist for further explanation.
Hope this helps; feel free to use the example implementation if you like.
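As an aside beyond this answer: in xUnit 2.x the same goal is usually reached without any emitted code, via a data-driven [Theory] whose [MemberData] source enumerates the files at discovery time, so each file appears (and can be run) as its own test case. A minimal sketch, assuming the files live in a TestCases folder and a hypothetical PileOfTests.CheckUri helper that returns the HTTP status code:
using System.Collections.Generic;
using System.IO;
using Xunit;

public class JsonFileTests
{
    // Enumerated at test discovery time: one test case per *.json file.
    public static IEnumerable<object[]> JsonFiles()
    {
        foreach (string path in Directory.GetFiles("TestCases", "*.json"))
            yield return new object[] { path };
    }

    [Theory]
    [MemberData(nameof(JsonFiles))]
    public void EndpointReturnsOk(string jsonFile)
    {
        // Hypothetical helper that sends the file's contents to the URI
        // and reports the resulting HTTP status code.
        int status = PileOfTests.CheckUri(File.ReadAllText(jsonFile));
        Assert.Equal(200, status);
    }
}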