I'm developing a PAS plugin for Plone, and as part of the test coverage I'm also testing the generated HTML (I'm not using zope.testbrowser or similar; what I need to test is really simple).
The problem is that when the test runs, the call stack is totally different from the one produced by browser usage (I replicated the test environment on a real Plone site).
Here is a piece of the test that fails:
portal = self.layer['portal']
request = self.layer['request']
request.set('ACTUAL_URL', 'http://nohost/plone/folder/subfolder')
request.set('URL', 'http://nohost/plone/folder/subfolder')
login(portal, 'user2')
output = self.folder.subfolder()
Above, subfolder is an ATCT Folder and I'm calling its default view (folder listing).
Now, the PAS plugin implements the checkLocalRolesAllowed method:
def checkLocalRolesAllowed(self, user, object, object_roles):
...
What is happening:
using the browser, the first call to checkLocalRolesAllowed is done with object=<ATFolder at subfolder>, user='user2'
running the test, the first call to checkLocalRolesAllowed is done on the portal root, with object=<PloneSite at plone>, user='user2'
In both situations the method is called many additional times (which is normal), but the call order is totally different.
In the test the subfolder context is only the 4th in the call order (the Plone site the first two times, the ZCatalog the third).
Given that the "good" behavior is the one reproduced by the browser, how can I simulate the same behavior in tests? And why does calling portal.folder.subfolder() use the Plone site for the first call?
NB: I tested other combinations, like using restrictedTraverse or portal.folder.subfolder.folder_listing(); nothing worked.
I'm using the latest version of Postman, v5.3.2.
When running a folder with several requests inside it, Postman does not execute them sequentially in the same order they appear in the main app, unlike when running the whole collection that contains that folder.
Is this a bug, or does Postman not support this?
When you need to run anything sequentially through the Collection Runner, you need to add logic to the "Tests" tab of your requests.
There is a built-in method called setNextRequest that you can use in this situation (documentation: https://www.getpostman.com/docs/postman/collection_runs/building_workflows). It's important to note that setNextRequest only takes effect when you use the Collection Runner (not sure if this is a bug, or as intended).
Here is an example I made: https://www.getpostman.com/collections/78ffebd7823f47b26a21
In this example collection, you can see that Test checks for status 200 and then, if true, calls Test3 using setNextRequest on line 3. The same is done inside Test3 on line 3, but we call Test2 instead. In Test2 we again run a test, but this time we STOP the execution by supplying null to the method.
Note: you MUST supply null as an argument to setNextRequest in Test2 to prevent the collection from running over and over.
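To make this concrete, here is a minimal sketch of the "Tests" tab scripts in such a setup (the request names Test, Test2 and Test3 mirror the example collection above; the status check uses the standard pm.test / pm.response API):
// "Tests" tab of request "Test"
pm.test("Status code is 200", function () {
    pm.response.to.have.status(200);
});
if (pm.response.code === 200) {
    postman.setNextRequest("Test3");   // run Test3 next
}
// "Tests" tab of request "Test3"
postman.setNextRequest("Test2");       // then run Test2
// "Tests" tab of request "Test2"
postman.setNextRequest(null);          // stop the collection run here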
In the Collection Runner, the results should then show the requests executing in the order Test, Test3, Test2.
Hope this helps someone.
I would like to know if there is a tutorial to learn how to test this type of change with this tool. I mean, I needed to introduce a CSS class change in a component but I don't know how to test it. I've checked YouTube but I can't quite understand what the best approach would be.
Edit: Applitools customer support is very helpful; consider contacting them for more personal assistance.
Applitools is a cloud service that smartly detects changes to your UI that are noticeable to the human eye.
It does so by comparing screenshots taken during your automatic end-to-end tests to baseline images taken during the last test that you approved.
I mention this because you tagged your question as unit-testing. This is not unit-testing as it requires launching a browser (automatically - using Selenium for example) to run the automated tests.
Per your question: you should make sure that you have a baseline image that contains the component in question before your class change. Do this by running an automated test that calls the Applitools Eyes API to create a checkpoint (eyes.checkWindow) when your test reaches a page that contains this component. Then approve that run as a baseline, apply your class change, and run the test again.
Applitools will let you know if a visual difference occurred following your change. So, for example, if this is just an under-the-hood change you will not see a visual difference, but if it actually affects the UI then you will. If you're happy with this difference, you simply approve this run as your new baseline.
To show an example, I'll use the Applitools tutorial for Selenium + Java. See the comments: you run this once, make your class change, and run it again:
import com.applitools.eyes.Eyes;
import com.applitools.eyes.RectangleSize;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import java.net.URISyntaxException;
public class TestApplitoolsWebsite {

    public static void main(String[] args) throws URISyntaxException, InterruptedException {
        WebDriver driver = new FirefoxDriver();
        Eyes eyes = new Eyes();
        // This is your api key, make sure you use it in all your tests.
        eyes.setApiKey("YOUR_API_KEY");
        try {
            // Start visual testing with browser viewport set to 1024x768.
            // Make sure to use the returned driver from this point on.
            driver = eyes.open(driver, "Applitools", "Test Web Page", new RectangleSize(1024, 768));
            driver.get("http://applitools.com");
            // Visual validation point #1
            eyes.checkWindow("Main Page");
            driver.findElement(By.cssSelector(".features>a")).click();
            // Visual validation point #2
            // Let's assume this page contains the class you'll change
            eyes.checkWindow("Features page");
            // End visual testing. Validate visual correctness.
            eyes.close();
        } finally {
            // Abort test in case of an unexpected error.
            eyes.abortIfNotClosed();
            driver.close();
        }
    }
}
Of course - if your class change impacts several places in the application / website then it's best to make sure your test covers all these places and that you take screenshots of all these places.
I have a small problem: I have created some Selenium tests, but I can't order the test cases I have created. I know unit tests should not be ordered, but this is what I need in my situation. I have to follow these steps: log in first, create a new customer, change some details about the customer, and finally log out.
Since there is no option to order unit tests in NUnit, I can't do this.
I already tried another option: creating a unit test project in Visual Studio, because Visual Studio 2012 has the ability to create an ordered unit test. But this does not work, because I can't run a unit test while I am running my ASP.NET project. Another solution file is also not a good option, because I want to verify my data after it has been submitted by a Selenium test.
Does anyone have another solution to my problem?
If you want to test all of those steps in a specific order (and by the sounds of it, as a single session) then really it's more like an acceptance test you are talking about; and in that case it's not a sin to write more complex test methods and Assert your conditions after each step.
If you want to test each step in true isolation (a pure unit test) then each unit test must be capable of running by itself without any reference to any other tests; but when you're testing the actual site UI itself this isn't really an option for you.
Of course, if you really want to, you can have every single test somehow set up every single dependency without reference to any other actions (e.g. in the last test you would need to fake the login token, your data layer would have to pretend that you added a new customer, etc.). A lot of work for dubious benefit...
I say this based on the assumption that you already have unit tests written for the server-side controllers, layers, models, etc., that you run without any reference to the actual site running in a browser, and are therefore confident that the various back-end parts of your site do what they are supposed to do.
In your case I'd recommend more of a hybrid integration/acceptance test:
void Login(IWebDriver driver)
{
    //use driver to open browser, navigate to login page, type user/password into box and press enter
}

void CreateNewCustomer(IWebDriver driver)
{
    Login(driver);
    //and then use driver to click "Create Customer" link, etc, etc
}

void EditNewlyCreatedCustomer(IWebDriver driver)
{
    Login(driver);
    CreateNewCustomer(driver);
    //do your selenium stuff..
}
and then your test methods:
[Test]
public void Login_DoesWhatIExpect()
{
    // Note: the InternetExplorerDriver constructor does not take a URL;
    // navigation to your login URL happens inside the Login() helper.
    var driver = new InternetExplorerDriver();
    Login(driver);
    Assert(Something);
}

[Test]
public void CreateNewCustomer_WorksProperly()
{
    var driver = new InternetExplorerDriver();
    CreateNewCustomer(driver);
    Assert(Something);
}

[Test]
public void EditNewlyCreatedCustomer_DoesntExplodeTheServer()
{
    var driver = new InternetExplorerDriver();
    EditNewlyCreatedCustomer(driver);
    Assert(Something);
}
In this way the order of the specific tests does not matter; certainly, if the Login test fails then the CreateNewCustomer and EditNewlyCreatedCustomer tests will also fail, but that's actually irrelevant in this case as you are testing an entire "thread" of operation.
We have a folder full of JSON text files that need to be sent to a single URI. Currently it's all done with a single xUnit [Fact], as below:
[Fact]
public void TestAllCases()
{
    PileOfTests pot = new PileOfTests();
    pot.RunAll();
}
pot.RunAll() then parses the folder and loads the JSON files (say 50 files). Each is then hammered against the URI to see if it returns HTTP 200 ("OK"). If any fail, we currently print it as a failure by using:
System.Console.WriteLine("\n >> FAILED ! << " + testname + "\n");
This does ensure that failures catch our eye, but xUnit thinks all tests failed (understandably). Most importantly, we can't tell xUnit "here, run only this specific test"; it's all or nothing the way it's currently built.
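For context, a rough sketch of what RunAll amounts to under the description above (the HttpClient usage, the POST verb, and the folder and endpoint paths are assumptions for illustration; only the shape matters):
// requires: using System.IO; using System.Net; using System.Net.Http; using System.Text;
public void RunAll()
{
    using (var client = new HttpClient())
    {
        foreach (var file in Directory.GetFiles(@"C:\tests\json", "*.json"))
        {
            var body = new StringContent(File.ReadAllText(file), Encoding.UTF8, "application/json");
            var response = client.PostAsync("http://example.com/api/endpoint", body).Result;
            if (response.StatusCode != HttpStatusCode.OK)
                System.Console.WriteLine("\n >> FAILED ! << " + Path.GetFileName(file) + "\n");
        }
    }
}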
How can I programmatically add test cases? I'd like to add them when I read the number and names of the *.json files.
The simple answer is:
No, not directly. But there is a workaround, albeit a bit hacky, which is presented below.
Current situation (as of xUnit 1.9.1)
By specifying [RunWith(typeof(CustomRunner))] on a class, one can instruct xUnit to use the CustomRunner class (which must implement Xunit.Sdk.ITestClassCommand) to enumerate the tests available on the test class decorated with this attribute.
But unfortunately, while the invocation of test methods has been decoupled from System.Reflection and the actual methods, the way of passing the tests to run to the test runner hasn't.
Somewhere down in the xUnit framework code for invoking a specific test method, there is a call to typeof(YourTestClass).GetMethod(testName).
This means that if the class implementing the test discovery returns a test name that doesn't refer to a real method on the test class, the test is shown in the xUnit GUI, but any attempt to run / invoke it ends in a TargetInvocationException.
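For reference, the wiring described above looks roughly like this (CustomRunner and JsonFileTests are placeholder names; CustomRunner itself, the discovery implementation, is not shown):
// xUnit 1.x: let CustomRunner enumerate and run the tests for this class
// instead of the default reflection-based discovery.
[RunWith(typeof(CustomRunner))]
public class JsonFileTests
{
    // CustomRunner must implement Xunit.Sdk.ITestClassCommand.
}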
Workaround
If one thinks about it, the workaround itself is relatively straightforward.
A working implementation of it can be found here.
The presented solution first reads in the names of the files which should appear as different tests in the xUnit GUI.
It then uses System.Reflection.Emit to dynamically generate an assembly with a test class containing a dedicated test method for each of the input files.
The only thing each of the generated methods does is invoke the RunTest(string fileName) method on the class that specified the [EnumerateFilesFixture(...)] attribute. See the linked gist for further explanation.
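To make the idea concrete, the emitted assembly ends up containing something roughly equivalent to the following hand-written class. This is illustrative only: the file names are placeholders, RunTest(string) and [EnumerateFilesFixture] follow the description above and the linked gist rather than stock xUnit, and it assumes the questioner's PileOfTests class gains a RunTest(string fileName) method for a single file.
// Illustrative hand-written equivalent of the dynamically emitted test class.
public class GeneratedJsonFileTests
{
    // The fixture class that knows how to load one JSON file and hit the URI.
    private readonly PileOfTests fixture = new PileOfTests();

    // One generated method per *.json file found at discovery time,
    // each simply delegating to the fixture.
    public void Case_customer_001_json() { fixture.RunTest("customer_001.json"); }
    public void Case_customer_002_json() { fixture.RunTest("customer_002.json"); }
    // ...and so on for each file in the folder.
}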
Hope this helps; feel free to use the example implementation if you like.
I've been trying to write some initial NUnit unit tests for MonoRail, having got some basics working already. However, while I've managed to check whether a Flash["message"] value has been set by a controller action, the BaseControllerTest class doesn't seem to store the output for a view at all, so whether I call RenderView or the action itself, nothing gets added to the Response.OutputContent data.
I've also tried calling InPlaceRenderView to try to get it to write to a StringWriter, and the StringWriter also seems to get nothing back; the StringBuilder it returns is also empty.
I'm creating a new controller instance, then calling
PrepareController(controller,"","home","index");
So far it just seems like the BaseControllerTest is causing any output to be abandoned. Am I missing something? Should this work? I'm not 100% sure, because I'm also running these unit tests in MonoDevelop on Linux, although MonoRail itself is working OK there.
While I haven't got an ideal method for testing views, this is possibly less important when ViewComponents can be tested adequately. To test views within the site itself, I can use Selenium. While in theory that can be made part of an NUnit test suite, it didn't run successfully under MonoDevelop in my tests (it consistently failed to start the connection to Selenium RC, despite the RC interactive session working fine). However, the Selenium tests can be run as a set from Firefox, which is not too bad: unit testing with NUnit, then integration/system testing scripted as a Selenium suite, and that setup works in a Linux/MonoDevelop environment.
As for testing the underlying elements, you can check for redirections, check the flash value set, and the like, so that's all fine. For testing ViewComponents, the part-mocked rendering does return the rendered output in an accessible form, so they've proved much easier to test in NUnit (with a base test class of BaseViewComponentTest), as follows:
[Test]
public void TestMenuComponentRendersOK()
{
    var mc = new MenuComponent();
    PrepareViewComponent(mc);
    var dict = new System.Collections.Specialized.ListDictionary();
    dict.Add("data", getSampleMenuData());
    dict.Add("Name", "testmenu");
    // other additional parameters
    mc.RenderComponent(mc, dict);
    Assert.IsTrue(this.Output.Contains("<li>"), "List items should have been added");
}