I need to test a login function with several JUnit tests that use different username and password values; all the tests live in the same JUnit class. I use the following JAR files:
For these tests, the browser is opened only for the first test and closed after the last test.
My notes:
With the Chrome browser (v2.9.24), all tests run successfully.
With the latest Firefox browser (v45.0.1), only the first test passes and the others fail.
Some of them can't catch the pop-up window to enter values, and execution just continues with the next test.
In other cases the pop-up window doesn't wait: it disappears quickly, and the unit test can't find the username and password fields to fill in.
I need to run these tests on Firefox, so how can I overcome this problem?
Note:
I tried several suggestions, but the problem still occurred, for example:
WebDriverWait waitLog = new WebDriverWait(WebDriverRunner.getWebDriver(), 2);
waitLog.until(ExpectedConditions.elementToBeClickable($(".modal-dialog")));
Or setting this parameter before the test:
Configuration.fastSetValue=true;
However, I haven't been able to come up with a way to solve this problem.
Related
I have created some Test Cases in Selenium Python and I have put them in a Test Suite. Each time a Test Case runs, it opens the browser and navigates to the URL.
It then logs in, does some tests and then logs out and the browser closes.
Is there a way to run the tests to only open 1 instance of the browser, log in once and keep using that instance for the rest of the test cases?
I do not want to close the browser for every single test case and open a new browser and log in each time.
For example:
Test Case 1 runs: it opens the browser, navigates to the URL, logs in, runs some tests, logs out and closes the browser.
Test Case 2 runs: it opens the browser, navigates to the URL, logs in, runs some tests, logs out and closes the browser.
Test Case 3 runs: it opens the browser, navigates to the URL, logs in, runs some tests, logs out and closes the browser.
and so on.
I would like to do it this way:
Test Case 1 runs: it opens the browser, navigates to the URL, logs in and runs some tests.
Test Case 2 uses the same browser from Test Case 1; you are still logged in; it runs some tests.
Test Case 3 uses the same browser from Test Case 1; you are still logged in; it runs some tests.
The last test case uses the same browser from Test Case 1; you are still logged in; it runs some tests, then logs out and closes the browser.
Opening a new browser for every single test case and logging in takes longer for the tests to complete.
My code snippet is as follows:
class BaseTestCase(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Ie(Globals.IEdriver_path)
        cls.driver.get(Globals.URL_justin_pc)
        cls.login_page = login.LoginPage(cls.driver)
        cls.driver.implicitly_wait(120)
        cls.driver.maximize_window()

    @classmethod
    def tearDownClass(cls):
        cls.login_page.click_logout()
        cls.driver.close()
Test Case 1
class AdministrationPage_TestCase(BaseTestCase):

    def test_add_Project(self):
        print "*** test_add_project ***"
        self.login_page.userLogin_valid(Globals.login_username, Globals.login_password)
        menu_bar = MenuBarPage(self.driver)
        administration_page = menu_bar.select_menuBar_item("Administration")
        administration_page.click_add_project_button()
        administration_page.add_project(project_name, Globals.project_description)
        administration_page.click_save_add_project()
        # etc ...

    def test_edit_Project(self):
        ...
Test Case 2
class DataObjectsPage_TestCase(BaseTestCase):

    def testa_add_Data_Objects_Name(self):
        print "*** test_add_Data_Objects - Name ***"
        self.login_page.userLogin_valid(Globals.login_username, Globals.login_password)
        menu_bar = MenuBarPage(self.driver)
        data_configuration_page = menu_bar.select_menuBar_item("Data Configuration")
        project_navigator = ProjectNavigatorPage(self.driver)
        data_objects = project_navigator.select_projectNavigator_item("Data Objects")
        data_objects.click_add_button_for_data_objects()

    def testb_add_Data_Objects_Address(self):
        print "*** test_add_Data_Objects - Address ***"
        ...

    def testc_add_Data_Objects_Phone(self):
        ...
Test Case 3 and so on
My Test Suite is:
def suite():
    test_suite = unittest.TestSuite()
    test_suite.addTest(unittest.makeSuite(TestCases.AdministrationPage_TestCase.AdministrationPage_TestCase))
    test_suite.addTest(unittest.makeSuite(TestCases.DataObjectsPage_TestCase.DataObjectsPage_TestCase))
    # etc...
Thanks,
Riaz
Slow automated tests caused by needlessly opening and closing browsers, hardcoded delays, etc. are, as you say, a huge waste of time, and in my experience almost always avoidable.
The good news is that you don't need to do anything special to avoid this. If your tests are independent and run in sequence, and assuming you only have one browser/version and one set of Capabilities, then all your test runner needs to do is:
create a singleton Driver at the start of the run (or of the application, if multiple runs are allowed; or lazily, when first required)
run each test in sequence, with no explicit close or quit calls, but with all other suitable cleanup (clearing any created cookies, logging out, clearing sessions, local storage, etc.)
quit the Driver at the (very) end
It's not very much more work to support multiple browsers / capabilities in much the same way.
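As a minimal sketch of the list above in Python's unittest, reusing the Globals paths and webdriver.Ie setup from the question (so treat those names as the asker's, not mine):

import unittest
from selenium import webdriver

_driver = None  # the single shared driver for the whole run

def get_driver():
    # Create the driver lazily, the first time a test asks for it.
    global _driver
    if _driver is None:
        _driver = webdriver.Ie(Globals.IEdriver_path)  # Globals: the asker's own module
        _driver.get(Globals.URL_justin_pc)
        _driver.maximize_window()
    return _driver

def tearDownModule():
    # Quit exactly once, at the (very) end of this module's tests.
    global _driver
    if _driver is not None:
        _driver.quit()
        _driver = None

class BaseTestCase(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        cls.driver = get_driver()

    def tearDown(self):
        # Per-test cleanup instead of quitting the browser:
        self.driver.delete_all_cookies()

If the test cases are spread across several modules, the final quit() can be registered with atexit instead of tearDownModule; either way there is exactly one create and one quit per run, with lightweight cleanup in between.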
I have 2 tests which are testing a view that makes a call to an external module. I've mocked it with mock.patch. I'm calling the view by using django's test client.
The first test (a test for 404 being returned) completes successfully and the correct mock is called.
When the second test runs, everything runs as normal, but the mock that the code-under-test has access to is the mock from the previous test.
You can see in this example https://dpaste.de/7zT8 that the ids in the test output are incorrect (around line 91).
Where is this getting cached? My initial thought was that the import of the main module is somehow cached between test runs due to urlconf stuff, but tracing through the source code I couldn't find that to be the case.
Expected: Both tests pass.
Actual: Second test fails due to stale mocked import.
If I comment out the 404 test, the other test passes.
The view is registered in the url conf as the string-y version 'repos.views.github_webhook'.
I do not fully understand what causes the exact behaviour you are seeing, especially why the mock seemingly works correctly in the first test. But according to the mock docs, you should patch in the namespace under test, i.e. patch("views.tasks").
http://www.voidspace.org.uk/python/mock/patch.html#where-to-patch
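As an illustration only, with hypothetical names modelled on the question's repos.views path (the tasks module and do_work function are made up): if repos/views.py does "from repos import tasks" and the view calls tasks.do_work(), the name to patch is the one the view actually looks up, not the module where it is defined.

import mock  # the standalone mock library; unittest.mock on Python 3

@mock.patch("repos.views.tasks")  # patch the reference the view uses...
def test_webhook_calls_task(fake_tasks):
    fake_tasks.do_work.return_value = None
    # exercise the view here (e.g. via Django's test client);
    # the view now sees fake_tasks instead of the real module

# ...whereas mock.patch("repos.tasks") would replace the attribute on the
# repos package, leaving the reference repos.views imported earlier untouched.

Applying the patch per test, via the decorator or a with block, also guarantees it is undone at the end of each test, so one test's mock cannot leak into the next.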
I'm developing a PAS plugin for Plone, and as part of my test coverage I'm also testing the generated HTML (I'm not using zope.testbrowser or similar; what I need to test is really simple).
The problem is that when the test runs, the call stack is totally different from browser usage (I replicated the test environment on a real Plone site).
Here is a piece of the test that fails:
portal = self.layer['portal']
request = self.layer['request']
request.set('ACTUAL_URL', 'http://nohost/plone/folder/subfolder')
request.set('URL', 'http://nohost/plone/folder/subfolder')
login(portal, 'user2')
output = self.folder.subfolder()
Above, subfolder is an ATCT Folder and I'm calling its default view (folder listing).
Now, the PAS plugin is implementing the checkLocalRolesAllowed method.
def checkLocalRolesAllowed(self, user, object, object_roles):
    ...
What is happening:
using the browser, the first call to checkLocalRolesAllowed is made with object=<ATFolder at subfolder>, user='user2'
running the test, the first call to checkLocalRolesAllowed is made on the portal root, with object=<PloneSite at plone>, user='user2'
In both situations the method is called many more times (which is normal), but the call order is totally different.
In the test, the subfolder context is only fourth in the stack order (the Plone site the first two times, the ZCatalog the third).
Starting from the fact that the "good" behavior is the one the browser reproduces, how can I simulate the same behavior in the tests? Why does calling portal.folder.subfolder() use the Plone site for the first call?
NB: I tested other combinations, like using restrictedTraverse, portal.folder.subfolder.folder_listing(), ...; nothing worked.
I have a small problem: I have created some Selenium tests, but I can't order the test cases I have created. I know unit tests should not be ordered, but this is what I need in my situation. I have to follow these steps: log in first, create a new customer, change some details of the customer, and finally log out.
Since there is no option to order unit tests in NUnit, I can't execute this.
I already tried another option: creating a unit test project in Visual Studio, since Visual Studio 2012 can create an ordered unit test. But this doesn't work, because I can't run a unit test while my ASP.NET project is running. A separate solution file is also not a good option, because I want to verify my data after it has been submitted by a Selenium test.
Does anyone have another solution to my problem?
If you want to test all of those steps in a specific order (and, by the sounds of it, as a single session), then really it's more like an acceptance test you are talking about; in that case it's not a sin to write more complex test methods and Assert your conditions after each step.
If you want to test each step in true isolation (a pure unit test), then each unit test must be capable of running by itself without any reference to any other tests; but when you're testing the actual site UI itself, this isn't really an option for you.
Of course, if you really want every single test to somehow set up every single dependency without reference to any other actions (e.g. in the last test you would need to fake the login token, your data layer would have to pretend that you added a new customer, etc.), that's a lot of work for dubious benefit...
I say this on the assumption that you already have unit tests written for the server-side controllers, layers, models, etc., that you run without any reference to the actual site running in a browser, and are therefore confident that the various back-end parts of your site do what they are supposed to do.
In your case I'd recommend more of a hybrid integration/acceptance test:
void Login(IWebDriver driver)
{
    // use driver to open browser, navigate to login page,
    // type user/password into box and press enter
}

void CreateNewCustomer(IWebDriver driver)
{
    Login(driver);
    // and then use driver to click "Create Customer" link, etc, etc
}

void EditNewlyCreatedCustomer(IWebDriver driver)
{
    Login(driver);
    CreateNewCustomer(driver);
    // do your selenium stuff..
}
and then your test methods:
[Test]
void Login_DoesWhatIExpect()
{
    var driver = new InternetExplorerDriver("your Login URL here");
    Login(driver);
    Assert(Something);
}

[Test]
void CreateNewCustomer_WorksProperly()
{
    var driver = new InternetExplorerDriver("your Login URL here");
    CreateNewCustomer(driver);
    Assert(Something);
}

[Test]
void EditNewlyCreatedCustomer_DoesntExplodeTheServer()
{
    var driver = new InternetExplorerDriver("your Login URL here");
    EditNewlyCreatedCustomer(driver);
    Assert(Something);
}
In this way the order of the specific tests does not matter; certainly, if the Login test fails then the CreateNewCustomer and EditNewlyCreatedCustomer tests will also fail, but that's actually irrelevant in this case, as you are testing an entire "thread" of operation.
I've been trying to write some initial NUnit unit tests for MonoRail, having got some basics working already. However, while I've managed to check whether a Flash["message"] value has been set by a controller action, the BaseControllerTest class doesn't seem to store the output for a view at all, so whether I call RenderView or the action itself, nothing gets added to the Response.OutputContent data.
I've also tried calling InPlaceRenderView to try to get it to write to a StringWriter, and the StringWriter also seems to get nothing back; the StringBuilder it returns is also empty.
I'm creating a new controller instance, then calling
PrepareController(controller,"","home","index");
So far it just seems like BaseControllerTest is causing any output to be abandoned. Am I missing something? Should this work? I'm not 100% sure, because I'm running these unit tests in MonoDevelop on Linux, although MonoRail itself is working OK there.
While I haven't got an ideal method for testing views, this is possibly less important when ViewComponents can be tested adequately. To test views within the site itself, I can use Selenium. In theory that can be made part of an NUnit test suite, but it didn't run successfully under MonoDevelop in my tests (it consistently failed to start the connection to Selenium RC, despite the RC interactive session working fine). However, the Selenium tests can be run as a set from Firefox, which is not too bad: unit testing with NUnit, then integration/system testing scripted as a Selenium suite, and that setup works in a Linux/MonoDevelop environment.
As for testing the underlying elements, you can check for redirections, check the flash values set, and the like, so that's all fine. For testing ViewComponents, the part-mocked rendering does return the rendered output in an accessible form, so they've proved much easier to test in NUnit (with a base test class of BaseViewComponentTest), as follows:
[Test]
public void TestMenuComponentRendersOK()
{
    var mc = new MenuComponent();
    PrepareViewComponent(mc);
    var dict = new System.Collections.Specialized.ListDictionary();
    dict.Add("data", getSampleMenuData());
    dict.Add("Name", "testmenu");
    // other additional parameters
    mc.RenderComponent(mc, dict);
    Assert.IsTrue(this.Output.Contains("<li>"), "List items should have been added");
}