I have straight up been looking for answers to this question for months and still have no idea how to actually do it: how does one automate tests that run in the browser? Selenium tests run on the backend and can of course be automated, and we are doing this now. Recently I have been tasked with automating browser-based unit tests, and I have a significant knowledge gap.
For example, how does an automated test runner collect the test results and exit codes of (unit) tests that run in the browser? Can anyone explain how this is actually done and the steps to accomplish it?
Is Karma the best tool to accomplish this?
You can use http://phantomjs.org/. PhantomJS is a headless web browser: think of it as a full-stack browser without a GUI, usable as a library. Together with Karma you can execute your unit tests without relying on any GUI implementation.
Here is a blog post that explains the different components in such a scenario: http://orizens.com/wp/topics/my-setup-for-testing-js-with-jasmine-karma-phantomjs-angularjs/
This means you can execute your Karma unit tests on a headless Linux server.
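To make the "collect results and exit codes" part concrete: Karma launches the browser, injects the test files, streams each result back to the Karma process over a socket, and (in single-run mode) exits non-zero when any test fails, which is the hook your CI or commit scripts use. A minimal karma.conf.js sketch; the Jasmine choice and the file paths here are assumptions for illustration:

// karma.conf.js
module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],    // assumed test framework
    files: [
      'src/**/*.js',            // assumed location of application code
      'test/**/*.spec.js'       // assumed location of the specs
    ],
    browsers: ['PhantomJS'],    // requires the karma-phantomjs-launcher plugin
    reporters: ['progress'],
    singleRun: true             // run once and exit; exit code reflects pass/fail
  });
};

With singleRun: true, running karma start returns a zero or non-zero exit code that any build script can check.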
Clarification:
The need for PhantomJS doesn't come from unit testing as such. It comes from the fact that your JS unit tests depend on the browser API.
It's a good design principle to structure the code so that the coupling to the browser API isn't scattered across the whole codebase. Try to introduce a thin layer which encapsulates the browser API dependencies; that way you can test most of your JS without needing PhantomJS. A sketch of such a layer follows below.
Executing your unit tests with PhantomJS takes time. If you have a lot of unit tests, it's better to factor out the dependencies on the browser API, so that most tests run without PhantomJS and only a minority need it.
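Here is a minimal sketch of what such a thin layer can look like; the names (DomGateway, textOf) are made up for illustration:

// dom-gateway.js: the only module that touches the browser API directly
function DomGateway(doc) {
  this.doc = doc; // injected, so tests can pass a fake instead of the real document
}
DomGateway.prototype.textOf = function (id) {
  var el = this.doc.getElementById(id);
  return el ? el.textContent : null;
};

// In a unit test, no browser (and hence no PhantomJS) is needed:
var fakeDoc = {
  getElementById: function (id) {
    return id === 'greeting' ? { textContent: 'hello' } : null;
  }
};
var gateway = new DomGateway(fakeDoc);
console.assert(gateway.textOf('greeting') === 'hello');

Because the document is injected rather than referenced globally, this test runs in any JS engine; only the thin layer itself needs a browser-backed test run.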
You can probably use Cucumber, for example if you have 20 test cases that you need to execute.
You can create a feature file which will contain all the scenarios.
The runner classes and the step methods that define what needs to be done can live in a different package. Let's say you have a scenario to:
1. Open browser.
2. Enter google link.
3. Login using credentials.
Create a feature file with the above information.
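For illustration, a hypothetical login.feature for those steps; the step text just has to match the step definitions shown below:

# login.feature
Feature: Google login
  Scenario: Login using credentials
    When Open Browser
    And Enter google link
    And Login using credentials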
Use a Cucumber runner class, and create step definition methods such as:
#When("^Open Browser$")
public void open_Browser() throws Throwable {
WebDriver driver = new FirefoxDriver();
driver.get("www.google.com");
}
Similarly, you can create the other step methods. To run the packaged jar, you can use the command line interface.
A great piece on this: LINK
Basics:
This is for Python automation; you'll need some previous knowledge/experience.
pip install selenium
pip install nose
The above should be executed in cmd or a shell.
For this test we will open the AWeber website at http://www.aweber.com
using Firefox, and make sure that the title of the page is "AWeber Email Marketing Services & Software Solutions for Small Business".
import unittest
from selenium import webdriver

class AweberTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()

    def test_title(self):
        self.driver.get('https://www.aweber.com')
        self.assertEqual(
            self.driver.title,
            'AWeber Email Marketing Services & Software Solutions for Small Business')

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()
Running the test using nose:
nosetests aweber.py
Next test, clicking on elements:
self.driver.get('https://www.aweber.com')
order_tab = self.driver.find_element_by_css_selector('#ordertab>a')
order_tab.click()
There are many selectors we can use: find_element_by_(css_selector/xpath/name/id) - see Locating elements.
In this case we used the click method, but we can also .send_keys("asdf"), scroll, or execute JavaScript using:
self.driver.execute_script("alert('I canNNNNN not do javascript')")
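For instance, a quick send_keys sketch (the name selector used here is hypothetical):

search_box = self.driver.find_element_by_name('q')
search_box.send_keys('asdf')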
Full code example: LINK-Pastebin
I have been rushing to get my app, a mixture between a nodejs server driving an sqlite database and a litElement-based client side providing the UI, into a usable state as a beta release. I achieved that a couple of days ago and now I am (belatedly, I know) thinking about how to put together a test framework. However, I am really struggling to understand how best to test the client side. I think it's because I am having difficulty understanding conceptually what the two main choices of framework are. Before I go into more detail, let me explain the structure of the app in top-level terms.
At the project root level there are three main directories: node_modules, which comprises all the modules I've pulled in (including lit-element and web-components-loader, which are client-side elements - but see below); server, which contains all the code for the server side of my application; and client, which consists of all the code for the client side of my app.
I run rollup ONLY at module install time to "package" lit-element, the directives I use, and the web-component-loader, and effectively treeshake and copy them to client/libs. As a result my client is coded to assume the modules are in the libs directory AND I DO NOT NEED OR HAVE any build stage.
I guess the root of the client is index.html, which pulls in a service-worker.js and main-app.js. main-app is the root of a tree of lit-element based components that make up the entire client app. Nginx is the web server for all the static files in the client, but it also acts as a proxy, passing any URLs that start with /api to a standard node http web server (not even express, although I do use the router, body-parser and final-handler modules); these get passed to various api handlers, each of which is a separate javascript file, although they can "require" a few common modules that I have written, plus those in the node_modules directory.
I plan on using jest as a test environment. For my server I think it is easy. For each api handler I want to test I can build a test script that "requires" the javascript file I want to test. I am in two minds about whether to use a sqlite database for testing or mock something - I am leaning towards the former as I am using better-sqlite3 and it is totally synchronous and very fast. I already have scripts to create empty databases, so I have no worry about test isolation.
Client testing is where I get confused. I "think" that, in essence, jest can run tests the same way as for the server, one element at a time. BUT these elements and my test scripts are going to need a "web platform" set of APIs, not least of which is the entire shadow DOM and custom components stuff that lit-element uses. This is where, I think, puppeteer or electron come in, with their associated jest plugins, which can put these platform APIs into the test environment. But, and this is the essence of my confusion, puppeteer instructions all start with something like
const browser = await puppeteer.launch({headless: true});
const page = await browser.newPage()
await page.goto(SOME URL);
What is this URL? Do I also have to run a server? I cannot relate this snippet to running a test controlled by jest. All the examples seem to use webpack and typescript, neither of which I know anything about.
The other module I have seen mentioned is electron, in particular this article, which everything else seems to point to (or the same text).
https://www.ninkovic.dev/blog/2020/testing-web-components-with-jest-and-lit-element
From the code snippets in this article it "seems" like it might be what I want, BUT ...
I cannot find very many references to electron other than on its own web site. There it tells you to use electron as a tool to build a cross-platform application, but nowhere can I find what it actually is - it assumes you already know. I don't want a UI for my unit testing; I want it to be headless, like in puppeteer.
Hence my confusion and why I am unsure how to achieve what I want. Can someone give me some pointers on:
How I can set up puppeteer to run headless tests without needing a server, OR
What exactly electron is, and whether I can use it to
a) run my tests, and
b) provide me with tools to examine the DOM elements I have created, to check that I created the right ones,
and how it differs from puppeteer, and whether I can use it to conduct headless tests of my client.
UPDATE
I've done some more digging and am beginning to understand the differences. Let me summarise what I think I have found.
Puppeteer is great for end-to-end testing of your site. You run the tests by launching the bundled chromium at the home page of your site (or, more likely, a development test site) and programmatically pretend to be a user who can click on buttons etc. You can use various methods, including functions such as document.querySelector(), to check your UI has behaved how you think, or you can take screenshots and compare them with standardised versions. I could possibly use it for unit tests, but I would have to run a server, create a test fixture html page for every test, and navigate to it. jest-puppeteer is a package with some of that built in.
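To make that concrete, here is a minimal sketch of that style of jest + puppeteer test. The URL and the main-app tag are assumptions based on the app described above, and a dev server must already be serving the client:

const puppeteer = require('puppeteer');

let browser;
let page;

beforeAll(async () => {
  browser = await puppeteer.launch({ headless: true });
  page = await browser.newPage();
});

afterAll(async () => {
  await browser.close();
});

test('main-app renders its shadow root', async () => {
  // assumption: the client is served at this URL during the test run
  await page.goto('http://localhost:8080/');
  // page.evaluate runs inside the real browser, so shadow DOM is available
  const hasShadowRoot = await page.evaluate(() => {
    const el = document.querySelector('main-app');
    return !!(el && el.shadowRoot);
  });
  expect(hasShadowRoot).toBe(true);
});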
Electron is a platform for building apps. What the URL I was referencing describes is jest-electron, a test runner app built using electron. So worrying about electron is a red herring; I should be looking at jest-electron.
My main concern right now, I think, is that I need different jest configurations for my three scenarios
unit tests on the server
unit tests on the client
end to end testing of the complete app.
Given I have only one package.json file and one set of node_modules, I need to figure out a way to have three different jest-config.js files.
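For what it's worth, jest supports exactly this through its --config flag (or a projects array in a single top-level config). A sketch, with hypothetical file names and paths:

// jest.client.config.js (jest.server.config.js and jest.e2e.config.js
// would follow the same pattern with their own testMatch globs)
module.exports = {
  testMatch: ['<rootDir>/client/**/*.test.js'],
  testEnvironment: 'jsdom'   // the server config would use 'node' instead
};

Then package.json scripts can select one per run, e.g. "test:client": "jest --config jest.client.config.js".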
We have a moderately large test suite for business logic and this completes within a few seconds. We're running this as a condition to commit (a hook that must pass) and that has been working well to block the most stupid mistakes from making it off my machine.
We've recently started adding end-to-end frontend tests with webdriver. Some of these tests exercise third-party integrations. The tests are useful, but they're really slow and require a network connection.
We also have some logic tests that are extremely long-running and stay commented out (yeah, I know!) unless we suspect something is wrong.
Is there a sensible way to split these slow tests out so they only run when we specifically want them to and not every time you run ./manage.py test?
If you use the default Django test runner, there is no simple way of doing what you want. One option is rearranging the test directory structure so you can call ./manage.py test path/to/directory_with/webtests or ./manage.py test path/to/directory_with_fast_tests.
Another solution is using pytest Custom Markers
As the documentation states:
import pytest

@pytest.mark.webtest
def test_send_http():
    pass  # perform some webtest test for your app
Register custom marker:
# content of pytest.ini
[pytest]
markers =
    webtest: mark a test as a webtest.
Then you just run pytest -v -m webtest and only marked tests will be executed.
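The flip side is what answers the original question: mark the slow webdriver tests, and the default run can exclude them with a negated marker expression:

pytest -v -m "not webtest"

This runs everything except the marked tests, so the slow suite only runs when you explicitly ask for it with -m webtest.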
I am revising a kotlin-js web browser application.
Currently tests are run through Selenide and are limited, as they interact entirely through the DOM and cannot call the code or inspect data.
More tests are needed, and I am thinking that an actual JS test framework is needed, such as QUnit, Mocha or Jasmine.
The project is configured through gradle, but I have not found how to have tests run by a gradle project in a way that simulates being run in the browser.
The overall question is: how best to deliver unit tests?
Questions:
Is there a better approach than the Selenide approach?
Is there a kotlin-js based test alternative that can be run from a gradle task?
What combination has been found to work, ideally without resorting to node.js in order to run a browser app? E.g. instructions on using QUnit, Jasmine, Mocha or another test runner as a gradle task?
Alternatively (not preferred): is there some way to call javascript code (functions etc.) and access page global variables from the Selenide test code?
Any answers to either question appreciated.
I have developed a JSP/servlet based web application and I would like to perform some functional testing on it. I know that functional testing is about making sure the application performs the actions it is supposed to perform.
I have googled and found out that Selenium can be used for automated functional testing. I saw that I can record my actions, which can then be replayed to me.
Now, since I am new to testing applications, I don't understand how replaying the actions is useful in testing.
I have not performed any unit tests on my application - I mean formally, using JUnit and the like - although I do just run parts of my code to check that they work properly. Is that a bad thing, as in not using formal unit testing frameworks?
Replaying is only useful for verifying that the test does everything the tester intended. The key point is that Selenium can export the test case you see replaying as a full-fledged test case class for, among others, JUnit. This class can then be added to the group of other test cases you have for the webapp, and it can be executed after an automated build as part of continuous integration.
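For a feel of where that lands, here is a sketch of the kind of JUnit test class you end up maintaining; it is hand-written against the WebDriver API rather than the IDE's literal export, and the URL and title are hypothetical:

import static org.junit.Assert.assertEquals;

import org.junit.AfterClass;
import org.junit.BeforeClass;
import org.junit.Test;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class LoginPageTest {

    private static WebDriver driver;

    @BeforeClass
    public static void openBrowser() {
        driver = new FirefoxDriver();
    }

    @Test
    public void titleIsCorrect() {
        // hypothetical URL of the deployed JSP app
        driver.get("http://localhost:8080/myapp/login.jsp");
        assertEquals("Login", driver.getTitle());
    }

    @AfterClass
    public static void closeBrowser() {
        driver.quit();
    }
}

A CI job can run this class with the rest of the JUnit suite after every automated build.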
For basic functional testing, the Selenium IDE, in addition to record/playback capabilities, provides assertions and verifications for elements in your web app. Establishing these strategically (around perceived problem areas) will enable you to regress through your application ensuring newer implementations do not break existing functionality.
I'm familiar with TDD and use it in both my workplace and my home-brewed web applications. However, every time I have used TDD in a web application, I have had the luxury of having full access to the web server. That means that I can update the server then run my unit tests directly from the server. My question is, if you are using a third party web host, how do you run your unit tests on them?
You could argue that if your app is designed well and your build process is sound and automated, running unit tests on your production server isn't necessary, but personally I like the peace of mind in knowing that everything is still "green" after a major update.
For everyone who has responded with "just test before you deploy" and "don't you have a staging server?": I understand where you're coming from. I do have a staging server and a CI process set up. My unit tests do run, and I make sure they all pass before an update to production.
I realize that in a perfect world I wouldn't be concerned with this. But I've seen it happen before: if a file is left out of the update or a SQL script isn't run, the effects are immediately apparent when running your unit tests, but they can go unnoticed for quite some time without them.
What I'm asking here is whether there is any way, if only to satisfy my own compulsive desires, to run a unit test on a server that I cannot install applications on or remote into (e.g. one which I only have FTP access to for updating files).
I think I would probably have to argue that running unit tests on your production server isn't really part of TDD, because by the time you deploy to your production environment you are, technically speaking, past "development".
I'm quite a stickler for TDD, and when I'm preaching the benefits to clients I often find myself saying "you can't half adopt TDD, it's all or nothing"
What you probably should have is some form of automated testing that you perform after deployment, but those tests are not part of TDD.
Maybe you should look at your process again.
You could write functional tests in something like WATIR, WATIN or Selenium that test what is returned in the response page after posting certain form data or requesting specific URLs.
For clarification: what sort of access do you have to your web server? FTP or WebDAV only? From your question, I'm guessing ssh access isn't available - you're dropping files in a directory to deploy. Is that correct?
If so, the answer for unit testing is likely 'do it before you deploy'. You can set up functional testing driven by an automated tool like Selenium to test your app remotely via the web interface, but that's not really unit testing, in the sense that you're restricted to testing the system as a whole.
Have you considered setting up a staging server, perhaps as a VMWare instance, that mirrors or at least mimics your deployment environment?
What's preventing you from running unit tests on the server? If you can upload your production code and let it run there, why can't you upload this other code and run it as well?
I've written test tools for sites using python and httplib/urllib2. Generally it would have been overkill, but it was suitable in these cases. Not sure it's going to be of general use though.
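As a rough illustration of that approach, here is a post-deploy smoke check over plain HTTP (Python 2 style, to match httplib/urllib2; the URL and expected text are placeholders):

import unittest
import urllib2

class DeploySmokeTest(unittest.TestCase):

    def test_home_page_responds(self):
        # placeholder URL: point this at the freshly updated site
        response = urllib2.urlopen('http://example.com/')
        self.assertEqual(response.getcode(), 200)
        # placeholder marker text expected somewhere on the page
        self.assertIn('Welcome', response.read())

if __name__ == '__main__':
    unittest.main()

It isn't a unit test, but run right after an FTP deploy it catches exactly the missed-file and missed-SQL-script cases described above.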