We are using CodeIgniter and have two options for calling our API controllers:
we can use a client that calls the controller's url through Curl,
we can use a client that calls the controller from the command line.
This is perfectly fine for the functionality of our site. However, when I run PHPUnit, the coverage reports for the Controllers are blank while the coverage reports for all Models are correct.
In tracing how xdebug creates the reports, it appears that both the Curl-based client and the CLI client execute the controller outside the scope of the test function, so xdebug_get_code_coverage() does not track the controller code that is executed.
Is it possible to configure xdebug to recognize code coverage in this scenario? Is it possible to call Codeigniter controllers within the scope of the PHPUnit test function? Any other possible solutions?
Yes, that's easily possible. See http://www.phpunit.de/manual/current/en/selenium.html for more information.
Basically you put some special files in your web root:
PHPUnit_Extensions_SeleniumTestCase can collect code coverage information for tests run through Selenium:
1. Copy PHPUnit/Extensions/SeleniumTestCase/phpunit_coverage.php into your webserver's document root directory.
2. In your webserver's php.ini configuration file, configure PHPUnit/Extensions/SeleniumTestCase/prepend.php and PHPUnit/Extensions/SeleniumTestCase/append.php as the auto_prepend_file and auto_append_file, respectively (see the sketch after these steps).
3. In your test case class that extends PHPUnit_Extensions_SeleniumTestCase, use

protected $coverageScriptUrl = 'http://host/phpunit_coverage.php';

to configure the URL for the phpunit_coverage.php script.
When a URL is requested with the GET parameter PHPUNIT_SELENIUM_TEST_ID set, the prepended/appended scripts track the coverage information, and PHPUnit collects it afterwards by requesting the coverageScriptUrl.
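For step 2, the php.ini entries would look roughly like this (the paths are illustrative; they depend on where PHPUnit is installed on your system):

; php.ini
auto_prepend_file = /usr/share/php/PHPUnit/Extensions/SeleniumTestCase/prepend.php
auto_append_file  = /usr/share/php/PHPUnit/Extensions/SeleniumTestCase/append.php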
An alternative: see our SD PHP Test Coverage tool.
It doesn't use xdebug to collect coverage data, so it won't have xdebug's specific problems. It instruments a script to collect test coverage data; once instrumented, no matter how the script is executed, you will get test coverage data.
(The instrumentation is temporary; you throw the instrumented code away once you have collected the test coverage data, so it doesn't affect your production code base.)
This approach does require you to explicitly list all PHP scripts for which you want coverage data; you can ignore some if you want. Usually it isn't worth the bother; most users simply list all PHP scripts.
I have been rushing to get my app, a mixture of a Node.js server driving an SQLite database and a LitElement-based client side providing the UI, into a usable state as a beta release. I achieved that a couple of days ago, and now I am (belatedly, I know) thinking about how to put together a test framework. However, I am really struggling to understand how best to test the client side. I think it's because I am having difficulty understanding conceptually what the two main choices of framework are. Before I go into more detail, let me explain the structure of the app in top-level terms.
At the project root level there are three main directories: node_modules, which comprises all the modules I've pulled in (including lit-element and web-components-loader, which are client-side elements, but see below); server, which contains all the code for the server side of my application; and client, which consists of all the code for the client side of my app. I run rollup ONLY at module install time to "package" lit-element and the directives I use, together with the web-components-loader, and effectively tree-shake and copy them to client/libs. As a result of this, my client is coded to assume the modules are in the libs directory AND I DO NOT NEED OR HAVE any build stage.

I guess the root of the client is index.html, which pulls in service-worker.js and main-app.js. main-app is the root of a tree of lit-element based components that make up the entire client app. Nginx is the web server for all the static files in the client, but it also acts as a proxy to pass any URLs that start with /api to a standard Node http web server (not even Express, although I do use the router, body-parser and final-handler modules), and these get passed to various API handlers, each of which is a separate JavaScript file, although these can "require" a few common modules that I have written and those in the node_modules directory.
I plan on using jest as a test environment. For my server I think it is easy: for each API handler I want to test, I can build a test script that "requires" the JavaScript file under test. I am in two minds about whether to use an SQLite database for testing or to mock something; I am leaning towards the former, as I am using better-sqlite3, which is totally synchronous and very fast. I already have scripts to create empty databases, so I have no worries about test isolation.
Client testing is where I get confused. I "think" that, in essence, jest can run tests the same way as for the server, one element at a time. BUT these elements and my test scripts are going to need a "web platform" set of APIs, not least of which is the entire shadow DOM and custom elements stuff that lit-element uses. This is where, I think, puppeteer or electron come in, with their associated jest plugins, to put these platform APIs into the test environment. But, and this is the essence of my confusion, puppeteer instructions all start with something like
const browser = await puppeteer.launch({headless: true});
const page = await browser.newPage()
await page.goto(SOME URL);
What is this URL? Do I also have to run a server? I cannot relate this snippet to running a test controlled by jest. All the examples seem to use webpack and typescript, neither of which I know anything about.
The other module I have seen mentioned is electron, in particular in this article, which everything else seems to point to (or repeats the same text):
https://www.ninkovic.dev/blog/2020/testing-web-components-with-jest-and-lit-element
From the code snippets in this article it "seems" like it might be what I want, BUT ...
I cannot find very many references to electron other than on its own web site, which tells you how to use electron as a tool to build cross-platform applications but never explains what it actually is; it assumes you already know. I don't want a UI for my unit testing; I want it to be headless, like in puppeteer.
Hence my confusion, and why I am unsure how to achieve what I want. Can someone give me some pointers as to:
How can I set up puppeteer to run headless tests without needing a server? OR
What exactly is electron, and can I use it to a) run my tests and b) give me tools to examine the DOM elements I have created, to check I have created the right ones? How is it different from puppeteer, and can I use it to conduct headless tests of my client?
UPDATE
I've done some more digging and am beginning to understand the differences. Let me summarise what I think I have found.
Puppeteer is great for end-to-end testing of your site. You run the tests by launching the browser puppeteer includes at the home page of your site (or, the more likely scenario, a development test site) and programmatically pretending to be a user who can click on buttons etc. You can use various methods, including functions such as document.querySelector(), to check your UI has behaved how you think, or you can take screenshots and compare them with standardised versions. I could possibly use it for unit tests, but I would have to run a server, create a test fixture html page for every test and navigate to it. jest-puppeteer is a package with some of that built in.
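For instance, a minimal end-to-end check of that kind might look like this (the URL and selectors are illustrative, not from my app):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  await page.goto('http://localhost:8080/');   // needs a running (test) server
  await page.click('#save-button');            // pretend to be a user
  // read text back out of the page to check the UI reacted
  const status = await page.$eval('#status', el => el.textContent);
  console.log(status === 'Saved' ? 'ok' : 'unexpected: ' + status);
  await browser.close();
})();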
Electron is a platform for building apps. The article I was referencing uses jest-electron, a test runner built using electron. So worrying about electron itself is a red herring; I should be looking at jest-electron.
My main concern right now, I think, is that I need different jest configurations for my three scenarios:
unit tests on the server
unit tests on the client
end to end testing of the complete app.
Given I have only one package.json file and one set of node_modules, I need to figure out a way to have three different jest config files.
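One way this might be wired up is jest's multi-project configuration, which lets a single jest installation run several differently configured suites from one package.json. A rough sketch (the directory layout, display names, jest-puppeteer preset, and jest-electron runner/environment strings are assumptions, the latter taken from those packages' documentation):

// jest.config.js at the project root
module.exports = {
  projects: [
    {
      displayName: 'server',
      testEnvironment: 'node',                      // plain Node for the API handler tests
      testMatch: ['<rootDir>/server/**/*.test.js'],
    },
    {
      displayName: 'client',
      // browser-backed environment for lit-element component tests
      runner: 'jest-electron/runner',
      testEnvironment: 'jest-electron/environment',
      testMatch: ['<rootDir>/client/**/*.test.js'],
    },
    {
      displayName: 'e2e',
      preset: 'jest-puppeteer',                     // drives a real headless browser
      testMatch: ['<rootDir>/e2e/**/*.test.js'],
    },
  ],
};

Recent versions of jest can then run one suite at a time with jest --selectProjects server; alternatively, keep three separate config files and pick one with jest --config.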
How do I run a specific set of Appium Java TestNG test cases in AWS Device Farm? I can see that Device Farm ignores all the TestNG annotations, including groups and enabled. Is there any way out?
To run only a subset of tests, the project will need to include the testng.xml file in the root of the *-tests.jar. Here is a GitHub pull request I've authored showing how to do that:
https://github.com/aws-samples/aws-device-farm-appium-tests-for-sample-app/pull/14
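For example, a testng.xml along these lines in the jar root restricts the run to one group (suite, group and class names are illustrative):

<!DOCTYPE suite SYSTEM "http://testng.org/testng-1.0.dtd">
<suite name="DeviceFarmSuite">
  <test name="SmokeTests">
    <groups>
      <run>
        <include name="smoke"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.tests.LoginTests"/>
    </classes>
  </test>
</suite>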
In the standard environment the tests are parsed and executed individually. As a result, some TestNG features like priority and nested groups don't get honored. Tests also execute more slowly, because the Appium server is restarted between tests.
https://docs.aws.amazon.com/devicefarm/latest/developerguide/test-environments.html#test-environments-standard
If these features are needed, the project will need to use the custom environments in Device Farm.
https://docs.aws.amazon.com/devicefarm/latest/developerguide/custom-test-environments.html
This produces one set of logs and one video for all the tests, since the test package is not parsed.
Hth
-James
I had configured JaCoCo in WebSphere as a Java agent (refer to http://www.jacoco.org/jacoco/trunk/doc/agent.html).
I restarted the server, ran a series of automated tests on the application (to give it some load), and then stopped the server.
I can see the jacoco.exec file being generated on the server (in the configured /tmp/ location).
Now, how do I generate the HTML report?
Before voting this question down or marking it as a duplicate, here is the reason why I'm posting it: I went through the JaCoCo documentation, such as http://www.jacoco.org/jacoco/trunk/doc/maven.html, and also multiple StackOverflow questions, but I'm still confused.
What I understood is that the Maven plugin allows us to run the unit tests and integration tests and then generate a report.
What I'm looking for is a report based on the load I gave to my application deployed in WebSphere. I can see the jacoco.exec file generated, but it is not clear from the documentation how to generate the HTML reports from it.
Thanks in advance.
You could use the jacoco:report-aggregate goal with Maven.
You can refer to http://www.eclemma.org/jacoco/trunk/doc/report-aggregate-mojo.html
P.S.: However, when I had the same issue, I used Sonar to read the exec file that was generated. It gives much more than just code coverage.
I was able to generate the JaCoCo report as follows:
1. Configure JaCoCo as a Java agent.
2. Restart the server and do some transactions/give some load (in my case I ran a series of automated tests).
3. Stop the server (this is what actually generates the jacoco.exec file).
4. Create/configure an Ant script and run it; this reads the jacoco.exec file and generates the HTML report (a sketch follows below). Refer: http://www.eclemma.org/jacoco/trunk/doc/ant.html
Even though my project is a Maven project, I used an Ant script for the report generation. I automated all the above steps using Bamboo, which made running and maintaining this job easier.
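For reference, a minimal Ant target of the kind used in step 4 might look like this (the jar location, class and source directories are illustrative):

<project name="jacoco-report" xmlns:jacoco="antlib:org.jacoco.ant">
  <!-- jacocoant.jar ships with the JaCoCo download -->
  <taskdef uri="antlib:org.jacoco.ant" resource="org/jacoco/ant/antlib.xml"
           classpath="lib/jacocoant.jar"/>
  <target name="report">
    <jacoco:report>
      <executiondata>
        <file file="/tmp/jacoco.exec"/>
      </executiondata>
      <structure name="MyApp">
        <classfiles>
          <fileset dir="build/classes"/>
        </classfiles>
        <sourcefiles encoding="UTF-8">
          <fileset dir="src/main/java"/>
        </sourcefiles>
      </structure>
      <html destdir="coverage-report"/>
    </jacoco:report>
  </target>
</project>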
I have been looking for answers to this question for months, straight up, and still have no idea how to actually do it: how does one automate tests that run in the browser? Selenium tests run on the backend and of course can be automated; we are doing this now. Recently I have been tasked with automating browser-based unit tests, and I have a significant knowledge gap.
I don't know how to actually do it. For example, how does an automated test runner collect the test results and exit codes of (unit) tests that run in the browser? Can anyone explain how this is actually done and the steps to accomplish it?
Is Karma the best tool to accomplish this?
You can use http://phantomjs.org/. phantomjs is a headless web browser, which you can think of as a full-stack web browser without a GUI, usable as a library. Together with karma you can execute your unit tests without relying on any GUI implementation.
Here is a blog post which explains the different components in such a scenario: http://orizens.com/wp/topics/my-setup-for-testing-js-with-jasmine-karma-phantomjs-angularjs/
This means you can execute your karma unit tests on a headless linux server: karma serves the test files to the browser, collects the results back over a socket, and exits with a code your CI system can use.
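For example, a minimal karma.conf.js for such a setup might look like this (the file patterns are illustrative; karma-jasmine and karma-phantomjs-launcher are assumed to be installed):

module.exports = function (config) {
  config.set({
    frameworks: ['jasmine'],
    files: [
      'src/**/*.js',         // code under test
      'test/**/*.spec.js'    // the unit tests
    ],
    browsers: ['PhantomJS'],
    singleRun: true          // run once, then exit with a CI-friendly exit code
  });
};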
Clarification:
The need for phantomjs doesn't come from unit testing as such; it comes from the fact that your js unit tests depend on the browser api.
It's a good design principle to structure the code so that the coupling to the browser api is not scattered all over the code. Try to introduce a thin layer which encapsulates the browser api dependencies. That way you can test most of your js without the need for phantomjs.
Executing your unit tests with phantomjs takes time. If you have a lot of unit tests, it's better to factor out the dependencies on the browser api, so that you can execute most tests without phantomjs and only a minority of the unit tests need it.
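A small sketch of what such a thin layer could look like (all names are made up for illustration):

// domAdapter.js - the only module that touches the browser api directly
function DomAdapter(doc) {
  this.doc = doc;
}
DomAdapter.prototype.textOf = function (selector) {
  var el = this.doc.querySelector(selector);
  return el ? el.textContent : null;
};

// greeting.js - pure logic, no browser api, testable without phantomjs
function buildGreeting(adapter) {
  return 'Hello, ' + adapter.textOf('#username') + '!';
}

// In a unit test you can pass a trivial fake instead of the real document:
var fake = { querySelector: function () { return { textContent: 'Ann' }; } };
console.log(buildGreeting(new DomAdapter(fake)));  // prints "Hello, Ann!"

Only the tests for DomAdapter itself then need a real browser (or phantomjs); everything built on top of it can run in plain Node.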
You can probably use cucumber, for example if you have 20 test cases that you need to execute.
You can create a feature file which will contain all the scenarios.
The runner classes and the step methods that do the actual work can be defined in a different package. Let's say you have a scenario to:
1. Open browser.
2. Enter google link.
3. Login using credentials.
Create a feature file with the above information.
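A sketch of what that feature file might look like (the step wording just has to match the step definitions below):

Feature: Google login
  Scenario: Login using credentials
    When Open Browser
    And Enter google link
    Then Login using credentials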
Use a Cucumber runner class, and create step methods such as:
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;
import cucumber.api.java.en.When;

@When("^Open Browser$")
public void open_Browser() throws Throwable {
    WebDriver driver = new FirefoxDriver();
    driver.get("http://www.google.com");  // the scheme is required; a bare "www.google.com" would fail
}
Similarly, you can create the other step methods. To run the jar, you can use the command line.
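The runner class mentioned above might look like this (a sketch using the older cucumber-jvm JUnit API; the feature path and glue package are illustrative):

import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

@RunWith(Cucumber.class)
@CucumberOptions(features = "src/test/resources/features",
                 glue = "com.example.steps")
public class RunCucumberTest {
}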
A great article on this: LINK
Basics:
This is for Python automation, you'll need to have some previous knowledge/experience.
pip install selenium
pip install nose
Above should be executed in cmd or shell...
For this test we will open the AWeber website at http://www.aweber.com using Firefox, and make sure that the title of the page is "AWeber Email Marketing Services & Software Solutions for Small Business".
import unittest
from selenium import webdriver

class AweberTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()

    def test_title(self):
        self.driver.get('https://www.aweber.com')
        self.assertEqual(
            self.driver.title,
            'AWeber Email Marketing Services & Software Solutions for Small Business')

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()
Running the test using nose:
nosetests aweber.py
Next test, clicking on elements:
self.driver.get('https://www.aweber.com')
order_tab = self.driver.find_element_by_css_selector('#ordertab>a')
order_tab.click()
There are many selectors that we can use with find_element_by_(css_selector/xpath/name/id); see Locating elements.
In this case we used the click method, but we can also .send_keys("asdf"), scroll, or execute JavaScript using:
browser.execute_script("alert('I canNNNNN not do javascript')")
Full code example: LINK-Pastebin
I haven't found a way to test the REST methods of my application automatically while using setUp and tearDown methods to preserve the isolation of each test.
gaetestbed gives me a clean datastore between tests, and httplib2 allows me to easily call REST methods and parse their responses; but to do so, a local instance of my application must be running on port 8080 and be called in each test. This defeats the purpose of a gaetestbed-like refresh of the datastore, since data is preserved between calls.
I could start and stop the GAE server within the setUp and tearDown methods, but this seems wasteful and time-consuming. Is there a better way?
Using gaetestbed, or your own unittest code like this, simply instantiate your handler classes directly, using a mocked/fake environment dictionary, and call their methods (initialize() and get()/post()/etc., in the case of webapp) directly.
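A rough sketch of that direct-instantiation approach, assuming the old google.appengine.ext.webapp framework (the handler class and URL path are illustrative):

import unittest
from google.appengine.ext import webapp

class PingHandler(webapp.RequestHandler):  # stand-in for your real REST handler
    def get(self):
        self.response.out.write('pong')

class PingHandlerTest(unittest.TestCase):
    def test_get(self):
        # build a fake request/response pair instead of hitting port 8080
        request = webapp.Request.blank('/api/ping')
        response = webapp.Response()
        handler = PingHandler()
        handler.initialize(request, response)
        handler.get()
        self.assertEqual('pong', response.out.getvalue())

Because nothing listens on a port, each call runs inside the test function's scope, so gaetestbed's datastore reset between tests works as intended.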
When doing functional tests using REST methods we ended up writing helper calls to clean out the internal caches and force our engine to sync with the database.
I haven't used gaetestbed, but I would have thought that you could flush the datastore between tests?
"Use case: develop and test locally with real data
Restore to: your local development server.
Once you have restored the data to your local development server, I would highly suggest that you take a backup of your datastore.
You can find your datastore files in a temporary folder on your local machine (e.g., on my Mac, it's /var/folders/bz/bzDU030xHXK-jKYLMXnTzk+++TI/-Tmp-/). To find the folder on your own machine, flush the datastore (./manage.py flush) and you will see the path to your datastore folder printed in the resulting output."