I have a test profile I am trying to apply to some of my Quarkus tests:
@Nested
@TestMethodOrder(MethodOrderer.OrderAnnotation.class)
@TestProfile(NoMonitoringProfile.class)
class ComplexMissionLifeCycleTest {
I'm trying to run the tests in order, but unfortunately when I apply the test profile it seems Quarkus applies the profile to each @Test in the class and restarts the application instance each time. This has two consequences: it wipes the data I'm storing for my test, and it makes the test run brutally slow.
Is there a way to apply a profile to the whole class and only have Quarkus start once with that profile?
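For context, the test profile referenced above is typically a class implementing io.quarkus.test.junit.QuarkusTestProfile. A minimal sketch of what NoMonitoringProfile might look like follows; the config override shown is only an assumption about what "no monitoring" means in this project:

import java.util.Map;
import io.quarkus.test.junit.QuarkusTestProfile;

// Sketch of a test profile; the property below is a hypothetical example of
// switching a monitoring extension off for tests.
public class NoMonitoringProfile implements QuarkusTestProfile {

    @Override
    public Map<String, String> getConfigOverrides() {
        return Map.of("quarkus.micrometer.enabled", "false"); // hypothetical override
    }
}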
We have a moderately large test suite for business logic that completes within a few seconds. We run it as a pre-commit hook that must pass, and that has been working well to block the most stupid mistakes from making it off my machine.
We've recently started adding end-to-end frontend tests with WebDriver. Some of these tests exercise third-party integrations. The tests are useful, but they're really slow and require a network connection.
We also have some extremely long-running logic tests that are commented out (yeah!) unless we suspect something is wrong.
Is there a sensible way to split these slow tests out so they only run when we specifically want them to and not every time you run ./manage.py test?
If you use the default Django test runner, there is no simple way of doing what you want. One option is to rearrange the test directory structure so you can call ./manage.py test path/to/directory_with/webtests or ./manage.py test path/to/directory_with_fast_tests separately.
Another solution is to use pytest custom markers.
As the documentation shows:
import pytest

@pytest.mark.webtest
def test_send_http():
    pass  # perform some webtest test for your app
Register the custom marker:
# content of pytest.ini
[pytest]
markers =
    webtest: mark a test as a webtest.
Then you just run pytest -v -m webtest and only marked tests will be executed.
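If you want the slow tests excluded by default, so that a plain pytest run stays fast, you can also set a default marker expression via addopts in pytest.ini. A minimal sketch building on the marker above:

# content of pytest.ini
[pytest]
addopts = -m "not webtest"
markers =
    webtest: mark a test as a webtest.

With this in place, a plain pytest run should skip the marked tests, while an explicit pytest -m webtest on the command line still selects them (the -m given on the command line takes precedence over the one from addopts).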
I have been looking for answers to this question for months and still have no idea how to actually do it: how does one automate tests that run in the browser? Selenium tests run on the backend and can of course be automated, and we are doing that now. Recently I have been tasked with automating browser-based unit tests, and there I have a significant knowledge gap: I don't know how to actually do it.
For example, how does an automated test runner collect the test results and exit codes of (unit) tests that run in the browser? Can anyone explain how this is actually done and the steps to accomplish it?
Is Karma the best tool to accomplish this?
You can use http://phantomjs.org/. PhantomJS is a headless web browser, which you can think of as a full-stack browser without a GUI, usable as a library. Together with Karma you can execute your unit tests without relying on any GUI implementation.
Here is a blog post that explains the different components in such a scenario: http://orizens.com/wp/topics/my-setup-for-testing-js-with-jasmine-karma-phantomjs-angularjs/
This means you can execute your Karma unit tests on a headless Linux server.
Clarification:
The need for PhantomJS doesn't come from unit testing itself; it comes from the fact that your JS unit tests depend on the browser API.
It's a good design principle to structure the code so that the coupling to the browser API is not scattered across the whole codebase. Try to introduce a thin layer that encapsulates the browser API dependencies. That way you can test most of your JS without needing PhantomJS.
Executing your unit tests with PhantomJS can take a while. If you have a lot of unit tests, it's better to factor out the dependencies on the browser API; then you can execute more tests without PhantomJS, and only a minority of the unit tests need to run under it.
You can probably use Cucumber. Say you have 20 test cases that you need to execute.
You can create a feature file which will contain all the scenarios.
The runner classes and the methods defining what needs to be done can live in a different package. Let's say you have a scenario to:
1. Open browser.
2. Enter google link.
3. Login using credentials.
Create a feature file with the above information.
Use a Cucumber runner class, and create step definition methods such as:
#When("^Open Browser$")
public void open_Browser() throws Throwable {
WebDriver driver = new FirefoxDriver();
driver.get("www.google.com");
}
Similarly, you can create the other step methods. To run the resulting jar, you can use the command line interface.
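For reference, a minimal JUnit runner class for Cucumber might look like the sketch below. The package name, feature path, and class name are placeholders, and the exact annotation imports depend on your Cucumber-JVM version:

import org.junit.runner.RunWith;
import cucumber.api.CucumberOptions;
import cucumber.api.junit.Cucumber;

// Minimal Cucumber JUnit runner: it discovers the .feature files and the
// step definition methods (like open_Browser above) and runs them together.
@RunWith(Cucumber.class)
@CucumberOptions(
        features = "src/test/resources/features",  // placeholder: folder with your .feature files
        glue = "com.example.steps"                  // placeholder: package with the @When/@Given methods
)
public class RunCucumberTest {
}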
A great article on this: LINK
Basics:
This is for Python automation; you'll need to have some previous knowledge/experience.
pip install selenium
pip install nose
The above should be executed in cmd or a shell...
For this test we will open the AWeber website at http://www.aweber.com
using Firefox, and make sure that the title of the page is "AWeber Email Marketing Services & Software Solutions for Small Business".
import unittest
from selenium import webdriver

class AweberTest(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        cls.driver = webdriver.Firefox()

    def test_title(self):
        self.driver.get('https://www.aweber.com')
        self.assertEqual(
            self.driver.title,
            'AWeber Email Marketing Services & Software Solutions for Small Business')

    @classmethod
    def tearDownClass(cls):
        cls.driver.quit()
Running the test using nose:
nosetests aweber.py
Next test, clicking on elements:
self.driver.get('https://www.aweber.com')
order_tab = self.driver.find_element_by_css_selector('#ordertab>a')
order_tab.click()
There are many selectors we can use - find_element_by_(css/xpath/name/id); see Locating elements.
In this case we used the click method, but we can also .send_keys("asdf"), scroll, or execute JavaScript using:
browser.execute_script("alert('I canNNNNN not do javascript')")
Full code example: LINK-Pastebin
I am using Arquillian to test a Java EE application against GlassFish. So far I am facing a performance problem: each test case takes more than a minute to complete, so having 60 test cases means an hour to run, and hence the build takes more than an hour.
I understand that a test case might take this long because it starts a GlassFish container and creates and deploys a WAR file.
Is there a way to group the test cases in each project, add all of the classes, create a single deployment archive, and run multiple tests against a single deployment as if they were a single test class?
Arquillian does not support suites by itself.
But I wrote an extension that makes suite testing possible:
https://github.com/ingwarsw/arquillian-suite-extension
There you should find documentation and examples.
Are you using an embedded GlassFish instance? Running against a remote instance should be faster.
Use a test suite (@Suite) and set up your Arquillian container in a @BeforeClass annotated method.
see http://www.mkyong.com/unittest/junit-4-tutorial-5-suite-test/
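For illustration, a plain JUnit 4 suite grouping several test classes looks roughly like this (the test class names are placeholders for your own Arquillian tests):

import org.junit.runner.RunWith;
import org.junit.runners.Suite;

// Groups several test classes so they run as one suite.
@RunWith(Suite.class)
@Suite.SuiteClasses({
        MissionServiceTest.class,   // placeholder test class
        UserServiceTest.class       // placeholder test class
})
public class IntegrationTestSuite {
}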
Edit:
What if all your classes extend an AbstractTestClass that declares the @BeforeClass annotated method, which initializes the container only if that hasn't already been done?
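A rough sketch of that idea follows. Note that the container helper here is a hypothetical placeholder, not an Arquillian API; Arquillian normally manages the container lifecycle itself, so treat this purely as an outline of the lazy-initialization pattern being suggested:

import org.junit.BeforeClass;

public abstract class AbstractTestClass {

    // Shared flag so the expensive container/deployment setup happens only once per JVM.
    private static boolean containerStarted = false;

    @BeforeClass
    public static void startContainerOnce() {
        if (!containerStarted) {
            // EmbeddedContainer is a hypothetical helper that starts GlassFish
            // and deploys the test archive; replace it with your own setup code.
            EmbeddedContainer.start();
            containerStarted = true;
        }
    }
}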
I need to run unit tests for a Symfony2 application against two different DB configurations, one using a MySQL database and the other using a SQLite database.
I currently choose the DB configuration to use when running unit tests by editing app/config/config_test.yml. I either uncomment the MySQL-related db settings and comment-out the SQLite-related db settings or vice versa.
I'd like to not have to do this and to instead have two configuration files - perhaps app/config/config-test-mysql.yml and app/config/config-test-sqlite.yml - and choose the test environment from the command line when the tests are run.
Having looked at the default Symfony2 PHPUnit config in app/phpunit.xml.dist and at the bootstrap file that config uses (app/bootstrap.php.cache), I cannot determine how the environment defaults to test when running unit tests.
How can I choose the environment to use when running unit tests?
I haven't tried this solution but I am sure this is a good lead.
My unit test class extends Symfony\Bundle\FrameworkBundle\Test\WebTestCase which enables you to create a Client which itself creates a Kernel.
In your unit test, you could do this:
use Symfony\Bundle\FrameworkBundle\Test\WebTestCase;

class DatabaseRelatedTest extends WebTestCase
{
    private static $client;

    public function setUp()
    {
        // this is the part that should make things work
        $options = array(
            'environment' => 'test_mysql'
        );

        self::$client = static::createClient($options);
        self::$client->request('GET', '/foo/bar'); // must be a valid url
    }
}
You will be able to access the EntityManager and by extension the Connection using the container of the client.
self::$client->getContainer()->get('doctrine')
Ideally you would pass the environment name to the setUp method via the phpunit.xml.dist file. It's probably only half an answer, but I believe it's a good lead.
I have been watching various videos and reading various blog posts that walk through unit testing a repository.
The most common pattern is to create a Fake repository that implements the same interface as the real one. Then the fake one uses an internal Dictionary or something.
So in effect you are unit testing the logic of the fake repository, which will never go into production.
Now you may use dependency injection to inject a mock DbContext via some IDBContext interface. However, then you are just testing each repository method, which in effect just forwards to the DbContext (which is mocked).
So unless each repository method has a lot of logic before calling the DbContext, it seems a bit pointless?
I think it would be better to treat the repository tests as integration tests and actually have them hit the database?
The new EF 4.1 makes this easy, as it can create the database on the fly based on a connection string in your test project; you can then delete it after the tests run using the DbContext.Database methods.
Your objections are partially correct; how correct they are depends on how the repository is defined.
First, faking or mocking the repository is not for testing the repository itself but for testing the layers that use the repository.
If the repository exposes IQueryable and the upper layer can build a LINQ-to-Entities query, then mocking the repository means testing logic that doesn't exist. You need an integration test and to run the query against a real test database. You can either redeploy the database for each test, which makes it very slow, or run each test in a transaction and roll the transaction back when the test completes.
If the repository doesn't expose IQueryable, you can still think of it as a black box and mock it. The query logic will be inside the repository, and it will be tested separately with integration tests.
I would refer you to a set of other answers about the repository pattern itself and testing it.
The best approach I have seen is from Sharp Architecture, where they use a SQLite database, created in the TestFixtureSetUp based on the NHibernate mapping info.
The repository tests then use this in-memory database.
Technically this is still an integration test, since a database is involved, but practically it ticks all the boxes for a unit test since:
1) The database is transient - no connection string configs to worry about, nor do you need a complete db sitting on a server somewhere for the unit test to use.
2) The setup is fast, and the tests equally so, as everything runs in memory.
3) As it uses the NHibernate mapping info to generate the schema, you don't have to worry about keeping the unit test setup synchronised with code changes.
http://wiki.sharparchitecture.net/default.aspx?AspxAutoDetectCookieSupport=1
It may be possible to use the same approach with EF.