How to make unit tests in PyCharm - unit-testing

I want to write some unit tests, so I set up a list with values that should all be asserted true, just like in this question. But I want to run them in PyCharm (by pressing Alt+Shift+F10).
If I use the code from the answers as-is, I just get "No tests were found".

You need to double-check the settings of the test run configuration:
By default, PyCharm inspects files whose names start with test and looks for classes that subclass unittest.TestCase; however, you can control both the Pattern and the subclasses options.
Change Pattern according to your test file names; it accepts a Python regular expression.

Note that PyCharm will only collect classes that inherit from unittest.TestCase, so you should write your tests inside a class that inherits from unittest.TestCase.
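For example, a minimal test module along those lines (the file and class names here are hypothetical) could look like this:
test_values.py
import unittest

class TestValues(unittest.TestCase):
    def test_all_values_are_true(self):
        # a hypothetical list of conditions that should all evaluate to true
        values = [1 < 2, "a" in "abc", isinstance(42, int)]
        for index, value in enumerate(values):
            self.assertTrue(value, msg=f"value at index {index} is not true")

if __name__ == "__main__":
    unittest.main()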

PyCharm 2019.1+ and pytest
First, create a file named pytest.ini to set up custom configuration. For example, by default pytest treats any file matching the test_*.py and *_test.py globs as a test module, so I encourage you to have this file in order to define your own file name patterns.
pytest.ini
[pytest]
python_files = test_*.py tests.py *_test.py
Now, open up the Run/Configuration window:
Add a new configuration, select Python tests and pytest:
In the next window, choose a name for your configuration; you can also choose the target, but if you want pytest to use the pytest.ini file, do not select Script path. Then click Apply and OK.
Finally, run the test by clicking on the Play button.
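As a quick sanity check, a file whose name matches one of the patterns above (tests.py here, a hypothetical name) containing a plain test function is enough for pytest to discover and run it:
tests.py
def test_all_values_are_true():
    # a hypothetical list of conditions that should all evaluate to true
    values = [1 < 2, "a" in "abc"]
    assert all(values)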

Related

Taking full Input dataset when testing transformations in Palantir Foundry

In Palantir Foundry, I can see that we can write unit tests using Pytest or TransformRunner. My understanding is that with Pytest we cannot pass the output of a transform for unit testing, and with TransformRunner we cannot use the original dataset; we need some test data. But I would like to use the whole input dataset that my transform would run on in production and run tests on its output. How can I achieve that?
You can't access Foundry datasets from CI; you'll need to have a snippet of the data in a file within your repo and then load it.
test/fixtures/data/input/a.csv
col_a,col_b
1,2
# Imports assumed for this sketch; the TransformRunner import path may vary
# with your version of the transforms library.
import os
from transforms.api import Pipeline, Input, Output, transform_df
from transforms.verbs.testing.TransformRunner import TransformRunner

TEST_DATA_DIR = os.path.join(os.path.dirname(__file__), '..', '..', 'fixtures', 'data')

def test_runner_single_table(spark_session):
    pipeline = Pipeline()

    @transform_df(Output('/test_single_table/output/test'),
                  input_a=Input('/test_single_table/input/a'))
    def transform_1(input_a):
        return input_a.withColumn('col_c', input_a['col_a'] + input_a['col_b'])

    pipeline.add_transforms(transform_1)
    runner = TransformRunner(pipeline, '/test_single_table', TEST_DATA_DIR)
    output = runner.build_dataset(spark_session, '/test_single_table/output/test')
    assert output.first()['col_c'] == 3
TransformRunner translates the Input path into a directory path. In the example above:
TEST_DATA_DIR tells the runner where the data lives in your environment.
'/test_single_table' tells the runner which subpath can be ignored, since that path only exists for Foundry datasets, not within your repo.
input/a is resolved against Input('[ignored_sub_path]/input/a') and the folder structure you defined in your repo.
You can print these properties, and they will show up in the CI checks, if you want to understand them better.
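As a rough illustration of that mapping (a sketch of the path arithmetic only, not the actual TransformRunner implementation):
import os

TEST_DATA_DIR = 'test/fixtures/data'          # where the fixture files live in the repo
IGNORED_SUBPATH = '/test_single_table'        # Foundry-only prefix stripped by the runner
foundry_path = '/test_single_table/input/a'   # the path passed to Input(...)

relative = foundry_path[len(IGNORED_SUBPATH):].lstrip('/')   # -> 'input/a'
print(os.path.join(TEST_DATA_DIR, relative))                 # -> test/fixtures/data/input/a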

How to add unit tests to CKEditor custom plugins

I have made an initial custom plugin for CKEditor, but it's not obvious from the CKEditor documentation what the best way is to structure a custom plugin project or how to add unit tests.
I want to set up the project so that it follows good practices (like Test Driven Development) as I keep developing it, and possibly add it into a CI/CD pipeline.
Current setup
My custom plugin adds subject tags for the topic of a sentence.
I based it on the inline widget tutorial (which creates a Classic Editor v29 application with a local custom plugin added in) - https://ckeditor.com/docs/ckeditor5/latest/framework/guides/tutorials/implementing-an-inline-widget.html
|---app.js
|---index.html
|---node_modules/
|---package.json
|---subject/
|---|---subject.js
|---|---subjectcommand.js
|---|---subjectediting.js
|---|---subjectui.js
|---webpack.config.js
Test suite
The CKEditor Testing Environment documentation says that the @ckeditor/ckeditor5-dev-tests package can be used outside ckeditor5 - https://ckeditor.com/docs/ckeditor5/latest/framework/guides/contributing/testing-environment.html
So I rearranged the plugin directories to separate out src and tests:
|---app.js
|---index.html
|---node_modules/
|---package.json
|---subject/
|---|---src/
|---|---|---subject.js
|---|---|---subjectcommand.js
|---|---|---subjectediting.js
|---|---|---subjectui.js
|---|---tests/
|---|---|---subject.js
|---webpack.config.js
Then ran the test suite:
node ./node_modules/.bin/ckeditor5-dev-tests --files=*
But it looks like the ckeditor5-dev-tests package might only test plugins that are part of the CKEditor software itself (e.g. when you are contributing to the CKEditor project). All of the --files option's glob conversions map to node_modules/ckeditor5-*. See: https://github.com/ckeditor/ckeditor5-dev/tree/master/packages/ckeditor5-dev-tests#rules-for-converting---files-option-to-glob-pattern
Question
I'm not sure whether I'm using the testing suite incorrectly, or whether I'm taking the wrong approach to the project structure - maybe the project should just contain the plugin by itself (without app.js, etc.), which then gets added to the Classic Editor later somehow?

Testbox 2.1 - Skip an entire directory or CFC file

We want to create a HelperUtility.cfc with common methods for our tests to use. If we put the file in /tests/lib/HelperUtility.cfc, can we tell TestBox not to try to run any tests in /tests/lib? If not, can we add something to the component tag to skip the entire file, rather than adding skip to every method in the component individually?
There's no way to do that, unfortunately.
I have tried to skip some manual mocks that were created inside a tests/mock folder, but you cannot configure TestBox at runtime to skip a specific folder if you decide to run the tests for a parent folder.
The only workaround that worked for me was to create a specs subfolder in the parent tests folder and then call the TestBox runner with a directory argument pointing at the specs...
For example: http://localhost:8500/testbox/system/runners/HTMLRunner.cfm?directory=tests.specs

Set screenshot path from default project location to different folder location

I have a suite which has 50 test cases. When I execute my suite, all the failure screenshots end up in the project's folder. I want to store those screenshots in a different directory, named after the test case. I want this to be a one-time setup rather than something done explicitly for every test case.
There are quite a few ways to change the default screenshot directory.
One way is to set the screenshot_root_directory argument when importing Selenium2Library. See the importing section of Selenium2Library's documentation, and importing libraries in the user guide.
Another way is to use the Set Screenshot Directory keyword, which does pretty much the same thing as specifying a path when importing the library. However, with this keyword you can change the path to a new one whenever you like. For example, you could make it so that each test case has its own screenshot directory. Based on your question, this may be the best solution.
And finally, you may also post-process screenshots using an external tool, or even a listener, that moves all screenshots to another directory. The previously mentioned solutions are usually better, but you may still want to do this in some cases, say, where the directory you want screenshots saved to is only created after the tests have finished executing.
I suggest you do the following:
To use a new directory, put the following immediately after the place where you open a browser, for example:
Open Browser    ${URL}    chrome
Set Screenshot Directory    ${OUTPUT FILE}${/}..${/}${TEST_NAME}${/}
To replace the default screenshot name with your own, create the following keyword:
sc
    Capture Page Screenshot    filename=${SUITE_NAME}-{index}.png
Then, create another keyword and run it in the test case's Setup:
Register Keyword To Run On Failure    sc
In the example above, I create a new folder named after the test case, and on failure a screenshot is captured using the suite name (instead of the default 'selenium-screenshot-1.png').

What is the format of the django config file for manage.py?

I'm hooking up selenose (selenium) tests and using the live server in the process. I immediately ran into problems with ports already being in use, so I want to configure the live server to use more than one port. I see how to do that via the command line (--liveserver=localhost:8100-8110) but would like to use a config file.
I already have a config file I'm using for nose and thought I might be able to reuse it, but I can't find anything to support that belief, and my test runs suggest it won't work. I was expecting to be able to add something like the following:
[???]
liveserver=localhost:8100-8110
but replace the '???' with an actual header.
For some reason, Django uses an environment variable for this. You can set it in your settings if you want:
import os
os.environ['DJANGO_LIVE_TEST_SERVER_ADDRESS'] = 'localhost:8000-9000'
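Alternatively (assuming you start your tests through Django's standard manage.py test entry point; adjust this to however selenose invokes the test runner), you can export the variable in your shell before the run instead of setting it in code:
DJANGO_LIVE_TEST_SERVER_ADDRESS=localhost:8100-8110 python manage.py test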