Does test-kitchen support running multiple specific suites?

For example, if my kitchen.yml contains these three suites (abbreviated):
suites:
- name: dogs
- name: cats
- name: horse
I would like to be able to run:
kitchen converge -c 2 dogs cats
Is this possible?

test-kitchen supports running multiple suites concurrently. You can pass a regular expression (the REGEXP argument in the help output below) to match the suites you want to run.
$ kitchen help converge
Usage:
kitchen converge [INSTANCE|REGEXP|all]
Options:
-c, [--concurrency=N] # Run a converge against all matching instances concurrently. Only N instances will run at the same time if a number is given.
-p, [--parallel], [--no-parallel] # [Future DEPRECATION, use --concurrency] Run a converge against all matching instances concurrently.
-t, [--test-base-path=TEST_BASE_PATH] # Set the base path of the tests
-l, [--log-level=LOG_LEVEL] # Set the log level (debug, info, warn, error, fatal)
[--log-overwrite], [--no-log-overwrite] # Set to false to prevent log overwriting each time Test Kitchen runs
[--color], [--no-color] # Toggle color output for STDOUT logger
Description:
The instance states are in order: destroy, create, converge, setup, verify, destroy. Change one or more instances
from the current state to the converge state. Actions for all intermediate states will be executed. See
http://kitchen.ci for further explanation.
So you could use the following regex pattern to match the "dogs" and "cats" suites and have kitchen converge them. Passing "-c" without a number runs all matching suites concurrently.
kitchen converge 'dogs|cats' -c
A "-p" option would also have the same behavior as "-c" without any number following it.
Hope that helps.

How to hide the line number with Google Test

I'm trying to hide the line number in the console GTest output when a test fails.
For example in:
/Projects/Dya/tests/main.cpp:22: Failure
Expected: object->calc(expr, params)
Which is: "5" To be equal to: "2"
I'd like to hide this:
/Projects/Dya/tests/main.cpp:22:
Is that possible?
Thanks for your answers.
I'm pretty sure you cannot. I see an option for hiding elapsed time and turning off colorized output, but nothing for filenames.
If you run your test program with the -h flag, it will show you the list of supported options. Here's what I get:
This program contains tests written using Google Test. You can use the
following command line flags to control its behavior:
Test Selection:
--gtest_list_tests
List the names of all tests instead of running them. The name of
TEST(Foo, Bar) is "Foo.Bar".
--gtest_filter=POSTIVE_PATTERNS[-NEGATIVE_PATTERNS]
Run only the tests whose name matches one of the positive patterns but
none of the negative patterns. '?' matches any single character; '*'
matches any substring; ':' separates two patterns.
--gtest_also_run_disabled_tests
Run all disabled tests too.
Test Execution:
--gtest_repeat=[COUNT]
Run the tests repeatedly; use a negative count to repeat forever.
--gtest_shuffle
Randomize tests' orders on every iteration.
--gtest_random_seed=[NUMBER]
Random number seed to use for shuffling test orders (between 1 and
99999, or 0 to use a seed based on the current time).
Test Output:
--gtest_color=(yes|no|auto)
Enable/disable colored output. The default is auto.
--gtest_print_time=0
Don't print the elapsed time of each test.
--gtest_output=xml[:DIRECTORY_PATH/|:FILE_PATH]
Generate an XML report in the given directory or with the given file
name. FILE_PATH defaults to test_details.xml.
Assertion Behavior:
--gtest_death_test_style=(fast|threadsafe)
Set the default death test style.
--gtest_break_on_failure
Turn assertion failures into debugger break-points.
--gtest_throw_on_failure
Turn assertion failures into C++ exceptions.
--gtest_catch_exceptions=0
Do not report exceptions as test failures. Instead, allow them
to crash the program or throw a pop-up (on Windows).
Except for --gtest_list_tests, you can alternatively set the corresponding
environment variable of a flag (all letters in upper-case). For example, to
disable colored text output, you can either specify --gtest_color=no or set
the GTEST_COLOR environment variable to no.
For more information, please read the Google Test documentation at
https://github.com/google/googletest/. If you find a bug in Google Test
(not one in your own code or tests), please report it to
<googletestframework@googlegroups.com>.
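As a small illustration of the flag/environment-variable equivalence mentioned at the end of that help text (my_tests is a placeholder binary name), these two invocations should behave the same:
./my_tests --gtest_color=no --gtest_print_time=0
GTEST_COLOR=no GTEST_PRINT_TIME=0 ./my_tests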

Why does Jenkins double the total number of tests if the build is successful?

I have set up a Jenkins server and a C++ project that uses googletest, xUnit and cobertura. It calculates the test coverage, and my tests pass as well. I did it all in a single shell script.
Now my problem is: if all tests pass, I have a total of e.g. 20 items. If a single test fails, I only have 10 items in total (total = passed + skipped + failed).
When I check the details of a successful build, for instance, I click on "last stable build...", "Test Result", "root", "foo"; I then land on a page "Test Result: foo" with a table "All tests" below it, containing 2 rows of "foo".
If I do the same for a failed build, I only have 1 item in this list. That is somehow where the factor of 2 comes from.
I'd like to know why the counts differ and where my mistake is. I would assume both would contain only one item.

How to display plots in Jenkins of measured durations from tests in C++ over multiple builds

I have multiple test cases which actually measure the duration of a calculation using Boost timers.
The test cases are defined using Boost Test.
For running the tests I use CTest since I use CMake.
The tests are all specified with add_test().
Since CTest generates XML files I can display the Unit Test results in a corresponding Jenkins job.
For the performance tests I want to display not only whether the test cases succeeded but also the measured durations.
Is it possible in C++ (with Boost Test/CMake) to somehow mark measured durations and convert them into a file that contains one line per test case with two columns?
unittest0.dat:
test case | time
bla0      | 10 s
bla3      | 40 s
Then I would like to display this file and all similar files from previous builds in Jenkins as a plot.
The user should be able to follow the measured values over multiple jobs from the past to see if the performance has improved.
Therefore Jenkins would have to convert the data into files like:
bla0.dat:
build number | time
0            | 10 s
1            | 15 s
2            | 20 s
Maybe there is a completely different approach I don't know about.

Weka: ReplaceMissingValues for a test file

I am a bit worried about using Weka's ReplaceMissingValues to impute the missing values only for the test arff dataset but not for the training dataset. Below is the command line:
java -classpath weka.jar weka.filters.unsupervised.attribute.ReplaceMissingValues -c last -i "test_file_with_missing_values.arff" -o "test_file_with_filled_missing_values.arff"
From a previous post (Replace missing values with mean (Weka)), I came to know that Weka's ReplaceMissingValues simply replaces each missing value with the mean of the corresponding attribute. This implies that the mean needs to be computed for each attribute. While computing this mean is perfectly fine for the training file, it is not okay for the test file.
This is because in the typical test scenario we should not assume that we know the mean of a test attribute when imputing missing values. We usually have only a single test record with multiple attributes to classify, rather than the entire set of test records in a test file. Therefore we should instead impute the missing values using the means computed from the training data. The above command then becomes incorrect, as we would need another input (the means of the training attributes).
Has anybody thought about this before? How do you work around this in Weka?
Easy: see Batch Filtering.
import weka.core.Instances;
import weka.filters.Filter;
import weka.filters.unsupervised.attribute.Standardize;

Instances train = ... // from somewhere
Instances test = ...  // from somewhere
Standardize filter = new Standardize();
filter.setInputFormat(train); // initialize the filter once, using the training set only
Instances newTrain = Filter.useFilter(train, filter); // configures the filter from the train instances and returns the filtered training set
Instances newTest = Filter.useFilter(test, filter);   // filter the test set with the same, train-initialized filter
The filter is initialized using the training data and then applied on both training and test data.
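For a purely command-line workflow like the one in the question, the same two-file behaviour should also be available through the filter's batch mode; a sketch (the output file names are placeholders):
java -classpath weka.jar weka.filters.unsupervised.attribute.ReplaceMissingValues -b
-i "training_file_with_missing_values.arff" -o "training_file_filled.arff"
-r "test_file_with_missing_values.arff" -s "test_file_filled.arff"
Here -b enables batch mode, -i/-o are the training input/output, and -r/-s are the second (test) input/output, so the means are computed from the training file only.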
The problem is when you apply the ReplaceMissingValues filter outside any processing pipeline: after writing the filtered data, you can't distinguish between "real" values and "imputed" values anymore. This is why you should do everything that needs to be done in a single pipeline, e.g., using the FilteredClassifier:
java -classpath weka.jar weka.classifiers.meta.FilteredClassifier
-t "training_file_with_missing_values.arff"
-T "test_file_with_missing_values.arff"
-F weka.filters.unsupervised.attribute.ReplaceMissingValues
-W weka.classifiers.functions.MultilayerPerceptron -- -L 0.3 -M 0.2 -H a
This example will initialize the ReplaceMissingValues filter using the "training_file_with_missing_values.arff" data set, then apply the filter to "test_file_with_missing_values.arff" (with the means learned from the training set), then train a multilayer perceptron on the filtered training data and predict the class of the filtered test data.

Weka NominalToBinary makes test and training sets incompatible

So I have training and test sets, and they contain multi-valued nominal attributes. Since I need to train and test a NaiveBayesMultinomial classifier, which doesn't support multi-valued nominal attributes, I do the following:
java weka.filters.supervised.attribute.NominalToBinary -i train.arff -o train_bin.arff -c last
java weka.filters.supervised.attribute.NominalToBinary -i test.arff -o test_bin.arff -c last
Then I run this:
java weka.classifiers.bayes.NaiveBayesMultinomial -t train_bin.arff -T test_bin.arff
And the following error arises:
Weka exception: Train and test files not compatible!
As far as I understand after examining both .arff files, they became incompatible after I ran NominalToBinary, since the train and test sets differ and thus different binary attributes are generated.
Is it possible to perform the NominalToBinary conversion in a way that keeps the sets compatible?
Concatenate the two sets into one, perform the NominalToBinary conversion, then split them again. That way, both should be encoded the same way.
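A rough Java sketch of that concatenate-convert-split approach (the file names are placeholders; it assumes both ARFF files declare identical attributes and that the class is the last attribute, matching the -c last option above):
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSink;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.Filter;
import weka.filters.supervised.attribute.NominalToBinary;

public class NominalToBinaryCompatible {
    public static void main(String[] args) throws Exception {
        Instances train = DataSource.read("train.arff"); // placeholder paths
        Instances test = DataSource.read("test.arff");

        // Append the test instances to a copy of the training set so that
        // both are converted against one shared header.
        // (Assumes the two files declare identical attributes.)
        Instances combined = new Instances(train);
        for (int i = 0; i < test.numInstances(); i++) {
            combined.add(test.instance(i));
        }
        combined.setClassIndex(combined.numAttributes() - 1); // class = last attribute

        NominalToBinary ntb = new NominalToBinary();
        ntb.setInputFormat(combined);
        Instances converted = Filter.useFilter(combined, ntb);

        // Split back: the first train.numInstances() rows are the training set.
        Instances newTrain = new Instances(converted, 0, train.numInstances());
        Instances newTest = new Instances(converted, train.numInstances(), test.numInstances());

        DataSink.write("train_bin.arff", newTrain);
        DataSink.write("test_bin.arff", newTest);
    }
}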
But are you sure the files were compatible before? Or does your test and/or training set perhaps contain attribute values that the other doesn't have?