I'm trying to hide the line number in the console GTest output when a test fails.
For example in:
/Projects/Dya/tests/main.cpp:22: Failure
Expected: object->calc(expr, params)
Which is: "5"
To be equal to: "2"
I'd like to hide this:
/Projects/Dya/tests/main.cpp:22:
Is that possible?
Thanks for your answers.
I'm pretty sure you cannot. I see an option for hiding elapsed time and turning off colorized output, but nothing for filenames.
If you run your test program with the -h flag it will show you the list of supported options. Here's what I get:
This program contains tests written using Google Test. You can use the
following command line flags to control its behavior:
Test Selection:
--gtest_list_tests
List the names of all tests instead of running them. The name of
TEST(Foo, Bar) is "Foo.Bar".
--gtest_filter=POSITIVE_PATTERNS[-NEGATIVE_PATTERNS]
Run only the tests whose name matches one of the positive patterns but
none of the negative patterns. '?' matches any single character; '*'
matches any substring; ':' separates two patterns.
--gtest_also_run_disabled_tests
Run all disabled tests too.
Test Execution:
--gtest_repeat=[COUNT]
Run the tests repeatedly; use a negative count to repeat forever.
--gtest_shuffle
Randomize tests' orders on every iteration.
--gtest_random_seed=[NUMBER]
Random number seed to use for shuffling test orders (between 1 and
99999, or 0 to use a seed based on the current time).
Test Output:
--gtest_color=(yes|no|auto)
Enable/disable colored output. The default is auto.
--gtest_print_time=0
Don't print the elapsed time of each test.
--gtest_output=xml[:DIRECTORY_PATH/|:FILE_PATH]
Generate an XML report in the given directory or with the given file
name. FILE_PATH defaults to test_details.xml.
Assertion Behavior:
--gtest_death_test_style=(fast|threadsafe)
Set the default death test style.
--gtest_break_on_failure
Turn assertion failures into debugger break-points.
--gtest_throw_on_failure
Turn assertion failures into C++ exceptions.
--gtest_catch_exceptions=0
Do not report exceptions as test failures. Instead, allow them
to crash the program or throw a pop-up (on Windows).
Except for --gtest_list_tests, you can alternatively set the corresponding
environment variable of a flag (all letters in upper-case). For example, to
disable colored text output, you can either specify --gtest_color=no or set
the GTEST_COLOR environment variable to no.
For more information, please read the Google Test documentation at
https://github.com/google/googletest/. If you find a bug in Google Test
(not one in your own code or tests), please report it to
<googletestframework@googlegroups.com>.
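That said, if you are willing to go beyond command-line flags, GoogleTest does let you replace its default console printer with a custom event listener, and such a listener can simply not print the file:line prefix. This is only a minimal sketch (the class name NoLocationPrinter is made up, and note that removing the default printer also drops the rest of the normal per-test output):

#include <cstdio>
#include "gtest/gtest.h"

// Prints only the assertion text of a failure, without the file:line prefix.
class NoLocationPrinter : public ::testing::EmptyTestEventListener {
  void OnTestPartResult(const ::testing::TestPartResult& result) override {
    if (result.failed()) {
      // summary() is the failure message without the location prefix.
      std::printf("Failure\n%s\n", result.summary());
    }
  }
};

int main(int argc, char** argv) {
  ::testing::InitGoogleTest(&argc, argv);
  ::testing::TestEventListeners& listeners =
      ::testing::UnitTest::GetInstance()->listeners();
  // Remove the built-in printer and install the custom one.
  delete listeners.Release(listeners.default_result_printer());
  listeners.Append(new NoLocationPrinter);
  return RUN_ALL_TESTS();
}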
I would like to run a Postman test on just the final iteration of a test run - I am building a variable (array) of response time values across all of the iterations and then want to test this variable for extreme values once we've reached the last / final iteration.
I hoped pm.info.iteration would have my answer but didn't see anything relevant.
I'm using a data file - the test runner highlights how many iterations are applicable (rows in the csv) as soon as the file is chosen, so I'm guessing that Postman does know the 'final' iteration? I just haven't worked out how to get at it.
My workaround is to hard-code the number of iterations per test run based on how many rows my csvs currently have (e.g. if (pm.info.iteration === 70)), but that's not ideal as the data file is likely to grow.
As @DannyDainton mentioned, you can use iterationCount.
The iteration index starts from 0, so use:
pm.info.iteration === (pm.info.iterationCount - 1)
For example if my kitchen.yml contains these three suites (example is abbreviated):
suites:
- name: dogs
- name: cats
- name: horse
I would like to be able to run:
kitchen converge -c 2 dogs cats
Is this possible?
test-kitchen supports running multiple suites concurrently. You can use a regular expression (the REGEXP in the usage below) to match the suites you want to run.
$ kitchen help converge
Usage:
kitchen converge [INSTANCE|REGEXP|all]
Options:
-c, [--concurrency=N] # Run a converge against all matching instances concurrently. Only N instances will run at the same time if a number is given.
-p, [--parallel], [--no-parallel] # [Future DEPRECATION, use --concurrency] Run a converge against all matching instances concurrently.
-t, [--test-base-path=TEST_BASE_PATH] # Set the base path of the tests
-l, [--log-level=LOG_LEVEL] # Set the log level (debug, info, warn, error, fatal)
[--log-overwrite], [--no-log-overwrite] # Set to false to prevent log overwriting each time Test Kitchen runs
[--color], [--no-color] # Toggle color output for STDOUT logger
Description:
The instance states are in order: destroy, create, converge, setup, verify, destroy. Change one or more instances
from the current state to the converge state. Actions for all intermediate states will be executed. See
http://kitchen.ci for further explanation.
So you could use the following regex pattern to match on the "dogs" and "cats" suites and have kitchen run them. The "-c" option without a number following it will run all the suites that match the regex concurrently.
kitchen converge 'dogs|cats' -c
A "-p" option would also have the same behavior as "-c" without any number following it.
Hope that helps.
I am working on a war game project using legacy C++ code. Attacks appear to be functioning strangely in "extreme cases" (e.g., a combat in one game "tile" in which 5000 arrows are fired at a group of 3 regiments, one of which can be the target for any of the ~5000 potential "hits" in the course of one game turn). My task is to collate data on what is happening during a game turn.
I need to record how the value for one particular Local variable changes during the course of one game turn for any unit that comes under fire (restricting to a 'test scenario' in which only regiments in one tile are under attack).
My Visual Studio "Watch window" at the beginning of a debug session looks about like this:
Name----------------------Value------------Type
targ-----------------------011-------------short
(gUnit[011]).damage--------0---------------short
(gUnit[047]).damage--------0---------------short
(gUnit[256]).damage--------0---------------short
What this means is: at the first breakpoint, the regiment represented by the nested array element gUnit[011] has been determined to be the target ("targ") for a "hit", and the "damage" for that regiment prior to this hit is 0.
After one loop that may change to this:
Name----------------------Value------------Type
targ-----------------------256-------------short
(gUnit[011]).damage--------3---------------short
(gUnit[047]).damage--------0---------------short
(gUnit[256]).damage--------0---------------short
Which means: after one "hit", Regiment 011 has suffered 3 damage, and for the next hit that has been determined to occur, Regiment 256 is the target. So for each cycle of the call loop, any of the potential targets can be reassigned as the target and the value of 'damage' can change. For this example of 5000 arrows fired, there could be 5000 hits (though it would be unlikely to be > 1300 hits for that many shots fired). For each hit that occurs in the span of one game turn, I need to record:
target and damage
I have breakpoints set up, and I can do what I need to do manually, meaning:
Advance to the first call to the function that determines a hit ["MissileHit()"].
Advance again and note that the value for (gUnit[011]).damage has changed from 0 to 3. /* i.e., the Regiment with the ID "011" has taken "3" damage . . . */
Repeat 5000 times for experiment ONE (noting that, with each loop, targ can change to any of the three array elements that represent the units under attack in the tile).
Repeat for another 7 experiments (with slightly different conditions) for phase one of testing.
Repeat for perhaps 3 or 4 more phases of testing . . .
In sum, THOUSANDS of rows with at least two columns:
targ.......damage
(as a side note, this csv will get additional columns plugged in to it after each test cycle, so the final spreadsheet will look more like this:
atkrID....Terrain....atkrExpr....atkrFatig....targID....targDamage
The end result of all this data collection being: descriptive statistics can be calculated on the various in-game variables that mediate the outcomes of attacks)
Obviously, doing this "manually" would be ridiculous.
Is there an easy way to get Visual Studio to generate csv files that will record this data for me?
I can certainly go down the path of writing scripts to run when these breakpoints execute and fstreaming the data into a file. But I'm still wet behind the ears (with Visual Studio, C++, and development in general!), so I thought it worth asking whether there are built-in features of Visual Studio that would facilitate this.
I think a VS extension, as suggested in Drop's comment, would be a better path for this issue, but that would require deep knowledge of extension development, so I'll just offer another workaround: add a data breakpoint and output the value to the Output window. You will still need to copy and paste the values into your files, but it is also convenient for analyzing the values during debugging.
Visual Studio. Debug. How to save to a file all the values a variable has had during the duration of a run?
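If instrumenting the game code itself is an option, the fstream route the question mentions is only a few lines. Purely as a sketch (the helper name LogHit and the file name hits.csv are made up; MissileHit, targ and gUnit come from the question):

#include <fstream>

// Appends one row per hit to a csv that grows across the whole test run.
static void LogHit(short targ, short damage) {
    static std::ofstream out("hits.csv", std::ios::app);
    out << targ << ',' << damage << '\n';
}

// Called from the combat loop, right after MissileHit() has resolved a hit:
//   LogHit(targ, gUnit[targ].damage);

Extra columns (terrain, attacker experience, fatigue, ...) could be appended to the same row as they become relevant to later test phases.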
I have set up a Jenkins server and a project (C++) that uses googletest, xUnit and cobertura. It calculates the test coverage, and my tests pass as well. I did it all in one single shell script.
Now my problem is: if all tests pass, I have a total of e.g. 20 items. If a single test item fails, I only have 10 items in total (total = passed + skipped + failed).
When I check the details of a successful build, for instance, I click on "last stable build...", "Test Result", "root", "foo"; then I am on a page "Test Result: foo" with a table "All tests" below, containing 2 rows of "foo".
If I do the same for a failed build, I only have 1 item in this list. Somehow this is where the factor of 2 comes from.
I'd like to know why the counts differ and where my mistake is. IMHO, both should contain only one item.
I have multiple test cases which actually measure the duration of a calculation using Boost timers.
The test cases are defined using Boost Test.
For running the tests I use CTest since I use CMake.
The tests are all specified with add_test().
Since CTest generates XML files I can display the Unit Test results in a corresponding Jenkins job.
For the performance tests I do not only want to display if the test cases succeeded but also the measured durations.
Is it possible in C++ (with Boost Test/CMake) to somehow mark measured durations and convert them into a file with two columns, one pair per test case?
unittest0.dat:
test case | time
bla0 10 s
bla3 40 s
Then I would like to display this file and all similar files from previous builds in Jenkins as a plot.
The user should be able to follow the measured values over multiple jobs from the past to see if the performance has improved.
Therefore Jenkins would have to convert the data into files like:
bla0.dat:
build number | time
0 10 s
1 15 s
2 20 s
Maybe there is a completely different approach I don't know about.
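One possibility, purely as a sketch of the measuring-and-writing side: time the calculation inside the Boost Test case with std::chrono and append a line to the .dat file yourself (unittest0.dat, bla0 and run_calculation are placeholder names taken from or standing in for the question's example; how Jenkins then picks the file up, e.g. via the Plot plugin, is a separate step):

#include <chrono>
#include <fstream>

#define BOOST_TEST_MODULE perf_tests
#include <boost/test/included/unit_test.hpp>

// Stand-in for the real calculation whose duration should be measured.
static void run_calculation() {
    volatile double x = 0.0;
    for (int i = 0; i < 1000000; ++i) x = x + i * 0.5;
}

// Appends "<test case> <seconds> s" to the data file sketched above.
static void record_duration(const char* name, double seconds) {
    std::ofstream out("unittest0.dat", std::ios::app);
    out << name << ' ' << seconds << " s\n";
}

BOOST_AUTO_TEST_CASE(bla0) {
    auto start = std::chrono::steady_clock::now();
    run_calculation();
    auto stop = std::chrono::steady_clock::now();
    record_duration("bla0",
                    std::chrono::duration<double>(stop - start).count());
    BOOST_CHECK(true);  // the functional assertions would go here
}

Since CTest just runs the test binary registered with add_test(), the .dat file would be written on every build, and each build's copy could then be archived and plotted by Jenkins.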