Karma Log Level Values - unit-testing

I use Karma as my test runner for my unit tests, and while looking at the Karma configuration documentation, I noticed that there exist different levels of logging.
Currently, our code base uses: logLevel: config.LOG_INFO,
Is there a reason to use this one instead of the others?
Possible values:
config.LOG_DISABLE
config.LOG_ERROR
config.LOG_WARN
config.LOG_INFO
config.LOG_DEBUG
Also, anyone have an idea of what each log level does?

Is there a reason to use this one instead of the others?
Yes, they each produce a different amount of output. For example, when trying to debug Karma errors that are difficult to track down and are not shown in the browser console or command window output (depending on where you have configured results to display), you can change the following value in the configuration to get more informative output:
logLevel: config.LOG_DEBUG
This will give you a 'play by play' verbose detail of the Karma output.
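For example, in karma.conf.js (a minimal sketch; only the logLevel line is the point here):

module.exports = function(config) {
  config.set({
    // maximum verbosity while hunting a hard-to-trace failure;
    // switch back to config.LOG_INFO for everyday runs
    logLevel: config.LOG_DEBUG
  });
};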
Also, anyone have an idea of what each log level does?
The detailed documentation is sketchy at best, and even the source on GitHub doesn't offer much detail. However, the constants are somewhat self-explanatory, and judging by how they are used elsewhere in the source, each level is strictly more verbose than the one before it, from LOG_DISABLE (nothing at all) up to LOG_DEBUG (the most verbose):
LOG_DISABLE, LOG_ERROR, LOG_WARN, LOG_INFO, LOG_DEBUG
https://github.com/karma-runner/karma/blob/c5dc62db7642b8ca9504e71319e3b80143b8510a/docs/dev/04-public-api.md

Related

Can I remove all the effect of a Boost.log V2 from the final product?

I am in the process of selecting a logging system for our software development. We use Boost extensively, so the obvious option is Boost.Log v2, but before I select it for my team, I have some questions that I could not find answers to in the documentation:
1- Can I remove its effect completely from the generated code? For example, assume that I have this code and I need it to stay this way for debugging:
#include <boost/log/trivial.hpp>

int doSomething(int i); // defined elsewhere

int main()
{
    for (int i = 0; i < 100; i++)
    {
        int j = doSomething(i);
        BOOST_LOG_TRIVIAL(trace) << "i=" << i << " j=" << j;
    }
}
Is there any way to remove the effect of the logging system in the above code, so that I am not losing any performance as a result of using it?
2- Can I add a section to the logging at the same time that I am adding severity? My code has several sections and we work on one section at a time. I want to be able to set the logging to log data for a specific section rather than for the whole application, which may have several sections and possibly hundreds of logging entries that would otherwise need to be filtered down to the part I am working on.
3- Is it possible to send different log records to different sinks, so that, for example, some output goes to the console and some goes to a file?
Can I remove its effect completely from the generated code?
If you mean removing any use of Boost.Log at compilation stage (e.g. by a preprocessor switch) then no, Boost.Log does not provide that. You will have to implement your own support for that, including conditional compilation of Boost.Log initialization and your own logging macros that expand to nothing when logging is disabled at compile time.
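A minimal sketch of one way to do that (all MYAPP_* names are invented for illustration; here the disabled variant expands to a do-nothing stream object rather than literally nothing, which the optimizer can still remove):

// logging.hpp - a hypothetical project-level wrapper, gated by our own
// compile-time switch
#ifdef MYAPP_ENABLE_LOGGING
  #include <boost/log/trivial.hpp>
  #define MYAPP_LOG_TRACE() BOOST_LOG_TRIVIAL(trace)
#else
  // Swallows every streamed value; the optimizer can then remove the
  // whole statement, so disabled builds pay nothing.
  struct MyAppNullLog {
      template <typename T> MyAppNullLog& operator<<(const T&) { return *this; }
  };
  #define MYAPP_LOG_TRACE() MyAppNullLog{}
#endif

// usage: MYAPP_LOG_TRACE() << "i=" << i << " j=" << j;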
If you mean just disabling logs completely without removing the compile-time dependency then you can use core::set_logging_enabled or filters for that. It will still have small performance cost to check the condition for every log record, but no log records will be produced.
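For the runtime switch, a minimal sketch using the Boost.Log core API:

#include <boost/log/core.hpp>

void disable_all_logging()
{
    // Global runtime switch: every log statement still performs a cheap
    // check, but no records are constructed or delivered to sinks.
    boost::log::core::get()->set_logging_enabled(false);
}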
Can I add a section to the logging at the same time that I am adding severity?
Yes, you can use channels for that. You can apply filters to the channel name to select which messages to keep and which to suppress. Here is a related answer.
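A minimal sketch of a channel-tagged logger (the channel name "network" is just an example):

#include <boost/log/sources/severity_channel_logger.hpp>
#include <boost/log/sources/record_ostream.hpp>
#include <boost/log/trivial.hpp>

namespace logging = boost::log;

int main()
{
    // One logger per section of the code; every record it emits carries
    // the channel name as an attribute that filters and sinks can test.
    logging::sources::severity_channel_logger<logging::trivial::severity_level>
        net_log(logging::keywords::channel = "network");

    BOOST_LOG_SEV(net_log, logging::trivial::info) << "connection established";
}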
Is it possible to send different log records to different sinks, so that, for example, some output goes to the console and some goes to a file?
Yes, again, this can be achieved with channels and filters; see the SO answer linked above describing that.
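A sketch of per-channel sink routing using Boost.Log's setup helpers (the "ui" channel and the file name are placeholders):

#include <boost/log/expressions.hpp>
#include <boost/log/utility/setup/console.hpp>
#include <boost/log/utility/setup/file.hpp>

namespace logging = boost::log;
namespace expr = boost::log::expressions;

void init_logging()
{
    // Records on the "ui" channel go to the console...
    logging::add_console_log()->set_filter(
        expr::attr<std::string>("Channel") == "ui");
    // ...everything else goes to a file.
    logging::add_file_log(logging::keywords::file_name = "app.log")
        ->set_filter(expr::attr<std::string>("Channel") != "ui");
}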

Log level in boost unit test framework

I am using the Boost unit test framework. I use the BOOST_TEST_MESSAGE macro, and therefore I need to set the log level to at least message.
From reading the doc, I can do the following:
I can add, somewhere, boost::unit_test::unit_test_log.set_threshold_level( boost::unit_test::log_messages ); however, the doc indicates this is generally considered bad practice.
I can set the environment variable BOOST_TEST_LOG_LEVEL appropriately. This is a bad solution for me, since I will distribute my code and I have no good way to make users set this environment variable appropriately in their bashrc.
Does anyone know a proper solution to this?
The best solution was simply to use the command line argument --log_level when running my binary.
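For example (the binary name is illustrative; the value names mirror the log levels, e.g. all, test_suite, message, warning, error):

./my_tests --log_level=message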

Exclude all messages in PC-Lint

I am using PC-Lint for my C++ project.
Is there a way to switch off all error and warning messages by default, so I can then reintroduce the required messages explicitly?
I have read the chapter of the PC-Lint manual entitled "Error Inhibition Options", and the best I could find was setting the warning level with -w0 (no messages, except for fatal errors).
Yes, it is possible: you can simply use -e* or -w0. However, the manual rightly states (Chapter 16, Living with Lint):
DO NOT simply suppress all warnings with something like: -e* or -w0 as this can disguise hard errors and make subsequent diagnosis very difficult.
So yes, you can use it if your code is basically clean and you want to review recent changes for a certain set of messages. But if you want to start cleaning your code and are swamped with messages because of the default warning level -w3, I suggest starting with -w1 and resolving all issues there first; most of the warnings/errors given at level one indicate problems with finding all header files, having all implicit macros set properly, and/or mimicking your usual compiler sufficiently precisely.
As always, I hesitate to advertise my own work, but if you want, take a look at my "How to wield PC Lint" PDF, where I have documented detailed instructions to handle the initial deployment of PC Lint and tackling the many warnings/errors/infos/notes you may be buried under.
When I started introducing PC-Lint to a new project I did the following:
1. As suggested by Johan Bezem, run a -w1 level check over the whole thing. This doesn't actually find any new problems, but checks that your program is syntactically valid and finds any configuration issues. Nothing major, assuming your project compiles already.
2. Run the test again at the -w2 level. This found 53,000 issues, which was a bit much to tackle in one go.
3. Pick a typical bad file, then suppress any errors that seem irrelevant or non-urgent (e.g. error 525: Warning -- Negative indentation from line xxx) by adding -e525 to the command line or config file, until you find one that seems serious. In my case this was error 442: Warning -- for clause irregularity: testing direction inconsistent with increment direction, i.e. a 'for' loop that looked like it should be counting up was actually counting down.
4. Reset the test level back to -w1 but add the critical problem back in by number, -w1 +e442 in this case (see the sample invocation below). Re-run it over the whole project, then fix all instances of that problem.
5. Go back to stage 2 and try again.
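A typical invocation for that pass might then look like this (the lint-nt binary name and project.lnt file are placeholders for your own setup):

lint-nt -w1 +e442 project.lnt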
This combination of fixing actual problems and suppressing likely false alarms soon gets your numbers under control.
So that everything gets better over time, we also implemented a script that does a thorough (full -w2 or -w3) check on any files that are created or modified.
I also found the tool LintProject very helpful, as it can do an entire Visual Studio solution in one go, producing tables with error counts and worst offenders!

How to keep the unit test output in Jenkins

We have managed to have Jenkins correctly parse the XML output from our tests, including the error information when there is any, so that it is possible to see, directly in the test case in Jenkins, the error that occurred.
What we would like to do is to have Jenkins keep a log output, which is basically the console output, associated with each case. This would enable anyone to see the actual console output of each test case, failed or not.
I haven't seen a way to do this.
* EDIT *
Clarification - I want to be able to see the actual test output directly in the Jenkins interface, the same way it does when there is an error, but for the whole output. I don't want Jenkins to merely keep the file as an artifact.
* END OF EDIT *
Can anyone help us with this?
In the Publish JUnit test result report (Post-build Actions) tick the Retain long standard output/error checkbox.
If checked, any standard output or error from a test suite will be retained in the test results after the build completes. (This refers only to additional messages printed to console, not to a failure stack trace.) Such output is always kept if the test failed, but by default lengthy output from passing tests is truncated to save space. Check this option if you need to see every log message from even passing tests, but beware that Jenkins's memory consumption can substantially increase as a result, even if you never look at the test results!
This is simple to do - just ensure that the output file is included in the job's list of artifacts, and it will be archived according to that job's configuration.
Not sure if you have solved it yet, but I just did something similar using Android and Jenkins.
What I did was use http://code.google.com/p/the-missing-android-xml-junit-test-runner/ to run the tests in the Android emulator. This creates the necessary JUnit-formatted XML files on the emulator file system.
Afterwards, simply use 'adb pull' to copy the files over, and configure Jenkins to parse the results. You can also archive the XML files as artifacts if necessary.
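For example (the path is a placeholder; it depends on where the test runner writes its report):

adb pull <report-path-on-emulator>/test-results.xml .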
If you simply want to display the content of the result in the log, you can use an 'Execute Shell' build step to print it out to the console, where it will be captured in the log file.
Since Jenkins 1.386 the changelog mentions an option to Retain long standard output/error in each build configuration. So you just have to tick that checkbox in the post-build actions.
http://hudson-ci.org/changelog.html#v1.386
When using a declarative pipeline, you can do it like so:
junit testResults: '**/build/test-results/*/*.xml', keepLongStdio: true
See the documentation:
If checked, the default behavior of failing a build on missing test result files or empty test results is changed to not affect the status of the build. Please note that this setting makes it harder to spot misconfigured jobs or build failures where the test tool does not exit with an error code when not producing test report files.

What do you need from a test harness?

I'm one of the people involved in the Test Anything Protocol (TAP) IETF group (if interested, feel free to join the mailing list). Many programming languages are starting to adopt TAP as their primary testing protocol and they want more from it than what we currently offer. As a result, we'd like to get feedback from people who have a background in xUnit, TestNG or any other testing framework/methodology.
Basically, aside from a simple pass/fail, what information do you need from a test harness? Just to give you some examples:
Filename and line number (if applicable)
Start and end time
Diagnostic output such as the difference between what you got and what you expected.
And so on ...
Most definitely all the things from your list, for each individual item:
Filename
Line number
Namespace/class/function name
Test coverage
Start time and end time
And/or total time (this would be more useful for me than the top two items)
Diagnostic output such as the difference between what you got and what you expected.
Off the top of my head, not much else, but for a group of tests I would like to know:
group name
total execution time
It must be very, very easy to write a test, and equally easy to run them. That, to me, is the single most important feature of a testing harness. If someone has to fire up a GUI or jump through a bunch of hoops to write a test, they won't use it.
An arbitrary set of tags - so I can mark a test as, for example "integration, UI, admin".
(you knew I was going to ask for this didn't you :-)
To what you said I'd add:
Method/function/class name
Coverage counting tool, with exceptions (Do not count these methods)
Result of N last runs available
Mandate that ways to easily parse test results must exist
Any sort of diagnostic output - especially on failure - is critical. If a test fails, you don't want to always have to rerun the test under a debugger to see what happened; there should be some clues in the output.
I also like to see a before and after snapshot of critical system variables like memory or hard disk space available as those can provide great clues as well.
Finally, if you're using random seeds for any of the tests, write the seed out to the logfile so that the test can be reproduced if necessary.
I'd like the ability to concatenate and nest TAP streams.
A unique id (uuid, md5sum) to be able to identify an individual test -- say, for use when inserting test results in a database, or identifying them in a bug tracker to make it possible for QA to rerun an individual test.
This would also make it possible to trace an individual test's behavior from build-to-build through the entire lifecycle of multiple revisions of a product. This could eventually allow larger-scale correlations between 'historic' events (new hire, product release, hardware upgrades) and the profile(s) of tests that fail as a result of such events.
I'm also thinking that TAP should be emitted through a dedicated side-channel rather than mixed in with stdout. I'm not sure this is under the scope of the protocol definition.
I use TAP as the output protocol for a set of simple C++ test methods, and have seen the following shortcomings:
test steps cannot be put into groups (there's only the grouping into several test scripts; but for running all the tests in our software, I need at least one more level of grouping, so that a single test step would be identified by something like "DB connection" -> "Reconnection Test" -> "test step #3")
seeing differences between expected and actual output is useful; I either print the diff to stderr (as a comment) or actually launch a graphical diff tool
the protocol and tools must be really language-independent; for example, so far I only know of the Perl "prove" tool for running tests, which is limited to running Perl scripts
In the end, the test output must be suitable as a basis for easily generating an HTML report file which lists succeeded tests very concisely, gives detailed output for failed tests, and makes it possible to quickly jump into the IDE to the failing test line.
optional ASCII coloured output: green for good, yellow for pending, red for errors
the idea of things being pending
a summary at the end of the test report of commands that will run the individual tests where:
something went wrong
something in the test was pending
Extension idea for TAP:
1..4
ok 1 - yay
not ok 2 - boo
ok 3 - yay #json:{...}
ok 4 - see my json
Ability to attach a #json comment...
- can be safely ignored by existing code
- well-defined tags can be easily reserved at testanything.org
- easy to produce, parse and read complex types
- YAML is a pain