I'm running my web app in Jetty with my instrumented classes.
After shutting down Jetty, I take the generated .ser file and create a Cobertura report using the command line tool.
I always get 100% coverage results on any class.
It seems that Cobertura takes into account only the lines that were executed during testing, and doesn't get the full class data.
I've tried adding the source files to the report, but that didn't help.
I also tried taking the .ser file created right after instrumentation and merging it with the .ser file created after the Jetty shutdown (it is actually the same file, but before running Jetty I backed up the .ser created by instrumentation); that didn't help either.
Can someone please help?
Thanks
100% coverage is a clear indicator that the sources are missing for the report. Check your configuration for generating the report.
Make sure that:
- you point it at the right source folder
- the source folder is structured like the packages, not with all the classes in one directory (see the layout sketch below)
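For example, a source layout that Cobertura can resolve might look like this (package and file names are purely illustrative):

    src/
      com/
        example/
          web/
            LoginServlet.java    (declares package com.example.web)

and you would then pass src (not src/com/example/web) as the source directory when generating the report.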
As explained at http://cobertura.sourceforge.net/faq.html, in the answer to the question "When I generate coverage reports, why do they always show 100% coverage everywhere?",
"Cobertura is probably using the wrong .ser file when generating the reports. When you instrument your classes, Cobertura generates a .ser file containing basic information about each class. As your tests run, Cobertura adds additional information to this same data file. If the instrumented classes can not find the data file when running then they will create a new one. It is important that you use the same cobertura.ser file when instrumenting, running, and generating reports."
In my case, I experienced this issue when the instrumented classes were recorded in one .ser file and execution was writing to another. Generating the HTML report "just" from the second .ser showed the problem mentioned in the question. Merging the two data files (.ser) and regenerating the report solved the issue.
Refer to http://cobertura.sourceforge.net/commandlinereference.html for "Merging Datafiles" information.
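For example, the merge and report steps might look something like this (file and directory names are illustrative, and the exact script names and flags depend on your Cobertura version, so check the command line reference above):

    cobertura-merge.sh --datafile merged.ser instrumented-backup.ser cobertura.ser
    cobertura-report.sh --datafile merged.ser --destination coverage-report --format html src/main/java

The last argument to cobertura-report is the root of the source tree, i.e. the directory that contains the package directories.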
Situation:
I'm attempting to get coverage reports for a project that uses both C++ and Python. I'm using LCOV/GCOV for the C++ side and Coverage.py for the Python side. The only issue is that most of the Python code being exercised consists of utility functions called one at a time: there is no initialization, no real life cycle, and no exit, so there is no good place to use the API to start/stop/save, or to use the coverage command line to measure.
Given that, I thought the easiest way to accomplish this would be the sitecustomize.py method outlined here. I have gotten that to work, and it measures all configured Python code as expected. Now I'm looking at how to accomplish this with compiled Python code (.pyc).
I can get it to work if I keep the source (.py) and compiled (.pyc) files in the same directory when running and then reporting. However, I'm looking for a way to run the files and generate the measurement data, and then, at a later point, point the report at the actual source files. Ideally I wouldn't need the source (.py) files at all, but I haven't found a way to accomplish this.
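For reference, the sitecustomize.py hook mentioned above is essentially the standard process-startup one from the Coverage.py documentation (the config path below is illustrative):

    # sitecustomize.py -- must be importable on the target's sys.path
    import coverage
    coverage.process_startup()

with the environment variable COVERAGE_PROCESS_START pointing at a coverage config file, e.g. COVERAGE_PROCESS_START=/opt/app/.coveragerc.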
Objective:
In the end I want to be able to compile the Python files (.pyc), install them on the target, and run coverage as stated above. That will generate coverage data files; I then want to pull those files back to my host machine, which houses the source (.py), and do the actual coverage reporting there.
Is this possible currently?
[Edit] Thanks to Ned's advice, I looked into the [paths] usage, and it worked exactly how I needed it to.
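For anyone else hitting this, a minimal .coveragerc along these lines (all paths illustrative) lets the combine step remap the deployed location back to the host's source tree:

    [run]
    parallel = True
    # where the data file is written on the target
    data_file = /tmp/coverage/.coverage

    [paths]
    # the first entry is the canonical location (the host source tree);
    # later entries are remapped onto it when the data is combined
    source =
        src/mypackage/
        /opt/app/mypackage/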
[Screenshot: log file that failed]
[Screenshot: log file that passed]
I have been working with Robot Framework. As I am new to it, I ran into this problem.
My test suite has 17 test cases. I ran it and got the log file shown in the image.
Initially it was showing a complete log with a description of each test case, but now, and I don't know what exactly I changed, it is showing an incomplete log with no test case descriptions.
How might this happen:
- something related to the browser (I'm using Chrome), or
- some problem with the test suite setup/teardown.
Please point out the exact issue.
I was facing a similar issue with Robot Framework. I removed the robotframework.jar file from my CLASSPATH environment variable, and it worked like a charm.
This jar is only necessary if you are using Robot keyword annotations in your project; it turned out it wasn't needed at all in mine. If it is needed in your case, try updating the jar to the latest release.
Long story short: I'm familiar with Base SAS 9 and am now using Enterprise Guide (7.1) due to a new role at another company. The transition is painful, but the thing that bothers me most is the log.
As I am sure most of you know, the log is rewritten/refreshed for every piece of code you execute.
Surely there must be an option to maintain a "running log" within the SAS code you are running/building (not necessarily for the whole project, but just for the program node within the project).
Can this be done?
Any assistance is greatly appreciated. I searched for a reference, but found none covering this subject specifically.
Yes - from SAS's support pages:
You’ll notice that a separate log node is generated for each code node. By turning on Project Logging, you can easily tell Enterprise Guide that you’d like a single SAS log to be generated for all of the tasks and code nodes in your Project. This single Project Log will be created in addition to the individual logs created for each task or code node.
Helpful Hint: If Project Logging is turned on, the log represents a running log of the entire project. To turn on the Project Logging, select Project Log in the Context Menu of the Process Flow, and then select Turn On.
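If what you want instead is a persistent, appending log file on disk for a single program node, one code-based alternative (separate from the Project Log feature described above) is PROC PRINTTO; the file path below is illustrative:

    /* Route the log to a file; without the NEW option, PROC PRINTTO
       appends, so repeated submissions build up a running log. */
    proc printto log="C:\logs\my_program_node.log";
    run;

    /* ... the program node's code ... */

    /* Restore the default log destination */
    proc printto;
    run;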
I'm trying to use JRules BRMS 7.1 for a project, and I found out that DVS has a limitation when testing rulesets: it cannot test the content of collections of complex types in Excel scenario file templates.
I understand this is expected, as that kind of content is too complex for an Excel table format.
So, does anyone have an idea of the best way to test a ruleset that needs tons of test cases with lots of complex-type input, without using DVS?
If developers are doing the testing, then use JUnit with an embedded rule engine. If non-technical users need to perform the testing, it may be simplest to upgrade to WODM 7.5, which does not have this limitation. If that is not an option, then it is possible to use JRules 7.1 DVS, but it is somewhat complex and involves creating a separate wrapper rule project that takes the output collections as input and, in its XOM, performs the comparison with the actual results.
Raj Rao is correct: you can use arrays as expected results (input is easy), but you will have to use a hidden JRules API, and it is painful anyway.
JUnit or 7.5 is the answer.
Unless you want to pay IBM to do it, and even then they may say it is not possible because it is not detailed anywhere :(
Cheers
PS: BTW, arrays of complex types as input are definitely easy and well documented, I think.
If you have deployed your rules as an HTDS service to RES, then you could use SoapUI to test the HTDS web service.
SoapUI allows you to set up test cases that can be used to test different scenarios.
To validate the rules using Decision Validation Services, you create an Excel scenario file template that you populate with scenarios to test.
Before generating the Excel scenario file template, you must check that your project does not contain any errors or warnings that could prevent the generation of the Excel file.
1. In Rule Explorer, select your project; in the rule project, enable the DVS part, click the check point, and make sure that you don't have any errors.
2. Create the scenario file: click Next and give the test project a name, name.xls.
3. Enter the input values in the scenario and the expected results in the Expected Results column.
4. You can test multiple scenarios at a time.
5. Close and save the Excel file.
6. In Run Configurations, right-click DVS Excel File and give the test any name.
7. In the Excel File field, click Browse and select the .xls file.
8. In the Rule Project field, select your rule project.
9. In the HTML Report field, select your project and click OK.
10. Click Apply, then Run.
11. In Rule Studio, right-click your project and click Refresh.
12. The HTML file will be generated in the project.
13. Right-click it, open it with a web browser, and check the results of your scenarios.
14. You have successfully enabled DVS.
We have managed to have Jenkins correctly parse the XML output from our tests, including the error information when there is an error, so that it is possible to see, directly in the test case in Jenkins, the error that occurred.
What we would like to do is have Jenkins keep the log output, which is basically the console output, associated with each test case. This would enable anyone to see the actual console output of each test case, failed or not.
I haven't seen a way to do this.
* EDIT *
Clarification - I want to be able to see the actual test output directly in the Jenkins interface, the same way it is shown when there is an error, but for the whole output. I don't want Jenkins to merely keep the file as an artifact.
* END OF EDIT *
Can anyone help us with this?
In the Publish JUnit test result report (Post-build Actions) tick the Retain long standard output/error checkbox.
If checked, any standard output or error from a test suite will be retained in the test results after the build completes. (This refers only to additional messages printed to console, not to a failure stack trace.) Such output is always kept if the test failed, but by default lengthy output from passing tests is truncated to save space. Check this option if you need to see every log message from even passing tests, but beware that Jenkins's memory consumption can substantially increase as a result, even if you never look at the test results!
This is simple to do - just ensure that the output file is included in the list of artifacts for that job and it will be archived according to the configuration for that job.
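In a Pipeline job, the rough equivalent is an archiveArtifacts step (the path pattern here is illustrative):

    archiveArtifacts artifacts: 'test-output/**/*.log', fingerprint: true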
Not sure if you have solved it yet, but I just did something similar using Android and Jenkins.
What I did was use http://code.google.com/p/the-missing-android-xml-junit-test-runner/ to run the tests in the Android emulator. This creates the necessary JUnit-formatted XML files on the emulator file system.
Afterwards, simply use 'adb pull' to copy the files over, and configure Jenkins to parse the results. You can also archive the XML files as artifacts if necessary.
If you simply want to display the content of the results in the log, you can use an 'Execute Shell' build step to print them to the console, where they will be captured in the Jenkins log file.
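For example (the device path and file name are illustrative; use whatever location your test runner writes to):

    # copy the report off the emulator/device
    adb pull /data/data/com.example.tests/files/junit-report.xml results/junit-report.xml
    # echo it into the build's console output so it shows up in the Jenkins log
    cat results/junit-report.xml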
Since Jenkins 1.386, the changelog mentions an option to retain long standard output/error in each build configuration, so you just have to check the checkbox in the post-build actions.
http://hudson-ci.org/changelog.html#v1.386
When using a declarative pipeline, you can do it like so:
junit testResults: '**/build/test-results/*/*.xml', keepLongStdio: true
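In the context of a complete (illustrative) declarative pipeline, that step would typically sit in a stage's post section so the results are published even when tests fail; the stage name and test command here are placeholders:

    pipeline {
        agent any
        stages {
            stage('Test') {
                steps {
                    sh './gradlew test'
                }
                post {
                    always {
                        junit testResults: '**/build/test-results/*/*.xml', keepLongStdio: true
                    }
                }
            }
        }
    }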
See the documentation. Note that keepLongStdio corresponds to the Retain long standard output/error option quoted above; the related allowEmptyResults flag is the one described as follows:
If checked, the default behavior of failing a build on missing test result files or empty test results is changed to not affect the status of the build. Please note that this setting makes it harder to spot misconfigured jobs or build failures where the test tool does not exit with an error code when not producing test report files.