Robot framework log not generating properly - python-2.7

Screenshot of the log file that failed
Screenshot of the log file that passed
I have been working with Robot Framework. As I am new to it, I ran into this problem.
My test suite has 17 test cases. I ran it and got the log file shown in the image.
Initially it showed a complete log with the description of each test case, but now, and I don't know what exactly I changed, it shows an incomplete log with no description of any test case.
Possible causes I can think of:
Something related to the browser (I'm using Chrome).
Or some problem with the test suite setup/teardown.
Please point out the exact issue.

I was facing a similar issue with Robot Framework. I removed the robotframework.jar file from my CLASSPATH environment variable, and it worked like a charm.
This jar is necessary if you are using robot keyword annotations in your project. It turned out it wasn't necessary at all in mine. If it is in yours, try updating the jar to the latest release.
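Before deleting anything, it can help to confirm whether a robotframework jar is actually on your CLASSPATH. A minimal stdlib-only sketch (the helper name and the substring filter are my own, not part of any tool):

```python
import os

def find_jars(classpath, needle="robotframework"):
    """Return CLASSPATH entries whose filename contains the given substring."""
    return [entry for entry in classpath.split(os.pathsep)
            if needle in os.path.basename(entry).lower()]

# Inspect the current environment; prints nothing if no match is found.
for jar in find_jars(os.environ.get("CLASSPATH", "")):
    print("Found on CLASSPATH:", jar)
```

If this prints a robotframework jar you did not add deliberately, that is the entry to remove or update.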

Related

SonarQube: see details of failed tests

In SonarQube 5.6.6, I can see on http://example.com/component_measures/metric/test_failures/list?id=myproject that my unit test results were successfully imported. This is indicated by
Unit Test Failures: 1
which I produced by a fake failing test.
I also see the filename of the failing test class in a long list, and I see the number of failed tests (again: 1).
But I can't find any more information: which method failed, the stack trace, stdout/stderr, in short everything that is also included in the build/reports/test/index.html files generated by gradle. Clicking the list entry takes me to the code and coverage view, but I can't find any indicator of which test failed.
Am I doing something wrong in the frontend, is it a configuration problem, or am I looking for a feature which doesn't exist in SonarQube?
This is how it looks currently:
http://example.com/component_measures/domain/Coverage: Here I see that one test failed:
http://example.com/component_measures/metric/test_success_density/list: I can see which file it is:
But clicking on the line above only takes me to the source file. Below it is the test which "failed", yet there is no indication that it failed, and I can't find any way to see the stack trace or the name of the failed test method:
By the way: the page in the first screenshot shows information about unit tests. But if the failing test is an integration test, I don't even see these numbers.
Update
Something like this is probably what I'm looking for:
(found on https://deors.wordpress.com/2014/07/04/individual-test-coverage-sonarqube-jacoco/)
I never saw such a view on my installation; I don't know how to get it or whether it is implemented in the current version.
Unfortunately, test execution details are a deprecated feature as of SonarQube 5.6.
If you install an older version such as SonarQube 4.x, you will get the following screen, which provides test case result details.
But this screen itself has since been removed.
Ref # https://jira.sonarsource.com/browse/SONARCS-657
Basically the issue is that the unit test details report requires links back to the source code files, but now the unit test cases are only linked to assemblies.
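Until the UI exposes this again, a practical workaround is to read the failure details straight from the JUnit XML reports your build already writes. A minimal stdlib-only sketch (the glob pattern build/test-results/**/*.xml matches gradle's default layout; adjust it for your project):

```python
import glob
import xml.etree.ElementTree as ET

def failed_tests(report_glob="build/test-results/**/*.xml"):
    """Collect (classname, method, message) for every <failure> or <error>
    element found in JUnit-style XML reports."""
    failures = []
    for path in glob.glob(report_glob, recursive=True):
        root = ET.parse(path).getroot()
        for case in root.iter("testcase"):
            for bad in case.findall("failure") + case.findall("error"):
                failures.append((case.get("classname"),
                                 case.get("name"),
                                 bad.get("message")))
    return failures

for cls, method, msg in failed_tests():
    print("%s.%s: %s" % (cls, method, msg))
```

This gives you the failing method names and messages that the SonarQube view no longer shows, without depending on the SonarQube frontend at all.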

Cobertura report has 100% coverage everywhere

I'm running my webapp in Jetty with my instrumented classes.
After shutting down Jetty, I take the generated .ser file and create a Cobertura report using the command line tool.
I always get 100% coverage results on every class.
It seems that Cobertura takes into account only the lines that were executed during testing and doesn't get the full class data.
I've tried to add source files to the reports; no help.
I also tried to take the .ser file created after the instrumentation and merge it with the .ser file created after the Jetty shutdown (it is actually the same file, but before running Jetty I backed up the .ser created after instrumentation); no help here either.
Can someone please help?
Thanks
100% coverage is a clear indicator that the sources are missing from the report. You should check your configuration for creating the report.
Make sure that:
you point it at the right source folder
the source folder is structured like the packages, and not just all classes in one directory
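The second point is easy to sanity-check: a fully qualified class name must map to a matching path under your source root. A small illustrative helper (the function name is my own):

```python
import os

def source_path_ok(src_root, fq_class):
    """Check that e.g. com.example.Foo exists as <src_root>/com/example/Foo.java."""
    rel = fq_class.replace(".", os.sep) + ".java"
    return os.path.isfile(os.path.join(src_root, rel))
```

If this returns False for classes that appear in your report, Cobertura cannot resolve the sources and will render the misleading all-green report.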
As explained at http://cobertura.sourceforge.net/faq.html, in the answer to the question "When I generate coverage reports, why do they always show 100% coverage everywhere?",
"Cobertura is probably using the wrong .ser file when generating the reports. When you instrument your classes, Cobertura generates a .ser file containing basic information about each class. As your tests run, Cobertura adds additional information to this same data file. If the instrumented classes can not find the data file when running then they will create a new one. It is important that you use the same cobertura.ser file when instrumenting, running, and generating reports."
In my case, I experienced this issue when instrumented classes were recorded in one .ser file and execution generated another. Generating the HTML report "just" from the second .ser showed the problem mentioned in the question. Merging the two datafiles (.ser) and regenerating the report solved the issue.
Refer to http://cobertura.sourceforge.net/commandlinereference.html for "Merging Datafiles" information.
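The merge step from that reference boils down to a single cobertura-merge invocation. Sketched here as a small Python wrapper (the script name cobertura-merge.sh and the --datafile flag are from the Cobertura command line reference; the wrapper function itself is my own):

```python
import subprocess

def merge_ser_command(output_ser, input_sers, tool="cobertura-merge.sh"):
    """Build the argv for: cobertura-merge.sh --datafile <out.ser> <in1.ser> <in2.ser> ...
    Pass the result to subprocess.call(...) to actually run the merge."""
    return [tool, "--datafile", output_ser] + list(input_sers)

cmd = merge_ser_command("merged.ser", ["instrument.ser", "runtime.ser"])
print(" ".join(cmd))
```

Generate the report from the merged datafile afterwards, so both the instrumentation-time class data and the runtime hit counts are present.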

How to keep the unit test output in Jenkins

We have managed to get Jenkins to correctly parse the XML output from our tests and also include the error information when there is an error, so it is possible to see, directly in the test case in Jenkins, the error that occurred.
What we would like is for Jenkins to keep a log output, basically the console output, associated with each test case. This would enable anyone to see the actual console output of each test case, failed or not.
I haven't seen a way to do this.
* EDIT *
Clarification - I want to be able to see the actual test output directly in the Jenkins interface, the same way it does when there is an error, but for the whole output. I don't want Jenkins to merely keep the file as an artifact.
* END OF EDIT *
Can anyone help us with this?
In the Publish JUnit test result report (Post-build Actions) tick the Retain long standard output/error checkbox.
If checked, any standard output or error from a test suite will be
retained in the test results after the build completes. (This refers
only to additional messages printed to console, not to a failure stack
trace.) Such output is always kept if the test failed, but by default
lengthy output from passing tests is truncated to save space. Check
this option if you need to see every log message from even passing
tests, but beware that Jenkins's memory consumption can substantially
increase as a result, even if you never look at the test results!
This is simple to do: just ensure that the output file is included in the list of artifacts for that job, and it will be archived according to the job's configuration.
Not sure if you have solved it yet, but I just did something similar using Android and Jenkins.
What I did was use http://code.google.com/p/the-missing-android-xml-junit-test-runner/ to run the tests in the Android emulator. This creates the necessary JUnit-formatted XML files on the emulator file system.
Afterwards, simply use 'adb pull' to copy the files over, and configure Jenkins to parse the results. You can also archive the XML files as artifacts if necessary.
If you simply want to display the content of the result in the log, you can use an 'Execute Shell' step to print it to the console, where it will be captured in the log file.
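The pull-and-print step can be sketched in a few lines. Everything here is illustrative: the device path, file names, and helper names are made up, and the summarize logic only assumes standard JUnit XML:

```python
import subprocess
import xml.etree.ElementTree as ET

DEVICE_XML = "/data/data/com.example.app/files/junit-report.xml"  # hypothetical device path
LOCAL_XML = "junit-report.xml"

def summarize(xml_path):
    """Return (status, classname, name) for every testcase in a JUnit XML file."""
    root = ET.parse(xml_path).getroot()
    return [("FAIL" if case.find("failure") is not None else "PASS",
             case.get("classname"), case.get("name"))
            for case in root.iter("testcase")]

def pull_and_print(run=subprocess.call):
    """adb pull the report from the emulator, then print one line per test case."""
    run(["adb", "pull", DEVICE_XML, LOCAL_XML])
    for status, cls, name in summarize(LOCAL_XML):
        print("%s %s.%s" % (status, cls, name))
```

Run from an 'Execute Shell' (or 'Execute Python script') build step, the printed lines end up in the Jenkins console log for the build.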
Since Jenkins 1.386 there has been an option to Retain long standard output/error in each build configuration. So you just have to check the checkbox in the post-build actions.
http://hudson-ci.org/changelog.html#v1.386
When using a declarative pipeline, you can do it like so:
junit testResults: '**/build/test-results/*/*.xml', keepLongStdio: true
See the documentation:
If checked, the default behavior of failing a build on missing test result files or empty test results is changed to not affect the status of the build. Please note that this setting make it harder to spot misconfigured jobs or build failures where the test tool does not exit with an error code when not producing test report files.

VS2010 and Create Unit Tests... no tests generated

I'm trying to add some unit tests to an existing code base using Visual Studio 2010's unit test generator. However, in some cases when I open a class and right click --> Create Unit Tests..., after I select the methods to generate tests for, it creates what is essentially a blank test. Are there situations where this can happen? In every case I select at least one public method to generate tests for, and all it generates is this:
using TxRP.Controllers; //The location of the code to be tested
using Microsoft.VisualStudio.TestTools.UnitTesting;
That's it. Nothing else. Strange, right?
I should note that this is all MVC 2 controller code; I have been able to generate tests for other controllers with no problem, and all my controllers follow pretty much the same format. No error seems to be thrown: it happily generates the empty page and adds it to the project as if everything were fine.
Has anyone had experience with the same type of thing happening, and was there any answer found as to why?
UPDATE:
There is in fact an error during generation:
While trying to generate your tests, the following errors occurred:
Value cannot be null.
Parameter name: key
After some research, the only possible solution I found is that this error occurs if you're trying to generate tests into a test file that already exists. However, this solution is not working for me...
If you try to generate tests for a class which already has existing tests in another file in the project, it will just generate an empty file as described above. Changing the filename is not sufficient, nor is using a different location within the project. Basically it seems to enforce a one-test-file-per-class convention across the entire project.
This problem is caused by the previously generated test file having been moved to a folder other than the root folder in the test project.
Resolution
Move the test file into the test project root folder.
Generate the new tests
Move the test file back to the folder location you want in the test project.
I have no clue why they don't call it a BUG! In typical enterprise-level software development it is more than a coincidence that multiple people generate unit tests for different methods of the same class at different points in time.
We always end up with this error and it is not helping us in any way! It feels as if the context menu "Create Unit Tests" has little use!
Error description:
"While trying to generate your tests, the following errors occurred:
Value cannot be null.
Parameter name: key
"

Apps with label XYZ could not be found

Today I ran into an error and have no clue how to fix it.
Error: App with label XYZ could not be found. Are you sure your INSTALLED_APPS setting is correct?
Where XYZ stands for the app name that I am trying to reset. This error shows up every time I try to reset it (manage.py reset XYZ). Showing all the SQL code works.
Even manage.py validate shows no error.
I have already commented out every single line of code in models.py that I touched in the last three months (function by function, model by model), and even with no models left I get this error.
At http://code.djangoproject.com/ticket/10706 I found a bug report about this error. I also applied one of the patches to locate the error; it raises an exception so you have a traceback, but even there, there is no sign of which of my files the error occurred in.
I don't want to paste my code right now, because it is nearly 1000 lines in the file I edited the most.
If someone here has had the same error, please tell me where I can look for the problem. In that case I can post the important part of the source; otherwise it would be too much at once.
Thank you for helping!!!
I had a similar problem, and I only got it working after creating an empty models.py file.
I was running Django 1.3
Try to clean up all your build artifacts: build files, temporary files and so on. Also, ./manage.py test XYZ will show you a stack trace. Then try running python with the -m pdb option and stepping through the code to see where it fails and why.
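To narrow down which app (or which import inside one) is actually broken, a crude stdlib-only loop over your installed apps often beats guessing. A sketch (the helper name is my own; feed in your settings.INSTALLED_APPS where the placeholder list is):

```python
import importlib
import traceback

def check_apps(installed_apps):
    """Try to import each app's models module; map app -> None (ok) or the error line."""
    results = {}
    for app in installed_apps:
        try:
            importlib.import_module(app + ".models")
            results[app] = None
        except Exception:
            results[app] = traceback.format_exc().splitlines()[-1]
    return results

# Placeholder list; use your settings.INSTALLED_APPS instead.
for app, err in sorted(check_apps(["nonexistent_app"]).items()):
    print(app, "->", err or "OK")
```

The "App with label XYZ could not be found" error often means an exception during the import of some app's models module, so the first app reporting an error line here is the place to look.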
You don't specify which server you're using. With Apache you'll almost certainly need a restart for the changes to take effect. If you're using the development server, try restarting that. If this doesn't work you may need to give us some more details.
I'd also check your paths, as you may have edited one file but be using a different one.
Plus check what's still in your database, as some of your previous versions may be interfering.
Finally, as a last resort, I'd try a clean install (on another Django instance) and see if that goes cleanly. If it does, I'd know I had a conflict; if not, the problem is in the code.