SLOC in cppcheck - c++

I want to write a checker that can be added to the other checkers in CppCheck. This checker must check the SLOC of every member function; for example, a function should contain no more than 200 significant lines of code. But in CppCheck I have only found a method that checks for the existence of a body, hasBody(), not one that counts lines.

I am a cppcheck developer. I am no expert in this topic, but I think it depends on exactly what you want to count. How many lines is this:
void f() { int x=3; int y=x+2; dostuff(x+y+4); }
I would guess that you want to go through the tokens and count semicolons or something:
unsigned int lines = 0;
for (const Token *tok = functionScope->classStart; tok != functionScope->classEnd; tok = tok->next()) {
    if (tok->str() == ";")  // counts statements rather than physical lines
        ++lines;
}
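If you want physical lines rather than statements, a rough sketch of the same loop (assuming the Token::linenr() accessor on Cppcheck's token list) could count the distinct line numbers that appear inside the body instead:
#include <set>

// Count the distinct source lines that contain at least one token
// inside the function body - a rough SLOC measure.
std::set<unsigned int> lineNumbers;
for (const Token *tok = functionScope->classStart; tok != functionScope->classEnd; tok = tok->next())
    lineNumbers.insert(tok->linenr());
if (lineNumbers.size() > 200) {
    // report the violation here with your checker's usual error reporting
}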
I think this checker you suggest is interesting, but it does not fit well in the core cppcheck tool. I would suggest that you write an addon. I will be happy to add it to our addons folder, show it in the GUI, etc.
By the way, I have thought it would be nice to integrate (execute and read the results of) ohcount, cccc, or similar tools into the GUI so that extended statistics can be shown.

Corrupted strings coming from libapt-pkg

I'm attempting to make a GUI package manager for Ubuntu, similar to Synaptic, that uses libapt, but I have run into a few strange bugs to do with getting information from the pkgCache. Specifically, they occur when using a pkgIterator and trying to access the FullName() and Name() properties, while others like Section() are fine (you can look at the documentation here).
Looking at the way Synaptic does it, it is pretty simple, and I tried replicating it 1:1 with no luck:
const char *name() {
    const char *n = package->Name();
    if (n == NULL)
        return "";
    return n;
}
This will usually return a garbage string that looks as if a null terminator was missed. The interesting thing is that all of the strings for packages in the same group from repositories have the same garbage text, while locally installed packages have a NULL pointer and therefore return an empty string.
Any ideas? I believe that I'm opening the pkgCache correctly and the iterator is not NULL. There are also not many places to look online when debugging libapt, so any references would be helpful as well!

Unit testing - log and then fail?

I am used to test-driving my code. Now that I am new to Go, I am trying to get it right as fast as possible. I am using the testing package in the standard library, which seems to be good enough. (I also like that it is not yet another external dependency; we are currently at 2 dependencies overall, compared to any Java or Ruby project.) Anyway, it looks like an assert in Go looks like this:
func TestSomething(t *testing.T) {
    something := false
    if !something {
        t.Log("Oh noes - something is false")
        t.Fail()
    }
}
I find this verbose and would like to do it on one line instead:
Assert( something, "Oh noes - something is false" )
or something like that. I hope that I have missed something obvious here. What is the best/idiomatic way to do it in Go?
UPDATE: just to clarify. If I were to do something like this:
func AssertTrue(t *testing.T, value bool, message string) {
    if !value {
        t.Log(message)
        t.Fail()
    }
}
and then write my test like this
func TestSomething(t *testing.T) {
    something := false
    AssertTrue(t, something, "Oh noes - something is false")
}
then would that not be the Go way to do it?
There are external packages that can be integrated with the stock testing framework. One of them, gocheck, which I wrote long ago, was intended to sort out exactly that kind of use case.
With it, a test case looks like this, for example:
func (s *Suite) TestFoo(c *gocheck.C) {
    // If this succeeds the world is doomed.
    c.Assert("line 1\nline 2", gocheck.Equals, "line 3")
}
You'd run that as usual, with go test, and the failure in that check would be reported as:
----------------------------------------------------------------------
FAIL: foo_test.go:34: Suite.TestFoo
all_test.go:34:
// If this succeeds the world is doomed.
c.Assert("line 1\nline 2", gocheck.Equals, "line 3")
... obtained string = "" +
... "line 1\n" +
... "line 2"
... expected string = "line 3"
Note how the comment right above the code was included in the reported failure.
There are also a number of other usual features, such as suite- and test-specific set up and tear down routines, and so on. Please check out the web page for more details.
It's well maintained, as I and other people use it in a number of active projects, so feel free to come on board, or check out the other similar projects that may suit your taste better.
For examples of gocheck use, please have a look at packages such as mgo, goyaml, goamz, pipe, vclock, juju (massive code base), lpad, gozk, goetveld, tomb, etc. gocheck also manages to test itself; it was quite fun to bootstrap that.
But when you try to write tests the way Uncle Bob (Robert C. Martin) recommends, with one assert per test and long function names, then a simple assert library like http://github.com/stretchr/testify/assert can make it much faster and easier.
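For example, with testify the one-liner from the question looks roughly like this (a sketch; it assumes you add the stretchr/testify dependency):
import (
    "testing"

    "github.com/stretchr/testify/assert"
)

func TestSomething(t *testing.T) {
    something := false
    // Fails the test and logs the message if the value is not true.
    assert.True(t, something, "Oh noes - something is false")
}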
I discourage writing tests the way you seem to desire. It is not by chance that the whole stdlib uses the, as you call it, "verbose" way.
It is undeniably more lines, but there are several advantages to this approach.
If you read Why does Go not have assertions? and apply s/error handling/test failure reporting/g, you can get a picture of why the several "assert" packages for Go testing are not a good idea to use.
Once again, the proof is the huge code base of the stdlib.
The idiomatic way is the way you have above. Also, you don't have to log any message if you don't want to.
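Note that t.Errorf already combines the log and the fail into a single call (it is equivalent to t.Logf followed by t.Fail), so the test from the question can be written as:
func TestSomething(t *testing.T) {
    something := false
    if !something {
        // Errorf is Logf followed by Fail.
        t.Errorf("Oh noes - something is false")
    }
}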
As stated in the Go FAQ:
Why does Go not have assertions?
Go doesn't provide assertions. They are undeniably convenient, but our
experience has been that programmers use them as a crutch to avoid
thinking about proper error handling and reporting. Proper error
handling means that servers continue operation after non-fatal errors
instead of crashing. Proper error reporting means that errors are
direct and to the point, saving the programmer from interpreting a
large crash trace. Precise errors are particularly important when the
programmer seeing the errors is not familiar with the code.
We understand that this is a point of contention. There are many
things in the Go language and libraries that differ from modern
practices, simply because we feel it's sometimes worth trying a
different approach.
UPDATE
Based on your update, that is not idiomatic Go. What you are doing is, in essence, designing a test extension framework to mirror what you get in the xUnit frameworks. While there is nothing fundamentally wrong with that from an engineering perspective, it does raise questions about the value and cost of maintaining this extension library. Additionally, you are creating an in-house standard that will potentially ruffle feathers. The biggest thing about Go is that it is not C or Java or C++ or Python, and things should be done the way the language is designed.

Unit test directive inside a directive

I'm having trouble unit testing a directive that wraps the ng-grid component.
I've written this plunker that shows the issue: http://plnkr.co/edit/HlB8Bt9M2TzsyM6XaDso?p=preview
There is a lot of code, I know, but I can't reduce it more than that.
Basically, there is a custom-grid directive that wraps the ng-grid component from angular-ui. I've made this directive because I have lots of grids in my app and I don't want to duplicate the grid configuration.
The grid displayed on top of the test results uses this directive, so you can see it works fine :)
However, there is probably something I'm missing about how to test this directive.
I've written a simple test that asserts that the first row, first column displays 'Martoni', but it fails. The same test using the ng-grid directive directly passes.
Any idea what's wrong in my test code?
http://plnkr.co/edit/WwTyuQXNklL7CnjOxsB2?p=preview
I've had issues before when calling directives recursively (or at least nested), particularly when they make use of the $compile service (and especially ng-repeat). I'm convinced there's a bug there, but I haven't taken the time to find an isolated case. Anyway, I think what you've found is some sort of bug, but there's an easy workaround.
If you look at the source for ngGrid you'll see that columns are only added if the width is big enough. When I stepped through your second example, w was negative, which meant addCol was never called:
var w = col.width + colwidths;
if (col.pinned) {
    addCol(col);
    var newLeft = i > 0 ? (scrollLeft + totalLeft) : scrollLeft;
    domUtilityService.setColLeft(col, newLeft, self);
    totalLeft += col.width;
} else {
    if (w >= scrollLeft) {
        if (colwidths <= scrollLeft + self.rootDim.outerWidth) {
            addCol(col);
        }
    }
}
colwidths += col.width;
This led me to believe that your elements had zero height/width, which could be because they weren't actually in the document while they were being unit tested.
So, to fix it, I added the following before your compile(elm)($scope); call:
angular.element('body').append(elm);
And then to clean up:
afterEach(function () {
    angular.element(elm).remove();
});
I don't know if it was intentional or not, but you called $new() on $rootScope in the first unit test and didn't use the result to compile with, whereas you did in the second.

How to test asynchronous code

I've written my own access layer to a game engine. There is a GameLoop which gets called every frame and lets me process my own code. I'm able to do specific things and to check whether these things have happened. In a very basic way it could look like this:
void cycle()
{
    //set a specific value
    Engine::setText("Hello World");
    //read the value
    std::string text = Engine::getText();
}
I want to test whether my Engine layer is working by writing automated tests. I have some experience in using the Boost Unit Test Framework for simple comparison tests like this.
The problem is that some things I want the engine to do are only processed after the call to cycle(). So calling Engine::getText() directly after Engine::setText(...) would return an empty string. If I waited until the next call of cycle(), the right value would be returned.
I am now wondering how I should write my tests if it is not possible to check the result in the same cycle. Are there any best practices? Is it possible to use the "traditional testing" approach provided by the Boost Unit Test Framework in such an environment? Are there perhaps other frameworks aimed at such a specialised case?
I'm using C++ for everything here, but I could imagine that there are answers unrelated to the programming language.
UPDATE:
It is not possible to access the Engine outside of cycle()
In your example above, std::string text = Engine::getText(); is the code you want to remember from one cycle but execute in the next. You can save it for later execution. For example, using C++11 you could use a lambda to wrap the check in a simple function specified inline.
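A minimal sketch of that idea, assuming the Engine interface from the question and a hypothetical pendingChecks queue that cycle() drains (the Boost.Test macro is only used because the question mentions that framework):
#include <functional>
#include <vector>

std::vector<std::function<void()>> pendingChecks;

void cycle()
{
    // Run the checks that were queued during the previous cycle,
    // after the engine has had a frame to process the calls.
    for (auto &check : pendingChecks)
        check();
    pendingChecks.clear();

    Engine::setText("Hello World");

    // Queue the check for the next cycle.
    pendingChecks.push_back([] {
        BOOST_CHECK_EQUAL(Engine::getText(), "Hello World");
    });
}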
There are two options here:
If the library you have can be used synchronously, or exposes a C++11-futures-like facility (which can indicate the readiness of the result), then in your test case you can do something like this:
void testcycle()
{
    //set a specific value
    Engine::setText("Hello World");
    while (!Engine::isResultReady());
    //read the value
    assert(Engine::getText() == "WHATEVERVALUEYOUEXPECT");
}
If you don't have the above, the best you can do is use a timeout (this is not a good option though, because you may get spurious failures):
#include <cassert>
#include <chrono>
#include <thread>

void testcycle()
{
    using namespace std::chrono;
    //set a specific value
    Engine::setText("Hello World");
    const auto start = steady_clock::now();
    while (Engine::getText() != "WHATEVERVALUEYOUEXPECT") {
        std::this_thread::sleep_for(milliseconds(1));
        if (steady_clock::now() - start > seconds(1)) // you can put whatever max time
            assert(0);
    }
}

Unit Tests for comparing text files in NUnit

I have a class that processes 2 XML files and produces a text file.
I would like to write a bunch of unit / integration tests that can individually pass or fail for this class that do the following:
For input A and B, generate the output.
Compare the contents of the generated file to the expected output.
When the actual contents differ from the expected contents, fail and display some useful information about the differences.
Below is the prototype for the class along with my first stab at unit tests.
Is there a pattern I should be using for this sort of testing, or do people tend to write zillions of TestX() functions?
Is there a better way to coax text-file differences out of NUnit? Should I embed a text-file diff algorithm?
class ReportGenerator
{
    string Generate(string inputPathA, string inputPathB)
    {
        //do stuff
    }
}
[TestFixture]
public class ReportGeneratorTests
{
    static void Diff(string pathToExpectedResult, string pathToActualResult)
    {
        using (StreamReader rs1 = File.OpenText(pathToExpectedResult))
        {
            using (StreamReader rs2 = File.OpenText(pathToActualResult))
            {
                string actualContents = rs2.ReadToEnd();
                string expectedContents = rs1.ReadToEnd();
                //this works, but the output could be a LOT more useful.
                Assert.AreEqual(expectedContents, actualContents);
            }
        }
    }

    static void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
    {
        ReportGenerator obj = new ReportGenerator();
        string pathToResult = obj.Generate(pathToInputA, pathToInputB);
        Diff(pathToExpectedResult, pathToResult);
    }

    [Test]
    public void TestX()
    {
        TestGenerate("x1.xml", "x2.xml", "x-expected.txt");
    }

    [Test]
    public void TestY()
    {
        TestGenerate("y1.xml", "y2.xml", "y-expected.txt");
    }

    //etc...
}
Update
I'm not interested in testing the diff functionality. I just want to use it to produce more readable failures.
As for the multiple tests with different data, use the NUnit RowTest extension:
using NUnit.Framework.Extensions;

[RowTest]
[Row("x1.xml", "x2.xml", "x-expected.txt")]
[Row("y1.xml", "y2.xml", "y-expected.txt")]
public void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
    ReportGenerator obj = new ReportGenerator();
    string pathToResult = obj.Generate(pathToInputA, pathToInputB);
    Diff(pathToExpectedResult, pathToResult);
}
You are probably asking about testing against "gold" data. I don't know if there is a specific term for this kind of testing accepted world-wide, but this is how we do it.
Create a base fixture class. It basically has a "void DoTest(string fileName)" method, which reads the specified file into memory, executes the abstract transformation method "string Transform(string text)", then reads fileName.gold from the same place and compares the transformed text with what was expected. If the content is different, it throws an exception. The exception contains the line number of the first difference as well as the text of the expected and actual lines. As the text is stable, this is usually enough information to spot the problem right away. Be sure to mark the lines with "Expected:" and "Actual:", or you will be guessing forever which is which when looking at the test results.
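A minimal sketch of such a base fixture could look like this (the class name, the .gold convention shown, and the exception message format are only illustrative):
using System;
using System.IO;

public abstract class GoldDataFixture
{
    protected abstract string Transform(string text);

    protected void DoTest(string fileName)
    {
        string[] expected = File.ReadAllLines(fileName + ".gold");
        string[] actual = Transform(File.ReadAllText(fileName))
            .Split(new[] { "\r\n", "\n" }, StringSplitOptions.None);

        int count = Math.Min(expected.Length, actual.Length);
        for (int i = 0; i < count; i++)
        {
            if (expected[i] != actual[i])
                throw new Exception(string.Format(
                    "Difference at line {0}\nExpected: {1}\nActual:   {2}",
                    i + 1, expected[i], actual[i]));
        }
        if (expected.Length != actual.Length)
            throw new Exception("The outputs differ in length after line " + count);
    }
}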
Then you will have specific test fixtures where you implement the Transform method that does the right job, and tests which look like this:
[Test] public void TestX() { DoTest("X"); }
[Test] public void TestY() { DoTest("Y"); }
The name of the failed test will instantly tell you what is broken. Of course, you can use row testing to group similar tests. Having separate tests also helps in a number of situations, like ignoring tests, communicating tests to colleagues, and so on. It is not a big deal to create a snippet which will create a test for you in a second; you will spend much more time preparing the data.
Then you will also need some test data and a way for your base fixture to find it; be sure to set up rules about this for the project. If a test fails, dump the actual output to a file next to the gold file, and erase it if the test passes. This way you can use a diff tool when needed. When no gold data is found, the test fails with an appropriate message, but the actual output is written anyway, so you can check that it is correct and copy it to become the "gold".
I would probably write a single unit test that contains a loop. Inside the loop, I'd read two XML files and a diff file, then diff the XML files (without writing the result to disk) and compare it to the diff file read from disk. The files would be numbered, e.g. a1.xml, b1.xml, diff1.txt; a2.xml, b2.xml, diff2.txt; a3.xml, b3.xml, diff3.txt, etc., and the loop stops when it doesn't find the next number.
Then, you can write new tests just by adding new text files.
Rather than call .AreEqual, you could parse the two input streams yourself, keep a count of the line and column, and compare the contents. As soon as you find a difference, you can generate a message like...
Line 32 Column 12 - Found 'x' when 'y' was expected
You could optionally enhance that by displaying multiple lines of output
Difference at Line 32 Column 12, first difference shown
A = this is a txst
B = this is a tests
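A sketch of that line/column comparison over two readers (the helper name is hypothetical; NUnit's Assert.Fail does the reporting):
using System.IO;
using NUnit.Framework;

static void AssertStreamsEqual(TextReader expected, TextReader actual)
{
    int line = 1, column = 1;
    while (true)
    {
        int e = expected.Read();
        int a = actual.Read();
        if (e != a)
            Assert.Fail(string.Format(
                "Line {0} Column {1} - Found '{2}' when '{3}' was expected",
                line, column,
                a == -1 ? "<EOF>" : ((char)a).ToString(),
                e == -1 ? "<EOF>" : ((char)e).ToString()));
        if (e == -1)       // both streams ended at the same position
            return;
        if ((char)e == '\n') { line++; column = 1; } else { column++; }
    }
}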
Note that, as a rule, I generally only generate one of the two streams through my code. The other I grab from a test/text file, having verified by eye or some other method that the data it contains is correct!
I would probably use XmlReader to iterate through the files and compare them. When I hit a difference I would display an XPath to the location where the files are different.
PS: But in reality it was always enough for me to just read the whole file into a string and compare the two strings. For the reporting it is enough to see that the test failed; then, when debugging, I usually diff the files using Araxis Merge to see where exactly the issues are.