Unit test for a vector - C++

Please, I was searching here, but couldn't get it working.
I have a function which returns a vector of sorted numbers. I then tried to write a unit test for this very vector.
Here is what I have right now:
#include "stdafx.h"
#include "CppUnitTest.h"
#include "Finder.h"
using namespace Microsoft::VisualStudio::CppUnitTestFramework;
namespace FinderUnitTest
{
TEST_CLASS(UnitTest1)
{
public:
TEST_METHOD(TestMethod1)
{
Finder f;
std::vector<int> v1 = f.find_words();
//find_words(); is working okay without tests
for (int i=0;i<1;i++)
Assert::AreEqual(57,v1[i]);
}
};
}
It really doesn't matter how many iterations the for loop makes. I'd like the unit test to run without errors, but right now I get this one:
Message: Invalid parameter detected in function std::vector<...>::operator [], c:\program files (x86)\microsoft visual studio\2017\community\vc\tools\msvc\14.12.25827\include\vector line 1795. Expression: "vector subscript out of range"
As I understand it, when I build my two projects, the function which returns the vector of sorted numbers doesn't have any data, because the vector is empty when I run the test. Am I right?
I just want to compare the first two numbers of my vector with 57.

If you are expecting find_words to return a vector like [57, 57, ...], then this test should fail. However, it should not crash; rather, it should fail an assertion. You want to fix your checks so they detect the problem as an assertion failure:
Finder f;
std::vector<int> v1 = f.find_words();
Assert::IsTrue(v1.size() >= 2); // there are at least two entries
Assert::AreEqual(57, v1[0]);    // the first is 57
Assert::AreEqual(57, v1[1]);    // the second is 57
I don't see where you gave Finder anything to search, but if you say it should find 57s, you're the boss; just be sure to check for that. Once the unit test fails this assertion, it has done its job, and you can go back and see whether you gave Finder the right inputs or whether there is a bug inside Finder.
X21's comment was good general programming practice for detecting and avoiding the crash: it avoids the crash by not checking the values at all, since doing so would be an error. But it was not directed at writing a unit test. A unit test must detect and assert when the output is not what you expect; inside a test, even crashing would be better than not checking at all.

'identifier undefined' in C++11 for-loop with USTRUCT

I am implementing logging functionality in Unreal Engine 4.27 (in C++). A key part of my code is a function that is called once per game-tick. This function is responsible for iterating over an array of actors that I would like to log data for, checking whether a new log entry should be written at this point in time and calling the necessary functions to do that.
I am iterating over the elements of a TArray of UStructs: LogObject->LoggingInfo = TArray<FActorLoggingInformation>. This array is defined as a UPROPERTY of LogObject. In the loop I have to change the values of the elements, so I want to work with the original items and "label" the current item as ActorLoggingInfo. I have seen this done generally in C++ and also with TArrays. And yet my code does not work: there is no error message, but ActorLoggingInfo is undefined, so the if-condition is never met.
This is the for-loop:
for (FActorLoggingInformation& ActorLoggingInfo : LogObject->LoggingInfo) {
    if (ActorLoggingInfo.LogNextTick == true) {
        ActorLoggingInfo.LogNextTick = false;
        ...
    }
    ...
}
This is the definition of FActorLoggingInformation:
USTRUCT(BlueprintType)
struct FActorLoggingInformation
{
    GENERATED_BODY()
public:
    FActorLoggingInformation()
    {
    }

    FActorLoggingInformation(int32 LogTimer, AActor* Actor, FString LogName)
    {
        this->LogTimer = LogTimer;
        this->LogNextTick = false;
        ...
    }

    // Specifies Logging Frequency in ms
    UPROPERTY(BlueprintReadOnly, VisibleAnywhere)
    int32 LogTimer;

    bool LogNextTick;
    ...
};
This is the debugger at run-time (screenshot omitted; it showed ActorLoggingInfo as undefined).
Additional Notes:
1. Something that consistently works for me is omitting the &, using:
for (FActorLoggingInformation ActorLoggingInfo : LogObject->LoggingInfo)
However, this creates useless duplicates on a per-tick basis and complicates applying changes to the original objects from within the for-loop, so it is not a viable option (see the sketch after these notes).
2. I have also tried auto& instead of FActorLoggingInformation& as used in the examples above, but I encountered the same issue, so I thought it would be best to be as explicit as possible.
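To illustrate note 1, here is a minimal standard-C++ sketch (with std::vector standing in for TArray and a simplified struct; the names are made up) of the difference between iterating by value and by reference:

#include <iostream>
#include <vector>

struct FLogInfo { bool LogNextTick = true; };

int main() {
    std::vector<FLogInfo> Infos(3);

    // By value: each iteration works on a copy, so the originals are untouched.
    for (FLogInfo Info : Infos)
        Info.LogNextTick = false;
    std::cout << Infos[0].LogNextTick << '\n'; // prints 1: original unchanged

    // By reference: mutations apply to the elements in the container itself.
    for (FLogInfo& Info : Infos)
        Info.LogNextTick = false;
    std::cout << Infos[0].LogNextTick << '\n'; // prints 0: original modified
    return 0;
}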
I would be very thankful if you had any ideas how I can fix this :)
Thanks in advance!
Thanks to Avi Berger for helping me find my problem!
In fact, ActorLoggingInfo was never undefined, and the code within the body of the if-clause was executed as well (it just didn't do what it was intended to do).
When stepping through the code in the debugger, it never showed the steps within the if-body, and ActorLoggingInfo was shown as undefined, so when no logs were written I assumed it had something to do with that, rather than with my output function not working properly. Lesson learnt: do not blindly trust the debugger :)

How to do elegant unit testing on graph operators

I have an application that deals with graph computation. I want to cover it with unit tests, but I have found it hard to test.
The main classes are as follows:
Grid stores the graph structure.
GridInput parses the input file and saves it into a Grid.
GridOperatorA performs some operation on the Grid.
GridOperatorB performs some operation on the Grid.
The production code looks something like this:
string configure_file = "data.txt";
GridInput input(configure_file);
Grid grid = input.parseGrid();
GridOperatorA a;
a.apply(grid);
GridOperatorB b;
b.apply(grid);
I find this code hard to test.
My unit test code is shown below:
// unit test for grid input
string configure_file = "data.txt";
GridInput input(configure_file);
Grid grid = input.parseGrid();
// check grid status from the input file
assert(grid.someAttribute(1) == {1,2,3,4,...,100}); // long int array, hard to understand
...
assert(grid.someAttribute(5) == {100,101,102,...,200}); // long int array, hard to understand

// unit test for operator A
string configure_file = "data.txt";
GridInput input(configure_file);
Grid grid = input.parseGrid();
GridOperatorA a;
a.apply(grid);
// check grid status after operator A
assert(grid.someAttribute(1) == {1,3,7,4,...,46}); // long int array, hard to understand
...
assert(grid.someAttribute(5) == {59,78,...,32}); // long int array, hard to understand

// unit test for operator B
string configure_file = "data.txt";
GridInput input(configure_file);
Grid grid = input.parseGrid();
GridOperatorA a;
a.apply(grid);
GridOperatorB b;
b.apply(grid);
// check grid status after operator B
assert(grid.someAttribute(1) == {3,2,7,9,...,23}); // long int array, hard to understand
...
assert(grid.someAttribute(5) == {38,76,...,13}); // long int array, hard to understand
In my opinion, these unit tests are not good; they have several weaknesses:
The tests are slow: in order to test OperatorA and OperatorB they need to do file IO.
The tests are not clear: they have to check the grid status after each operator, but checking lots of raw arrays makes it hard for a programmer to understand what the arrays stand for. A few days later, no programmer can tell what has happened.
The tests cover only one configuration file; if I need to test grids from many configuration files, there will be even more hard-to-understand arrays.
I have read about techniques for breaking dependencies, such as mock objects. I can mock the grid read from the configuration file, but the mock data looks just like the data stored in the file. I can mock the Grid after OperatorA, but the mock data looks just like the grid status after OperatorA. Either way, I still end up with a lot of hard-to-understand arrays.
I do not know how to write elegant unit tests in this situation. Any advice is appreciated. Thanks for your time.
To get rid of the IO:
you can pass something like a data provider to GridInput. In your production code it will read the file; in test code you can replace it with a test double (stub) that provides hardcoded data. You already mention that above (a sketch follows below).
you could also let "someone else" (i.e. other code) take care of loading the file and just pass the loaded data to the Grid. Looking at the Grid alone, testing gets simpler because no file handling is required at all.
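As a sketch of the first option (all names here are hypothetical; it assumes GridInput could be given an abstract data source instead of a file name):

#include <fstream>
#include <sstream>
#include <string>

// Hypothetical abstraction over where the grid data comes from.
struct GridDataSource {
    virtual ~GridDataSource() = default;
    virtual std::string readAll() = 0;
};

// Production implementation: reads the real configuration file.
struct FileDataSource : GridDataSource {
    std::string path;
    explicit FileDataSource(std::string p) : path(std::move(p)) {}
    std::string readAll() override {
        std::ifstream in(path);
        std::ostringstream out;
        out << in.rdbuf();
        return out.str();
    }
};

// Test double: serves hardcoded data, so the test does no file IO at all.
struct StubDataSource : GridDataSource {
    std::string data;
    explicit StubDataSource(std::string d) : data(std::move(d)) {}
    std::string readAll() override { return data; }
};

// Production code would hand GridInput a FileDataSource("data.txt");
// a test would hand it a StubDataSource("1 2 3 4\n") instead.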
To make the tests more readable you can do some of this:
Use nice test method names that are not just testMethod. Name them after what you are testing; you could use your comments as method names. Test only one aspect in a single test.
Replace the inline arrays with properly named constants. The names of the constants help explain what is being checked at a given assertion.
The same holds for the parameters to the someAttribute() method.
Another option is to create your own assert methods to hide some of the details, something like assertThatMySpecialConditionIsMet(grid); a sketch follows this list.
You could also write a test data generator to avoid hardcoding the arrays. That is not something I would suggest for the first test, but after a couple of tests a pattern might become visible that can be moved to a generator.
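For instance, combining a named constant with a custom assert method might look like this (Grid is stubbed out here, and all names are invented for illustration):

#include <cassert>
#include <vector>

// Minimal stand-in for the real Grid class, for illustration only.
struct Grid {
    std::vector<int> nodeIds;
    const std::vector<int>& someAttribute(int /*key*/) const { return nodeIds; }
};

// A named constant documents what the expected data means, instead of an
// anonymous inline array that nobody can decipher a few days later.
const std::vector<int> kNodeIdsAfterOperatorA = {1, 3, 7, 4, 46};

// A custom assert method hides the raw attribute lookup behind a name.
void assertNodeIdsMatch(const Grid& grid, const std::vector<int>& expected) {
    assert(grid.someAttribute(1) == expected);
}

int main() {
    Grid grid{{1, 3, 7, 4, 46}};   // pretend operator A has already run
    assertNodeIdsMatch(grid, kNodeIdsAfterOperatorA);
    return 0;
}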
Just a couple of hints to get you started.... :-)

Why storage fault in regex destructor?

I am getting a storage fault when my code destructs a regex and I am mystified as to the reason. I suspect I am missing something stupid about regex.
A little background: I am a reasonably experienced C++ developer but this is my first adventure with the regex class. My environment is a little unusual: I edit and alpha test in MS Visual C++ and then take the code to another environment. The other environment is fully Posix-compliant and just happens to be an IBM mainframe. The code works fine on Windows but fails every time on the mainframe. The problem is not something fundamental to my mixed environment: I have been working in this pair of environments in this way for years with complete C++ success.
I define the regex in the class declaration:
#include <regex>
...
class FilterEvalEGNX : public FilterEval
{
    ...
    std::tr1::basic_regex<char> regexObject;
    // I also tried plain regex with no difference
Subsequently, in the class implementation, I assign a pattern to the regex. The real code is more complex than this, but I simplified it down to assigning a static string, to eliminate any possible side effects from the way the string would be handled in real life.
std::tr1::regex::flag_type flags = std::tr1::regex::extended;
// I have also tried ECMA and it made no difference

try
{
    static const char pat[] = "(ISPPROF|SPFTEMP)";
    regexObject.assign(pat, flags);
}
catch (std::tr1::regex_error &e)
{
    // handle regex error
}
That works without error. Of course, there is subsequent pattern-matching code, but it is not part of the problem: if I destruct the class immediately after the above code, I still get the storage fault.
I don't do anything to the regex in my class destructor. The rest of the class has been working for years; I am adding the regex now. I think some "external" overlay of the regex is unlikely.
Here is the traceback of the calls leading up to the fault:
std::tr1::_EBCDIC::_Destroy(std::tr1::_EBCDIC::_Node_base*)
+00000066 40 CRTE128N Exception
std::tr1::_EBCDIC::basic_regex<char,std::tr1::_EBCDIC::regex
+000000C8 2022 FilterEvalEGNX.C Call
std::tr1::_EBCDIC::basic_regex<char,std::tr1::_EBCDIC::regex
+0000007C 1913 FilterEvalEGNX.C Call
FilterEvalEGNX::~FilterEvalEGNX()
The code in the vicinity of line 1913 of regex is
~basic_regex()
{   // destroy the object
    _Tidy();
}
The code in the vicinity of line 2022 of regex is
void _Tidy()
{   // free all storage
    if (_Rep && --_Rep->_Refs == 0)
        _Destroy(_Rep);
    _Rep = 0;
}
_Destroy() appears to be implemented in the run-time and I do not think I have the source.
Any ideas? Thanks,
Believe it or not, it appears to be a bug in the C++ runtime. I tweaked my simple example and can now duplicate the problem in a 15-line main(). I am going to run it by some peers and then report it to IBM. They actually fix this stuff! They don't just respond with "yes, you have found an issue."
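For reference, a minimal reproduction along the lines described would look roughly like this (a sketch based only on the code above; the author's actual 15-line main() is not shown):

#include <regex>

int main() {
    std::tr1::basic_regex<char> regexObject;
    std::tr1::regex::flag_type flags = std::tr1::regex::extended;

    regexObject.assign("(ISPPROF|SPFTEMP)", flags);

    return 0;
}   // storage fault reported here, in ~basic_regex() via _Tidy()/_Destroy()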

Clang code completion for not-well-formed code?

I'm working on adding code completion via Clang to a text editor, to turn it into an IDE.
The source code:
struct s {
    int a;
    float b;
};

void main() {
    s var;
    var.
The problem is that code completion for the position after the dot returns nothing, yet if I add } at the end and retry code completion for the same position, it shows the correct list.
I understand that the main function definition should be closed, but users frequently type characters one by one and don't want to close the function first, then jump back to the variable to get code completion. How can this be worked around to avoid that back-and-forth?
My idea was to read the diagnostics and append } when I get the corresponding diagnostic, but that is an unwelcome workaround. Can Clang be smart enough to handle this itself?
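One way to implement that workaround with the libclang C API is sketched below. Instead of waiting for diagnostics, it naively appends a closing } for every unbalanced { before requesting completion. The brace counting ignores string literals and comments, so a real editor would want a proper lexer, and the helper names here are made up:

#include <clang-c/Index.h>
#include <string>

// Naive brace balancing: append a '}' for every unclosed '{'.
// (Braces inside string literals or comments will confuse this.)
static std::string closeOpenBraces(std::string code) {
    int depth = 0;
    for (char c : code) {
        if (c == '{') ++depth;
        else if (c == '}' && depth > 0) --depth;
    }
    code.append(depth, '}');
    return code;
}

// Run completion on the patched buffer. The completion position lies before
// the appended braces, so the original line/column stay valid.
static CXCodeCompleteResults* completeAt(CXTranslationUnit tu,
                                         const char* filename,
                                         const std::string& buffer,
                                         unsigned line, unsigned column) {
    std::string patched = closeOpenBraces(buffer);
    CXUnsavedFile unsaved = {filename, patched.c_str(),
                             static_cast<unsigned long>(patched.size())};
    return clang_codeCompleteAt(tu, filename, line, column,
                                &unsaved, 1, clang_defaultCodeCompleteOptions());
}

Passing the patched text as an unsaved file means nothing has to be written to disk, and the results are freed afterwards with clang_disposeCodeCompleteResults().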

Unit Tests for comparing text files in NUnit

I have a class that processes two XML files and produces a text file.
I would like to write a bunch of unit/integration tests for this class, each of which can individually pass or fail, and which do the following:
For input A and B, generate the output.
Compare the contents of the generated file to the expected output.
When the actual contents differ from the expected contents, fail and display some useful information about the differences.
Below is the prototype for the class along with my first stab at unit tests.
Is there a pattern I should be using for this sort of testing, or do people tend to write zillions of TestX() functions?
Is there a better way to coax text-file differences from NUnit? Should I embed a textfile diff algorithm?
class ReportGenerator
{
    public string Generate(string inputPathA, string inputPathB)
    {
        // do stuff
    }
}
[TestFixture]
public class ReportGeneratorTests
{
    static void Diff(string pathToExpectedResult, string pathToActualResult)
    {
        using (StreamReader rs1 = File.OpenText(pathToExpectedResult))
        using (StreamReader rs2 = File.OpenText(pathToActualResult))
        {
            string actualContents = rs2.ReadToEnd();
            string expectedContents = rs1.ReadToEnd();
            // this works, but the output could be a LOT more useful
            Assert.AreEqual(expectedContents, actualContents);
        }
    }

    static void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
    {
        ReportGenerator obj = new ReportGenerator();
        string pathToResult = obj.Generate(pathToInputA, pathToInputB);
        Diff(pathToExpectedResult, pathToResult);
    }

    [Test]
    public void TestX()
    {
        TestGenerate("x1.xml", "x2.xml", "x-expected.txt");
    }

    [Test]
    public void TestY()
    {
        TestGenerate("y1.xml", "y2.xml", "y-expected.txt");
    }

    // etc...
}
Update
I'm not interested in testing the diff functionality. I just want to use it to produce more readable failures.
As for the multiple tests with different data, use the NUnit RowTest extension:
using NUnit.Framework.Extensions;

[RowTest]
[Row("x1.xml", "x2.xml", "x-expected.xml")]
[Row("y1.xml", "y2.xml", "y-expected.xml")]
public void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
    ReportGenerator obj = new ReportGenerator();
    string pathToResult = obj.Generate(pathToInputA, pathToInputB);
    Diff(pathToExpectedResult, pathToResult);
}
You are probably asking about testing against "gold" data. I don't know if there is a world-wide accepted term for this kind of testing, but this is how we do it.
Create a base fixture class. It basically has a method void DoTest(string fileName), which reads the specified file into memory, executes an abstract transformation method string Transform(string text), then reads fileName.gold from the same place and compares the transformed text with what was expected. If the content is different, it throws an exception. The exception thrown contains the line number of the first difference as well as the text of the expected and actual lines. As the text is stable, this is usually enough information to spot the problem right away. Be sure to mark the lines with "Expected:" and "Actual:", or you will be forever guessing which is which when looking at test results.
Then you have specific test fixtures, where you implement the Transform method to do the right job, and tests which look like this:
[Test] public void TestX() { DoTest("X"); }
[Test] public void TestY() { DoTest("Y"); }
The name of the failed test instantly tells you what is broken. Of course, you can use row testing to group similar tests. Having separate tests also helps in a number of situations, like ignoring tests, communicating tests to colleagues, and so on. It is not a big deal to create a snippet which will create a test for you in a second; you will spend much more time preparing the data.
Then you will also need some test data and a way for your base fixture to find it; be sure to set up rules about this for the project. If a test fails, dump the actual output to a file next to the gold file, and erase it if the test passes. This way you can use a diff tool when needed. When no gold data is found, the test fails with an appropriate message, but the actual output is written anyway, so you can check that it is correct and copy it to become the "gold".
I would probably write a single unit test that contains a loop. Inside the loop, I'd read two XML files and a diff file, then diff the XML files (without writing the result to disk) and compare it to the diff file read from disk. The files would be numbered, e.g. a1.xml, b1.xml, diff1.txt; a2.xml, b2.xml, diff2.txt; a3.xml, b3.xml, diff3.txt, etc., and the loop stops when it doesn't find the next number.
Then you can add new tests just by adding new text files.
Rather than calling .AreEqual, you could parse the two input streams yourself, keeping a count of line and column, and compare the contents. As soon as you find a difference, you can generate a message like...
Line 32 Column 12 - Found 'x' when 'y' was expected
You could optionally enhance that by displaying multiple lines of output:
Difference at Line 32 Column 12, first difference shown
A = this is a txst
B = this is a tests
Note that, as a rule, I'd only generate one of the two streams through my code. The other I'd grab from a test/text file, having verified by eye or some other method that the data it contains is correct!
I would probably use XmlReader to iterate through the files and compare them. When I hit a difference I would display an XPath to the location where the files are different.
PS: In reality, it was always enough for me to just read the whole file into a string and compare the two strings. For reporting, it is enough to see that the test failed. Then, when debugging, I usually diff the files with Araxis Merge to see where exactly the issues are.