I've got a problem when unit testing my program.
The problem seems simple, but I'm not sure why this is not working.
1 -> I build my whole program
2 -> I build my unit test
3 -> the tests run.
Everything is OK except when it comes to getting global data from the data segment. It seems as if the variables are not initialized, or simply not found. So of course all my tests fail.
My question is:
Is it totally wrong to build an executable and then run the tests against it? Or must I compile all my code plus the unit tests at the same time and then run them? Or is this just a shortcoming of the SenTesting framework?
I forgot to mention that this is a C++ const string. I don't know if that changes anything.
**EDIT**
My assumption was wrong, but I still don't understand the magic behind it! Some kind of C++ magic, perhaps?
char cstring[] = "***";
std::string cppString = "***";
NSString* nstring = @"***";

- (void)testSync {
    STAssertNotNil(nstring, nil);              // fine
    STAssertNotNil((id)strlen(cstring), nil);  // fine
    STAssertNotNil((id)cppString.size(), nil); // failed
}
**EDIT 2**
Actually, it is normal that the C++ string is not initialized at this point in the code. If I run nm on my executable, it appears that my C and Obj-C globals are put into the data segment. I thought my C++ string was in the same situation, but it is actually placed in the BSS segment, which means it is uninitialized at load time. The fact is that the C++ compiler does some magic behind the scenes: the C++ string is initialized by static construction code that runs as part of program startup, around the call to main(), and it then behaves as if it were in the data segment.
I didn't know that the test suite doesn't go through main(), so the C++ objects are never initialized. There are techniques for calling the static constructors (.ctors) before the test suite runs, but I'm too lazy to explain them here and it's a topic of its own. I have simply replaced my C++ string with a plain char array, and it works perfectly since my value is now a POD.
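For reference, a minimal sketch of that replacement (the literal value is just a placeholder):

// Before: a non-POD global, initialized by a static constructor that the test suite never runs.
// std::string cppString = "***";

// After: a POD global that the compiler places directly in an initialized data section.
const char cppString[] = "***";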
By the way, there is no devil in global variables if they are read-only. ;)
OK, I can see a few faults here.
First of all, this code gives errors in my environment (Xcode 5, with ARC enabled), and for good reason; I don't know how you got it to compile. The reason is that you are casting an integer (or long) to an object, which is normally an invalid operation and produces many errors. So the real question is not why the third "assert" failed, but why the second one succeeded.
As far as the second part of your question is concerned, I have to admit that I do not completely understand your question, and you may have to explain it more thoroughly.
In general, unit testing means testing specific parts of your code. Therefore, you typically don't perform the tests on an actual final executable (that is not called unit testing, I believe), nor do you have to compile "all your C++ code + your unit tests at the same time".
Since you are using Xcode, I will give you some indications.
Write your application (or at least a part of it), and find the aspects / functions / objects you want to perform unit tests on.
In separate files, write unit tests that instantiate these objects and test their methods, call them and compare the inputs and outputs.
You should have a second target in your application that compiles only the unit test source code and the relevant main program code.
Build this target, or press Command-U, and it will report successes and failures.
So, you need to separate your source code and isolate your classes / methods to make them testable like this. This requires good architecture and design of the application on your part, and you may need to make some compromises in flexibility (that is up to you to decide). Oh, and I believe that in testable code you should avoid global variables in general, for various reasons. Global variables are helpful sometimes, but they generally make unit testing really difficult (and if misused they may lead to spaghetti code, but that is a whole different story).
I hope I helped, even without fully understanding the second part of your post.
Say I have the following two functions:
add_five (number) -> number + 2
add_six (number) -> add_five(number) + 1
As you can see, add_five has a bug.
If I now test add_six, it will fail because the result is incorrect, even though its own code is correct.
Imagine you have a large tree of functions calling each other; it would be hard to find out which function contains the bug, because the tests for all of them will fail (not only the test for the one with the bug).
So my question is: should unit tests fail because of incorrect behaviour (wrong results) or because of incorrect code (bugs)?
should unit tests fail because of incorrect behaviour (wrong results) or because of incorrect code (bugs)?
Unit tests usually fail because of wrong results. That's what you write in assertions: you call a method and you define the expected result.
Unit tests cannot identify incorrect code. If the operation is return number+5 and your CPU or your RAM has a hardware problem and returns something different, then the test will fail as well, even though the code is correct.
Also consider:
public int add_five(int number)
{
    Thread.Sleep(5000);
    return number + 5;
}
How shall the unit test know whether the Sleep is intended or not?
So, if any unit test fails, it's your job to look at it and find out why it fails; and if the failure originates in a different method, write a new unit test for that method so you can exclude it next time.
Presumably you have a test for add_five/1 and a test for add_six/1. In this case, the test for add_six/1 will fail alongside the test for add_five/1.
Let's assume you decide to check add_six/1 first. You see that it depends on add_five/1, which is also failing. You can immediately assume that the bug in add_five/1 is cascading up.
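For instance, a minimal sketch in C++ with Google Test (which comes up later on this page), using function bodies taken from the pseudocode above:

#include <gtest/gtest.h>

int add_five(int number) { return number + 2; }           // the intentional bug from the example
int add_six(int number)  { return add_five(number) + 1; }

TEST(AddFiveTest, AddsFive) { EXPECT_EQ(7, add_five(2)); } // fails and points at the real bug
TEST(AddSixTest, AddsSix)   { EXPECT_EQ(8, add_six(2)); }  // also fails, but only as a cascade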
Your module dependencies form a directed (hopefully acyclic) graph. If a dependency of your function or module is broken, that should be what you target for debugging first.
Another option is to mock out the add_five function when testing your add_six function, but this quickly creates a lot of extra typing and logic duplication. If you change the spec of add_five, you have to change every place you reimplemented it as a mock.
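A hypothetical sketch of that approach in C++, where add_six takes its dependency as a parameter so the test can substitute a stub (the parameter name is an assumption):

#include <functional>
#include <gtest/gtest.h>

int add_six(int number, const std::function<int(int)>& add_five_impl) {
    return add_five_impl(number) + 1;
}

TEST(AddSixTest, UsesStubbedAddFive) {
    auto fake_add_five = [](int n) { return n + 5; }; // stub with the intended behaviour
    EXPECT_EQ(8, add_six(2, fake_add_five));          // passes even while the real add_five is buggy
}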
If you use a quickcheck-style testing library, you can test for certain logic errors based on the properties of what you're testing. These bugs are detected using randomly generated cases that produce incorrect results, but all you as a tester write are library-specific definitions of the properties you're testing for. However, this will also suffer from dependency breakages unless you've mocked out dependent modules/functions.
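As a rough illustration of the idea (hand-rolled rather than tied to a particular quickcheck-style library), a property check for add_five could look like this:

#include <cassert>
#include <random>

int add_five(int number) { return number + 2; } // the buggy implementation from the example

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> dist(-1000, 1000);
    for (int i = 0; i < 100; ++i) {
        int n = dist(rng);
        assert(add_five(n) == n + 5); // the property; a failure yields a concrete counterexample
    }
}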
When I want to ask a question on e.g. stackoverflow I usually have to post the source code.
The problem is, I am using quite a big custom framework, class structure, etc., and the problem-related parts may be scattered across many places (sometimes it's very hard to detect which parts of the code are relevant to the question). I cannot post the full source code (it would be too big to read efficiently).
For that reason I usually make an effort to write a minimal piece of code (usually in one main.cpp instead of tons of classes) that reproduces the problem.
I wonder - is it possible to automate that process?
The typical things to do here are to replace method/function calls with their bodies, merge files into one .cpp, remove all the methods and classes that are never called, remove unused variables, etc.
The real difficulty here is telling the difference between "it doesn't do what I want", "the bug went away because I removed the essential code" and the case of "it now crashes because I removed something important". And really, hitting the delete key after marking some code "don't need it" is the easy part.
Finding out what is essential to show a problem is the hard part, and it's very difficult to automate, because it is necessary to understand the difference between what the code should do and what it actually does. Just randomly removing code will not work, because the "new" code may be broken because you removed some essential step, not because you removed unused crud - only humans [that understand the problem] can tell the difference.
Consider this:
Object* something;

void Initialize()
{
    something = new Object(1, 2, 3);
}

int main()
{
    Initialize();
    // Some more code, some of which SOMETIMES sets something = NULL.
    something->doStuff(); // Will crash if object is NULL.
}
If we remove Initialize, the code will fail every time, not just every third time. But that is because the Object has not been initialized, rather than because of the bug in the code [which may be that we should add if (something) before something->doStuff(), or that we shouldn't set it to NULL in the "some more code", so "don't do that"].
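For illustration, the guard mentioned above, applied to the call site:

if (something)            // skip the call on the path that set it to NULL
    something->doStuff();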
When I work on a tricky problem, especially at work where we have test systems that automatically produce code for testing different functionality under different conditions, my first step is to make [or take some existing] code a "small standalone test", which is small and simple, and only does "what is necessary", rather than trying to reduce many thousands of lines of complex code that does lots of extra stuff.
For some projects, there are tools around that help with identifying "which bit of the code is the problem", for example [bugpoint][1], which finds which "pass" in LLVM is guilty of causing a crash.
If you have a version control system that supports it, you can "bisect" the code to come up with the version that introduced a particular fault [at least sometimes]. However, I had a case at work where some of my code "apparently broke things", but it turned out that some other code had been broken for a long time: the other code was not clearing a pointer field that the API manual says should be set to NULL, and my code inspected the pointer to find out if it was pointing at the right type of thing - which goes horribly wrong when the value is "whatever happens to be in that part of the stack", so it is not NULL and not a valid pointer. I had just added the code that made this bug apparent rather than it hiding itself.
[1] http://llvm.org/docs/CommandGuide/bugpoint.html
In the developer's code, there are many places where it calls assert(xyz):
(from assert.h)
#define assert(_Expression) (void)( (!!(_Expression)) || (_wassert(_CRT_WIDE(#_Expression), _CRT_WIDE(__FILE__), __LINE__), 0) )
When I run my tests through gtest and one of these asserts fails, then my executable completely shuts down.
I want a way for gtest to just catch this assert, fail the test, and then continue execution. Is this possible?
From Google Test's reference documentation:
How to Write a Death Test
Google Test has the following macros to support death tests:
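(From the documentation, the fatal and non-fatal forms are:)

ASSERT_DEATH(statement, regex);
ASSERT_EXIT(statement, predicate, regex);
EXPECT_DEATH(statement, regex);
EXPECT_EXIT(statement, predicate, regex);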
where statement is a statement that is expected to cause the process to die, predicate is a function or function object that evaluates an integer exit status, and regex is a regular expression that the stderr output of statement is expected to match. Note that statement can be any valid statement (including compound statement) and doesn't have to be an expression.
You can use these test macros to intercept native exit() or _exit() calls in your tested code, if they return values different from 0.
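For example, a minimal sketch (the function under test and the message matched by the regex are hypothetical):

#include <cassert>
#include <gtest/gtest.h>

void MyFunction(int value) {
    assert(value >= 0 && "value must be non-negative"); // the assert expected to fire
}

TEST(MyFunctionDeathTest, DiesOnNegativeInput) {
    EXPECT_DEATH(MyFunction(-1), "non-negative"); // the regex must match the stderr output
}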
As for your comment
"What if the test itself doesn't expect it, but it happens anyway? I don't want the rest of my execution to stop. Just that test to fail, then continue on."
Sorry, you can't prevent that. That's what assert() statements are designed for: they act as self-checks for certain functions, testing their inputs or the conditions they achieve.
You may try to compile your testing code and the code under test with the -DNDEBUG compiler option, but this will leave you with even more obscure issues, such as hitting undefined behavior.
If a test case is likely to hit an unexpected assertion, there's either something wrong with your test case's input values, or with the code being tested.
So you should set up reproducible conditions: either the test case fails with the assert (and the unit test runner carries on), or the whole thing blows up (the test runner process exits), which means your tested input didn't pass (and you'll need to change the test case, or fix the code under test).
Basically, if the code you are testing is broken, the test cannot continue.
To keep gtest from crashing, make sure the code you are testing at least compiles properly and that the input it gathers is valid.
I am saying this not to be mean, but rather out of personal experience. I use gtest and gmock for my own projects. I have been playing around with code lately that was a bit out of my league (after all, the only way to grow is to stretch beyond your perceived limits).
The code was taking data from a data file, and this was crashing my tests - not because there was anything wrong with the tests, but because I wasn't yet doing proper error checking in the functions that were reading from the file, and it threw a wrench into things when the program got a string where it wanted an integer.
Believe it or not, exceptions are a GOOD thing in tests. You don't want to just ignore them and move on, you want to figure out what is causing them and make it stop. That is the entire reason for testing.
I have a function in my program that performs a whole bunch of floating point math. It returns an array of values which is not currently used anywhere in my program yet.
I want to test this piece of code for speed under maximum optimizations; however, since the result isn't used, the compiler conveniently skips the function altogether and I can't get a time for it.
How do I force the compiler to run that section of code under maximum optimizations even though the result is not used? (I just want a sense of how fast the section runs.)
I'm running Visual C++ 2008.
You could use SecureZeroMemory() to overwrite the result after it has been received from the function. You don't even need to overwrite the whole result; one array element will be enough, and maybe you can even pass zero as the "number of bytes", so that the function does nothing.
This will do the trick on Windows - SecureZeroMemory() is intended to never be optimized out by the compiler. Using it is pretty straightforward and it's rather fast.
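A rough sketch of that idea on Windows (the math routine and buffer size are made up for illustration):

#include <windows.h>
#include <vector>

// Stand-in for the real floating point routine being timed.
std::vector<double> compute_values() {
    std::vector<double> v(1000000);
    for (size_t i = 0; i < v.size(); ++i) v[i] = static_cast<double>(i) * 0.5;
    return v;
}

int main() {
    std::vector<double> results = compute_values();
    // SecureZeroMemory is intended never to be optimized away, so touching even one
    // element keeps the compiler from discarding the computation that produced it.
    SecureZeroMemory(&results[0], sizeof(double));
}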
I'm sure there are many compiler tricks, but the easiest way is to just make it look like you are using the value. In this case, just pass the returned array to some other function. The other function doesn't need to do anything, but that should be enough to convince the compiler you need the results.
If you find that your empty second function is being optimized out as well, then just stick it in a shared library (DLL) and it is impossible for the compiler to know how it is being used.
How you allocate the result can also change this. If you pass the original function a pointer, you could just pass it a heap pointer. Since that pointer may be used somewhere else it is highly unlikely the compiler could optimize away the code, as it has no idea if the results will be used or not.
You could also just legitimately use the data. It makes sense to verify the results in another function. If doing performance testing just put this verification part outside of the timed section. This is generally how I do such performance tests (make sure the result is checked/used).
This is what a test case is for. Write a test case in a separate binary (even just in the main() method) which sets a throwaway local variable to the result of the function. Time it using your preferred method (e.g. by capturing time(NULL) immediately before and after the assignment and printing the difference). You should get a decent idea of the running time from that.
EDIT: time(NULL) is whole-second precision = bad and evil. Use clock(), as shown here, for the most accurate precision in the C/C++ standard library.
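A small sketch of that timing approach (the heavy function is a placeholder):

#include <cstdio>
#include <ctime>

// Stand-in for the floating point function being measured.
double heavy_math() {
    double sum = 0.0;
    for (int i = 1; i < 10000000; ++i) sum += 1.0 / i;
    return sum;
}

int main() {
    clock_t start = clock();
    volatile double result = heavy_math(); // volatile discourages the compiler from dropping the unused result
    clock_t end = clock();
    std::printf("elapsed: %.3f s (result %f)\n",
                static_cast<double>(end - start) / CLOCKS_PER_SEC,
                static_cast<double>(result));
}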
If you are using Visual Studio, the code below will work, but I don't know of an equivalent solution for GCC.
#pragma optimize( "", off )
.
.
.
#pragma optimize( "", on )
I'm trying to write a test to verify a compile-time error. It's about assigning a number to a String-typed property. But since it's a compile-time error, the unit test code won't even compile in the first place. Does anyone have a suggestion on how to do this?
I'm thinking maybe I can assign the number at runtime and check whether a certain exception gets thrown? But I'm not sure how exactly to do that.
Thanks in advance!
If I understand you correctly, you have some piece of code that may not compile, and you want to write a unit test that fails if the code really doesn't compile. If that is the case, then you should not write any unit tests. You should understand that you should write unit tests only for your code, not for code that somebody else wrote.
You didn't specify the programming language, so I will use some pseudo-code. Say you are writing a function to add two numbers:
function add(a, b) {
    return a + b;
}
Very simple. You should unit test this, e.g. by making tests like the below:
function testAdd() {
    assertEquals(4, add(2, 2));
    assertEquals(46, add(12, 34));
}
However, you should not write a test that checks whether the + operator works correctly. That is the job of whoever wrote the library that implements how the + operator really works.
So, if that's the case, don't write any unit tests. Compiling your code is the job of your compiler. The compiler should report a compilation error in the proper way. You should not test whether the compiler does its job correctly - testing that is the job of the people who write the compiler.
You did not specify the language:
Ruby, Python -- dynamic & strong type system. This means that types are deduced at runtime (dynamic), but implicit conversions between types are prohibited (strong).
JS, Perl -- dynamic & weak type system.
C++ -- static & strong type system.
Let's assume we are talking about C++. Moreover, I can give a more realistic example. Imagine that you implement a static assert for a project which does not use a C++11 compiler:
template <bool>
struct my_static_assert;
template <>
struct my_static_assert<true> {};
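Used roughly like this: instantiating the true specialization compiles, while the undefined primary template produces exactly the kind of compile error the question wants to detect.

my_static_assert<(sizeof(int) >= 4)> ok;    // compiles: the specialization is complete
my_static_assert<(sizeof(int) >= 100)> bad; // compile error: incomplete type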
If you want to check that such a mechanism works correctly, you should create a unit test function which performs the following steps (a sketch follows the list):
1. Create a source file to compile.
2. Launch the compiler as an external process and pass the test compilation unit to it.
3. Wait for the compiler process to complete.
4. Retrieve the return code from the compiler process.
5. Check the return code from step 4 in your test function.
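A rough sketch of those steps using Google Test and std::system (the compiler command, file names, and header name are all assumptions):

#include <cstdlib>
#include <fstream>
#include <gtest/gtest.h>

TEST(MyStaticAssertTest, RejectsFalseCondition) {
    // Step 1: write a translation unit that should fail to compile.
    std::ofstream out("bad_case.cpp");
    out << "#include \"my_static_assert.h\"\n";
    out << "my_static_assert<false> should_not_compile;\n";
    out.close();

    // Steps 2-4: run the compiler as an external process and collect its return code.
    int rc = std::system("g++ -c bad_case.cpp -o bad_case.o");

    // Step 5: a non-zero return code means the compiler rejected the snippet, as expected.
    EXPECT_NE(0, rc);
}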
I checked the Google Test guide, but it seems they don't support such a concept: https://github.com/google/googletest/blob/master/docs/advanced.md