I have 3 jobs on Jenkins: 1 builds a test program, and the other 2 run the test program after the build is complete. The issue I have is that I need 2 specific functions to run on one of the test jobs and the other 5 functions to run on the 2nd test job; all the functions are in 1 cpp file.
I'm not sure if I have to edit something in a Makefile. I've tried running . ./test.cpp && func(), which didn't work, and I'm not sure what else to research.
You can't directly reference specific functions in an executable. At best you can alter the test program to accept parameters that can be used to specify which functions to call.
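For example, here is a minimal sketch of that approach (the function names and dispatch logic are placeholders, not your actual tests):

#include <cstring>
#include <iostream>

// Hypothetical test functions living in the single cpp file.
void test_alpha() { /* ... */ }
void test_beta()  { /* ... */ }

int main(int argc, char** argv) {
    // Each Jenkins job passes the names of the tests it should run,
    // e.g.  ./tests test_alpha test_beta
    for (int i = 1; i < argc; ++i) {
        if (std::strcmp(argv[i], "test_alpha") == 0) test_alpha();
        else if (std::strcmp(argv[i], "test_beta") == 0) test_beta();
        else std::cerr << "unknown test: " << argv[i] << '\n';
    }
    return 0;
}

Each Jenkins test job then invokes the same executable with a different set of arguments.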
A simpler solution is to separate the tests into two files.
If you have the time and inclination, there are many test frameworks out there that you can possibly leverage.
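If you do adopt a framework, most of them already support selecting tests by name from the command line. With googletest, for instance, the split could look like this (test names are placeholders), and each Jenkins job would pass a different --gtest_filter:

#include <gtest/gtest.h>

TEST(JobOneTests, Alpha) { /* ... */ }
TEST(JobOneTests, Beta)  { /* ... */ }
TEST(JobTwoTests, Gamma) { /* ... */ }

int main(int argc, char** argv) {
    // Job 1: ./tests --gtest_filter=JobOneTests.*
    // Job 2: ./tests --gtest_filter=JobTwoTests.*
    ::testing::InitGoogleTest(&argc, argv);
    return RUN_ALL_TESTS();
}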
I find it useful to run my tests with --gtest_repeat set to 100 or 1000 after all tests pass, as a final check to make sure there are no race conditions. However, in situations where SetUpTestSuite / TearDownTestSuite can be run only once during program execution (e.g. the tests use a singleton class that cannot be created multiple times), this is not possible.
So is there a way to repeat tests without re-running SetUpTestSuite?
If not, how would you overcome this issue? I could try using ctest, but that would make it difficult to drop into a debugger, as I think I would need to attach to the process being run by ctest somehow.
Note: I can use any version of googletest as needed.
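For reference, here is a minimal sketch of the suite-level fixture pattern being described (the class names are hypothetical; SetUpTestSuite/TearDownTestSuite are the googletest 1.10+ names):

#include <gtest/gtest.h>

// Stand-in for the singleton that can only be created once per process.
class HardwareSingleton { /* ... */ };

class DeviceTest : public ::testing::Test {
 protected:
    // Runs once before the first test of the suite...
    static void SetUpTestSuite() { instance_ = new HardwareSingleton(); }
    // ...and once after the last test. Under --gtest_repeat both run
    // again on every iteration, which is exactly the problem described.
    static void TearDownTestSuite() { delete instance_; instance_ = nullptr; }

    static HardwareSingleton* instance_;
};

HardwareSingleton* DeviceTest::instance_ = nullptr;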
I have problems with acceptance tests (ember 0.10.0). The thing is, the tests run successfully if I run them one by one (passing the test ID in the URL), but when I try to run them all at once, they fail because of some async problems, I think (such as trying to click on an element which has not been rendered yet). Has anybody faced that? Here's the gist with the example of one of my tests.
P.S. I tried upgrading the versions of qunit, ember-qunit, and ember-cli-qunit, but the problem still exists.
UPD 1
Here's the screenshot: https://pp.vk.me/c627830/v627830110/e718/tAwcDMJ0J4g.jpg
UPD 2
I simplified the tests as much as I could, and now they pass about 50 percent of the time. I mean, I run all the tests and they are marked as passing, then I run them all again and they fail. That blows my mind.
Common reasons for failing are:
Some resource that is used by more than one test isn't reset properly between tests. Typical shared resources are: databases, files, environment settings, locks. This is the most probable cause.
Some asynchronous work gets different timing and doesn't complete in time, and you are using a timer instead of a more reliable way to wait for completion.
Basically, I created a new test file in a particular package with a bare-bones test structure: no actual tests, just an empty struct type that embeds suite.Suite, and a function that takes in a *testing.T and calls suite.Run() on said struct. This immediately caused all our other tests to start failing nondeterministically.
The failures were associated with database unique-key integrity violations on inserts and deletes into a single Postgres DB. This leads me to believe that the tests were being run concurrently without calling our setup methods to prepare the environment properly between tests.
Needless to say, the moment I move this test file to another package, everything magically works!
Has anyone else run into this problem before and can possibly provide some insights?
What I've found from my use is that "go test" runs a single package's test cases sequentially (unless t.Parallel() is called), but if you supply multiple packages (go test ./foo ./bar ./baz), each package's tests are run in parallel with the other packages'. This definitely caused similar headaches with database testing for me.
As it turns out, this is a problem rooted in how go test works, and has nothing to do with testify. Our tests were being run with ./..., which causes the underlying go test tool to run each package's tests in parallel, as justinas pointed out. After digging around more on Stack Overflow (here and here) and reading through testify's active issue on this problem, it seems that the best immediate solution is to use the -p=1 flag to limit the number of packages run in parallel.
However, it is still unexplained why the tests consistently passed prior to adding these new packages. A hunch is that the packages/test files happened to be sorted and run in such an order that concurrency wasn't an issue before the new packages/files were added.
I am using CodeBlocks to write my programs in C++ and I noticed the following. Both my main class and my unit test class are in the same folder (say FolderName). From both of them, I call a method that reads a file which is in the same folder (FileName.txt). From main I call it like this, and it works fine:
obj.("FileName.txt");
From the test file, I need to give the full path of the file for it to work:
obj.("/home/myName/FolderName/FileName.txt");
I know there must be a way of setting up the unit test file so that it works like main, but I could not figure it out. I am not sure if this is important, but I am working on Linux.
My apologies if you've already figured this out, but I'll answer for anyone else who may be wondering.
CodeBlocks creates an executable for your unit test and stores it in /home/myName/FolderName/bin/unitTest/. CodeBlocks runs this executable when you execute your unit test. Therefore, your pwd is not /home/myName/FolderName/ but /home/myName/FolderName/bin/unitTest/.
You're using gtest, but regardless of which framework you use, there are a few ways to do what you're asking:
The best option is to use the relative path obj.("../../FileName.txt"); see the sketch after these options.
The other option is to copy FileName.txt to /home/myName/FolderName/bin/unitTest/ (or whatever you named your unit test build option). You can then simply use "FileName.txt" in your unit test.
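As an illustration of the first option, here is a small gtest sketch (std::ifstream stands in for whatever your method does internally; adjust the number of ../ segments to match your build layout):

#include <fstream>
#include <gtest/gtest.h>

// The path is resolved relative to the working directory of the test
// binary (bin/unitTest), not relative to the project folder.
TEST(FileInput, OpensDataFileViaRelativePath) {
    std::ifstream in("../../FileName.txt");
    ASSERT_TRUE(in.is_open());
}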
Cheers.
So I'm interested in doing some unit testing of a library that interacts with a kernel module. To do this properly, I'd like to make sure that things like file handles are closed at the end of each test. The only way this really seems possible is by using fork() on each test case. Is there any pre-existing unit test framework that would automate this?
An example of what I would expect is as follows:
TEST() {
    int x = open("/dev/custom_file_handle", O_RDONLY);
    TEST_ASSERT_EQUAL(x, 3);
}

TEST() {
    int y = open("/dev/other_file_handle", O_RDONLY);
    TEST_ASSERT_EQUAL(y, 3);
}
In particular, the file handles should be closed after each test, which means that the file descriptor should likely end up with the same value in each test.
I am not actually testing the value of the file descriptor. This is just a simple example. In my particular case, only one user will be allowed to have the file descriptor open at any time.
This is targeting a Linux platform, but something cross platform would be awesome.
Google Test does support forking the process in order to test it, but only as "exit" and/or "death" tests. On the other hand, there is nothing to prevent you from writing every test like that.
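A minimal sketch of that idea (the device path is taken from your question; the exit-code convention is my own assumption, not a Google Test requirement):

#include <fcntl.h>
#include <cstdlib>
#include <gtest/gtest.h>

// The body of EXPECT_EXIT runs in a forked child process, so any
// descriptor opened there is gone by the time the next test runs.
TEST(KernelModuleTest, OpensCustomHandleInChild) {
    EXPECT_EXIT({
        int fd = open("/dev/custom_file_handle", O_RDONLY);
        std::exit(fd >= 0 ? 0 : 1);  // the child's exit code carries the result
    }, ::testing::ExitedWithCode(0), "");
}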
Ideally, though, I would recommend that you approach your problem differently. For example, using the same Google Test framework, you can list the test cases and run them separately, so writing a simple wrapper that invokes the test binary multiple times, running a different test each time, will solve your problem. Fork has its own problems, you know.
The Check unit testing library for C by default executes each test in a separate child process.
It also supports two kinds of fixtures: ones that are executed before/after each test, in the child process (called 'checked' fixtures), and ones that are executed before/after a test suite, in the parent process (called 'unchecked' fixtures).
You can disable the forking via the environment variable CK_FORK=no or via an API call, e.g. to simplify debugging an issue.
Currently, libcheck runs under Linux, Hurd, the BSDs, OS X, and different kinds of Windows (MinGW, non-MinGW, etc.).