julia: rerun unit tests upon changes to files

Are there Julia libraries that can run unit tests automatically when I make changes to the code?
In Python there is the pytest-xdist plugin, which can rerun unit tests when you make changes to the code. Does Julia have a similar library?

A simple solution could be made using the standard library module FileWatching; specifically FileWatching.watch_file. Despite the name, it can be used with directories as well. When something happens to the directory (e.g., you save a new version of a file in it), it returns an object with a field, changed, which is true if the directory has changed. You could of course combine this with Glob to instead watch a set of source files.
You could have a separate Julia process running, with the project's environment active, and use something like:
julia> import Pkg; import FileWatching: watch_file
julia> while true
           event = watch_file("src")
           if event.changed
               try
                   Pkg.pkg"test"
               catch err
                   @warn("Error during testing:\n$err")
               end
           end
       end
More sophisticated implementations are possible; with the above you would need to interrupt the loop with Ctrl-C to break out. But this does work for me and happily reruns tests whenever I save a file.
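As one such refinement, here is a sketch (assuming the same project layout) that uses FileWatching.watch_folder, which reports which directory entry changed, so you can react only to .jl files; Pkg.test() is the functional equivalent of the Pkg.pkg"test" command above:

import Pkg
using FileWatching

while true
    fname, event = watch_folder("src")
    # React only to Julia source files; ignore editor temp files and the like.
    if endswith(fname, ".jl") && (event.changed || event.renamed)
        try
            Pkg.test()
        catch err
            @warn("Error during testing:\n$err")
        end
    end
end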

If you use a GitHub repository, there are ways to set up Travis CI or AppVeyor to do this. This is the testing method used by many of the registered packages for Julia. You will need to write the unit test suite (with using Test) and place it in a /test subdirectory of the GitHub repository. You can search for Julia and those web services for details.
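A minimal test/runtests.jl for that layout might look like this (the package name here is a placeholder):

using Test
using MyPackage  # hypothetical package under test

@testset "MyPackage" begin
    @test 1 + 1 == 2  # replace with real tests of MyPackage
end

Both Pkg.test() and the CI services run this file as the entry point of the suite.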

Use a standard GNU Makefile and call it from various places depending on your use case:
Your .juliarc, if you want to check for tests on startup.
Cron, if you want them checked regularly.
Your module's __init__ function, if you want to check every time the module is loaded.
Since GNU Make detects changes automatically, calls to make will be silently ignored in the absence of changes.
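For illustration, a minimal sketch of such a Makefile (the file layout and the stamp file are assumptions; recipe lines must start with a tab):

test-stamp: $(wildcard src/*.jl) $(wildcard test/*.jl)
	julia --project -e 'import Pkg; Pkg.test()'
	touch test-stamp

The stamp file records the last successful run, so repeated invocations of make do nothing until a source or test file actually changes.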

Can Cmake add 'time' before ./main in the command line to measure program execution time?

I want to measure the time it takes for my C++ video processing program to process a video. I am using CLion to write the program and have CMake set up to compile and automatically run the program with a test video. However, to find the execution time I have been using the following command in the macOS terminal:
% time ./main ../Media/test_video.mp4
Is there a way for me to configure CMake to automatically include time in the execution of ./main to streamline my process further?
So far I've tried using set(CMAKE_ARGS time "test_video.mp4") and some command-line argument functions, but they don't behave the way I'm looking for.
It is possible to use add_custom_target to do what you want. I'll not consider this option further, as it seems like abusing the build system for something it wasn't designed to do. Yet it may have an advantage over using a CLion configuration: it would be available outside of CLion. That advantage seems minor: why not run the desired command directly in those contexts?
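For completeness, a sketch of that add_custom_target route (the target and file names are taken from the question; time is assumed to be on the PATH):

add_custom_target(timed_run
    COMMAND time $<TARGET_FILE:main> ../Media/test_video.mp4
    DEPENDS main
    USES_TERMINAL
    COMMENT "Running main under time")

Invoking cmake --build . --target timed_run then rebuilds main if needed and runs it under time.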
The first CLion method would be to define an external tool which runs time on the current build target. In File|Settings...|Tools|External Tools define a new tool with /bin/time as the program and $CMakeCurrentProductFile$ $Prompt$ as the arguments. When choosing that tool (in Tools|External Tools) it will now prompt you for the arguments and then run /bin/time on the current target with the provided arguments. Advantage: you only have to define the tool once; it will be available in every project. Drawbacks: the external tools are available in every project, thus it doesn't make sense to be more specific than $Prompt$ for the arguments and the working directory; it isn't possible to specify environment variables; it isn't possible to enforce a build before running the command.
The second CLion method would be to define a Run/Debug Configuration. Again use /bin/time as the program (choose "Custom Executable") and specify $CMakeCurrentProductFile$ as the first argument (here it makes sense to provide the other arguments as desired, but note that $Prompt$ is still a valid choice if needed). Advantages: it makes sense to be as specific as needed; you have all the features of configurations (environment variables, input redirection, specifying actions to be executed before the run). Drawback: it doesn't work with other CLion features which assume that the program is the target, such as the debugger or the profiler, so you may have to duplicate configurations to use them.
Note that the methods aren't exclusive: you can define an external tool and then add configurations for the cases where it is more convenient.

gtest unit tests target configuration file path

In my C++ application, I have a text file (dataFile.txt) that is installed on the Linux target machine in the following path:
/SoftwareHomeDir/Configuration/Application/dataFile.txt
This file exists on my Rational ClearCase source code environment under the path:
/ProjectName/config/Application/dataFile.txt
I am developing a unit test in gtest that does the following:
Read a specific piece of data from dataFile.txt; if the data does not exist, then write it into the file.
1) I want to avoid creating an environment variable to check whether I am in the compilation environment or on the target machine, and then adding extra test code to the final release. I really want to separate test code from final code.
2) I am not using any IDE (no Visual Studio, no Qt, etc.), just Notepad++.
3) The compilation server is shared (accessed with a username), but the root folder "/" is shared, which means that if I create the path "/SoftwareHomeDir/Configuration/Application/dataFile.txt" it will be visible to all users, and if another user is running his gtest unit test, he may overwrite my file.
4) In the final code the path to the dataFile is hard-coded, and it is very costly (it would take a few seconds to run) to implement a fileSearch(filename) method that looks for the file on the entire hard drive before reading it.
Question:
I am looking for a solution to unit-test my code in the compilation environment using /ProjectName/config/Application/dataFile.txt.
The solution to my problem was to combine gmock with gtest, as described in the gmock cookbook:
https://github.com/google/googletest/blob/master/googlemock/docs/CookBook.md#delegating-calls-to-a-fake
The only modification I made to my code is that instead of defining the path to the configuration data using #define, I created a function getConfigFilePath() that returns the hard-coded path of the configuration file in the installed application. From here, I mocked the class, and in my mock I call a fake getConfigFilePath() that returns, while the tests are executing, the hard-coded path of the config file in the project tree in ClearCase. This is precisely what I was looking for.
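A minimal sketch of that "delegate to a fake" recipe, with hypothetical class names and written against the current MOCK_METHOD syntax:

#include <gmock/gmock.h>
#include <string>

// Production interface: the code under test asks it where the config file lives.
class ConfigPathProvider {
public:
    virtual ~ConfigPathProvider() = default;
    virtual std::string getConfigFilePath() const {
        return "/SoftwareHomeDir/Configuration/Application/dataFile.txt";
    }
};

// Fake used by the tests: points at the copy in the source tree.
class FakeConfigPathProvider : public ConfigPathProvider {
public:
    std::string getConfigFilePath() const override {
        return "/ProjectName/config/Application/dataFile.txt";
    }
};

// Mock whose default action delegates to the fake.
class MockConfigPathProvider : public ConfigPathProvider {
public:
    MockConfigPathProvider() {
        ON_CALL(*this, getConfigFilePath())
            .WillByDefault(testing::Invoke(
                &fake_, &FakeConfigPathProvider::getConfigFilePath));
    }
    MOCK_METHOD(std::string, getConfigFilePath, (), (const, override));

private:
    FakeConfigPathProvider fake_;
};

Code under test that takes a ConfigPathProvider& then reads the project-tree file whenever the mock is passed in.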

How to add test unit to a shared library project using autotools

I'm developing a library which does a lot of calculation. It uses the GNU autotools build system. There is a test project that links to this library and runs various test procedures; each procedure compares results with pre-calculated values from MATLAB.
I found the testing process boring and time-consuming. Every time, I need to run make and sudo make install in the library, then make in the test project, then run the program and see what's going on.
What is the standard way to add a check target to a library using autotools? It should meet these requirements:
The user should be able to run make check and see the results without having to install the library itself. The executable should link to the freshly compiled, not-yet-installed shared objects.
Running make check should also run the test program, not only compile it. The result of make check depends on the return value of the test program; make should report an error if the test fails.
If the user decides not to run make check, no test executable should be compiled.
Since you're already using autotools, you've got most of the infrastructure already. I don't know your directory layout, but let's say you have SUBDIRS = soroush tests in the top-level Makefile.am; alternatively, you might have SUBDIRS = tests in the soroush directory. What matters is that a libtool-managed libsoroush.la exists before the descent into the tests directory.
The prefix check_ indicates that those objects (in this case, programs) should not be built until make check is run. So in tests/Makefile.am => check_PROGRAMS = t1 t2 t3
For each test program you can specify t1_SOURCES = t1.cc, etc. As a shortcut, if you only have one source file per test, you can use AM_DEFAULT_SOURCE_EXT = .cc, which will implicitly generate the sources for you. So far:
AM_CPPFLAGS = -I$(srcdir)/.. $(OTHER_CPPFLAGS) # relative path to lib headers.
LDADD = ../soroush/libsoroush.la
check_PROGRAMS = t1 t2 t3
AM_DEFAULT_SOURCE_EXT = .cc
# or: t1_SOURCES = t1.cc, t1_LDADD = ../soroush/libsoroush.la, etc.
make check will build, but not execute, those programs. For that, you need to add:
TESTS = $(check_PROGRAMS)
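Putting the pieces together, a minimal tests/Makefile.am might look like this (directory names follow the layout assumed above):

AM_CPPFLAGS = -I$(top_srcdir)/soroush   # path to the library headers
LDADD = $(top_builddir)/soroush/libsoroush.la
check_PROGRAMS = t1 t2 t3
AM_DEFAULT_SOURCE_EXT = .cc
TESTS = $(check_PROGRAMS)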
What's really good about this approach is that if libsoroush is built as a shared library, libtool will take care of handling library search paths, etc., using the uninstalled library.
Often, the resulting t1 program will just be a shell script that sets up environment variables so that the real binary, .libs/t1, can be executed. I only mention this because the whole point of using libtool is that you don't need to worry about how it's done.
Test feedback is more complicated, depending on what you require. You can go all the way with a parallel test harness, or just simple PASS/FAIL feedback. Unless testing is a major bottleneck, or the project is huge, it's easier just to use simple (or scripted) testing.

How to automate module reloading when unit testing with Erlang?

I'm using Emacs and trying to get my unit-testing workflow as automated as possible. I have it set up so it is working, but I have to manually compile my module under test, or the module containing the tests, before the Erlang shell recognizes my changes.
I have two files mymodule.erl and mymodule_tests.erl. What I would like to be able to do is:
Add test case to mymodule_tests
Save mymodule_tests
Switch to the Erlang Shell
Run tests with one line, like eunit:test(mymodule) or mymodule_tests:test()
Have Erlang reload mymodule and mymodule_tests before actually performing the tests
I have tried writing my own test method but it doesn't work.
-module(mytests).
-export([test/0]).

%% No -import is needed: remote calls such as code:purge/1 are resolved at call time.
test() ->
    code:purge(mymodule),
    code:delete(mymodule),
    code:load_file(mymodule),
    code:purge(mymodule_tests),
    code:delete(mymodule_tests),
    code:load_file(mymodule_tests),
    mymodule_tests:test().
I have also tried putting -compile(mymodule). into mymodule_tests to see if I could get mymodule to reload automatically when updating mymodule_tests, but to no avail.
I have also googled quite a bit but can't find any relevant information. As I'm new to Erlang, I'm thinking that I'm either searching for the wrong terms, e.g. "erlang reload module", or that you are not supposed to be able to reload other modules when compiling another module.
Maybe the Erlang make module can help you.
make:all([load]).
Reading from the doc:
This function first looks in the current working directory for a file named Emakefile (see below) specifying the set of modules to compile and the compile options to use. If no such file is found, the set of modules to compile defaults to all modules in the current working directory.
And regarding the "load" option:
Load mode. Loads all recompiled modules.
There's also a make:files/1,2 which allows you to specify the list of modules to check.
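For example, a minimal Emakefile covering the two modules from the question (the options list is illustrative):

{['mymodule', 'mymodule_tests'], [debug_info]}.

With that in place, one shell line recompiles whatever changed, reloads it, and runs the tests:

1> make:all([load]), mymodule_tests:test().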
Have you tried using l(mymodule). to reload the module after it's been compiled?
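That is, after recompiling (c/1 compiles and loads, l/1 reloads an already compiled beam):

1> c(mymodule).
{ok,mymodule}
2> l(mymodule_tests).
{module,mymodule_tests}
3> mymodule_tests:test().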

How can I run OCUnit (SenTestingKit) with NSDebugEnabled, NSZombieEnabled, MallocStackLogging?

I have an error similar to the one in this post. Now, I'm sure I've made some stupid error somewhere, probably related to releasing an object or an observer or what-not, but since I can't seem to find a way to debug the code, I thought I could use NSDebugEnabled, NSZombieEnabled and MallocStackLogging (as shown here).
Can it be done using OCUnit? If so, how? I just can't find an "executable" to set these parameters on...
Thanks!
Aviad.
Unfortunately, Dave's solution didn't work; I kept getting errors and mistakes. I eventually got GHUnit to work on my project and found the problem by debugging, but it had its own problems, so I now use both it and OCUnit, which is slightly better integrated in terms of showing the results in the results tab.
Sigh. When will we get to see a good, complete unit testing framework for Obj-C?
This may have been fixed in recent Xcodes, but I get zombies by doing the following:
Go into the scheme editor (Cmd-<)
Open Test, then Arguments tab
Uncheck "Use the Run action's arguments and environment variables"
"+" an environment variable "NSZombieEnabled" = "YES"
Well, NSZombieEnabled and friends are environment variables, which means they have to be set on an executable. The default setup for a unit-testing bundle is for the tests to be run during the build process, and not during execution.
So the way to fix this is to make it so that your tests don't run during the build phase, but instead run them as part of an executable.
Here's how I do that:
Inside your Unit Test bundle target, remove the "Run Script" build phase. It's that step that executes the tests after compiling them.
From the Project menu, choose "New Custom Executable..." and name it something meaningful, like "otest"
Set the executable path to the otest binary, which should be located at /Developer/Tools/otest
Set the following environment variables on the otest executable:
DYLD_FRAMEWORK_PATH => {UnitTest.bundle}/Contents/Frameworks
DYLD_LIBRARY_PATH => {UnitTest.bundle}/Contents/Frameworks
Set the following program arguments on the otest executable:
-SenTest All (this will run all of the unit tests)
{UnitTest.bundle}
You can now select your unit test bundle as the active target, and the otest executable as the active executable, and then build and debug. This will let you set breakpoints, set other environment variables (like NSZombieEnabled), and so on.
If you only want to debug a certain suite or specific unit test, you can change the -SenTest All argument to -SenTest MyUnitTestSuite or -SenTest MyUnitTestSuite/myUnitTestMethod.
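For reference, the equivalent otest invocation from a terminal would look roughly like this (the paths and the bundle name are placeholders standing in for the values above):

export DYLD_FRAMEWORK_PATH="UnitTest.bundle/Contents/Frameworks"
export DYLD_LIBRARY_PATH="UnitTest.bundle/Contents/Frameworks"
export NSZombieEnabled=YES
/Developer/Tools/otest -SenTest All UnitTest.bundle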
It took me quite some time but I finally managed to make it work for my project.
To create the "logic" tests I followed Apple guidelines on creating logic tests.
This works fine once you understand that the logic tests are run during build.
To be able to debug those tests it is required to create a custom executable that will call them. The article by Sean Miceli on the Grokking Cocoa blog provides all the information needed to do this. Following it, however, did not yield immediate success and needed some tweaking.
I will go over the main steps presented in Sean's tutorial providing some "for dummies" outline which took me some time to figure out:
Set up a target that contains the unit tests but DOES NOT run them
Set up the otest executable to run the tests
Set up the otest environment variables so that otest can find your unit tests
Step 1 - Setting up the target
Duplicate your unit tests target located under your project's Targets. This will also create a duplicate of your unit tests product (.octest file). "UnitTest" is the original target.
Rename both the unit tests target and the unit tests product (.octest file) to the same name; "UnitTestsDebug" is the duplicate target here.
Delete the Run Script phase of the new target.
The name of both can be anything, but I would avoid spaces.
Step 2 - Setting up otest
The most important point here is to get the correct otest, i.e. the one for your current iOS and not the default Mac version. This is well described in Sean's tutorial. Here are a few more details which helped me setting things right:
Go to Project->New Custom Executable. This will pop open a window prompting you to enter an Executable Name and an Executable Path.
Type anything you wish for the name.
Copy paste the path to your iOS otest executable. In my case this was /Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulator4.2.sdk/Developer/usr/bin/otest
Press enter. This will bring you to the configuration page of your executable.
The only thing to change at this point is to select "Path Type: Relative to current SDK". Do not retype the path; that was done in point 3 above.
Step 3 - Setting up the otest arguments and environment variables
The otest arguments are straightforward to set up... but this proved to be my biggest problem. I had initially named my logic test target "LogicTests Debug". With this name, and "LogicTests Debug.octest" (with quotes) as the argument to otest, I kept having otest terminate with exit code 1 and NEVER stopping in my code...
The solution: no space in your target name!
The arguments to otest are:
-SenTest Self (or All or a test name - type man otest in terminal to get the list)
{LogicTestsDebug}.octest - Where {LogicTestsDebug} needs to be replaced by your logic test bundle name.
Here is the list of environment variables for copy/pasting:
DYLD_ROOT_PATH: $SDKROOT
DYLD_FRAMEWORK_PATH: "${BUILD_PRODUCTS_DIR}:${SDK_ROOT}:${DYLD_FRAMEWORK_PATH}"
IPHONE_SIMULATOR_ROOT: $SDKROOT
CFFIXED_USER_HOME: "${HOME}/Library/Application Support/iPhone Simulator/User"
DYLD_LIBRARY_PATH: ${BUILD_PRODUCTS_DIR}:${DYLD_LIBRARY_PATH}
DYLD_NEW_LOCAL_SHARED_REGIONS: YES
DYLD_NO_FIX_PREBINDING: YES
Note that I also tried the DYLD_FORCE_FLAT_NAMESPACE but this simply made otest crash.
Step 4 - Running your otest executable
To run your otest executable and start debugging your tests you need to:
Set your active target to your unit test target (LogicTestsDebug in my case)
Set your active executable to your otest executable
You can build and run your executable and debug your tests with breakpoints.
As a side note, if you are having problems running your otest executable, it can be related to:
A faulty path. I had lots of problems initially because I was pointing to the Mac otest; it kept crashing on launch with termination code 6.
Faulty arguments. Until I removed the space from the bundle (.octest) name, otest kept crashing with exit code 1.
Wrong paths in the environment variables. Sean's tutorial has lots of follow-up questions giving some insight into what other people tried. The set I have now seems to work, so I suggest you start with it.
You may get some messages in the console which might lead you to think something is wrong with your environment variables; you may notice a message regarding CFPreferences, for instance. This message does not prevent the tests from running properly, so don't focus on it if you have problems running otest.
Last, once everything is working, you will be able to stop at breakpoints in your tests.
One last thing...
I've read on many blogs that the main limitation of the integrated Xcode SenTestingKit is that tests cannot be run while building the application. Well, as it turns out, this is in fact quite easy to manage. You simply need to add your logic tests bundle as a dependency of your application target. This will make sure your logic tests bundle is built, i.e. all tests are run, before your application is built.
To do this you can drag and drop your logic test bundle onto your application target.