I am able to use cmocka and get the default results on the screen. I want to get the results of the unit tests in JUnit format.
CMocka supports JUnit format via the environment variable CMOCKA_MESSAGE_OUTPUT or the API call cmocka_set_message_output(CM_OUTPUT_XML).
But still no XML file gets generated. Can anyone help out with obtaining results in JUnit format?
The most credible source is actually the cmocka.c source file, which contains the entire implementation of the framework. The file is not too large, so I will cite from source file version 1.0.1.
There are two conditions for cmocka to generate XML output, and a third condition is needed to store that output in a file.
1. XML output can be generated only if tests are called by cmocka_run_group_tests()
The customizable output format can be obtained only from the test runner cmocka_run_group_tests() or its full variant cmocka_run_group_tests_name().
There is no other route that leads to XML output. If a single test is started by run_test(), the output cannot be XML.
The summary format
[ PASSED ] 0 test(s).
[ FAILED ] 1 test(s), listed below:
can be generated in one of the following possible cases:
the test is started by one of the deprecated test runners run_tests(), _run_tests(), run_group_tests() or _run_group_tests(); in that case it is even possible to see a compilation warning about the usage of a deprecated function;
the test is started by cmocka_run_group_tests() and the output format is CM_OUTPUT_STDOUT.
2. cmocka message output should be set to CM_OUTPUT_XML
The default output format can be set by calling cmocka_set_message_output(CM_OUTPUT_XML) before running the tests. However, even if such a default is set in the test source, it can be overridden by the environment variable CMOCKA_MESSAGE_OUTPUT, which has higher priority than the default set by cmocka_set_message_output().
The value of CMOCKA_MESSAGE_OUTPUT is case-insensitive. The variable is taken into account if it is equal to one of the following values: stdout, subunit, tap or xml.
So, if the environment variable has the value stdout, the function cmocka_set_message_output() has no effect.
The variable can be used to force a different output format on an already compiled binary:
CMOCKA_MESSAGE_OUTPUT=stdout ./nulltest
CMOCKA_MESSAGE_OUTPUT=subunit ./nulltest
CMOCKA_MESSAGE_OUTPUT=tap ./nulltest
CMOCKA_MESSAGE_OUTPUT=xml ./nulltest
Thus, if the test is started by cmocka_run_group_tests() but the output is not affected by cmocka_set_message_output(), it means that the variable CMOCKA_MESSAGE_OUTPUT=stdout is set in the shell.
3. cmocka must be able to create a new file to write its XML output to
If both previous conditions are satisfied, it is possible to ask cmocka to write its XML output directly to a file. If the environment variable CMOCKA_XML_FILE is set, cmocka will try to write the XML to a file whose name is the value of that variable.
Usage example:
CMOCKA_XML_FILE='./out.xml' CMOCKA_MESSAGE_OUTPUT=xml ./nulltest
The file is written only if:
a file with that name does not exist;
the file can be created.
Thus, if there is more than one test runner in a single compiled binary, only the first runner can write its output to that file.
If CMOCKA_XML_FILE is set but the file already exists or cannot be created, the output is written to the shell instead.
Of course, it is also possible simply to redirect the shell output to a file, overwriting an existing file or appending to it if it exists.
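For example (assuming the nulltest binary built below), a plain shell redirection is enough; > overwrites out.xml while >> appends to it:
CMOCKA_MESSAGE_OUTPUT=xml ./nulltest > out.xml
CMOCKA_MESSAGE_OUTPUT=xml ./nulltest >> out.xml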
The example below can be used to check the different output options. It can be built with the command
gcc -g nulltest.c -o nulltest -Ipath_to_cmocka_headers -Lpath_to_cmocka_library_binary -lcmocka
nulltest.c
#include <stdarg.h>
#include <stddef.h>
#include <setjmp.h>
#include <cmocka.h>
/* A test case that fails. */
static void null_test_failed(void **state) {
    (void) state; /* unused */
    assert_int_equal(0, 1);
}

int main(void) {
    const struct CMUnitTest tests[] = {
        cmocka_unit_test(null_test_failed),
    };
    const struct UnitTest tests_deprecated[] = {
        unit_test(null_test_failed),
    };
    cmocka_set_message_output(CM_OUTPUT_XML);
    /* group test runners that use the customizable output format */
    cmocka_run_group_tests(tests, NULL, NULL);
    cmocka_run_group_tests_name("custom group name", tests, NULL, NULL);
    /* run a single test with standard output */
    run_test(null_test_failed);
    /* DEPRECATED test runners that can give only standard output */
    run_tests(tests_deprecated);
    _run_tests(tests_deprecated, 1);
    run_group_tests(tests_deprecated);
    _run_group_tests(tests_deprecated, 1);
    return 0;
}
The XML is printed to stdout; you need to redirect it to a file.
By setting export GFORTRAN_STDOUT_UNIT=777 I want to change my stdout in gfortran. If I run the program
program main
  implicit none
  write (*,*) "*"
  write (6,*) "6"
  write (777,*) "777"
end program main
it will output
> $ ./a.out
777
and create a file:
> $ cat fort.6
*
6
Why isn't * forwarded to stdout (now unit 777) anymore? Is this a gfortran bug or intended behaviour?
I believe the behaviour is as expected. The following paragraphs are of interest here:
GFORTRAN_STDOUT_UNIT: Unit number for standard output
This environment variable can be used to select the unit number preconnected to standard output. This must be a positive integer. The default value is 6.
source: GCC Gfortran Documentation
So this just states that /dev/stdout will be connected to the unit number given by GFORTRAN_STDOUT_UNIT.
The Fortran Standard makes the following statements:
9.5 File connection
9.5.1 Referring to a file
4 In a WRITE statement, an io-unit that is an asterisk identifies an external unit that is preconnected for sequential formatted output. This unit is also identified by the value of the named constant OUTPUT_UNIT of the intrinsic module ISO_FORTRAN_ENV.
Note 9.15: Even though OUTPUT_UNIT is connected to a separate file on each image, it is expected that the processor could merge the sequences of records from these files into a single sequence of records that is sent to the physical device associated with this unit, such as the user's terminal.
source: Fortran 2008 Standard
All we know is that <asterisk> (ergo OUTPUT_UNIT) is preconnected to an external unit for sequential formatted output. The standard makes no statement about what this external unit is; it makes no reference to /dev/stdout. It actually explicitly mentions in a note that the user's terminal is a possible physical device for this unit; it could just as well have been your printer.
So in the end, by setting GFORTRAN_STDOUT_UNIT=777 you just preconnect unit 777 to /dev/stdout, while <asterisk> remains preconnected to an external unit for sequential output (in this case unit 6, which gfortran now writes to the file fort.6).
I have a small Go app with a cli using flags and I've been asked to make it more testable.
My app gets called on the command line like
deploy.exe <task> <command> -tenant tenant_name -validate -package "c:\some dir\\"
Based on which task and command, a different execution path is taken, and ultimately a func residing in another package is called, like:
if command == "db" {
dbhelper.RunDBCmds(*tenant, *validate, *package)
}
I need to write unit tests for just the flag parsing, without calling the actual functions at the end.
I'm fairly new to Go and I'm struggling to figure out how to accomplish this. I've thought about moving my os.Args handling and flag parsing into a function that takes the input and outputs a pointer of sorts to the RunDBCmds(*tenant, ...) func. However, I'm just not sure whether I can accomplish returning a pointer to a function.
I'd appreciate any advice on how I can make my code more testable without actually invoking the functions.
If all of your tasks/commands have different sets of flags, I would introduce some kind of Command abstraction. The best example can be found in the Go source code itself:
base.Commands = []*base.Command{
    work.CmdBuild,
    clean.CmdClean,
    doc.CmdDoc,
    envcmd.CmdEnv,
    bug.CmdBug,
    fix.CmdFix,
    //...
}
Each command can have its own flag.FlagSet to parse command-specific flags:
// A Command is an implementation of a go command
// like go build or go fix.
type Command struct {
    // Run runs the command.
    // The args are the arguments after the command name.
    Run func(cmd *Command, args []string)

    // UsageLine is the one-line usage message.
    // The first word in the line is taken to be the command name.
    UsageLine string

    // Short is the short description shown in the 'go help' output.
    Short string

    // Long is the long message shown in the 'go help <this-command>' output.
    Long string

    // Flag is a set of flags specific to this command.
    Flag flag.FlagSet

    // CustomFlags indicates that the command will do its own
    // flag parsing.
    CustomFlags bool
}
Using this approach you can separate commands' flags parsing from the execution.
I am using IAR EWARM's cspybat to run some unit tests for my embedded code using Unity. I would like an easy way for my build server to determine if the unit tests passed or failed. Is there a way for CSPY to return a nonzero error code if my unit tests fail? I have tried changing the return value in main() with no change. Is there a function I can call to force an error to be returned?
My cspybat batch file looks like this:
"C:\Program Files (x86)\IAR Systems\Embedded Workbench 7.4\common\bin\cspybat" -f "C:\Work\Sandbox\ST\stmicroeval\_iar_ewarm_project\settings\Project.UnitTest.general.xcl" --backend -f "C:\Work\Sandbox\ST\stmicroeval\_iar_ewarm_project\settings\Project.UnitTest.driver.xcl"
Unfortunately, no.
I've solved this by replacing "exit" with a function that prints a specific pattern plus the exit code. I then wrapped the call to cspybat in a script that 1) strips that extra output and 2) exits with the desired exit code.
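For illustration, a minimal sketch of that idea; the function name test_exit and the marker text are my own, not an IAR API:
#include <stdio.h>
#include <stdlib.h>

/* Replacement for exit(): print a parseable marker containing the exit
   code so a wrapper script around cspybat can extract it. */
void test_exit(int code)
{
    printf("### TEST EXIT CODE: %d ###\n", code);
    fflush(stdout);
    exit(code);
}
The wrapper script then searches the captured cspybat output for the marker, strips those lines, and exits with the extracted code.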
It's late 2020 and they still don't offer a mechanism to do this.
We solved it by including a macro file with the contents:
execUserExit()
{
    __message "program exited with __exit_value = ", __exit_value:%d;
}
And having our own exit variable in the code:
extern "C" int __exit_value=0xff;
We set this prior to calling exit() (though you could just write your own version of exit()).
This makes the debugger always print SOMETHING, even if the program crashes on startup.
Then we parse it with a Python wrapper:
pattern = r"__exit_value =\s([-0-9a-fA-Fx]*)"
retvalue = int(re.findall(pattern, process.stdout)[0])
Currently I have the following piece of code in my Sync:
...
int index = file.find(remoteDir);
if (index >= 0) {
    file.erase(index, remoteDir.size());
    file.insert(index, localDir);
}
...
// Uses PUT command on the file
Now I want to do the following instead:
If a file is the same as before, except for a rename, don't use the PUT command, but use the Rename command instead
TL;DR: Is there a way to check whether a file is the same as before, except for a rename? That is, a way to compare both files (with different names) to see if they are the same?
Check the md5sum; if it is different, then the file was modified.
The md5 checksum of a renamed file will remain the same; any change in the content of the file will give a different value.
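As an illustration, a minimal C sketch of the md5 idea using OpenSSL's legacy MD5_* API (deprecated since OpenSSL 3.0 but still widely available; link with -lcrypto). The helper name file_md5 is mine:
#include <stdio.h>
#include <string.h>
#include <openssl/md5.h>

/* Compute the MD5 digest of a file's contents; returns 0 on success. */
int file_md5(const char *path, unsigned char digest[MD5_DIGEST_LENGTH])
{
    FILE *f = fopen(path, "rb");
    if (!f)
        return -1;
    MD5_CTX ctx;
    MD5_Init(&ctx);
    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, f)) > 0)
        MD5_Update(&ctx, buf, n);
    fclose(f);
    MD5_Final(digest, &ctx);
    return 0;
}
Two files then have identical content exactly when memcmp() of their two digests over MD5_DIGEST_LENGTH bytes returns 0.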
I first tried to use Renjith's md5 method, but I couldn't get it working (maybe because my C++ is for Windows instead of Linux, I don't know).
So instead I wrote my own function, which does the following (a sketch is included after the list):
First check whether the files have exactly the same size (if not, the function can immediately return false).
If the sizes match, compare the files buffer by buffer, BUFFER_SIZE bytes at a time (1024 in my case). If every buffer matches, return true.
PS: Make sure to close any open streams before returning. My mistake here was that the code closing one stream came after the return statement (so it was never called), and therefore I got errno 13 when trying to rename the file.
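A minimal C-style sketch of that approach (BUFFER_SIZE and files_equal are illustrative names, not the exact code from the project):
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

#define BUFFER_SIZE 1024

/* Compare two files: sizes first, then contents buffer by buffer. */
bool files_equal(const char *path_a, const char *path_b)
{
    FILE *a = fopen(path_a, "rb");
    FILE *b = fopen(path_b, "rb");
    bool equal = false;

    if (a && b) {
        fseek(a, 0, SEEK_END);
        fseek(b, 0, SEEK_END);
        if (ftell(a) == ftell(b)) { /* same size? */
            rewind(a);
            rewind(b);
            char ba[BUFFER_SIZE], bb[BUFFER_SIZE];
            size_t n;
            equal = true;
            while ((n = fread(ba, 1, BUFFER_SIZE, a)) > 0) {
                if (fread(bb, 1, BUFFER_SIZE, b) != n ||
                        memcmp(ba, bb, n) != 0) {
                    equal = false;
                    break;
                }
            }
        }
    }
    /* Close the streams before returning (see the PS above). */
    if (a) fclose(a);
    if (b) fclose(b);
    return equal;
}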
I've written a command line tool that I want to test (I'm not looking to run unit tests from the command line). I want to map a specific set of input options to a specific output. I haven't been able to find any existing tools for this. The application is just a binary and could be written in any language, but it accepts POSIX options and writes to standard output.
Something along the lines of:
For each known set of input options:
Launch application with specified input.
Pipe output to a file.
Diff output to stored (desired) output.
If diff is not empty, record error.
(Btw, is this what you call an integration test rather than a unit test?)
Edit: I know how I would go about writing my own tool for this, I don't need help with the code. What I want to learn is if this has already been done.
DejaGnu is a mature and somewhat standard framework for writing test suites for CLI programs.
Here is a sample test taken from this tutorial:
# send a string to the running program being tested:
send "echo Hello world!\n"
# inspect the output and determine whether the test passes or fails:
expect {
    -re "Hello world.*$prompt $" {
        pass "Echo test"
    }
    -re "$prompt $" {
        fail "Echo test"
    }
    timeout {
        fail "(timeout) Echo test"
    }
}
Using a well-established framework like this is probably going to be better in the long run than anything you can come up with yourself, unless your needs are very simple.
You are looking for BATS (Bash Automated Testing System):
https://github.com/bats-core/bats-core
From the docs:
example.bats contains
#!/usr/bin/env bats

@test "addition using bc" {
  result="$(echo 2+2 | bc)"
  [ "$result" -eq 4 ]
}

@test "addition using dc" {
  result="$(echo 2 2+p | dc)"
  [ "$result" -eq 4 ]
}
$ bats example.bats
✓ addition using bc
✓ addition using dc
2 tests, 0 failures
Well, I think every language should have a way to execute an external process.
In C#, you could do something like:
// requires: using System.Diagnostics;
var p = new Process();
p.StartInfo = new ProcessStartInfo(@"C:\file-to-execute.exe");
// ... you can set parameters here, etc.
p.StartInfo.RedirectStandardOutput = true;
p.StartInfo.RedirectStandardInput = true;
p.StartInfo.UseShellExecute = false;
p.Start();

// To read the standard output:
var output = p.StandardOutput.ReadToEnd();
I have never had to write to the standard input, but I believe it can be done by accessing p.StandardInput as well. The idea is to treat both as Stream objects, because that's what they are.
In Python there is the subprocess module. According to its documentation:
The subprocess module allows you to spawn new processes, connect to their input/output/error pipes, and obtain their return codes.
I had to do the same when writing unit tests for the code generation part of a compiler I wrote some months ago: Writing unit tests in my compiler (which generates IL)
We wrote should, a single-file Python program to test any CLI tool. The default usage is to check that a line of the output contains some pattern. From the docs:
# A .should file launches any command it encounters.
echo "hello, world"
# Lines containing a `:` are test lines.
# The `test expression` is what is found at the right of the `:`.
# Here 'world' should be found on stdout, at least in one line.
:world
# What is at the left of the `:` are modifiers.
# One can specify the exact number of lines where the test expression has to appear.
# 'moon' should not be found on stdout.
0:moon
Should can check occurrence counts, look for regular expressions, use variables, filter tests, parse JSON data, and check exit codes.
Sure, it's been done literally thousands of times. But writing a tool to run simple shell scripts or batch files like what you propose is a trivial task, hardly worth trying to turn into a generic tool.