I'm writing tests for a project of mine, which involves fixed file paths and base URLs. To me, the most logical way to define them is by means of public/private constants in a relevant module, but that prevents good testing practices. How can I work around this?
I searched for a possible solution and found that I can define two constructors for a struct in need of a path: one which uses the default path, and another that accepts a custom path:
func Construct(param string) MyStruct {
    return MyStruct{Param: param, Path: "/default/path"}
}

func ConstructWithPath(param, path string) MyStruct {
    return MyStruct{Param: param, Path: path}
}
This is pretty ugly to me and it's a solution tailored exclusively for tests since the paths I'm considering are fixed and known.
This is known as dependency injection and is commonly used for testing. There's nothing particularly ugly about making your code more testable.
An alternative would be to define the paths as unexported vars in the package (tests live in the same package), and have the tests set these vars to a suitable value before running.
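A minimal sketch of that idea, assuming a MyStruct like the one above (the variable and test names here are mine, not from the question):
// paths.go
package mypkg

type MyStruct struct {
    Param string
    Path  string
}

// defaultPath is unexported, so tests in the same package can reassign it.
var defaultPath = "/default/path"

func Construct(param string) MyStruct {
    return MyStruct{Param: param, Path: defaultPath}
}

// paths_test.go
package mypkg

import "testing"

func TestConstructWithOverriddenPath(t *testing.T) {
    oldPath := defaultPath
    defaultPath = t.TempDir() // point the package at a throwaway directory
    defer func() { defaultPath = oldPath }()

    s := Construct("value")
    if s.Path != defaultPath {
        t.Fatalf("expected path %q, got %q", defaultPath, s.Path)
    }
}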
Your solution is fine. It's a common and recommended practice to design your code and interfaces to be testable.
I would slightly modify your solution to prevent duplication:
func Construct(param string) MyStruct {
    return ConstructWithPath(param, "/default/path")
}

func ConstructWithPath(param, path string) MyStruct {
    return MyStruct{Param: param, Path: path}
}
You need not export ConstructWithPath if you don't want anyone outside this package using it.
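For example, the unexported variant might look like this (same code as above, just with a lowercase name so only this package, including its tests, can choose the path):
// Exported constructor keeps the fixed, known path.
func Construct(param string) MyStruct {
    return constructWithPath(param, "/default/path")
}

// constructWithPath stays unexported: production callers only see Construct,
// while tests in the same package can inject a custom path.
func constructWithPath(param, path string) MyStruct {
    return MyStruct{Param: param, Path: path}
}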
In my past experience of testing C code, the function stub is pretty much mandatory. In my safety-critical line of work, I am usually required to test everything - even thin abstraction functions which simply call a build-specific implementation, argument for argument, return for return.
I have searched high and low for ways to, in Rust, stub functions external to the module under test, but cannot find a solution.
There are several libraries for trait-mocking, but what little I have read about this topic suggests that it is not what I am looking for.
The other suggestion is that functions I test which call external functions have those functions passed in, allowing the test simply to pass the desired pseudostub function. This seems very inflexible in terms of the data passed into and out of the pseudostub, and it causes one's code to be polluted with function-reference arguments at every level - very undesirable when the function under test would never call anything except one operational function or the stub. You are writing operational code to fit in with the limitations of the testing system.
This seems so very basic. Surely there is a way to stub external functions with Rust and Cargo?
You can try to use mock crates like mockall, which I think is the most complete one out there, but it still might take some time getting used to.
Without a mock crate, I would suggest writing mocks of the traits/structs in another module, and then bringing them into scope with the `#[cfg(test)]` attribute. Of course, this makes it mandatory to annotate the production use statement with `#[cfg(not(test))]`. For example:
If you're using an external struct ExternalStruct from external-crate with a method external_method, you would have something like:
file real_code.rs
#[cfg(not(test))]
use external_crate::ExternalStruct; // hyphens in a crate name become underscores in paths

#[cfg(test)]
use crate::test_mocks::ExternalStruct;

fn my_function() {
    //...
    ExternalStruct::external_method();
}

#[test]
fn test_my_function() {
    my_function();
}
file test_mocks.rs:
pub struct ExternalStruct {}

impl ExternalStruct {
    pub fn external_method() {
        // desired mock behaviour
    }
}
When running your tests, the ExternalStruct from test_mocks will be used; otherwise, the real dependency is used.
I'm trying to learn to write tests in Go on a personal project of mine.
A bunch of the functions in my project use the net package, but since I'm new to this I don't know how to mock functions from packages such as net that interact with the host's devices.
To make my question more concrete, I have a function similar to this:
func NewThingy(ifcName string) (*Thingy, error) {
    ifc, err := net.InterfaceByName(ifcName) // declared here so it is in scope below
    if err == nil {
        return nil, fmt.Errorf("interface name %s already assigned on the host", ifcName)
    }
    // ....
    return &Thingy{
        ifc: ifc,
    }, nil
}
Thingy itself is simply defined as:
type Thingy struct {
    ifc *net.Interface
}
Could anyone give me hints on how to go about testing code like this?
Thanks
The short answer is that you can't mock things like that. Go is a statically compiled language, so unlike Ruby or other dynamic languages, there is no support for magically rewriting fixed function calls on the fly.
If you really want to test it, you have two obvious options:
Define NewThingy to take a function value whose type matches net.InterfaceByName. Pass net.InterfaceByName at all "real" call sites, but pass a stub function in your tests (see the sketch after these two options). This will slightly slow down the code, however, and it isn't pleasant to read or write.
Require your test environment to use legitimate data. May or may not be possible depending on your requirements, but there are some packages that may help (I wrote https://github.com/eapache/peroxide for a similar purpose, but it doesn't deal with interfaces so it's probably not what you're looking for).
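A rough sketch of the first option, reusing the Thingy type from the question (the interface name "tun0", the stub, and its error message are invented for illustration):
// thingy.go
package thingy

import (
    "fmt"
    "net"
)

type Thingy struct {
    ifc *net.Interface
}

// NewThingy takes the lookup function as a parameter. Production code passes
// net.InterfaceByName; tests pass a stub with the same signature.
func NewThingy(ifcName string, lookup func(string) (*net.Interface, error)) (*Thingy, error) {
    if _, err := lookup(ifcName); err == nil {
        return nil, fmt.Errorf("interface name %s already assigned on the host", ifcName)
    }
    // .... (rest as before)
    return &Thingy{}, nil
}

// thingy_test.go
package thingy

import (
    "errors"
    "net"
    "testing"
)

func TestNewThingyWithFreeName(t *testing.T) {
    stub := func(name string) (*net.Interface, error) {
        return nil, errors.New("no such interface") // pretend the name is free on the host
    }
    if _, err := NewThingy("tun0", stub); err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
}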
This isn't a direct answer to your question, but Gomock provides for mocking by generating code for whatever interface you give it. To use it, you need to use an interface-based coding style, rather than relying directly on structs.
Because many of the standard library packages use interfaces, mocking them with Gomock is effective. For any that don't, you might need to write an (untested) shim that provides the necessary interface layer. This seems less messy to me than the idea of passing function pointers to achieve a similar effect.
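For instance, such a shim might look roughly like this (the InterfaceLookup and realNet names are mine; the interface is what Gomock, or a hand-written fake, would implement for tests):
package thingy

import "net"

// InterfaceLookup is the narrow interface the rest of the code depends on.
type InterfaceLookup interface {
    InterfaceByName(name string) (*net.Interface, error)
}

// realNet is the thin, untested shim that forwards to the standard library.
type realNet struct{}

func (realNet) InterfaceByName(name string) (*net.Interface, error) {
    return net.InterfaceByName(name)
}
Production code would accept an InterfaceLookup (defaulting to realNet), while tests would supply a generated or hand-written fake.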
I had the same issue and ended up using this package to mock:
https://github.com/foxcpp/go-mockdns
srv, _ := mockdns.NewServer(map[string]mockdns.Zone{
    "example.org.": {
        A: []string{"1.2.3.4"},
    },
}, false)
defer srv.Close()

srv.PatchNet(net.DefaultResolver)
defer mockdns.UnpatchNet(net.DefaultResolver)

addrs, err := net.LookupHost("example.org")
fmt.Println(addrs, err)
It also has an option to create a mock resolver object, but then you have to use dependency injection to pass the mock resolver to the code that should use it internally:
r := mockdns.Resolver{
    Zones: map[string]mockdns.Zone{
        "example.org.": {
            A: []string{"1.2.3.4"},
        },
    },
}

addrs, err := r.LookupHost(context.Background(), "example.org")
fmt.Println(addrs, err)
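One way to wire in that injection is to have the code under test depend on a small interface that both resolvers satisfy. A sketch, where hostLookuper and resolveAll are my own names, assuming mockdns.Resolver exposes LookupHost with the same signature as net.Resolver (as the snippet above suggests):
package example

import "context"

// hostLookuper is satisfied by *net.Resolver (e.g. net.DefaultResolver)
// and, assuming a matching method set, *mockdns.Resolver.
type hostLookuper interface {
    LookupHost(ctx context.Context, host string) ([]string, error)
}

// resolveAll stands in for any code of yours that needs name resolution.
func resolveAll(ctx context.Context, r hostLookuper, host string) ([]string, error) {
    return r.LookupHost(ctx, host)
}

// Production: resolveAll(ctx, net.DefaultResolver, "example.org")
// In tests:   resolveAll(ctx, &mockdns.Resolver{Zones: zones}, "example.org")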
I'm attempting to help design some unit tests around controllers in a Qt C++ application.
To be frank, I have two large drawbacks. One, my testing background is heavily based on .NET projects, so my knowledge of best practice in the C++ world is slim at best. Two, the designer of the application I am looking at did not architect the code with unit testing in mind.
On one specific point, I'm looking at a controller class that includes boost/filesystem/operations.hpp. The controller constructor goes on to check directory existence and create directories using functions from the Boost filesystem code.
Is there any way to overload or mock this behavior? I'm used to setting up an IoC container or at least dependency-injected constructors in .NET, and then being able to pass mock objects in the unit test code. I'm not sure how a templated header file would work with that concept, though, or if it is even typical practice in C++.
Currently, I have no flexibility to suggest code changes, as there is a release build coming up this week. But after that, if there are some simple code changes that could improve testability, that is definitely an option. Ideally, there would be a way to overload the filesystem functions in the unit test framework as is, though.
We ended up creating a generic file system wrapper that calls Boost filesystem, and accepting it as a parameter to our class constructors so we could send in mock versions at unit test time.
I understand the reluctance to mock this, but I think there is value in fast unit tests that our CI environment can run at check-in time, as well as tests that actually hit the file system.
If you think about it, the only reason you need to inject an instance of, say, an IFilesystem into your classes is to mock it in tests. No part of your non-test codebase is going to use anything but the real filesystem anyway, so it is safe to assume that you can inject not an object but a type into your classes and use them freely within your codebase without type clashes.
So, suppose you have the class of interest:
struct BuildTree
{
    BuildTree(std::string_view dirname) { /*...*/ }

    bool has_changed()
    {
        // iterates through files in the directory,
        // recurses into subdirectories,
        // checks the modification date against lastModified_,
        // uses boost::filesystem::last_write_time(), etc.
    }

private:
    boost::filesystem::file_time_type lastModified_;
};
Now, you could inject a type into the class. This type will be a class with a bunch of static methods. There will be a RealFilesystem type that will redirect to the boost::filesystem methods, and there will be a SpyFilesystem.
template <class Filesystem>
struct BuildTree
{
    // ...
    bool has_changed()
    {
        // uses Filesystem::last_write_time(), etc.
    }

private:
    typename Filesystem::file_time_type lastModified_;
};
The SpyFilesystem resembles the PIMPL idiom, in that its static methods redirect calls to an actual implementation object. The implementation class has to come first, so that SpyFilesystem can refer to its nested type:
// No warranty of completeness provided
struct SpyFilesystemImpl
{
    using file_time_type = std::chrono::system_clock::time_point;

    void create_directory(std::string_view path) { /*...*/ }

    void touch(std::string_view filename)
    {
        ++lastModified_[std::string{filename}];
    }

    file_time_type last_write_time(std::string_view filename)
    {
        return file_time_type{lastModified_[std::string{filename}]};
    }

private:
    std::unordered_map<std::string, std::chrono::seconds> lastModified_;
};

struct SpyFilesystem
{
    using file_time_type = SpyFilesystemImpl::file_time_type;

    static file_time_type last_write_time(std::string_view filename)
    {
        return instance.last_write_time(filename);
    }
    // ... more methods

    static SpyFilesystemImpl instance;
};

SpyFilesystemImpl SpyFilesystem::instance{};
Finally, inside each of your tests you would prepare an instance of SpyFilesystemImpl, assign it to SpyFilesystem::instance, and then instantiate your class with SpyFilesystem, like so:
// ... our SpyFilesystem and friends ...

// Google Test framework
TEST(BuildTree, PickUpChanges)
{
    SpyFilesystemImpl fs{};
    fs.create_directory("foo");
    fs.touch("foo/bar.txt");
    SpyFilesystem::instance = fs;

    BuildTree<SpyFilesystem> tree("foo");
    EXPECT_FALSE(tree.has_changed());

    SpyFilesystem::instance.touch("foo/bar.txt");
    EXPECT_TRUE(tree.has_changed());
}
This approach has the advantage that there is no run-time overhead in the resulting binary (provided that optimizations are enabled). However, it requires more boilerplate code, which might be a problem.
It seems reasonable to me to consider boost::filesystem as an extension of the standard libraries. So you mock it (or not) in exactly the same way you mock something like std::istream. (Generally, of course, you don't mock it, but rather your test framework provides the necessary environment: the files you need to read with std::istream, the directories, etc. for boost::filesystem.)
I'm reading "The Art of Unit Testing" and there is a specific paragraph I'm not sure about:
"One of the reasons you may want to avoid using a base class instead of an interface is that a base class from the production code may already have (and probably has) built-in production dependencies that you’ll have to know about and override. This makes implementing derived classes for testing harder than implementing an interface, which lets you know exactly what the underlying implementation is and gives you full control over it."
Can someone please give me an example of a built-in production dependency?
Thanks
My interpretation of this is basically anything where you have no control over the underlying implementation, but still rely on it. This could be in your own code or in third party libraries.
Something like:
class MyClass : BaseConfigurationProvider
{
}

abstract class BaseConfigurationProvider
{
    string connectionString;

    protected BaseConfigurationProvider()
    {
        connectionString = GetFromConfiguration();
    }
}
This has a dependency on where the connection string is returned from, perhaps a config file or perhaps a random text file - either way, difficult external state handling for a unit test on MyClass.
Whereas the same given an interface:
class MyClass
{
    string connectionString;

    public MyClass(IBaseConfigurationProvider provider)
    {
        connectionString = provider.GetConnectionString();
    }
}

interface IBaseConfigurationProvider
{
    string GetConnectionString();
}
You are in full control of the implementation, at least, and the use of an interface means that test versions of implementations can be used during unit tests, or you can inject dependencies into consuming classes (as I have done above). In this scenario, the dependency is the need to resolve a connection string; the tests can provide a different or empty string.
One example I can think of is the use of the Session variable in ASP.NET (I'm a .NET guy).
Because you have no control over how ASP.NET populates the session variable, you cannot test it simply by writing a test case; you will have to either override it somehow or make a mock object.
And this happens because all the context and cookies aren't present when you are testing.
I'm writing a program arguments parser, just to get better at TDD, and I'm stuck with the following problem. Say I have my parser defined as follows:
class ArgumentsParser {
    private ArgumentsConfiguration configuration;

    public ArgumentsParser(ArgumentsConfiguration configuration) {
        this.configuration = configuration;
    }

    public void parse(String[] programArguments) {
        // all the stuff for parsing
    }
}
and I imagine having an ArgumentsConfiguration implementation like:
class ArgumentsConfiguration {
    private Map<String, Class> map = new HashMap<String, Class>();

    public void addArgument(String argName, Class valueClass) {
        map.put(argName, valueClass);
    }

    // get configured arguments methods etc.
}
This is my current stage. For now, in the test I use:
@Test
public void shouldResultWithOneAvailableArgument() {
    ArgumentsConfiguration config = prepareSampleConfiguration();
    config.addArgument("mode", Integer.class);

    ArgumentsParser parser = new ArgumentsParser(config);
    parser.parse(new String[] {"mode", "1"}); // placeholder arguments; the exact format depends on the parser
    // ....
}
My question is whether this approach is correct. I mean, is it OK to use the real ArgumentsConfiguration in tests, or should I mock it out? The default (current) implementation is quite simple (just a wrapped Map), but I imagine it could be more complicated, like fetching configuration from some kind of datasource. Then it'd be natural to mock such "expensive" behaviour. But what is the preferred way here?
EDIT:
Maybe more clearly: should I mock ArgumentsConfiguration even without writing any implementation (just define its public methods), use the mock for testing, and deal with real implementation(s) later? Or should I use the simplest implementation in the tests and let them cover it indirectly? But if so, what about testing another Configuration implementation provided later?
Then it'd be natural to mock such "expensive" behaviour.
That's not the point. You're not mocking complex classes.
You're mocking to isolate classes completely.
Complete isolation assures that the tests demonstrate that classes follow their interface and don't have hidden implementation quirks.
Also, complete isolation makes debugging a failed test much, much easier. It's either the test, the class under test or the mocked objects. Ideally, the test and mocks are so simple they don't need any debugging, leaving just the class under test.
The correct answer is that you should mock anything that you're not trying to test directly (e.g.: any dependencies that the object under test has that do not pertain directly to the specific test case).
In this case, because your ArgumentsConfiguration is so simple, I'd recommend using the real implementation until your requirements demand something more complicated. There doesn't seem to be any logic in your ArgumentsConfiguration class, so it's safe to use the real object. If the time comes when the configuration is more complicated, the approach to take would probably be not to create a configuration that talks to some data source, but instead to generate the ArgumentsConfiguration object from that datasource. Then you could have a test that makes sure it generates the configuration properly from the datasource, and you wouldn't need unnecessary abstractions.