In a scenario where it would be nice not to change the code, but just to add tests:
The code to be tested, thefile.pl:
sub get_name {
    return 'sth';
}
The test, test_thefile.t:
use Test::More;
use Test::MockModule;
require thefile.pl;

my $module = Test::MockModule->new('thefile.pl');
$module->mock(get_name => sub { return 'othername' });
my $name = get_name();
ok($name eq 'othername');
I got this error:
Invalid package name thefile.pl
Your problem is that you are using the filename as a bare word, so Perl interprets that as a package name. Since . isn't a legal identifier character (a char you can use in a package name), you get that warning.
If you want to use a filename, use a string instead:
require 'file.pl';
Then, in the call to Test::MockModule, it's expecting the name of a package that you defined in file.pl rather than the file name.
I don't know what you are doing, but when I want to replace a subroutine definition, I just override it. This is the core of the idea, and there are ways to do it temporarily. The code behind interfaces such as Test::MockModule and Hook::LexWrap isn't that complicated:
BEGIN {
    require 'file.pl';

    package Some::Package;
    no warnings 'redefine';

    *get_name = sub { ... };
}
The glob (*get_name) is a bit tricky, but just about everything else already shows up in your example. I show some of these tricks in Effective Perl Programming. For my work, reducing the module dependency list (especially in tests) is often worth the advanced trick.
I came here for something else, but while trying to write up my question I realized there's no way I'm doing this right.
I've been using Mirage and Irmin for a little while, and as long as all the code stays in the Main module everything is great. But of course, it quickly becomes a ridiculously huge file, and trying to split it into modules drives me mad with types escaping their scopes and whatnot.
Instead of just passing console from start to some other functions, I have to put those other functions in a functor that will take the Mirage_types_lwt.CONSOLE as well as the actual console variable, which means everything ends up being in functors instantiated from start and passed around in an unreadable mess of code.
I'm having problems making a giant ugly module to store and pass all of this easily (to "isolate" the parts that need this mess from regular code), and I can't figure out how to declare something like this:
module type ContextConfig = sig
  module Store
  val clientctx : Client.ctx
  ....
end

let mkContextConfig (module Store : Repo) ctx =
  (module struct
    (module Store : Repo)
    let clientctx = ctx
  end : ContextConfig)
(Repo being a module I made to wrap around Irmin's functors).
This obviously doesn't work, and I've tried so many syntaxes that I'm guessing it's just not possible, which means I'm doing something very wrong?
I'd love advice on the proper way to deal with all of those functors and types in a clean way. How do I pass around things like the console or a conduit without having to functorize and instantiate everything in the Main module just to pass it around afterwards?
Definitions like that are possible:
module type REPO = sig end

module type CONTEXT_CONFIG = sig
  module Store : REPO
  val client_ctx : int
end

let mkContextConfig (module Store : REPO) ctx =
  (module struct
    module Store = Store
    let client_ctx = ctx
  end : CONTEXT_CONFIG)
(With some uninteresting changes to make the code type check.)
Let me elaborate on the title:
I want to implement a system that would allow me to enable/disable/modify the general behavior of my program. Here are some examples:
I could switch logging on and off
I could change whether my graphing program uses floating-point or pixel coordinates
I could change whether my calculations are based on one method or another
I could enable/disable certain aspects, like maybe an extension API
I could enable/disable some basic integrated profiler (if I had one)
These are some made-up examples.
Now I want to know what the most common solution for this sort of thing is.
I could imagine this working with some sort of singleton class that gets instantiated globally, or with some other globally available object. Another possibility would be constexpr or other variables floating around in a namespace, again globally.
However, doing something like that globally feels like bad practice.
Second part of the question
This might sound like I can't decide what I want, but I want a way to modify all these switches/flags (or whatever they are actually called) in a single location, without tying any of my classes to it. I don't know if this is possible, however.
Why don't I want to do that? Well, I like to make my classes somewhat reusable, and I don't like tying classes together unless it's required by the DRY principle and/or inheritance. I basically couldn't get rid of the flags without modifying the possibly hundreds of classes that used them.
What I have tried in the past
Having it all as compiler defines. This worked reasonably well; however, I didn't like that I couldn't arrange it so that, if the flag file was gone, some default settings would keep the classes themselves operational and changeable (through those default values).
Having it as a class and instancing it globally (a system class). This worked OK; however, I didn't like instancing anything globally. Also, same problem as above.
Instancing the system class locally and passing it to the classes on construction. This was kind of cool, since I could make multiple instruction sets. At the same time, though, that somewhat defeated the point, since things that needed a particular flag set the same way could end up with it set differently and therefore fail to work together properly. Also, passing it on every construction was a pain.
A static class. This one worked OK for the longest time; however, there is still the problem of missing dependencies.
Summary
Basically, I am looking for a way to have a single "place" where I can tweak some values (bools, floats, etc.) that will change the behaviour of all the classes using them, where said values either override default values or are replaced by default values if said "place" isn't defined.
If a singleton class does not work for you, maybe a DI container would fit your third approach? It may help with the construction and make the code more testable.
There are some DI frameworks for C++, such as https://github.com/google/fruit/wiki or https://github.com/boost-experimental/di, which you can use.
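Even without a framework, the core idea can be shown in a few lines. The sketch below is only an illustration (IConfig, DefaultConfig and Grapher are made-up names): each class takes a small config interface in its constructor, so the classes stay decoupled from any global, and a test can pass in a fake config.

#include <memory>

// Made-up config interface; classes depend on this abstraction, not on a global.
struct IConfig {
    virtual ~IConfig() = default;
    virtual bool loggingEnabled() const = 0;
    virtual bool useFloatCoordinates() const = 0;
};

// One concrete set of settings; a unit test could supply a different implementation.
struct DefaultConfig : IConfig {
    bool loggingEnabled() const override { return true; }
    bool useFloatCoordinates() const override { return false; }
};

class Grapher {
public:
    explicit Grapher(std::shared_ptr<const IConfig> cfg) : cfg_(std::move(cfg)) {}
    void draw() const {
        if (cfg_->useFloatCoordinates()) { /* draw with float coordinates */ }
        else                             { /* draw with pixel coordinates */ }
    }
private:
    std::shared_ptr<const IConfig> cfg_;
};

int main() {
    // The "wiring" happens in exactly one place; a DI container automates this step.
    auto cfg = std::make_shared<DefaultConfig>();
    Grapher grapher(cfg);
    grapher.draw();
}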
If you decide to use switches/flags, pay attention to "cyclomatic complexity".
If you do not change the skeleton of your algorithm but only its behaviour according to the objects passed as parameters, have a look at the "template method" design pattern. It lets you define a generic algorithm and specialize particular steps for particular situations, as sketched below.
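For reference, a bare-bones sketch of the template method pattern in C++ (the class names are invented for the example): the base class fixes the overall algorithm, and subclasses override only the step that varies.

#include <iostream>
#include <vector>

class Calculation {
public:
    virtual ~Calculation() = default;

    // The fixed skeleton: iterate over the input and accumulate a result.
    double run(const std::vector<double>& input) const {
        double total = 0.0;
        for (double v : input) total += processValue(v);   // the step that varies
        return total;
    }

protected:
    virtual double processValue(double v) const = 0;        // overridden by subclasses
};

class SimpleSum : public Calculation {
protected:
    double processValue(double v) const override { return v; }
};

class SumOfSquares : public Calculation {
protected:
    double processValue(double v) const override { return v * v; }
};

int main() {
    const std::vector<double> data{1.0, 2.0, 3.0};
    std::cout << SimpleSum().run(data) << "\n";     // prints 6
    std::cout << SumOfSquares().run(data) << "\n";  // prints 14
}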
Here's an approach I found useful; I don't know if it's what you're looking for, but maybe it will give you some ideas.
First, I created a BehaviorFlags.h file that declares the following function:
// Returns true iff the given feature/behavior flag was specified for us to use
bool IsBehaviorFlagEnabled(const char * flagName);
The idea being that any code in any of your classes could call this function to find out if a particular behavior should be enabled or not. For example, you might put this code at the top of your ExtensionsAPI.cpp file:
#include "BehaviorFlags.h"
static const bool enableExtensionsAPI = IsBehaviorFlagEnabled("enable_extensions_api");
[...]
void DoTheExtensionsAPIStuff()
{
if (enableExtensionsAPI == false) return;
[... otherwise do the extensions API stuff ...]
}
Note that the IsBehaviorFlagEnabled() call is executed only once, at program startup, for best run-time efficiency; but you also have the option of calling IsBehaviorFlagEnabled() on every call to DoTheExtensionsAPIStuff(), if run-time efficiency is less important than being able to change your program's behavior without having to restart it.
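For example, the per-call variant (using the same hypothetical flag name as above) would look like this, at the cost of one file-existence check per invocation:

void DoTheExtensionsAPIStuff()
{
   // Re-check the flag on every call, so toggling the file takes effect
   // without restarting the program.
   if (IsBehaviorFlagEnabled("enable_extensions_api") == false) return;

   // [... otherwise do the extensions API stuff ...]
}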
As far as how the IsBehaviorFlagEnabled() function itself is implemented, it looks something like this (simplified version for demonstration purposes):
bool IsBehaviorFlagEnabled(const char * flagName)
{
   // Note: a real implementation would find the user's home directory
   // using the proper API and not just rely on ~ to expand to the home-dir path
   std::string filePath = "~/MyProgram_Settings/";
   filePath += flagName;

   FILE * fpIn = fopen(filePath.c_str(), "r");  // i.e. does the file exist?
   bool ret = (fpIn != NULL);
   if (fpIn) fclose(fpIn);  // only close the file if we actually opened it
   return ret;
}
The idea being that if you want to change your program's behavior, you can do so by creating a file (or folder) in the ~/MyProgram_Settings directory with the appropriate name. E.g. if you want to enable your Extensions API, you could just do a
touch ~/MyProgram_Settings/enable_extensions_api
... and then re-start your program, and now IsBehaviorFlagEnabled("enable_extensions_api") returns true and so your Extensions API is enabled.
The benefits I see of doing it this way (as opposed to parsing a .ini file at startup or something like that) are:
There's no need to modify any "central header file" or "registry file" every time you add a new behavior-flag.
You don't have to put a ParseINIFile() function at the top of main() in order for your flags-functionality to work correctly.
You don't have to use a text editor or memorize a .ini syntax to change the program's behavior
In a pinch (e.g. no shell access) you can create/remove settings simply using the "New Folder" and "Delete" functionality of the desktop's window manager.
The settings are persistent across runs of the program (i.e. no need to specify the same command line arguments every time)
The settings are persistent across reboots of the computer
The flags can be easily modified by a script (via e.g. touch ~/MyProgram_Settings/blah or rm -f ~/MyProgram_Settings/blah) -- much easier than getting a shell script to correctly modify a .ini file
If you have code in multiple different .cpp files that needs to be controlled by the same flag-file, you can just call IsBehaviorFlagEnabled("that_file") from each of them; no need to have every call site refer to the same global boolean variable if you don't want them to.
Extra credit: If you're using a bug-tracker and therefore have bug/feature ticket numbers assigned to various issues, you can creep the elegance a little bit further by also adding a class like this one:
/** This class encapsulates a feature that can be selectively disabled/enabled by putting an
 *  "enable_behavior_xxxx" or "disable_behavior_xxxx" file into the ~/MyProgram_Settings folder.
 */
class ConditionalBehavior
{
public:
   /** Constructor.
    *  @param bugNumber Bug-Tracker ID number associated with this bug/feature.
    *  @param defaultState If true, this behavior will be enabled by default (i.e. if no corresponding
    *                      file exists in ~/MyProgram_Settings). If false, it will be disabled by default.
    *  @param switchAtVersion If specified, this feature's default-enabled state will be inverted if
    *                         GetMyProgramVersion() returns any version number greater than this.
    */
   ConditionalBehavior(int bugNumber, bool defaultState, int switchAtVersion = -1)
      : _enabled(defaultState)
   {
      if ((switchAtVersion >= 0)&&(GetMyProgramVersion() >= switchAtVersion)) _enabled = !_enabled;

      std::string fn = _enabled ? "disable" : "enable";
      fn += "_behavior_";
      fn += std::to_string(bugNumber);
      if ((IsBehaviorFlagEnabled(fn.c_str()))
        ||(IsBehaviorFlagEnabled("enable_everything")))
      {
         _enabled = !_enabled;
         printf("Note: %s Behavior #%i\n", _enabled?"Enabling":"Disabling", bugNumber);
      }
   }

   /** Returns true iff this feature should be enabled. */
   bool IsEnabled() const {return _enabled;}

private:
   bool _enabled;
};
Then, in your ExtensionsAPI.cpp file, you might have something like this:
// Extensions API feature is tracker #4321; disabled by default for now
// but you can try it out via "touch ~/MyProgram_Settings/enable_behavior_4321"
static const ConditionalBehavior _feature4321(4321, false);
// Also tracker #4222 is now enabled-by-default, but you can disable
// it manually via "touch ~/MyProgram_Settings/disable_behavior_4222"
static const ConditionalBehavior _feature4222(4222, true);
[...]
void DoTheExtensionsAPIStuff()
{
if (_feature4321.IsEnabled() == false) return;
[... otherwise do the extensions API stuff ...]
}
... or if you know that you are planning to make your Extensions API enabled-by-default starting with version 4500 of your program, you can set it so that Extensions API will be enabled-by-default only if GetMyProgramVersion() returns 4500 or greater:
static ConditionalBehavior _feature4321(4321, false, 4500);
[...]
... also, if you wanted to get more elaborate, the API could be extended so that IsBehaviorFlagEnabled() can optionally return a string to the caller containing the contents of the file it found (if any), so that you could do shell commands like:
echo "opengl" > ~/MyProgram_Settings/graphics_renderer
... to tell your program to use OpenGL for its 3D graphics, or etc:
// In Renderer.cpp
std::string rendererType;
if (IsBehaviorFlagEnabled("graphics_renderer", &rendererType))
{
printf("The user wants me to use [%s] for rendering 3D graphics!\n", rendererType.c_str());
}
else printf("The user didn't specify what renderer to use.\n");
I'm trying to use the Code Templates feature with PHP in NetBeans (7.3); however, I'm finding it rather limited. Given the following desired output:
public function addFoo(Foo $foo) {
$this->fooCollection[] = $foo;
}
I'm trying to have every instance of "foo"/"Foo" be variable; so I used a variable:
public function add${name}(${name} $$${name}) {
$this->${name}Collection[] = $$${name};
}
Of course, when expanded there isn't any regard given to the desired capitalization rules, because I can't find a way to implement that; the result being (given I populate ${name} with "Foo"):
public function addFoo(Foo $Foo) { // note the uppercase "Foo" in the argument
$this->FooCollection[] = $Foo; // and collection property names...
} // not what I had in mind
Now, I've read that NetBeans supports FreeMarker in its templates, but that seems to be only for file templates and not snippet templates like these.
As far as I can tell, the FreeMarker version would look something like the following; however, it doesn't work, and ${name?capitalize} is simply seen as another variable name.
public function add${name?capitalize}(${name?capitalize} $$${name}) {
$this->${name}Collection[] = $$${name};
}
Passing "foo", allowing capitalize to fix it for type-names, second-words, etc.
Is there any way to get FreeMarker support here, or an alternative?
I'm open to any suggestions really; third-party plugins included. I just don't want to have to abandon NetBeans.
Addendum
The example given is trivial; an obvious solution for it specifically would be:
public function add${upperName}(${upperName} $$${lowerName}) {
$this->${lowerName}Collection[] = $$${lowerName};
}
Where upper/lower would be "Foo"/"foo" respectively. However, it's just an example, and I'm looking for something more robust in general (such as FreeMarker support).
Visual Studio keeps trying to indent the code inside namespaces.
For example:
namespace Foo
{
    void Bar();

    void Bar()
    {
    }
}
Now, if I un-indent it manually then it stays that way. But unfortunately if I add something right before void Bar(); - such as a comment - VS will keep trying to indent it.
This is so annoying that, basically for this reason alone, I almost never use namespaces in C++. I can't understand why it tries to indent them (what's the point of indenting the whole file by one or even five tabs?), or how to make it stop.
Is there a way to stop this behavior? A config option, an add-in, a registry setting, hell even a hack that modifies devenv.exe directly.
As KindDragon points out, Visual Studio 2013 Update 2 has an option to stop indenting.
You can uncheck TOOLS -> Options -> Text Editor -> C/C++ -> Formatting -> Indentation -> Indent namespace contents.
Just don't insert anything before the first line of code. You could try the following approach to insert a null line of code (it seems to work in VS2005):
namespace foo
{; // !<---
void Test();
}
This seems to suppress the indentation, but compilers may issue warnings and code reviewers/maintainers may be surprised! (And quite rightly, in the usual case!)
Probably not what you wanted to hear, but a lot of people work around this by using macros:
#define BEGIN_NAMESPACE(x) namespace x {
#define END_NAMESPACE }
Sounds dumb, but you'd be surprised how many system headers use this. (GCC's libstdc++, for instance, has _GLIBCXX_BEGIN_NAMESPACE() for this.)
I actually prefer this way, because I always tend to cringe when I see un-indented lines following a {. That's just me though.
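For illustration, a header using those macros might look like the sketch below; since the editor never sees a bare namespace brace, it has nothing to re-indent.

#define BEGIN_NAMESPACE(x) namespace x {
#define END_NAMESPACE }

BEGIN_NAMESPACE(foo)

void Test();    // stays at column zero

END_NAMESPACE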
Here is a macro that could help you. It will remove indentation if it detects that you are currently creating a namespace. It is not perfect but seems to work so far.
Public Sub AfterKeyPress(ByVal key As String, ByVal sel As TextSelection, ByVal completion As Boolean) _
                         Handles TextDocumentKeyPressEvents.AfterKeyPress
    If (Not completion And key = vbCr) Then
        'Only perform this if we are using smart indent
        If DTE.Properties("TextEditor", "C/C++").Item("IndentStyle").Value = 2 Then
            Dim textDocument As TextDocument = DTE.ActiveDocument.Object("TextDocument")
            Dim startPoint As EditPoint = sel.ActivePoint.CreateEditPoint()
            Dim matchPoint As EditPoint = sel.ActivePoint.CreateEditPoint()
            Dim findOptions As Integer = vsFindOptions.vsFindOptionsMatchCase + vsFindOptions.vsFindOptionsMatchWholeWord + vsFindOptions.vsFindOptionsBackwards

            If startPoint.FindPattern("namespace", findOptions, matchPoint) Then
                Dim lines = matchPoint.GetLines(matchPoint.Line, sel.ActivePoint.Line)

                ' Make sure we are still in the namespace {} but nothing has been typed
                If System.Text.RegularExpressions.Regex.IsMatch(lines, "^[\s]*(namespace[\s\w]+)?[\s\{]+$") Then
                    sel.Unindent()
                End If
            End If
        End If
    End If
End Sub
Since it is running all the time, you need to make sure you install the macro in your EnvironmentEvents project item inside MyMacros. You can only access this module in the Macro Explorer (Tools->Macros->Macro Explorer).
One note, it does not currently support "packed" namespaces such as
namespace A { namespace B {
...
}
}
EDIT
To support "packed" namespaces such as the example above and/or support comments after the namespace, such as namespace A { /* Example */, you can try to use the following line instead:
If System.Text.RegularExpressions.Regex.IsMatch(lines, "^[\s]*(namespace.+)?[\s\{]+$") Then
I haven't had the chance to test it a lot yet, but it seems to be working.
You could also forward-declare your types (or whatever) inside the namespace, then implement them outside, like this:
namespace test {
class MyClass;
}
class test::MyClass {
//...
};
Visual Studio 2017+
You can get to this "Indent namespace contents" setting under Tools->Options, then Text Editor->C/C++->Formatting->Indentation. It's buried deep in the menus, but it's extremely helpful once found.
I understand the problem when there are nested namespaces. I used to pack all the namespaces onto a single line to avoid the multiple indentation. It still leaves one level, but that's not as bad as many levels. It's been so long since I used VS that I hardly remember those days.
namespace outer { namespace middle { namespace inner {
void Test();
.....
}}}
I have a class that processes two XML files and produces a text file.
I would like to write a bunch of unit / integration tests that can individually pass or fail for this class that do the following:
For input A and B, generate the output.
Compare the contents of the generated file to the contents of the expected output.
When the actual contents differ from the expected contents, fail and display some useful information about the differences.
Below is the prototype for the class along with my first stab at unit tests.
Is there a pattern I should be using for this sort of testing, or do people tend to write zillions of TestX() functions?
Is there a better way to coax text-file differences from NUnit? Should I embed a textfile diff algorithm?
class ReportGenerator
{
string Generate(string inputPathA, string inputPathB)
{
//do stuff
}
}
[TestFixture]
public class ReportGeneratorTests
{
static void Diff(string pathToExpectedResult, string pathToActualResult)
{
using (StreamReader rs1 = File.OpenText(pathToExpectedResult))
{
using (StreamReader rs2 = File.OpenText(pathToActualResult))
{
string actualContents = rs2.ReadToEnd();
string expectedContents = rs1.ReadToEnd();
//this works, but the output could be a LOT more useful.
Assert.AreEqual(expectedContents, actualContents);
}
}
}
static void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
ReportGenerator obj = new ReportGenerator();
string pathToResult = obj.Generate(pathToInputA, pathToInputB);
Diff(pathToExpectedResult, pathToResult);
}
[Test]
public void TestX()
{
TestGenerate("x1.xml", "x2.xml", "x-expected.txt");
}
[Test]
public void TestY()
{
TestGenerate("y1.xml", "y2.xml", "y-expected.txt");
}
//etc...
}
Update
I'm not interested in testing the diff functionality. I just want to use it to produce more readable failures.
As for the multiple tests with different data, use the NUnit RowTest extension:
using NUnit.Framework.Extensions;
[RowTest]
[Row("x1.xml", "x2.xml", "x-expected.xml")]
[Row("y1.xml", "y2.xml", "y-expected.xml")]
public void TestGenerate(string pathToInputA, string pathToInputB, string pathToExpectedResult)
{
ReportGenerator obj = new ReportGenerator();
string pathToResult = obj.Generate(pathToInputA, pathToInputB);
Diff(pathToExpectedResult, pathToResult);
}
You are probably asking about testing against "gold" data. I don't know if there is a specific, universally accepted term for this kind of testing, but this is how we do it.
Create a base fixture class. It basically has "void DoTest(string fileName)", which reads the specified file into memory, executes the abstract transformation method "string Transform(string text)", then reads fileName.gold from the same place and compares the transformed text with what was expected. If the content differs, it throws an exception. The exception thrown contains the line number of the first difference as well as the text of the expected and actual lines. As the text is stable, this is usually enough information to spot the problem right away. Be sure to mark lines with "Expected:" and "Actual:", or you will be guessing forever which is which when looking at test results.
Then you will have specific test fixtures, where you implement the Transform method to do the right job, and have tests which look like this:
[Test] public void TestX() { DoTest("X"); }
[Test] public void TestY() { DoTest("Y"); }
The name of the failed test will instantly tell you what is broken. Of course, you can use row testing to group similar tests. Having separate tests also helps in a number of situations, such as ignoring tests, communicating tests to colleagues, and so on. It is not a big deal to create a snippet which will create a test for you in a second; you will spend much more time preparing data.
Then you will also need some test data and a way for your base fixture to find it; be sure to set up rules about this for the project. If a test fails, dump the actual output to a file next to the gold one, and erase it if the test passes. This way you can use a diff tool when needed. When no gold data is found, the test fails with an appropriate message, but the actual output is written anyway, so you can check that it is correct and copy it to become the "gold".
I would probably write a single unit test that contains a loop. Inside the loop, I'd read two XML files and a diff file, then diff the XML files (without writing the result to disk) and compare it to the diff file read from disk. The files would be numbered, e.g. a1.xml, b1.xml, diff1.txt; a2.xml, b2.xml, diff2.txt; a3.xml, b3.xml, diff3.txt, etc., and the loop would stop when it doesn't find the next number.
Then, you can write new tests just by adding new text files.
Rather than calling .AreEqual, you could parse the two input streams yourself, keep a count of the line and column, and compare the contents. As soon as you find a difference, you can generate a message like...
Line 32 Column 12 - Found 'x' when 'y' was expected
You could optionally enhance that by displaying multiple lines of output
Difference at Line 32 Column 12, first difference shown
A = this is a txst
B = this is a tests
Note, as a rule, I'd generally only generate one of the two streams through my code. The other I'd grab from a test/text file, having verified by eye or some other method that the data it contains is correct!
I would probably use XmlReader to iterate through the files and compare them. When I hit a difference I would display an XPath to the location where the files are different.
PS: In reality it was always enough for me to just read the whole file into a string and compare the two strings. For reporting, it is enough to see that the test failed. Then, when debugging, I usually diff the files using Araxis Merge to see where exactly I have issues.