From the Syn documentation:
Syn operates on the token representation provided by the proc-macro2 crate from crates.io rather than using the compiler's built in proc-macro crate directly. This enables code using Syn to execute outside of the context of a procedural macro, such as in unit tests or build.rs
I am trying to enable unit testing for some Syn functions, but I can't get it to work no matter what I try. It does not work with the proc_macro2::TokenStream type, and it won't work with proc_macro::TokenStream either because we are not in a proc-macro context.
link to playground
use quote::quote;
use syn;

fn test() {
    // let stream: syn::export::TokenStream = quote!{fn foo() {};}.into(); // doesn't work
    let stream: proc_macro2::TokenStream = quote!{fn foo() {};}.into(); // doesn't work
    // let item = parse_macro_input!(stream as Item); // doesn't work
    let item = syn::parse(stream).unwrap();
}

fn main() {
    test();
}
Any help on how to test syn functions outside of the proc-macro context would be appreciated. I am aware of the trybuild crate, but I would like to be able to unit test the macro's functions first.
It does not work with the proc_macro2::TokenStream type, and it won't work with proc_macro::TokenStream either because we are not in a proc-macro context.
Yes, and that's the whole point! Crates that export procedural macros can't export anything else, but proc_macro can only be used in crates that export macros. This is the reason why proc_macro2 exists in the first place.
You need to use multiple crates in order to write tests for code that uses syn and proc_macro2 (a sketch of this split follows below):
Your public crate that declares the macros with #[proc_macro] etc., and does very little except convert a proc_macro::TokenStream into a proc_macro2::TokenStream and vice versa.
An "internal" crate, containing most of the actual code, which depends on proc_macro2 but not proc_macro. Your tests can go in here.
The error you are seeing is because syn::parse accepts a proc_macro::TokenStream. You can instead use syn::parse2, which is identical except that it accepts a proc_macro2::TokenStream.
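With that split, a unit test in the internal crate stays entirely in proc_macro2 land. A minimal sketch (assuming syn with the "full" feature so that syn::Item is available):

use quote::quote;

#[test]
fn parses_a_function_item() {
    // quote! produces a proc_macro2::TokenStream, which parse2 accepts directly.
    let stream: proc_macro2::TokenStream = quote! { fn foo() {} };
    let item: syn::Item = syn::parse2(stream).unwrap();
    assert!(matches!(item, syn::Item::Fn(_)));
}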
I started out with some prototype code that goes along these lines:
// Omitted most definitions, return values checks, etc.
// The real code is much bigger and uglier.
serverId = socket(AF_INET, SOCK_STREAM, PROTO_ANY);
setsockopt(serverId, SOL_SOCKET, SO_REUSEADDR, &reuseAddrOk, sizeof(int));
bind(serverId, &serverAddress, sizeof(serverAddress));
listen(serverId, waitQueueSize);
clientId = accept(serverId, &clientAddress, &clientAddressLength);
read(clientId, clientBuffer, charsToRead);
Now I'd like to refactor this code, extracting functionality into very simple classes (not trying to make things very generic for now...YAGNI). This is the kind of interface I'm thinking of:
SocketServer server = SocketServer(parameters);
// SocketServer knows how to create a SocketClient...abstract factories, dependency injection, etc. etc.
SocketClient client = server.accept();
string clientMessage = client.read();
client.write(serverMessage);
For instance, the SocketServer class encapsulates all the boilerplate for creating a new socket server:
SocketServer server = SocketServer(parameters);
Then, since this needs to call the system API, I need to mock it:
SocketServer server = SocketServer(systemAPI, parameters);
Now, what does it mean to test that this code is correct? It doesn't produce any output that I can check (or rather, I'm doing all of this precisely to encapsulate output such as file descriptors). I could check that the correct methods of the mock API are called, like:
testSocketCalledWithCorrectParameters() {
    systemAPI = mock(SystemAPI).expect(once()).method("socket").with(
        SystemAPI.AF_INET,
        SystemAPI.SOCK_STREAM,
        SystemAPI.PROTO_AUTO
    );
    SocketServer(systemAPI, parameters);
}
Is this a proper situation in which to rely on testing an implementation instead of an interface? Does being forced to test an implementation rather than an interface smell of bad design?
All other tests I could think of set expectations about the implementation:
testServerSocketIsCreatedWithCorrectDescriptor() {
    dummyDescriptor = 10;
    systemAPI = mock(SystemAPI).when("socket").return(dummyDescriptor);
    server = SocketServer(systemAPI, parameters);
    assertEquals(dummyDescriptor, server.descriptor);
}
/**
 * #expected SocketException
 */
testThrowsExceptionIfErrorCreatingSocket() {
    systemAPI = mock(SystemAPI).when("socket").return(SystemAPI.RETURN_ERROR);
    SocketServer(systemAPI, parameters);
}
// etc.
And then, should I also write unit tests for the systemAPI wrapper, or should I just take for granted that it will be a very dumb wrapper class, doing nothing more than delegating calls to the external API (and thus won't need to be tested)?
Let me try to answer one by one.
1. Testing call properties
This kind of test has mainly documentational value ("this is how the API is called"). Check whether it is really worth it; the statement may already be clear enough in your code.
2. testServerSocketIsCreatedWithCorrectDescriptor:
For me, this kind of test has more value. I almost always write a creation test which shows the input parameters and asserts the resulting properties, e.g. a car needs wheels for construction and then has 4 wheels, a red colour, and one steering wheel by default.
3. testThrowsExceptionIfErrorCreatingSocket
These kinds of tests are the most valuable. They define and protect the behaviour of your class under different circumstances. I'm missing an assertion here, though, e.g. on the kind of exception that is thrown.
Write as many of these tests as you can think of.
4. Testing systemAPI
No. Never test system behaviour in unit tests, and especially not with mocks.
That is part of module or end-to-end tests.
Hope this helps.
I was trying to use gocheck to test my Go code, guiding myself with the following example (similar to the one provided on their website):
package hello_test

import (
    "testing"

    gocheck "gopkg.in/check.v1"
)

// Hook up gocheck into the "go test" runner.
func Test(t *testing.T) {
    gocheck.TestingT(t)
}

type MySuite struct{} //<==== Does the struct have to be named that way, what if we have multiple of these and register them, is it a problem?

var _ = gocheck.Suite(&MySuite{}) // <==== What does this line do?

func (s *MySuite) TestHelloWorld(c *gocheck.C) {
    c.Assert(42, gocheck.Equals, "42")
    c.Check(42, gocheck.Equals, 42)
}
However, there are some lines I am not sure I understand even after reading the documentation. Why is the line type MySuite struct{} needed, and, even more interesting, why is var _ = gocheck.Suite(&MySuite{}) needed? For the first one, it's easy to infer that one probably has to declare the struct first and create methods with the signature shown that will run the tests. The second line, however, beats me. I have literally no idea why it's needed. The documentation says:
Suite registers the given value as a test suite to be run. Any methods starting with the Test prefix in the given value will be considered as a test method.
However, I am not sure about a lot of things. For instance, is there a problem if I run this with multiple suite structs in the same file? Is there anything special about the MySuite type? Would the gocheck testing suite work even with some different struct being registered? Basically, how many times can we register a struct in one file and will it still work?
The gocheck.Suite function has the side effect of registering the given suite value with the gocheck package. Internally, it just adds the suite to a slice of registered test suites. You could get the same effect with:
func init() {
    gocheck.Suite(&MySuite{})
}
Either form should work, so it is just a matter of style.
The tests in the registered test suites are run when you call gocheck.TestingT. You do this in your test called Test, which will be picked up by Go's testing framework. This is how gocheck tests are integrated into the testing framework. Note that you only need a single invocation of TestingT to run all test suites: not one for each test suite.
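So multiple suites in one file are fine: each registered value is picked up by that single TestingT call. Here is a minimal sketch of two suites side by side (the suite and test names are invented for illustration):

package hello_test

import (
    "testing"

    gocheck "gopkg.in/check.v1"
)

// One hook is enough; it runs every registered suite.
func Test(t *testing.T) { gocheck.TestingT(t) }

// Two independently registered suites; the names are arbitrary.
type ParserSuite struct{}
type PrinterSuite struct{}

var _ = gocheck.Suite(&ParserSuite{})
var _ = gocheck.Suite(&PrinterSuite{})

func (s *ParserSuite) TestParse(c *gocheck.C) { c.Check(1+1, gocheck.Equals, 2) }

func (s *PrinterSuite) TestPrint(c *gocheck.C) { c.Check("a"+"b", gocheck.Equals, "ab") }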
I want a unit test that verifies 2 function calls happen in the correct order. In the example, the first function encrypts a file and saves it to the file system, and the second function sends the encrypted file to a 3rd party processor (via FTP).
I am using NSubstitute as the mock framework and FluentAssertions to aid in test verification. It does not seem like this is something you can achieve with NSubstitute out of the box.
public void SendUploadToProcessor(Stream stream, string filename)
{
    var encryptedFilename = FilenameBuilder.BuildEncryptedFilename(filename);
    FileEncrypter.Encrypt(stream, filename, encryptedFilename);
    FileTransferProxy.SendUpload(encryptedFilename);
}
[TestMethod, TestCategory("BVT")]
public void TheEncryptedFileIsSent()
{
    var stream = new MemoryStream();
    var filename = Fixture.Create<string>();
    var encryptedFilename = Fixture.Create<string>();

    FilenameBuilder
        .BuildEncryptedFilename(Arg.Any<string>())
        .Returns(encryptedFilename);

    Sut.SendUploadToProcessor(stream, filename);

    // Something here to verify FileEncrypter.Encrypt() gets called first
    FileTransferProxy
        .Received()
        .SendUpload(encryptedFilename);
}
Try Received.InOrder in the NSubstitute.Experimental namespace.
Something like this (I haven't tested this):
Received.InOrder(() => {
    FileEncrypter.Encrypt(stream, filename, encryptedFilename);
    FileTransferProxy.SendUpload(encryptedFilename);
});
If you're not comfortable relying on experimental functionality, you will need to set up callbacks to store calls in order, then assert on that.
var calls = new List<string>(); // or an enum for different calls

FileEncrypter.When(x => x.Encrypt(stream, filename, encryptedFilename))
    .Do(x => calls.Add("encrypt"));
FileTransferProxy.When(x => x.SendUpload(encryptedFilename))
    .Do(x => calls.Add("upload"));

// Act
Sut.SendUploadToProcessor(stream, filename);

// Assert that calls contains "encrypt", "upload" in the correct order
calls.Should().ContainInOrder("encrypt", "upload");
If you do end up trying Received.InOrder, please leave some feedback on the discussion group. If we get some feedback about it working well for others then we can promote it to the core namespace.
Although it is not an answer per se: verifying the explicit call order as part of a unit test is bad practice. You should never test implementation details. Just make sure the input is properly converted to the output, and add some alternative scenarios that prove the expected behavior. That's precisely why this functionality was deprecated in RhinoMocks and why FakeItEasy doesn't even support it.
I'm in the process of learning Node.js and am wondering about how people mock dependencies in their modules when unit testing.
For example:
I have a module that abstracts my MongoDB calls. A module that uses this module may start out something like this.
var myMongo = require("MyMongoModule");
// insert rest of the module here.
I want to ensure I test such a module in isolation while also ensuring that my tests don't insert records/documents into Mongo.
Is there a module/package that I can use that proxies require() so I can inject my own mocks? How do others typically address this issue?
You can use a dependency injection library like nCore.
To be honest, the hard part of this is actually mocking out the MongoDB API, which is complex and non-trivial. I estimate it would take about a week to mock out most of the Mongo API I use, so I just test against a local MongoDB database on my machine (which is always in a weird state).
Then, with nCore-specific syntax:
// myModule.js
module.exports = {
    myMethod: function () {
        this.mongo.doStuff(...)
    },
    expose: ["myMethod"]
};
// test-myModule.js
var module = require("myModule")
module.mongo = mongoMock
assert(module.myMethod() === ...)
After reviewing Raynos's suggestion as well as the Horaa package on npm, I discovered this thread on the Google Group that pointed me towards Sandboxed-Module.
Sandboxed-Module allows me to inject/override require() without me having to expose such dependencies for my unit tests.
I'm still up for other suggestions; however, Sandboxed-Module appears to fit my needs at the moment.
You can easily mock require by using "a": https://npmjs.org/package/a
For example, if you need to mock require('./foo') in a unit test:
var fakeFoo = {};
var expectRequire = require('a').expectRequire;

expectRequire('./foo').return(fakeFoo);

// in sut:
var foo = require('./foo'); // returns fakeFoo
Overwriting require to inject your mocks is a possible solution. However, I concur with Raynos' opinion:
I personally find the methodology of overwriting require on a file by file basis an "ugly hack" and prefer to go for proper DI. It is however optimum for mocking one or two modules on an existing code base without rewriting code for DI support.
Using proper dependency injection not only saves you an "ugly hack" but also opens up additional use cases beyond injecting mocks. In production you may, for example, usually instantiate connections over HTTP and in certain circumstances inject a different implementation that establishes a connection over a VPN.
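For example, a module can take its Mongo handle as a constructor argument instead of requiring it directly; the test then passes in a hand-rolled fake. This is only a sketch, and the module, collection, and method names are made up for illustration:

// userStore.js -- hypothetical module; receives its database dependency
module.exports = function createUserStore(db) {
    return {
        save: function (user, callback) {
            db.collection("users").insertOne(user, callback);
        }
    };
};

// userStore.test.js -- inject a fake instead of a real MongoDB connection
var assert = require("assert");
var createUserStore = require("./userStore");

var saved = [];
var fakeDb = {
    collection: function () {
        return {
            insertOne: function (doc, callback) {
                saved.push(doc);
                callback(null);
            }
        };
    }
};

createUserStore(fakeDb).save({ name: "ada" }, function (err) {
    assert.ifError(err);
    assert.strictEqual(saved.length, 1);
});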
If you want to look at a dependency injection container, read this excellent article and check out Fire Up!, which I implemented.
I have a "best practices" question. I'm writing a test for a certain method, but there are multiple entry values. Should I write one test for each entry value or should I change the entryValues variable value, and call the .assert() method (doing it for all range of possible values)?
Edit: I'm using .NET (Visual Studio 2010 with VB).
If one is having to write many tests which vary only in initial input and final output, one should use a data-driven test. This allows you to define the test once, along with a mapping between inputs and outputs. The unit testing framework will then interpret it as one test per case. How to actually do this depends on which framework you are using.
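For instance, with NUnit this is done with the TestCase attribute. A rough sketch (the method under test and the chosen values are only illustrative):

using NUnit.Framework;

[TestFixture]
public class AddOneTests
{
    // One logical test, executed once per TestCase row.
    [TestCase(-1, 0)]
    [TestCase(0, 1)]
    [TestCase(41, 42)]
    public void AddOne_ReturnsInputPlusOne(int input, int expected)
    {
        Assert.AreEqual(expected, AddOne(input));
    }

    private static int AddOne(int value)
    {
        return value + 1;
    }
}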
It's better to have separate unit tests for each input/output set, covering the full spectrum of possible values for the method you are trying to test (or at least those input/output sets that you want to unit test).
Smaller tests are easier to read.
The name is part of the documentation of the test.
Separate methods give a more precise indication of what has failed.
So if you have a single method like:
void testAll() {
    // setup1
    assert()
    // setup2
    assert()
    // setup3
    assert()
}
In my experience this gets very big very quickly, and so becomes hard to read and understand, so I would do:
void testDivideByZero() {
    // setup
    assert()
}

void testUnderflow() {
    // setup
    assert()
}

void testOverflow() {
    // setup
    assert()
}
Should I write one test for each entry value or should I change the entryValues variable value, and call the .assert() method (doing it for all range of possible values)?
If you have one code path, you typically do not test all possible inputs. What you usually want to test are "interesting" inputs that are good exemplars of the data you will get.
For example if I have a function
define add_one(num) {
    return num+1;
}
I can't write a test for all possible values, so I might use MAX_NEGATIVE_INT, -1, 0, 1, MAX_POSITIVE_INT as my test set, because they are good representatives of the interesting values I might get.
You should have at least one input for every code path. If you have a function where every value corresponds to a unique code path, then I would consider writing tests for the complete range of possible values. An example of this would be a command parser.
define execute(directive) {
    if (directive == 'quit') { exit; }
    elsif (directive == 'help') { print help; }
    elsif (directive == 'connect') { initialize_connection(); }
    else { warn("unknown directive"); }
}
For the purpose of clarity I used elsifs rather than a dispatch table. I think this makes it clear that each unique value that comes in has a different behavior, and therefore you would need to test every possible value.
Are you talking about this difference?
- (void) testSomething
{
    [foo callBarWithValue:x];
    assert…
}

- (void) testSomething2
{
    [foo callBarWithValue:y];
    assert…
}
vs.
- (void) testSomething
{
    [foo callBarWithValue:x];
    assert…
    [foo callBarWithValue:y];
    assert…
}
The first version is better in that when a test fails, you'll have a better idea of what does not work. The second version is obviously more convenient. Sometimes I even stuff the test values into a collection to save work. I usually choose the first approach when I might want to debug just that single case separately. And of course, I only choose the latter when the test values really belong together and form a coherent unit.
You have two options really; you don't mention which test framework or language you are using, so one of them may not be applicable.
1) If your test framework supports it, use a RowTest; MbUnit and NUnit support this if you're using .NET. It allows you to put multiple attributes on your method, and each row is executed as a separate test.
2) If not, write a test per condition and make sure you give each one a meaningful name, so that if (when) a test fails you can find the problem easily and the name means something to you.
EDIT
It's called TestCase in NUnit; see the NUnit TestCase explanation.