I am creating an application using MapReduce2 and testing it with MRUnit 1.1.0. In one of the tests, I am checking the output of a reducer that puts the current system time in its output, i.e. at the time the reducer executes, the timestamp is passed to context.write() along with the rest of the output. Even though I use the same method to obtain the system time in the test method as in the reducer, the two times are generally a second apart, like 2016-05-31 19:10:02 and 2016-05-31 19:10:01, so the two outputs differ, for example:
test_fe01,2016-05-31 19:10:01
test_fe01,2016-05-31 19:10:02
This causes an assertion error. I wish to ignore the difference in timestamps, so that the tests pass as long as the rest of the output matches. I am looking for a way to mock the method that returns the system time, so that a hardcoded value is returned and both the reducer and the test use this mocked method during the test. Is this possible to do? Any help will be appreciated.
Best Regards
EDIT: I have already tried Mockito's spy functionality in my test:
MClass mc = Mockito.spy(new MClass());
Mockito.when(mc.getSysTime()).thenReturn("FakeTimestamp");
However, this gives a runtime error:
org.mockito.exceptions.misusing.MissingMethodInvocationException:
when() requires an argument which has to be 'a method call on a mock'.
For example:
when(mock.getArticles()).thenReturn(articles);
Also, this error might show up because:
1. you stub either of: final/private/equals()/hashCode() methods.
Those methods *cannot* be stubbed/verified.
2. inside when() you don't call method on mock but on some other object.
3. the parent of the mocked class is not public.
It is a limitation of the mock engine.
The method getSysTime() is public and static, and class MClass is public and doesn't have any parent classes.
Assuming I understand your question, you could pass a time into the reducer using the Configuration object. In your reducer you could check whether this configuration value is set and use it; otherwise, fall back to the system time.
This way you can pass in a known value for testing and assert that you get that same value back.
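A minimal sketch of that idea, assuming a hypothetical configuration key named test.fixed.timestamp and a simple Text-to-Text reducer (adjust the key name and types to your job):

import java.io.IOException;
import java.text.SimpleDateFormat;
import java.util.Date;

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class MyReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        // Use the injected timestamp if present, otherwise fall back to the system time.
        String timestamp = context.getConfiguration().get("test.fixed.timestamp");
        if (timestamp == null) {
            timestamp = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss").format(new Date());
        }
        for (Text value : values) {
            context.write(key, new Text(value + "," + timestamp));
        }
    }
}

In the MRUnit test, set the same key on the driver's configuration so the output is deterministic (ReduceDriver comes from org.apache.hadoop.mrunit.mapreduce):

ReduceDriver<Text, Text, Text, Text> reduceDriver =
        ReduceDriver.newReduceDriver(new MyReducer());
reduceDriver.getConfiguration().set("test.fixed.timestamp", "2016-05-31 19:10:01");
reduceDriver.withInput(new Text("test_fe01"), java.util.Arrays.asList(new Text("some_value")))
            .withOutput(new Text("test_fe01"), new Text("some_value,2016-05-31 19:10:01"))
            .runTest();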
I'm writing a JavaScript/React library which uses fetch.
I'm using Jest in order to test this library, and what I'd like to achieve is:
Mock HTTP requests so that no real calls are sent and I can control the responses (mock the backend server); for this I'm using fetch-mock
Test the passed parameters to the fetch function
In order to achieve the first point, I need to do
fetchMock.post(`${API_URL}/sessions`, {
body: JSON.stringify({
token: 'this-is-your-token'
}),
status: 201
})
In order to achieve the second point, I need to do
const spiedFetch = jest.spyOn(global, 'fetch')
// ...
expect(spiedFetch).toHaveBeenCalledTimes(1)
(The last line is within a series of describe() and it()).
I have a first test which ensures the response has the token field and so on, and the second test is ensuring that fetch has been called once.
When I run them, the first one passes and the second one fails. When I comment out the fetchMock.post(...), the first one fails and the second one passes.
Based on this, I guess fetch-mock and jest.spyOn are incompatible.
Questions:
Am I right thinking fetch-mock and jest.spyOn are incompatible?
How can I achieve the testing of the fetch response AND the passed parameters? (I want to ensure the HTTP verb is this, and the passed headers are that).
A way I could think of is to use fetch-mock as is, but using jest.spyOn on a function of my lib, which would embed fetch. Is that an okay way of doing it?
UPDATE: I tried my point 3 and it works.
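For illustration, a rough sketch of that wrapper approach, assuming hypothetical names (the library exposes a request() helper in api.js that wraps fetch, createSession() calls it, and the tests run under a Babel/CommonJS setup so jest.spyOn can patch the module export):

// api.js (library code): a thin wrapper around fetch
export function request(url, options) {
  return fetch(url, options);
}

// in the test file
import fetchMock from 'fetch-mock';
import * as api from './api';
import { createSession } from './sessions';

it('goes through the fetch wrapper exactly once', async () => {
  fetchMock.post(`${API_URL}/sessions`, { status: 201 });
  const spiedRequest = jest.spyOn(api, 'request');

  await createSession({ user: 'alice', password: 'secret' });

  expect(spiedRequest).toHaveBeenCalledTimes(1);

  spiedRequest.mockRestore();
  fetchMock.restore();
});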
jest.spyOn replaces the original function with a wrapper.
const fetchMock = require('fetch-mock'); doesn't immediately change the global fetch, but as soon as a function like fetchMock.get is called it replaces the global fetch.
Because they both work by modifying the global fetch they can be tricky to use together.
As it turns out, every time a method like fetchMock.get is called the global fetch gets replaced; but if jest.spyOn is called after all such methods have been called on fetchMock, then both will actually work, since jest.spyOn will wrap and track the final replacement fetch.
Having said that, there isn't really a need to use jest.spyOn(global, 'fetch') if you are already using fetch-mock since fetch-mock provides access to all the calls that were made to fetch.
To assert on the calls directly you can access them using fetchMock.calls().
Even easier is to simply call fetchMock.done() at the end of the test to assert that fetch was used exactly as expected.
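For example, a minimal sketch along those lines, assuming createSession() is the library function under test and that it sends a JSON POST (both the names and the headers are assumptions, adjust to your library):

const fetchMock = require('fetch-mock');

it('POSTs the credentials to /sessions', async () => {
  fetchMock.post(`${API_URL}/sessions`, {
    body: JSON.stringify({ token: 'this-is-your-token' }),
    status: 201
  });

  await createSession({ user: 'alice', password: 'secret' });

  // Inspect the recorded call instead of spying on the global fetch.
  const [url, options] = fetchMock.lastCall();
  expect(url).toBe(`${API_URL}/sessions`);
  expect(options.method).toBe('POST');
  expect(options.headers['Content-Type']).toBe('application/json');

  // Asserts that every mocked route was hit exactly as expected.
  expect(fetchMock.done()).toBe(true);

  fetchMock.restore();
});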
I know that stubs verify state and mocks verify behavior.
How can I make a mock in PHPUnit to verify the behavior of methods? PHPUnit does not have verification methods (verify()), and I do not know how to make a mock in PHPUnit.
In the documentation, creating a stub is well explained:
// Create a stub for the SomeClass class.
$stub = $this->createMock(SomeClass::class);
// Configure the stub.
$stub
->method('doSomething')
->willReturn('foo');
// Calling $stub->doSomething() will now return 'foo'.
$this->assertEquals('foo', $stub->doSomething());
But in this case I am verifying state, telling the stub what answer to return.
How would be the example to create a mock and verify behavior?
PHPUnit used to support two ways of creating test doubles out of the box. Next to the legacy PHPUnit mocking framework we could choose Prophecy as well.
Prophecy support was removed in PHPUnit 9, but it can be added back by installing phpspec/prophecy-phpunit.
PHPUnit Mocking Framework
The createMock method is used to create the three best-known test doubles. It's how you configure the object that makes it a dummy, a stub, or a mock.
You can also create test doubles with the mock builder (getMockBuilder returns the mock builder). It's just another way of doing the same thing that lets you tweak some additional mock options with a fluent interface (see the documentation for more).
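For instance, a minimal sketch of the mock builder using the same SomeClass placeholder (onlyMethods requires PHPUnit 8.4+; older versions use setMethods):

// Same effect as createMock, but with extra knobs available.
$double = $this->getMockBuilder(SomeClass::class)
               ->disableOriginalConstructor()
               ->onlyMethods(['doSomething'])
               ->getMock();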
Dummy
A dummy is passed around but never actually called, or if it is called, it responds with a default answer (mostly null). It mainly exists to satisfy a list of arguments.
$dummy = $this->createMock(SomeClass::class);
// SUT - System Under Test
$sut->action($dummy);
Stub
Stubs are used with query-like methods - methods that return things, where it's not important whether they're actually called.
$stub = $this->createMock(SomeClass::class);
$stub->method('getSomething')
->willReturn('foo');
$sut->action($stub);
Mock
Mocks are used with command-like methods - it's important that they're called, and we don't care much about their return value (command methods usually don't return any value).
$mock = $this->createMock(SomeClass::class);
$mock->expects($this->once())
->method('doSomething')
->with('bar');
$sut->action($mock);
Expectations will be verified automatically after your test method finishes executing. In the example above, the test will fail if the method doSomething wasn't called on SomeClass, or if it was called with arguments different from the ones you configured.
Spy
Not supported.
Prophecy
Prophecy can be used as an alternative to the legacy mocking framework (see the note above about installing phpspec/prophecy-phpunit for PHPUnit 9+). Again, it's the way you configure the object that makes it a specific type of test double.
Dummy
$dummy = $this->prophesize(SomeClass::class);
$sut->action($dummy->reveal());
Stub
$stub = $this->prophesize(SomeClass::class);
$stub->getSomething()->willReturn('foo');
$sut->action($stub->reveal());
Mock
$mock = $this->prophesize(SomeClass::class);
$mock->doSomething('bar')->shouldBeCalled();
$sut->action($mock->reveal());
Spy
$spy = $this->prophesize(SomeClass::class);
// execute the action on system under test
$sut->action($spy->reveal());
// verify expectations after
$spy->doSomething('bar')->shouldHaveBeenCalled();
Dummies
First, look at dummies. A dummy object is both what I look like if you ask me to remember where I left the car keys... and also the object you get if you add an argument with a type-hint in phpspec to get a test double... then do absolutely nothing with it. So if we get a test double and add no behavior and make no assertions on its methods, it's called a "dummy object".
Oh, and inside of their documentation, you'll see things like $prophecy->reveal(). That's a detail that we don't need to worry about because phpspec takes care of that for us. Score!
Stubs
As soon as you start controlling even one return value of even one method... boom! This object is suddenly known as a stub. From the docs: "a stub is an object double that, when put in a specific environment, behaves in a specific way" (all of these things are known as test doubles, or object doubles). That's a fancy way of saying: as soon as we add one of these willReturn() things, it becomes a stub.
And actually, most of the documentation is spent talking about stubs and the different ways to control exactly how it behaves, including the Argument wildcarding that we saw earlier.
Mocks
If you keep reading down, the next thing you'll find are "mocks". An object becomes a mock when you call shouldBeCalled(). So, if you want to add an assertion that a method is called a certain number of times and you want to put that assertion before the actual code - using shouldBeCalledTimes() or shouldBeCalled() - congratulations! Your object is now known as a mock.
Spies
And finally, at the bottom, we have spies. A spy is the exact same thing as a mock, except that you add the expectation after the code - like with shouldHaveBeenCalledTimes().
https://symfonycasts.com/screencast/phpspec/doubles-dummies-mocks-spies
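Putting the four roles together in phpspec terms - a rough sketch with a hypothetical Newsletter class and Mailer collaborator (these names are not from the screencast):

use PhpSpec\ObjectBehavior;

class NewsletterSpec extends ObjectBehavior
{
    function it_sends_the_newsletter(Mailer $mailer)
    {
        // Dummy: $mailer is type-hinted in; so far nothing is configured on it.

        // Stub: control a return value.
        $mailer->isReady()->willReturn(true);

        // Mock: expectation declared before the action.
        $mailer->send('hello@example.com')->shouldBeCalled();

        $this->publish('hello@example.com');

        // Spy: the same expectation, verified after the action instead.
        $mailer->send('hello@example.com')->shouldHaveBeenCalled();
    }
}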
I am new to writing test cases for Web APIs. I have seen similar questions asked in the past but not answered, so I am wondering how I would test my APIs when they have an ODataQueryOptions as one of the parameters. See below:
public IQueryable<Item> GetByIdAndLocale(ODataQueryOptions opts,
Guid actionuniqueid,
string actionsecondaryid)
Would I have to Moq this? If so, how would this look? Any help would be appreciated.
From the ODataQueryOptions perspective, you may want to test that all the OData query options work with your function. So first you need to create an instance of ODataQueryOptions. Here is an example:
HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Get, requestUri);
ODataQueryContext context = new ODataQueryContext(EdmCoreModel.Instance, elementType);
ODataQueryOptions options = new ODataQueryOptions(context, request);
So you need to create your own EDM model to replace EdmCoreModel.Instance, and replace requestUri with your query. elementType in ODataQueryContext is "The CLR type of the element of the collection being queried".
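Putting that together, a rough sketch for the Item type from the question (namespaces and available constructors differ between Web API OData package versions - newer ones also take an ODataPath and may want an HttpConfiguration attached to the request - so treat this as a starting point; ItemsController is a placeholder name):

[TestMethod]
public void GetByIdAndLocale_AppliesTheQueryOptions()
{
    // Build a real EDM model for Item instead of using EdmCoreModel.Instance.
    var builder = new ODataConventionModelBuilder();
    builder.EntitySet<Item>("Items");
    IEdmModel model = builder.GetEdmModel();

    // The request URI carries the OData query you want to exercise.
    var request = new HttpRequestMessage(
        HttpMethod.Get,
        "http://localhost/Items?$filter=Name eq 'test'&$top=10");

    var context = new ODataQueryContext(model, typeof(Item));
    var options = new ODataQueryOptions(context, request);

    // Exercise the controller method under test with real query options.
    var controller = new ItemsController();
    IQueryable<Item> result = controller.GetByIdAndLocale(
        options, Guid.NewGuid(), "en-US");

    Assert.IsNotNull(result);
}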
I cannot tell from the phrasing, but is the above call (GetByIdAndLocale) the Web API method that you are trying to test, or are you trying to test something that calls it?
One uses a mock or a stub to replace dependencies in a Unit Under Test (UUT). If you are testing GetByIdAndLocale() then you would not mock it, though if it calls something else that takes the ODataQueryOptions as a parameter, you could use Moq to stub/mock that call.
If you are testing some unit that calls GetByIdAndLocale() then, yes, you could mock it with Moq. How exactly you might do this depends upon the goal (checking that the correct values are being passed in vs. checking that the returned IQueryable is correctly processed) - basically, matching against It.IsAny() or against some specific matcher.
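For the second case, a rough Moq sketch, assuming a hypothetical IItemRepository dependency that exposes GetByIdAndLocale and a consumer class that calls it (all names here are placeholders, not your actual API):

// Arrange: stub the dependency so any call returns an empty queryable.
var repositoryMock = new Mock<IItemRepository>();
repositoryMock
    .Setup(r => r.GetByIdAndLocale(
        It.IsAny<ODataQueryOptions>(), It.IsAny<Guid>(), It.IsAny<string>()))
    .Returns(Enumerable.Empty<Item>().AsQueryable());

// Act: exercise the unit under test that calls GetByIdAndLocale internally.
var sut = new ItemLookupService(repositoryMock.Object);
sut.FindItems(Guid.NewGuid(), "en-US");

// Assert: the dependency was called once with the expected locale.
repositoryMock.Verify(
    r => r.GetByIdAndLocale(It.IsAny<ODataQueryOptions>(), It.IsAny<Guid>(), "en-US"),
    Times.Once);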
Which do you want to test?
GetByIdAndLocale(), something that calls it or something (not shown) that it calls?
What are you interested in verifying?
That the correct options are passed in, or how the return value from the mocked call is processed?
In my daily unit-test coding with Xcode, I only use XCTestCase. There are also these other classes that don't seem to get used much, such as XCTestSuite, XCTest, and XCTestRun.
What are XCTestSuite, XCTest, XCTestRun for? When do you use them?
Also, the XCTestCase header has a few methods such as:
defaultTestSuite
invokeTest
testCaseWithInvocation:
testCaseWithSelector:
How and when to use the above?
I am having trouble finding documentation on the above XCTest classes and methods.
Well, this question is pretty good and I am just wondering why it is being ignored.
As the documentation says:
XCTestCase is a concrete subclass of XCTest that should be the override point for most developers creating tests for their projects. A test case subclass can have multiple test methods and supports setup and tear down that executes for every test method as well as class level setup and tear down.
On the other hand, this is how XCTestSuite is defined:
A concrete subclass of XCTest, XCTestSuite is a collection of test cases. Alternatively, a test suite can extract the tests to be run automatically.
Well, with XCTestSuite, you can construct your own test suite for a specific subset of test cases, instead of the default suite ( [XCTestCase defaultTestSuite] ), which has all test cases.
Actually, the default XCTestSuite is composed of every test case found in the runtime environment - all methods with no parameters, returning no value, and prefixed with ‘test’ in all subclasses of XCTestCase.
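For example, a minimal sketch of building such a suite by hand (Swift; MyTests and its test methods are placeholders):

import XCTest

// Collect a specific subset of test cases into a custom suite and run it.
let suite = XCTestSuite(name: "Smoke tests")
suite.addTest(MyTests(selector: #selector(MyTests.testLogin)))
suite.addTest(MyTests(selector: #selector(MyTests.testLogout)))
suite.run()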
What about the XCTestRun class?
A test run collects information about the execution of a test. Failures in explicit test assertions are classified as "expected", while failures from unrelated or uncaught exceptions are classified as "unexpected".
With XCTestRun, you can record information like startDate, totalDuration, failureCount, etc. while the test is running, or things like hasSucceeded when it is done, and thereby you get the result of running a test. XCTestRun gives you the ability to focus on what is happening or has happened during the test.
Back to XCTestCase: you will notice that there are methods named testCaseWithInvocation: and testCaseWithSelector: if you read the header, and I recommend you dig into them further.
How do they work together?
I've found that there is an awesome explanation in Quick's QuickSpec source file.
XCTest automatically compiles a list of XCTestCase subclasses included in the test target. It iterates over each class in that list, and creates a new instance of that class for each test method. It then creates an "invocation" to execute that test method. The invocation is an instance of NSInvocation, which represents a single message send in Objective-C. The invocation is set on the XCTestCase instance, and the test is run.
Some links:
http://modocache.io/probing-sentestingkit
https://github.com/Quick/Quick/blob/master/Sources/Quick/QuickSpec.swift
https://developer.apple.com/reference/xctest/xctest?language=objc
Launch Xcode and use Cmd + Shift + O to bring up the Open Quickly dialog, then type 'XCTest' and you will find some related files, such as XCTest.h and XCTestCase.h. You need to go inside these files to check out the interfaces they offer.
There is a good website about XCTest: http://iosunittesting.com/xctest-assertions/
I need to find out the value passed into an indexer.
My code (C#) that I need to test is as follows:
string cacheKey = GetCacheKey(cacheKeyRequest);
string cachedValue = myCache[cacheKey] as string;
So, I need to be able to identify the value of the "cacheKey" that was passed into the indexer.
I have attempted this using a Mock of the cache object:
var cache = MockRepository.GenerateMock<WebDataCache>();
The idea being that after the code had executed, I would query the mock to identify the value that had been passed into the indexer:
var actualCacheKey = cache.GetArgumentsForCallsMadeOn(a => a["somevalue"], opt => opt.IgnoreArguments())[0][0].ToString();
This gives me a compilation error: Only assignment, call, increment, decrement, and new object expressions can be used as a statement.
I saw one suggestion to make this a function in the following way:
var actualCacheKey = cache.GetArgumentsForCallsMadeOn(a => a["somevalue"] = null, opt => opt.IgnoreArguments())[0][0].ToString();
This now compiles, but throws a run-time InvalidOperationException: No expectations were setup to be verified, ensure that the method call in the action is a virtual (C#) / overridable (VB.Net) method call.
Any suggestions? [I am using RhinoMocks 3.6.1]
Many thanks in advance
Griff
PS - I have previously posted this in http://groups.google.com/group/rhinomocks but after several days the view-count remains depressingly low.
The exception tells you exactly what is happening:
InvalidOperationException: No expectations were setup to be verified, ensure that the method call in the action is a virtual (C#) / overridable (VB.Net) method call.
Which means that, in order for Rhino to work properly (or rather, for Castle to generate working proxies), your indexer has to be virtual. If you can't make it so, Rhino won't help you in this situation.
Once you make your indexer virtual, it is a simple task:
var cache = MockRepository.GenerateMock<WebDataCache>();
cache.Expect(c => c["SomeKey"]).Return("SomeValue");
// perform actual test
cache.VerifyAllExpectations();
This ensures that the cache is accessed with ["SomeKey"]. If the key value is different, the test will fail at the VerifyAllExpectations line.