Spying with Mockito on an Akka actor

I would like to spy on my actor instance, but it cannot simply be created with the new keyword. I figured out the following solution:
val testActorSpy = spy(TestActorRef(new TestActor).underlyingActor)
val testActorRef = TestActorRef(testActorSpy)
but this way I create one unnecessary actor. Is there any cleaner solution?

So my understanding of the Akka actor system is that you should be doing this through Props, right?
That is, create the actor through Props and, when under test, wrap the actor in a spy.
Thus this should give you the result:
val testActorRef = TestActorRef(spy(new TestActor))
val testActorSpy = testActorRef.underlyingActor
Be aware that the underlyingActor gets destroyed when the actor is restarted, so mocking it might not be the best option.
If you use the actor directly rather than going through the system, you might be able to test things as well, bypassing the threaded underlying machinery.
See this (code pasted below for Java).
static class MyActor extends UntypedActor {
public void onReceive(Object o) throws Exception {
if (o.equals("say42")) {
getSender().tell(42, getSelf());
} else if (o instanceof Exception) {
throw (Exception) o;
}
}
public boolean testMe() { return true; }
}
@Test
public void demonstrateTestActorRef() {
final Props props = Props.create(MyActor.class);
final TestActorRef<MyActor> ref = TestActorRef.create(system, props, "testA");
final MyActor actor = ref.underlyingActor();
assertTrue(actor.testMe());
}
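If you want the Mockito spy on top of that in Java, a minimal sketch could look like the following (it assumes the MyActor class and the system value from the snippet above, plus Mockito on the test classpath; the actor name "testB" is just illustrative). Note that the spy is a separate copy of the actor instance, so messages sent through the ref still go to the original actor; the spy is only useful for direct, synchronous calls.
import static org.junit.Assert.assertTrue;
import static org.mockito.Mockito.spy;
import static org.mockito.Mockito.verify;

@Test
public void demonstrateSpyOnUnderlyingActor() {
    final Props props = Props.create(MyActor.class);
    final TestActorRef<MyActor> ref = TestActorRef.create(system, props, "testB");
    // Wrap the real underlying actor instance in a Mockito spy.
    final MyActor spiedActor = spy(ref.underlyingActor());
    // Call the spy directly, bypassing the mailbox and the dispatcher.
    assertTrue(spiedActor.testMe());
    verify(spiedActor).testMe();
}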

Related

Does the program flow go deeper into the bean being mocked in MockMvc?

From what I understand about mocking, the test should not go deeper into the bean being mocked. For example, the control flow shouldn't go into the function apiService.getSomeData(); instead, it should just return the string "Hello there".
But is that how mocking works, or does the program keep going deeper, and should I be able to see the print statements of getSomeData() in stdout?
When I actually run the code below, it doesn't go deeper. But is that how it's supposed to work?
Suppose this is the Rest Controller Code:
@RestController
@RequestMapping(value = "/testing")
public class ApiController {
@Autowired
ApiService service;
@PostMapping(path = "/events/notifications", consumes = "application/json", produces = "application/json")
public ResponseEntity<String> checkMapping(@Valid @RequestBody String someData, @RequestHeader(value="X-User-Context") String xUserContext) throws Exception {
String response = service.getSomeData(someData);
return ResponseEntity.status(HttpStatus.OK).body(response);
}
}
Suppose this is the Controller test code:
@WebMvcTest(ApiController.class)
public class ApiControllerTest {
@Autowired
MockMvc mockMvc;
@Autowired
ObjectMapper mapper;
@MockBean
ApiService apiService;
@Test
public void testingApi() throws Exception {
Mockito.when(apiService.getSomeData("")).thenReturn("Hello there");
MockHttpServletRequestBuilder mockRequest = MockMvcRequestBuilders.post("/testing/events/notifications")
.contentType(MediaType.APPLICATION_JSON)
.accept(MediaType.APPLICATION_JSON)
.header("X-User-Context","something")
.content("something");
mockMvc.perform(mockRequest)
.andExpect(status().isBadGateway());
}
}
Suppose this is the Api Service code:
@Service
public class ApiServiceImpl implements ApiService {
@Override
public String getSomeData(String data) throws Exception {
System.out.println("Going deeper in the program flow");
callThisFunction();
return "Some data";
}
public void callThisFunction(){
System.out.println("Going two levels deeper");
}
}
In your test you are not talking to ApiServiceImpl at all, but to an instance created by Mockito that also implements the ApiService interface. Therefore, your implementation of getSomeData() is not executed at all. That's what mocking is about: you create a "mock" implementation (or let a tool like Mockito do it for you) of the thing you do not want to be executed and inject it instead of the "real" thing.
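For illustration, here is a hedged variation of the test above (same names as in the question; the argument matcher, the status/body assertions, and the verify call are the only changes, and it assumes no extra filters or exception handlers alter the response). The Mockito mock records the call and returns the canned string, while ApiServiceImpl and its println statements are never touched; content() comes from MockMvcResultMatchers, just like status().
@Test
public void mockIsCalledInsteadOfRealService() throws Exception {
    Mockito.when(apiService.getSomeData(ArgumentMatchers.anyString()))
            .thenReturn("Hello there");
    mockMvc.perform(MockMvcRequestBuilders.post("/testing/events/notifications")
                    .contentType(MediaType.APPLICATION_JSON)
                    .accept(MediaType.APPLICATION_JSON)
                    .header("X-User-Context", "something")
                    .content("something"))
            .andExpect(status().isOk())
            .andExpect(content().string("Hello there"));
    // The interaction hit the mock, not ApiServiceImpl.
    Mockito.verify(apiService).getSomeData(ArgumentMatchers.anyString());
}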

Akka: Can an actor of some class become an actor of a different class?

As a course project, I am trying to implement a (simulation) of the Raft protocol.
In this post, I will not use Raft terminology at all; instead, I will use a simplified one.
The protocol is run by a number of servers (for example, 5) which can be in three different states (A, B, C).
The servers inherit some state variables and behavior from a "base" kind, but they all also have many unique state variables and methods, and respond to different messages.
At some point of the protocol, a server in some state (for example, A) is required to become the other state (for example, B).
In other words, the server should:
Lose the state variables and methods of state A, acquire those of state B, but maintain the variables of the "base" kind.
Stop responding to messages destined for state A, start responding to messages destined for state B.
In Akka, Point 2 can be implemented using Receives and become().
Point 1 is needed because, for example, an actor of class B should not have access to the state variables and methods of an actor of class A. This aims at separating concerns and achieving a better code organization.
The issues I am facing in implementing Point 1 are the following:
Right now, my implementation has only one actor, which contains both A and B state variables and methods.
The protocol I am trying to implement requires each server to keep a reference to the others (i.e., the ActorRef of the others).
I can't simply spawn an actor in state B, transfer the values of the state variables of the "base" kind to it, and stop the old actor, because the newly spawned actor has a new ActorRef. The other servers are in the dark about it and will continue sending messages to the old ActorRef, so the new actor would not receive anything and both parties would time out.
A way to circumvent the issue is that the newly spawned actor "advertises" itself by sending a message to the other actors, including its old ActorRef.
However, again due to the protocol, the other servers may be temporarily not available (i.e., they are crashed), thus they might not receive and process the advertisement.
In the project, I must use extensions of AbstractActor, not FSM (finite state machines), and I have to use Java.
Is there any Akka pattern or functionality that solves this use case? Thank you for any insight. Below is a simplified example.
public abstract class BaseActor extends AbstractActor {
protected int x = 0;
// some state variables and methods that make sense for both A and B
@Override
public Receive createReceive() {
return new ReceiveBuilder()
.matchEquals("x", msg -> {
System.out.println(x);
x++;
})
.build();
}
}
public class A extends BaseActor {
protected int a = 10;
// many other state variables and methods that are own of A and do NOT make sense to B
@Override
public Receive createReceive() {
return new ReceiveBuilder()
.matchEquals("a", msg -> {
System.out.println(a);
})
.matchEquals("change", msg -> {
// here I want A to become B, but maintain value of x
})
.build()
.orElse(super.createReceive());
}
}
public class B extends BaseActor {
protected int b = 20;
// many other state variables and methods that are own of B and do NOT make sense to A
@Override
public AbstractActor.Receive createReceive() {
return new ReceiveBuilder()
.matchEquals("b", msg -> {
System.out.println(b);
})
.matchEquals("change", msg -> {
// here I want B to become A, but maintain value of x
})
.build()
.orElse(super.createReceive());
}
}
public class Example {
public static void main(String[] args) {
var system = ActorSystem.create("example");
// actor has class A
var actor = system.actorOf(Props.create(A.class));
actor.tell("x", ActorRef.noSender()); // prints "0"
actor.tell("a", ActorRef.noSender()); // prints "10"
// here, the actor should become of class B,
// preserving the value of x, a variable of the "base" kind
actor.tell("change", ActorRef.noSender());
// actor has class B
actor.tell("x", ActorRef.noSender()); // should print "1"
actor.tell("b", ActorRef.noSender()); // should print "20"
}
}
This is a sketch of what this could look like.
You model each of the states as a separate class:
public class BaseState {
//base state fields/getters/setters
}
public class StateA {
BaseState baseState;
//state A fields/getters/setters
//...
//factory methods
public static StateA fromBase(BaseState baseState) {...}
//if you need to go from StateB to StateA:
public static StateA fromStateB(StateB stateB) {...}
}
public class StateB {
BaseState baseState;
//state B fields/getters/setters
//factory methods
public static StateB fromBase(BaseState baseState) {...}
//if you need to go from StateA to StateB:
public static StateB fromStateA(StateA stateA) {...}
}
Then, in your actor, you can have receive functions defined for both A and B, and initialize the behaviour to A or B depending on which one is the initial state:
private static class MyActor extends AbstractActor
{
private AbstractActor.Receive receive4StateA(StateA stateA)
{
return new ReceiveBuilder()
.matchEquals("a", msg -> stateA.setSomeProperty(msg))
.matchEquals("changeToB", msg -> getContext().become(
receive4StateB(StateB.fromStateA(stateA))))
.build();
}
private AbstractActor.Receive receive4StateB(StateB stateB)
{
return new ReceiveBuilder()
.matchEquals("b", msg -> stateB.setSomeProperty(msg))
.matchEquals("changeToA", msg -> getContext().become(
receive4StateA(StateA.fromStateB(stateB))))
.build();
}
//assuming stateA is the initial state
@Override
public AbstractActor.Receive createReceive()
{
return receive4StateA(StateA.fromBase(new BaseState()));
}
}
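A rough usage sketch (assuming MyActor above is made accessible to the caller, e.g. public; the system and actor names are just illustrative, and the message strings are the ones handled by the receive functions). The key point is that the ActorRef the other servers hold never changes; only the behaviour behind it is swapped via become():
public class BecomeExample {
    public static void main(String[] args) {
        var system = ActorSystem.create("become-example");
        // same ActorRef for the whole lifetime; only the behaviour changes
        var actor = system.actorOf(Props.create(MyActor.class), "server1");
        actor.tell("a", ActorRef.noSender());         // handled by receive4StateA
        actor.tell("changeToB", ActorRef.noSender()); // getContext().become(receive4StateB(...))
        actor.tell("b", ActorRef.noSender());         // handled by receive4StateB
        system.terminate();
    }
}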
Admittedly, my Java is rusty, but as an example, this actor (or something very much like it...) will take strings until it receives a Lock message, after which it can be queried for how many distinct strings it received before being locked. So in its first Receive, it tracks a Set of the strings received in order to dedupe. On a Lock it transitions to a second Receive which does not contain the Set (just an Integer field) and ignores String and Lock messages.
import akka.japi.JavaPartialFunction;
import java.util.HashSet;
import scala.runtime.BoxedUnit;
public class StringCounter extends AbstractActor {
public StringCounter() {}
public static class Lock {
private Lock() {}
public static final Lock INSTANCE = new Lock();
}
public static class Query {
private Query() {}
public static final Query INSTANCE = new Query();
}
/** The taking in Strings state */
public class AcceptingStrings extends JavaPartialFunction<Object, BoxedUnit> {
private HashSet<String> strings;
public AcceptingStrings() {
strings = new HashSet<String>();
}
public BoxedUnit apply(Object msg, boolean isCheck) {
if (msg instanceof String) {
if (!isCheck) {
strings.add((String) msg);
}
} else if (msg instanceof Lock) {
if (!isCheck) {
context().become(new Queryable(strings.size()), true);
}
} else {
// not handling any other message
throw noMatch();
}
return BoxedUnit.UNIT;
}
}
/** The responding to queries state */
public class Queryable extends JavaPartialFunction<Object, BoxedUnit> {
private Integer ans;
public Queryable(int answer) {
ans = Integer.valueOf(answer);
}
public BoxedUnit apply(Object msg, boolean isCheck) {
if (msg instanceof Query) {
if (!isCheck) {
getSender().tell(ans, getSelf());
}
} else {
// not handling any other message
throw noMatch();
}
return BoxedUnit.UNIT;
}
}
@Override
public Receive createReceive() {
return new Receive(new AcceptingStrings());
}
}
Note that in Queryable the set is long gone. One thing to be careful of is that the JavaPartialFunction will typically have apply called once with isCheck set to true and if that call doesn't throw the exception returned by noMatch(), it will be called again "for real" with isCheck set to false. You therefore need to be careful to not do anything but throw noMatch() or return in the case that isCheck is true.
This pattern is exceptionally similar to what happens in Akka Typed (especially in the functional API) under the hood.
Hopefully this illuminates this approach. There's a chance, of course, that your instructors will not accept this, though in that case it might be worth pushing back with the argument that:
in the actor model state and behavior are effectively the same thing
all the functionality is contained within an AbstractActor
I'd also not necessarily recommend using this approach normally in Java Akka code (the AbstractActor with state in its fields feels a lot more Java-y).

Fake internal calls of a SUT with FakeItEasy

I have a small C# class that handles printing.
I want to create (n)unit tests for this class, using
fakeItEasy. How can I fake the internal calls of this
class without faking the whole SUT ?
For example:
public class MyPrintHandler: IMyPrintHandler
{
public MyPrintHandler(ILogger<MyPrintHandler> logger)
{
}
// function I want to unit test
public async Task<bool> PrintAsync(string ipaddress)
{
try
{
if (!string.IsNullOrWhiteSpace(ipaddress) )
{
return await StartPrint(ipaddress); // This cannot be called in a unit test, because it really start printing on a printer.
}
}
catch (Exception e)
{
}
return false;
}
private async Task<bool> StartPrint(string ipaddress)
{
// prints on the printer
}
}
[TestFixture]
public class MyPrintHandlerTests
{
[Test]
public async Task Success_PrintAsync()
{
using (var fake = new AutoFake())
{
// Arrange - configure the fake
var sut = fake.Resolve<MyPrintHandler>();
// Act
await sut.PrintAsync("0.0.0.0"); // I want to prevent StartPrint() from being called..
}
}
}
How can I achieve this, or is this not possible at all?
Thanks in advance!
I would typically say that faking the SUT is an anti-pattern, to be avoided whenever possible, as it causes confusion. If you can refactor to introduce a collaborator that handles the StartPrint method, I would strongly consider doing so. If this is not possible, you can try the following; however:
any method that you want to fake must be virtual or abstract, otherwise FakeItEasy cannot intercept it
any method that you want to fake must be public (or internal, if you can grant dynamic proxy access to production code's internals)
you would then fake the SUT, specifying that it should call the original (base) methods, and finally
explicitly override the behaviour for the method that you want to intercept

Use Mockito to unit test a function which calls async function

I have a method which calls async function:
public class MyService {
...
public void uploadData() {
MyPool.getInstance().getThreadPool().execute(new Runnable() {
@Override
public void run() {
boolean suc = upload();
}
});
}
}
I want to unit test this function with Mockito, I tried:
MyPool mockMyPool = Mockito.mock(MyPool.class);
ThreadPool mockThreadPool = Mockito.mock(ThreadPool.class);
ArgumentCaptor<Runnable> runnableCaptor = ArgumentCaptor.forClass(Runnable.class);
when(mockMyPool.getThreadPool()).thenReturn(mockThreadPool);
MyService service = new MyService();
// run the method under test
service.uploadData();
// set the runnableCaptor to hold your callback
verify(mockThreadPool).execute(runnableCaptor.capture());
But I got error:
org.mockito.exceptions.verification.WantedButNotInvoked:
Wanted but not invoked:
threadPool.execute(
<Capturing argument>
);
Why did I get this error, and how can I unit test the uploadData() function with Mockito?
OK, I figured out a way by myself: since MyPool is a singleton, I added a public function setInstance(mockedInstance) to pass the mocked instance to MyPool. Then it works. I know it is a bit "dirty", but if you have a better solution, please let me know. Thanks!
Aside from the DI approach you found of keeping a MyPool or ThreadPool field, you can also refactor a little bit to allow for dependency injection in your method:
public class MyService {
...
public void uploadData() {
uploadData(MyPool.getInstance().getThreadPool());
}
/** Receives an Executor for execution. Package-private for testing. */
void uploadData(Executor executor) {
executor.execute(new Runnable() {
@Override public void run() {
boolean suc = upload();
}
});
}
}
This might be even cleaner, because it reduces your ThreadPool to the level of abstraction you need (Executor), which means you're only mocking a one-method interface rather than your ThreadPool (which I assume is related to ThreadPoolService; otherwise, you can just accept a ThreadPool, too). Officially your uploadData() would be untested, but you could easily and thoroughly test uploadData(Executor) or uploadData(ThreadPool), which are the moving parts most likely to break.
The package-private trick does rely on your code and tests to be in the same package, though they could be in different source folders; alternatively, you could just make the ThreadPool-receiving call a part of your public API, which would allow for more flexibility later.
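A possible test for the refactored, package-private overload might look like this (sketch only; it assumes JUnit 4 and keeps the ArgumentCaptor idea from the question, just aimed at the injected Executor instead of the singleton). Running the captured Runnable on the test thread will invoke the real upload(), so stub or spy that part as needed.
import java.util.concurrent.Executor;
import org.junit.Test;
import org.mockito.ArgumentCaptor;
import org.mockito.Mockito;

public class MyServiceTest {
    @Test
    public void uploadDataSubmitsTaskToExecutor() {
        Executor mockExecutor = Mockito.mock(Executor.class);
        ArgumentCaptor<Runnable> runnableCaptor = ArgumentCaptor.forClass(Runnable.class);
        MyService service = new MyService();

        service.uploadData(mockExecutor);

        // The task was handed to the injected Executor; no real thread pool is involved.
        Mockito.verify(mockExecutor).execute(runnableCaptor.capture());
        // Run the captured task synchronously on the test thread.
        runnableCaptor.getValue().run();
    }
}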

How to unit test an interceptor?

I want to write some unit tests for an interceptor that intercepts the Loggable base class (which implements ILoggable).
The Loggable base class has no methods to call and it is used only to be initialized by the logging facility.
To my understanding I should:
Mock an ILoggable and an ILogger
Initialize the logging facility
Register my interceptor on it
Invoke some method of the mocked ILoggable
The problem is that my ILoggable interface has no methods to call and thus nothing will be intercepted.
What is the right way to act here?
Should I mock ILoggable manually and add a stub method to call?
Also, should I be mocking the container as well?
I am using Moq and NUnit.
EDIT:
Here's my interceptor implementation for reference:
public class LoggingWithDebugInterceptor : IInterceptor
{
#region IInterceptor Members
public void Intercept(IInvocation invocation)
{
var invocationLogMessage = new InvocationLogMessage(invocation);
ILoggable loggable = invocation.InvocationTarget as ILoggable;
if (loggable == null)
throw new InterceptionFailureException(invocation, string.Format("Class {0} does not implement ILoggable.", invocationLogMessage.InvocationSource));
loggable.Logger.DebugFormat("Method {0} called with arguments {1}", invocationLogMessage.InvokedMethod, invocationLogMessage.Arguments);
Stopwatch stopwatch = new Stopwatch();
try
{
stopwatch.Start();
invocation.Proceed();
stopwatch.Stop();
}
catch (Exception e)
{
loggable.Logger.ErrorFormat(e, "An exception occured in {0} while calling method {1} with arguments {2}", invocationLogMessage.InvocationSource, invocationLogMessage.InvokedMethod, invocationLogMessage.Arguments);
throw;
}
finally
{
loggable.Logger.DebugFormat("Method {0} returned with value {1} and took exactly {2} to run.", invocationLogMessage.InvokedMethod, invocation.ReturnValue, stopwatch.Elapsed);
}
}
#endregion IInterceptor Members
}
If it's just the interceptor that uses the Logger property on your class, then why have it there at all? You might just as well have it on the interceptor (like Ayende explained in his post here).
Other than that, the interceptor is just a class which interacts with an interface: everything is highly testable.
I agree with Krzysztof, if you're looking to add Logging through AOP, the responsibility and implementation details about logging should be transparent to the caller. Thus it's something that the Interceptor can own. I'll try to outline how I would test this.
If I follow the question correctly, your ILoggable is really just a naming container to annotate the class so that the interceptor can determine if it should perform logging. It exposes a property that contains the Logger. (The downside to this is that the class still needs to configure the Logger.)
public interface ILoggable
{
ILogger Logger { get; set; }
}
Testing the interceptor should be a straight-forward process. The only challenge I see that you've presented is how to manually construct the IInvocation input parameter so that it resembles runtime data. Rather than trying to reproduce this through mocks, etc, I would suggest you test it using classic State-based verification: create a proxy that uses your interceptor and verify that your log reflects what you expect.
This might seem like a bit more work, but it provides a really good example of how the interceptor works independently from other parts of your code-base. Other developers on your team benefit from this as they can reference this example as a learning tool.
public class TypeThatSupportsLogging : ILoggable
{
public ILogger Logger { get; set; }
public virtual void MethodToIntercept()
{
}
public void MethodWithoutLogging()
{
}
}
public class TestLogger : ILogger
{
private StringBuilder _output;
public TestLogger()
{
_output = new StringBuilder();
}
public void DebugFormat(string message, params object[] args)
{
_output.AppendFormat(message, args);
}
public string Output
{
get { return _output.ToString(); }
}
}
[TestFixture]
public class LoggingWithDebugInterceptorTests
{
protected TypeThatSupportsLogging Input;
protected LoggingWithDebugInterceptor Subject;
protected ILogger Log;
[SetUp]
public void Setup()
{
// create your interceptor
Subject = new LoggingWithDebugInterceptor();
// create your proxy
var generator = new Castle.DynamicProxy.ProxyGenerator();
Input = generator.CreateClassProxy<TypeThatSupportsLogging>(Subject);
// setup the logger
Log = new TestLogger();
Input.Logger = Log;
}
[Test]
public void DemonstrateThatTheInterceptorLogsInformationAboutVirtualMethods()
{
// act
Input.MethodToIntercept();
// assert
StringAssert.Contains("MethodToIntercept", Log.Output);
}
[Test]
public void DemonstrateNonVirtualMethodsAreNotLogged()
{
// act
Input.MethodWithoutLogging();
// assert
Assert.AreEqual(String.Empty, Log.Output);
}
}
No methods? What are you testing?
Personally, this sounds like it goes too far. I realize that TDD and code coverage are dogma, but if you mock an interface with no methods and prove that the mocking framework does what you instructed it to do, what have you really proven?
There's another misdirection going on here: logging is the "hello world" of aspect oriented programming. Why aren't you doing logging in an interceptor/aspect? If you did it that way, there'd be no reason for all your classes to implement ILoggable; you could decorate them with logging capability declaratively. I think it's a less invasive design and a better use of interceptors.