I have a simple Kafka Streams application built with Ktor. The Application.kt looks like
fun main(args: Array<String>): Unit = io.ktor.server.netty.EngineMain.main(args)

fun Application.module() {
    install(Routing) {
        healthController()
    }

    val stream = createStream()
    stream.start()
}
where
fun Route.healthController() {
    get("/health") {
        call.respond("I'm alive")
    }
}
I would like to write unit tests to test the endpoints of my application (i.e. /health). I have created the following unit test
@Test
fun `Should get answer from health endpoint`() = testApplication {
    val response = client.get("/health")

    assertEquals(HttpStatusCode.OK, response.status)
    assertEquals("I'm alive", response.bodyAsText())
}
This unit test works fine as long as the stream started in Application.kt does not use a KTable (at least a global KTable - I have not tested with a local KTable). If the stream uses a KTable, the unit test will never end as the stream will run indefinitely. This causes trouble in the GitLab pipeline where all the unit tests are executed.
Is there a "best practice" for testing the endpoints of a Kafka streams application built with Ktor? Especially, if the stream topology includes a KTable?
In unit tests, you should be using the Kafka Streams TopologyTestDriver.
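For example (a minimal Kotlin sketch, assuming a createTopology() function that returns your Topology and a topology that reads from "input-topic" and writes to "output-topic"; TopologyTestDriver ships in the kafka-streams-test-utils artifact):

import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.StreamsConfig
import org.apache.kafka.streams.TopologyTestDriver
import java.util.Properties
import kotlin.test.Test
import kotlin.test.assertEquals

class TopologyTest {
    @Test
    fun `Should pipe records through the topology`() {
        val props = Properties().apply {
            put(StreamsConfig.APPLICATION_ID_CONFIG, "topology-test")
            put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234") // never contacted by the test driver
        }

        TopologyTestDriver(createTopology(), props).use { driver ->
            val input = driver.createInputTopic("input-topic", Serdes.String().serializer(), Serdes.String().serializer())
            val output = driver.createOutputTopic("output-topic", Serdes.String().deserializer(), Serdes.String().deserializer())

            input.pipeInput("key", "value")

            assertEquals("value", output.readValue()) // whatever your topology is expected to emit
        }
    }
}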
If you want to run integration tests for an RPC layer over Kafka Streams' Interactive Queries, it shouldn't matter that the stream runs indefinitely (that's the point of running Streams). The topology should be running in a background thread, not blocking your tests or your HTTP/RPC server.
Ideally, you would have a way to inject a Topology or StreamsBuilder into the Application rather than both creating and starting it in your main method.
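One possible shape for that (just a sketch; createTopology() is an assumed refactoring of your createStream(), and streamsConfig() is a hypothetical helper that builds the Streams Properties):

import io.ktor.server.application.*
import io.ktor.server.routing.*
import org.apache.kafka.streams.KafkaStreams
import org.apache.kafka.streams.Topology

// production module, referenced from application.conf exactly as before
fun Application.module() = configureApp(createTopology())

// what tests call directly; passing null skips Kafka Streams entirely
fun Application.configureApp(topology: Topology?) {
    install(Routing) {
        healthController()
    }

    topology?.let {
        val streams = KafkaStreams(it, streamsConfig())
        streams.start()
        environment.monitor.subscribe(ApplicationStopped) { streams.close() } // stop the stream with the application
    }
}

The endpoint test then stays almost the same; it just installs the routes without a topology and avoids loading application.conf (MapApplicationConfig lives in io.ktor.server.config):

@Test
fun `Should get answer from health endpoint`() = testApplication {
    environment { config = MapApplicationConfig() } // don't load application.conf, so the real module never runs
    application { configureApp(topology = null) }   // routes only, no KafkaStreams instance

    val response = client.get("/health")

    assertEquals(HttpStatusCode.OK, response.status)
    assertEquals("I'm alive", response.bodyAsText())
}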
I've been following the guidelines here - https://docs.servicestack.net/testing
I'm trying to do unit testing rather than integration, just to cut down the level of mocking and other complexities.
Some of my services call some of my other services, via the recommended IServiceGateway API, e.g. Gateway.Send(MyRequest).
However, when running tests I'm getting System.NotImplementedException: 'Unable to resolve service 'GetMyContentRequest''.
I've used container.RegisterAutoWired() to register the service that handles this request.
I'm not sure where to go next. I really don't want to have to start again setting up an integration test pattern.
You're likely going to continually run into issues if you try to execute Service Integrations as unit tests instead of as integration tests, which would start from a verified valid state.
But Gateway requests are executed using an IServiceGateway, which you can choose to override either by implementing GetServiceGateway() in your custom AppHost with a custom implementation, or by registering an IServiceGatewayFactory or IServiceGateway in your IOC. Here's the default implementation:
public virtual IServiceGateway GetServiceGateway(IRequest req)
{
    if (req == null)
        throw new ArgumentNullException(nameof(req));

    var factory = Container.TryResolve<IServiceGatewayFactory>();
    return factory != null
        ? factory.GetServiceGateway(req)
        : Container.TryResolve<IServiceGateway>()
          ?? new InProcessServiceGateway(req);
}
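So a test AppHost could, for example, hand back a mock via that override (a small sketch assuming Moq; the class and property names here are illustrative):

public class UnitTestAppHost : BasicAppHost
{
    public IServiceGateway MockGateway { get; set; }

    public override IServiceGateway GetServiceGateway(IRequest req) =>
        MockGateway ?? base.GetServiceGateway(req);
}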
Based on the discussion in the answer by @mythz, this is my solution:
My use case is like the OP's: test the "main" service and mock the "sub" service. Like the OP, I wanted to do that with a unit test (so BasicAppHost), because it's quicker and, I believe, it's easier to mock services that way. (Side note: for an AppHost-based integration test, ServiceStack will scan assemblies for the real Services, so how would you mock one? Unregister it from the container and replace it with a mock?)
Anyway, for the unit test:
My "main" service is using another service, via the IServiceGateway (which is the officially recommended way):
public MainDtoResponse Get(MainDto request) {
    // do some stuff
    var subResponse = Gateway.Send(new SubDto { /* params */ });
    // do some stuff with subResponse
    return new MainDtoResponse { /* built from subResponse */ };
}
In my test setup:
appHost = new BasicAppHost().Init();

var mockGateway = new Mock<IServiceGateway>(); // using Moq
mockGateway.Setup(x => x.Send<SubDtoResponse>(It.IsAny<SubDto>()))
           .Returns(new SubDtoResponse { /* ... */ });

appHost.Container.Register(mockGateway.Object); // registered as IServiceGateway
So the IServiceGateway must be mocked, and then the Send method is the important one. What I was doing wrong was to mock the service, when I should have mocked the Gateway.
Then call the main service (under test) in the normal fashion for a Unit Test, like in the docs:
var s = appHost.Container.Resolve<MainService>(); // MainService must have been registered in the container earlier in the setup
s.Get(new MainDto { /* ... */ });
PS: The mockGateway.Setup can be used inside each test, not necessarily in the OneTimeSetUp.
So let me start by saying I've seen all the threads over the wars between creating a wrapper vs mocking the HttpMessageHandler. In the past, I've done the wrapper method with great success, but I thought I'd go down the path of mocking the HttpMessageHandler.
For starters here is an example of the debate: Mocking HttpClient in unit tests. I want to add that's not what this is about.
What I've found is that I have tests upon tests that inject an HttpClient. I've been doing a lot of serverless aws lambdas, and the basic flow is like so:
// some pseudo code
public class Functions
{
    private readonly HttpClient _httpClient;

    public Functions(HttpClient client)
    {
        _httpClient = client;
    }

    public async Task<APIGatewayResponse> GetData(ApiGatewayRequest request, ILambdaContext context)
    {
        var result = await _httpClient.GetAsync("http://example.com");

        return new APIGatewayResponse
        {
            StatusCode = (int)result.StatusCode,
            Body = await result.Content.ReadAsStringAsync()
        };
    }
}
...
[Fact]
public async Task ShouldDoCall()
{
    var expectedUri = new Uri("http://example.com");
    var expectedResponse = "hello"; // whatever the fake endpoint should return

    var mockResponse = new HttpResponseMessage(HttpStatusCode.OK) { Content = new StringContent(expectedResponse) };

    var mockHandler = new Mock<HttpMessageHandler>();
    mockHandler
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>())
        .ReturnsAsync(mockResponse);

    var f = new Functions(new HttpClient(mockHandler.Object));

    var result = await f.GetData(new ApiGatewayRequest(), null);

    mockHandler.Protected().Verify(
        "SendAsync",
        Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req =>
            req.Method == HttpMethod.Get &&
            req.RequestUri == expectedUri // to this uri
        ),
        ItExpr.IsAny<CancellationToken>()
    );

    Assert.Equal(200, result.StatusCode);
}
So here's where I have the problem!
When all my tests run in NCrunch they pass, and pass fast!
When I run them all manually with Resharper 2018, they fail.
Equally, when they get run on the CI/CD platform, which is a Docker container with the .NET Core 2.1 SDK on a Linux distro, they too fail.
These tests should not be running in parallel (as I read it, that's the default). I have about 30 tests around these methods combined, and each one randomly fails on the Moq verify portion. Sometimes they pass, sometimes they fail. If I break the tests down per test class and run the groups that way, instead of all in one go, then they all pass in chunks. I'll also add that I have even gone through isolating the variables per test method to make sure there is no overlap.
So, I'm really lost with trying to handle this through here and make sure this is testable.
Are there different ways to approach the HttpClient where it can consistently pass?
After lots of back and forth, I found two things from this.
I couldn't get parallel test execution disabled within the Docker setup, which is where I thought the issue was (I even added thread sleeps between tests to slow things down, which felt really icky to me).
I found that the tests I ran locally through the test runners were telling me they passed, when about half of them failed on the Docker test runner. What ended up being the issue was a magic string used when setting and getting environment variables.
Small caveat to call out: Amazon updated their .NET Core Lambda tools to install via the dotnet CLI, so this was updated in our Docker image.
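For reference, the standard xUnit knobs for turning off parallel execution are an assembly-level attribute or an xunit.runner.json next to the test assembly; treat this as something to try rather than a guaranteed fix, since the answer above couldn't get it working in Docker:

using Xunit;

// disables parallel execution of test collections in this assembly
[assembly: CollectionBehavior(DisableTestParallelization = true)]

// or, equivalently, an xunit.runner.json file:
// {
//   "parallelizeAssembly": false,
//   "parallelizeTestCollections": false
// }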
In .NET Core 2.0 I have a fairly simple MassTransit routing slip that contains 2 activities. This is built and executed in a consumer and it all ties back to an automatonymous state machine. It all works great albeit with a few final clean tweaks needed.
However, I can't quite figure out the best way to write unit tests for my consumer as it builds a routing slip. I have the following code in my consumer:
public async Task Consume(ConsumeContext<ProcessRequest> context)
{
    var builder = new RoutingSlipBuilder(NewId.NextGuid());

    SetupRoutingSlipActivities(builder, context);

    var routingSlip = builder.Build();

    await context.Execute(routingSlip).ConfigureAwait(false);
}
I created the SetupRoutingSlipActivities method as I thought it would help me write tests to make sure the right activities were being added and it simply looks like:
public void SetupRoutingSlipActivities(RoutingSlipBuilder builder, ConsumeContext<IProcessCreateLinkRequest> context)
{
    builder.AddActivity(
        nameof(ActivityOne),
        new Uri("execute_activity_one_example_address"),
        new ActivityOneArguments(
            context.Message.Id,
            context.Message.Name)
    );

    builder.AddActivity(
        nameof(ActivityTwo),
        new Uri("execute_activity_two_example_address"),
        new ActivityTwoArguments(
            context.Message.AnotherId,
            context.Message.FileName)
    );
}
I tried to just write tests for SetupRoutingSlipActivities by using a Moq mock builder and a MassTransit InMemoryTestHarness, but I found that the AddActivity method is not virtual, so I can't verify it like this:
aRoutingSlipBuilder.Verify(x => x.AddActivity(
    nameof(ActivityOne),
    new Uri("execute_activity_one_example_address"),
    It.Is<ActivityOne>(y => y.Id == 1 && y.Name == "A test name")));
Please ignore some of the weird data in the code examples as I just put up a simplified version.
Does anyone have any recommendations on how to do this? I also wanted to test to make sure the RoutingSlipBuilder was created but as that instance is created in the Consume method I wasn't sure how to do it! I've searched a lot online and through the MassTransit repo but nothing stood out.
Look at how the Courier tests are written; there are a number of test fixtures available for testing routing slip activities. While they aren't well documented, the unit tests are a working testament to how the testing is used.
https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit.Tests/Courier/TwoActivityEvent_Specs.cs
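If all you want to check is which activities get added, another option (a sketch, not the Courier test fixtures referred to above) is to pass a real RoutingSlipBuilder and assert on the routing slip it builds, since AddActivity itself can't be mocked. This assumes the built RoutingSlip exposes its Itinerary with each activity's Name, and that the activity addresses are valid absolute URIs (the placeholder strings in the simplified example above wouldn't parse); the usual MassTransit.Courier, Moq and xUnit usings are omitted:

[Fact]
public void Should_add_both_activities()
{
    var consumer = new MyConsumer(); // the consumer under test (illustrative name)
    var builder = new RoutingSlipBuilder(NewId.NextGuid());

    var message = Mock.Of<IProcessCreateLinkRequest>(m =>
        m.Id == 1 && m.Name == "A test name" && m.AnotherId == 2 && m.FileName == "file.txt");
    var context = Mock.Of<ConsumeContext<IProcessCreateLinkRequest>>(c => c.Message == message);

    consumer.SetupRoutingSlipActivities(builder, context);
    var routingSlip = builder.Build();

    Assert.Equal(2, routingSlip.Itinerary.Count);
    Assert.Equal(nameof(ActivityOne), routingSlip.Itinerary[0].Name);
    Assert.Equal(nameof(ActivityTwo), routingSlip.Itinerary[1].Name);
}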
I am currently running into problems attempting to create unit tests that involve the interaction of socket.io, Redis, and Express, and I'm looking for strategies on how to best mock these interactions. For example, I am using socket.io-client to mock the connection/behavior of socket.io against my Express server, but when I add a test to check that Redis is storing the proper information from socket.io, I find myself needing to also mock socket.io in the Redis unit test, which in turn means I need to mock the Express server. This gets to the point where it seems like I'm rewriting another server just to unit test the actual server I am trying to test.
Has anyone had to do this before? If so could you point me to resources (google/stack overflow are slim in results)?
What part of the application are you trying to test?
One way is encapsulating the socket connection in a socket.service.js file.
const socketIO = require('socket.io');

class SocketService {
    constructor(server) {
        this.socket = socketIO(server);
    }

    initialize() {
        this.socket.on('connection', socket => {
            console.log('connected');

            socket.on('disconnect', () => {
                console.log('disconnected');
            });

            // registerHandler and updateHandler are your app's own handlers
            socket.on('register', registerHandler);
            socket.on('update', updateHandler);
        });
    }
}

module.exports = SocketService;
Your unit test could be that it registers against a mock server, and that it calls the initialize method.
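For instance, a rough Jest sketch (assuming the SocketService above is exported from socket.service.js; jest.mock replaces socket.io, so no real server or network is involved):

// socket.service.test.js
jest.mock('socket.io', () => jest.fn(() => ({ on: jest.fn() })));

const socketIO = require('socket.io');
const SocketService = require('./socket.service');

test('initialize registers a connection handler on the mocked server', () => {
    const fakeServer = {}; // stands in for the http server
    const service = new SocketService(fakeServer);

    service.initialize();

    expect(socketIO).toHaveBeenCalledWith(fakeServer); // socket.io was attached to our server
    expect(service.socket.on).toHaveBeenCalledWith('connection', expect.any(Function));
});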
If you are trying to test all of them working together, then you are talking about integration and e2e testing.
I know this is a basic question, but due to my lack of knowledge I couldn't start writing tests for my application. So here I am, trying to learn and understand what to test and how to test it, based on my application scenario.
class MyController {
    MyService service = new MyService();

    public void handleRequest(Long keyVal, Model model) {
        List<String> myList = service.generateSomeList(keyVal);
        model.add("myList", myList);
    }
}
class MyService {
    ThirdPartyService extService = new ThirdPartyService();
    MyDao dao = new MyDao();

    public List<String> generateSomeList(Long key) {
        List<String> resultList = dao.fetchMyList(key);
        extService.processData(resultList); // using 3rd party service to process result; it doesn't return anything
        return formattedList(resultList);
    }

    private List<String> formattedList(List<String> listToProcess) {
        // do some formatting
        return listToProcess;
    }
}
class MyDao {
    List<String> fetchMyList(Long key) {
        // use a DB connection to run SQL and return the result
        return new ArrayList<>();
    }
}
I want to do both unit testing and integration testing. So some of my questions are:
Do I have to do unit testing for MyDao? I don't think so, since I can test the query results by testing at the service level.
What are the possible test cases at the service level? I can think of testing the result from the DB and testing the formatting function. Any other tests that I missed?
When testing the generateSomeList() method, is it OK to create a dummy String list and test it against the result, like the code below? I am creating the list myself and testing it myself. Is this the proper/correct way to write the test?
@Test
public void generateSomeListTest() {
    // Step 1: create a dummy string list, e.g. dummyList = ["test1", "test2", "test3"]
    // Step 2: when(mydao.fetchMyList(anyLong())).thenReturn(dummyList);
    // Step 3: result = service.generateSomeList(123L);
    // Step 4: assertEquals(dummyList, result);
}
I don't think I have to test the third-party service, but I think I have to make sure it is being called. Is this correct? If yes, how can I do that with Mockito?
How do I make sure the third-party service has really processed my data? Since its return type is void, how can I test that it really did its job, e.g. sent an email?
Do I have to write tests for the controller, or can I just write integration tests?
I really appreciate if you could answer these question to understand the testing part for the application.
Thanks.
They're not really unit tests, but yes, you should test your DAOs. One of the main points of using DAOs is precisely that they're relatively easy to test (you store some test data in the database, then call a DAO method which executes a query, and check that the method returns what it should return), and that they make the service layer easy to test by mocking the DAOs. You should definitely not use the real DAOs when testing the services. Use mock DAOs. That will make the service tests much simpler, and much, much faster.
Testing the results from the DB is the job of the DAO test. The service test should use a mock DAO that returns hard-coded results, and check that the service does what it should do with these hard-coded results (formatting, in this case).
Yes, it's fine.
Usually, it's sufficient to stub the dependencies; verifying that they have been called is often redundant. In this case, it could be a good idea since the third-party service doesn't return anything. But that's a code smell. Why doesn't it return something? See the verify() method in Mockito. It's the very first point in the Mockito documentation: http://docs.mockito.googlecode.com/hg/latest/org/mockito/Mockito.html#1
The third party service is supposed to have been tested separately and thus be reliable. So you're supposed to assume that it does what its documentation says it does. The test for A which uses B is not supposed to test B. Only A. The test of B tests B.
A unit test is usually simpler to write and faster to execute. It's also easier to test corner cases with unit tests. Integration tests should be more coarse-grained.
Just a note: what your code severely lacks as-is is dependency injection. That's what will make your code testable. It's very hard to test as-is, because the controller creates its own service, which creates its own DAO. Instead, the controller should be injected with the service, and the service should be injected with the DAO. That's what allows injecting a mock DAO into the service to test the service in isolation, and injecting a mock service into the controller to test the controller in isolation.
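To make that concrete, here is a rough sketch of the injected service plus a Mockito test (the constructor injection is an assumed refactoring of the code in the question, and the assertion assumes the formatting step is a pass-through, which yours probably isn't):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.*;

import java.util.Arrays;
import java.util.List;
import org.junit.Test;

class MyService {
    private final MyDao dao;
    private final ThirdPartyService extService;

    MyService(MyDao dao, ThirdPartyService extService) { // injected, so tests can pass mocks
        this.dao = dao;
        this.extService = extService;
    }

    public List<String> generateSomeList(Long key) {
        List<String> resultList = dao.fetchMyList(key);
        extService.processData(resultList);
        return formattedList(resultList);
    }

    private List<String> formattedList(List<String> listToProcess) {
        return listToProcess; // real formatting omitted in this sketch
    }
}

public class MyServiceTest {

    @Test
    public void generateSomeListTest() {
        MyDao mockDao = mock(MyDao.class);
        ThirdPartyService mockExtService = mock(ThirdPartyService.class);
        List<String> dummyList = Arrays.asList("test1", "test2", "test3");
        when(mockDao.fetchMyList(anyLong())).thenReturn(dummyList);

        MyService service = new MyService(mockDao, mockExtService);
        List<String> result = service.generateSomeList(123L);

        // the service is tested against the hard-coded DAO result
        assertEquals(dummyList, result);
        // the void third-party call is only verified, not re-tested
        verify(mockExtService).processData(dummyList);
    }
}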