OWIN TestServer logs multiple times while testing - how can I fix this?

I'm trying to write unit tests for an OWIN service, but any log statements in my tests start duplicating once I run the tests all at once, which makes the log output on the build server useless because of all the noise. I've distilled the problem down to a very simple repro:
[TestFixture]
public class ServerTest
{
    [Test]
    public void LogOnce()
    {
        using (TestServer.Create(app => { }))
        {
            Debug.WriteLine("Log once");
        }
    }

    [Test]
    public void LogTwice()
    {
        using (TestServer.Create(app => { }))
        {
            Debug.WriteLine("Log twice");
        }
    }
}
If I run one test at a time I get the expected output:
=> ServerTest.LogOnce
Log once
=> ServerTest.LogTwice
Log twice
If I run the tests all at once I get:
=> ServerTest.LogOnce
Log once
=> ServerTest.LogTwice
Log twice
Log twice
Initializing the TestServer once will solve the problem, but I am looking for a solution that allows me to continue instantiating as many TestServer instances as I choose.
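For concreteness, the "initialize once" workaround I'm referring to looks roughly like the sketch below (the fixture and method names are mine, and older NUnit versions would use [TestFixtureSetUp]/[TestFixtureTearDown] instead of the OneTime attributes):
// Hedged sketch of the one-TestServer-per-fixture workaround.
[TestFixture]
public class SharedServerTest
{
    private TestServer _server;

    [OneTimeSetUp]
    public void StartServer()
    {
        // Create the TestServer once for the whole fixture instead of once per test.
        _server = TestServer.Create(app => { });
    }

    [OneTimeTearDown]
    public void StopServer()
    {
        _server.Dispose();
    }

    [Test]
    public void LogOnce()
    {
        Debug.WriteLine("Log once"); // no longer duplicated, but every test shares one server
    }
}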

This post points out how the HostingEngine turns on a TraceListener by default, and ways to disable it:
TraceListener in OWIN Self Hosting
With that insight, I traced through the source code of TestServer.Create and confirmed that it is internally creating a HostingEngine which turns on a TraceListener that ultimately outputs results to the console. I have confirmed the highest voted (at the time of this writing) fix on that page works for the TestServer and believe the other solutions there are also excellent choices.
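For reference, the core of that fix is a no-op ITraceOutputFactory (the interface lives in Microsoft.Owin.Hosting.Tracing); I'm only sketching the factory here, and the linked answer shows how to register it through the hosting settings:
// Sketch of the opt-out: a trace output factory that discards everything,
// so the HostingEngine's TraceListener never echoes to the console.
public class NullTraceOutputFactory : ITraceOutputFactory
{
    public TextWriter Create(string outputFile)
    {
        return StreamWriter.Null; // swallow all hosting trace output
    }
}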
Figuring this out was very time consuming and annoying. The behavior is difficult to discover, and the opt-out is non-obvious to wire up. An opt-in solution would be better.

Related

ReSharper not supporting Assert.That

I'm in the process of returning to ReSharper, currently trialing the newest version, 2022.2.3. I was quite surprised when one of my NUnit tests failed in a weird way when run by ReSharper's built-in unit test runner - something that has never happened to me with Test Explorer.
As long as the asserts pass, it's all fine - green, all tests are listed. However, when an assert fails, it says One or more child tests had errors. Exception doesn't have a stacktrace.
Not only is there no mention of the actual values that were wrong, but the whole failing test seems to be gone!
This happens only when I use the 'modern' approach with Assert.That. So
Assert.That(httpContext.Response.StatusCode, Is.EqualTo(200));
is causing issues, meanwhile, the more classic:
Assert.AreEqual(200, httpContext.Response.StatusCode);
works as expected. Is this a known bug, or are some attributes required? JetBrains claims they have full support for NUnit out of the box, so that is a bit surprising.
NOTE: the test methods are async, awaiting results and returning Task; besides this, nothing unusual.
EDIT: The test code is as follows; ApiKeyMiddleware here is any middleware that returns a 200 response.
[TestFixture]
public class ApiKeyMiddlewareTests
{
    [Test]
    public async Task Invoke_ActiveKey_Authorized()
    {
        var httpContext = new DefaultHttpContext();
        httpContext.Request.Headers.Add("XXXXX", "xxxx");
        var configuration = Options.Create(new AccessConfiguration { ActiveApiKeys = new List<string> { "xxxx" } });
        var middleware = new ApiKeyMiddleware(GetEmptyRequest(), configuration);

        await middleware.Invoke(httpContext);

        Assert.That(httpContext.Response.StatusCode, Is.EqualTo(200)); //change to anything else than 200 and it fails + vanishes
    }
}

Test runners inconsistent with HttpClient and Mocking HttpMessageRequest XUnit

So let me start by saying I've seen all the threads over the wars between creating a wrapper vs mocking the HttpMessageHandler. In the past, I've done the wrapper method with great success, but I thought I'd go down the path of mocking the HttpMessageHandler.
For starters, here is an example of the debate: Mocking HttpClient in unit tests. I want to add that that's not what this is about.
What I've found is that I have tests upon tests that inject an HttpClient. I've been doing a lot of serverless aws lambdas, and the basic flow is like so:
// some pseudo code
public class Functions
{
    private readonly HttpClient _httpClient;

    public Functions(HttpClient client)
    {
        _httpClient = client;
    }

    public async Task<APIGatewayResponse> GetData(ApiGatewayRequest request, ILambdaContext context)
    {
        var result = await _httpClient.GetAsync("http://example.com");
        return new APIGatewayResponse
        {
            StatusCode = (int)result.StatusCode,
            Body = await result.Content.ReadAsStringAsync()
        };
    }
}
...
[Fact]
public void ShouldDoCall()
{
    var expectedUri = new Uri("http://example.com");
    var expectedResponse = "hello"; // any payload will do for this test
    var mockResponse = new HttpResponseMessage(HttpStatusCode.OK) { Content = new StringContent(expectedResponse) };

    var handlerMock = new Mock<HttpClientHandler>();
    handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(), // protected setups need ItExpr, not It
            ItExpr.IsAny<CancellationToken>())
        .ReturnsAsync(mockResponse);

    var f = new Functions(new HttpClient(handlerMock.Object));
    var result = f.GetData(new ApiGatewayRequest(), context: null).Result;

    handlerMock.Protected().Verify(
        "SendAsync",
        Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req =>
            req.Method == HttpMethod.Get &&
            req.RequestUri == expectedUri // to this uri
        ),
        ItExpr.IsAny<CancellationToken>()
    );
    Assert.Equal(200, result.StatusCode);
}
So here's where I have the problem!
When all my tests run in NCrunch they pass, and pass fast!
When I run them all manually with ReSharper 2018, they fail.
Equally, when they get run within the CI/CD platform, which is a docker container with the .NET Core 2.1 SDK on a Linux distro, they too fail.
These tests should not be run in parallel (read: the tests default this way). I have about 30 tests around these methods combined, and each one randomly fails on the Moq verify portion. Sometimes they pass, sometimes they fail. If I break down the tests per test class and only run the groups that way, instead of all in one, then they will all pass in chunks. I'll also add that I have even gone through trying to isolate the variables per test method to make sure there is no overlap.
So, I'm really lost with trying to handle this through here and make sure this is testable.
Are there different ways to approach the HttpClient where it can consistently pass?
After lots of back and forth, I found two things from this.
I couldn't get parallel processing disabled within the docker setup, which is where I thought the issue was (I even made it do a thread sleep between tests to slow things down, which felt really icky to me).
I found that all the tests I ran locally through the test runners were telling me they passed, when about half of them failed on the docker test runner. What ended up being the issue was a magic string area when setting and getting environment variables.
A small caveat to call out: Amazon updated their .NET Core Lambda tools to install via the dotnet CLI, so this was updated in our docker image.
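As an aside for anyone chasing the parallelism angle first: the standard xUnit 2.x switch is an assembly-level attribute (a xunit.runner.json file with "parallelizeTestCollections": false copied to the output directory does the same thing). It was not the root cause here, but it is the supported knob:
// Assembly-level attribute (e.g. in AssemblyInfo.cs) that stops xUnit from
// running test collections in parallel.
[assembly: Xunit.CollectionBehavior(DisableTestParallelization = true)]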

Unit test a polymer web component that uses firebase

I have been trying to configure offline unit tests for polymer web components that use the latest release of Firebase distributed database. Some of my tests are passing, but others—that look nigh identical to passing ones—are not running properly.
I have set up a project on github that demonstrates my configuration, and I'll provide some more commentary below.
Sample:
https://github.com/doctor-g/wct-firebase-demo
In that project, there are two suites of tests that work fine. The simplest is offline-test, which doesn't use web components at all. It simply shows that it's possible to use the firebase database's offline mode to run some unit tests. The heart of this trick is in the suiteSetup method shown below, a trick I picked up from nfarina's work on firebase-server.
suiteSetup(function() {
  app = firebase.initializeApp({
    apiKey: 'fake',
    authDomain: 'fake',
    databaseURL: 'https://fakeserver.firebaseio.com',
    storageBucket: 'fake'
  });
  db = app.database();
  db.goOffline();
});
All the tests in offline-test pass.
The next suite is wct-firebase-demo-app_test.html, which tests the eponymous web component. This suite contains a series of unit tests that are set up like offline-test and that pass. Following the idea of dependency injection, the wct-firebase-demo-app component has a database attribute into which the firebase database reference is passed, and this is used to make all the firebase calls. Here's an example from the suite:
test('offline set string from web component attribute', function(done) {
  element.database = db;
  element.database.ref('foo').set('bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val(), 'bar');
    done();
  });
});
I have some very simple methods in the component as well, in my attempt to triangulate toward the broken pieces I'll talk about in a moment. Suffice it to say that this test passes:
test('offline push string from web component function', function(done) {
  element.database = db;
  let resultRef = element.pushIt('foo', 'bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val()[resultRef.key], 'bar');
    done();
  });
});
and is backed by this implementation in wct-firebase-demo-app:
pushIt: function(at, value) {
  return this.database.ref(at).push(value);
},
Once again, these all pass. Now we get to the real quandary. There's a suite of tests for another element, x-element, which has a method pushData:
pushData: function(at, data) {
  this.database.ref(at).push(data);
}
The test for this method is the only test in its suite:
test('pushData has an effect', function(done) {
  element.database = db;
  element.pushData('foo', 'xyz');
  db.ref('foo').once('value', function(snapshot) {
    expect(snapshot.val()).not.to.be.empty;
    done();
  });
});
This test does not pass. While this test is running, the console comes up with an error message:
Your API key is invalid, please check you have copied it correctly.
By setting some breakpoints and walking through the execution, it seems to me that this error comes up after the call to once but before the callback is triggered. Note, again, this doesn't happen with the same test structure described above that's in wct-firebase-demo-app.
That's where I'm stuck. Why do offline-test and wct-firebase-demo-app_test suites work fine, but I get this API key error in x-element_test? The only other clue I have is that if I copy in a valid API key into my initializeApp configuration, then I get a test timeout instead.
UPDATE:
Here is a (patched-together) image of my console log when running the tests:
To illustrate the issue brought up by tony19 below, here's the console log with just pushData has an effect in x-element_test commented out:
The offline-test results are apparently false positives. If you check the Chrome console, offline-test actually throws the same error:
The error doesn't affect the test results, most likely because the API key validation occurs asynchronously after the test has already completed. If you could somehow hook into that validation, you'd be able to catch the error in your tests.
Commenting out all tests except for offline firebase is ok shows the error still occurring, which points to suiteSetup(). Narrowing the problem down further by commenting out 2 of the 3 function calls in the setup, we'll see the error is caused by the call to firebase.initializeApp() (and not necessarily related to once() as you had suspected).
One workaround to consider is wrapping the Firebase library in a class/interface, and mocking that for unit tests.

How can I clear the database (domains) between easyb scenarios in Grails Integration Testing?

I am running an Integration Test for a Grails application. I am using the easyb plugin. The problem is that the database doesn't seem to get cleared out between scenarios. When I run standard Grails integration tests, the persistence context is cleared between each test. The easyb stories are in the integration folder, but the Grails integration test rules don't seem to apply here... So how do you make easyb clean up after itself?
P.S. I'm defining multiple scenarios in the same groovy file fwiw, but I don't think this is necessarily pertinent.
Just in case somebody like me is still dealing with this issue and looking for a way to roll back after each test scenario, below is a solution that works (thanks to Burt Beckwith's blog).
Wrap each easyb test scenario in a withTransaction block and manually roll back at the end:
scenario "add person should be successful", {
Person.withTransaction { status ->
given "no people in database", {
}
when "I add a person", {
Person.build()
}
then "the number of people in database is one", {
Person.list().size().shouldEqual 1
}
status.setRollbackOnly()
}
}
scenario "database rollback should be successful", {
given "the previous test created a person", {
}
when "queried for people", {
people = Person.list().size()
}
then "the number of people should be zero", {
people.shouldEqual 0
}
}
The above test passes.
Please post if you have a better solution to the problem.
One possibility is to use transactions. I use this technique in Java: you mark your test with a transaction annotation, and after the test the database changes are rolled back.
Another possibility is to run SQL cleanup queries in an after-scenario section.

MVCContrib Testing Route with Areas

I am using MVC 2 with Areas. To test routing, I am using MvcContrib.
This is the testing code:
[Test]
public void Home()
{
    MvcApplication.RegisterRoutes(RouteTable.Routes);
    "~/".ShouldMapTo<HomeController>(x => x.Login("Nps"));
}
I am not sure how to call the routing definitions that are stored in Areas.
Calling AreaRegistration.RegisterAllAreas() is not an option as it gives an exception.
Thanks
Revin
This is the way I do it, which works for me:
[Test]
public void VerifyRouteMapFor_Test_Area_TestController()
{
    RouteTable.Routes.Clear();
    var testAreaRegistration = new testAreaRegistration();
    testAreaRegistration.RegisterArea(new AreaRegistrationContext(testAreaRegistration.AreaName, RouteTable.Routes));
    "~/test/index".ShouldMapTo<testController>(x => x.Index());
}
Rather than calling RegisterAllAreas, you should call the AreaRegistration for the area you are testing. RegisterAllAreas scans all the loaded assemblies and as a result does too much for a test. I would manually set up the test. If it still throws an exception, post it here or to the MvcContrib mailing list. I am sure that there are some cases where the TestHelper needs to be updated to support areas better. We have not added any specific area support to the test helpers yet.
For a unit test, perhaps it's best to just do the one area. But for an integration test, you'd want to test all the routes in the context, imo.