ReSharper not supporting Assert.That - unit-testing

I'm in the process of returning to ReSharper, using a trial of the newest version, 2022.2.3. I was quite surprised when one of my NUnit tests failed in a weird way when run by ReSharper's built-in unit test runner. Something like this has never happened to me with Test Explorer.
As long as the asserts pass, it's all fine: everything is green and all tests are listed. However, when an assert fails, it says "One or more child tests had errors. Exception doesn't have a stacktrace".
Not only is there no mention of the actual values that were incorrect, but the whole failing test seems to be gone!
This happens only when I use the 'modern' approach with Assert.That. So
Assert.That(httpContext.Response.StatusCode, Is.EqualTo(200));
is causing issues, while the more classic:
Assert.AreEqual(200, httpContext.Response.StatusCode);
works as expected. Is this a known bug, or are some attributes required? JetBrains claims full support of NUnit out of the box, so this is a bit surprising.
NOTE: the test methods are async, awaiting results and returning Task; besides this, there is nothing unusual.
EDIT: The test code is as follows; ApiKeyMiddleware here is any middleware that returns a response with status code 200.
[TestFixture]
public class ApiKeyMiddlewareTests
{
    [Test]
    public async Task Invoke_ActiveKey_Authorized()
    {
        var httpContext = new DefaultHttpContext();
        httpContext.Request.Headers.Add("XXXXX", "xxxx");
        var configuration = Options.Create(new AccessConfiguration { ActiveApiKeys = new List<string> { "xxxx" } });
        var middleware = new ApiKeyMiddleware(GetEmptyRequest(), configuration);

        await middleware.Invoke(httpContext);

        Assert.That(httpContext.Response.StatusCode, Is.EqualTo(200)); // change to anything other than 200 and it fails + vanishes
    }
}
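Not a fix, but a possible diagnostic while the runner is swallowing failure details: NUnit's constraint-model asserts accept an optional message argument, which at least labels the failure in whatever output does survive. A minimal sketch:

    // Same constraint-model assert, with an explicit failure message.
    // The message shows up in the runner output even when the stack trace
    // and actual/expected values are lost.
    Assert.That(httpContext.Response.StatusCode, Is.EqualTo(200),
        "Unexpected status code after ApiKeyMiddleware.Invoke");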

Related

Test runners inconsistent with HttpClient and Mocking HttpMessageRequest XUnit

So let me start by saying I've seen all the threads about the wars between creating a wrapper vs. mocking the HttpMessageHandler. In the past, I've done the wrapper method with great success, but this time I thought I'd go down the path of mocking the HttpMessageHandler.
For starters, here is an example of the debate: Mocking HttpClient in unit tests. I want to add that that's not what this is about.
What I've found is that I have tests upon tests that inject an HttpClient. I've been doing a lot of serverless AWS Lambdas, and the basic flow is like so:
// some pseudo code
public class Functions
{
    private readonly HttpClient _httpClient;

    public Functions(HttpClient client)
    {
        _httpClient = client;
    }

    public async Task<APIGatewayResponse> GetData(ApiGatewayRequest request, ILambdaContext context)
    {
        var result = await _httpClient.GetAsync("http://example.com");
        return new APIGatewayResponse
        {
            StatusCode = (int)result.StatusCode,
            Body = await result.Content.ReadAsStringAsync()
        };
    }
}
...
[Fact]
public async Task ShouldDoCall()
{
    var expectedUri = new Uri("http://example.com");
    var expectedResponse = "hello";
    var mockResponse = new HttpResponseMessage(HttpStatusCode.OK) { Content = new StringContent(expectedResponse) };

    var mockHandler = new Mock<HttpMessageHandler>();
    mockHandler
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>())
        .ReturnsAsync(mockResponse);

    var f = new Functions(new HttpClient(mockHandler.Object));
    var result = await f.GetData(new ApiGatewayRequest(), null); // dummy request/context for the pseudo example

    mockHandler.Protected().Verify(
        "SendAsync",
        Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req =>
            req.Method == HttpMethod.Get &&
            req.RequestUri == expectedUri // to this uri
        ),
        ItExpr.IsAny<CancellationToken>()
    );
    Assert.Equal(200, result.StatusCode);
}
So here's where I have the problem!
When all my tests run in NCrunch they pass, and pass fast!
When I run them all manually with ReSharper 2018, they fail.
Equally, when they run within the CI/CD platform, which is a Docker container with the .NET Core 2.1 SDK on a Linux distro, they also fail.
These tests should not be run in parallel (read: the tests default this way). I have about 30 tests around these methods combined, and each one randomly fails on the Moq verify portion. Sometimes they pass, sometimes they fail. If I break the tests down per test class and only run the groups that way, instead of all in one go, then they all pass in chunks. I'll also add that I have even gone through isolating the variables per test method to make sure there is no overlap.
So, I'm really lost trying to handle this and make sure this is testable.
Are there different ways to approach HttpClient so that it can consistently pass?
After lots of back and forth, I found two things going on here.
I couldn't get parallel processing disabled within the Docker setup, which is where I thought the issue was (I even made it do a thread sleep between tests to slow them down, which felt really icky to me).
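For anyone hitting the same wall, a minimal sketch of the two standard xUnit switches for this (whether they take effect inside a particular Docker/CI runner is a separate question; there is also an equivalent "parallelizeTestCollections": false setting in xunit.runner.json):

    // Option 1: disable parallelization for the whole test assembly
    // (e.g. in AssemblyInfo.cs or any .cs file in the test project).
    [assembly: Xunit.CollectionBehavior(DisableTestParallelization = true)]

    // Option 2: put the affected test classes into one named collection;
    // xUnit never runs tests within the same collection in parallel.
    [Xunit.Collection("HttpClient tests")]
    public class FunctionsTests
    {
        // ... tests ...
    }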
I found that all the tests I ran locally through the test runners were telling me they passed, when about half of them failed on the Docker test runner. What ended up being the issue was a magic-string mismatch when setting and getting environment variables.
Small caveat to call out: Amazon updated their .NET Core Lambda tools to install via the dotnet CLI, so this was updated in our Docker image.
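To make the magic-string point concrete, here is a minimal sketch of the kind of fix involved, using a hypothetical variable name for illustration; sharing one constant between the setter and the getter eliminates the mismatch:

    // Hypothetical name; one constant shared by both sides
    // prevents the set/get strings from drifting apart.
    public static class EnvVars
    {
        public const string ApiBaseUrl = "API_BASE_URL";
    }

    // test setup
    Environment.SetEnvironmentVariable(EnvVars.ApiBaseUrl, "http://example.com");

    // production code
    var baseUrl = Environment.GetEnvironmentVariable(EnvVars.ApiBaseUrl);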

How can I unit test a MassTransit consumer that builds and executes a routing slip?

In .NET Core 2.0 I have a fairly simple MassTransit routing slip that contains two activities. It is built and executed in a consumer, and it all ties back to an Automatonymous state machine. It all works great, albeit with a few final clean-up tweaks needed.
However, I can't quite figure out the best way to write unit tests for my consumer as it builds a routing slip. I have the following code in my consumer:
public async Task Consume(ConsumeContext<ProcessRequest> context)
{
    var builder = new RoutingSlipBuilder(NewId.NextGuid());

    SetupRoutingSlipActivities(builder, context);

    var routingSlip = builder.Build();

    await context.Execute(routingSlip).ConfigureAwait(false);
}
I created the SetupRoutingSlipActivities method as I thought it would help me write tests to make sure the right activities were being added and it simply looks like:
public void SetupRoutingSlipActivities(RoutingSlipBuilder builder, ConsumeContext<IProcessCreateLinkRequest> context)
{
    builder.AddActivity(
        nameof(ActivityOne),
        new Uri("execute_activity_one_example_address"),
        new ActivityOneArguments(
            context.Message.Id,
            context.Message.Name)
    );

    builder.AddActivity(
        nameof(ActivityTwo),
        new Uri("execute_activity_two_example_address"),
        new ActivityTwoArguments(
            context.Message.AnotherId,
            context.Message.FileName)
    );
}
I tried to just write tests for SetupRoutingSlipActivities using a Moq mock of the builder and a MassTransit InMemoryTestHarness, but I found that the AddActivity method is not virtual, so I can't verify it like this:
aRoutingSlipBuilder.Verify(x => x.AddActivity(
    nameof(ActivityOne),
    new Uri("execute_activity_one_example_address"),
    It.Is<ActivityOne>(y => y.Id == 1 && y.Name == "A test name")));
Please ignore some of the weird data in the code examples as I just put up a simplified version.
Does anyone have any recommendations on how to do this? I also wanted to test that the RoutingSlipBuilder was created, but as that instance is created in the Consume method, I wasn't sure how to do it! I've searched a lot online and through the MassTransit repo, but nothing stood out.
Look at how the Courier tests are written; there are a number of test fixtures available for testing routing slip activities. While they aren't well documented, the unit tests are a working demonstration of how the testing support is used.
https://github.com/MassTransit/MassTransit/blob/develop/src/MassTransit.Tests/Courier/TwoActivityEvent_Specs.cs
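As an alternative to mocking the non-virtual AddActivity, one option is to pass a real RoutingSlipBuilder into SetupRoutingSlipActivities and assert on the routing slip it produces. A minimal sketch, assuming the built RoutingSlip exposes its activities via an Itinerary collection (each entry carrying the activity Name and Address, as in the MassTransit.Courier contracts), and using hypothetical names for the consumer and context stub:

    [Test]
    public void SetupRoutingSlipActivities_adds_both_activities()
    {
        var builder = new RoutingSlipBuilder(NewId.NextGuid());
        var consumer = new ProcessRequestConsumer(); // hypothetical: the consumer under test
        var context = BuildFakeConsumeContext();     // hypothetical helper returning a stubbed ConsumeContext

        consumer.SetupRoutingSlipActivities(builder, context);
        var routingSlip = builder.Build();

        Assert.AreEqual(2, routingSlip.Itinerary.Count);
        Assert.AreEqual(nameof(ActivityOne), routingSlip.Itinerary[0].Name);
        Assert.AreEqual(nameof(ActivityTwo), routingSlip.Itinerary[1].Name);
    }

This sidesteps the virtual-method problem entirely: you verify the observable output (the built slip) rather than the calls made on the builder.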

Unit test a polymer web component that uses firebase

I have been trying to configure offline unit tests for Polymer web components that use the latest release of the Firebase distributed database. Some of my tests are passing, but others, which look nigh identical to passing ones, are not running properly.
I have set up a project on github that demonstrates my configuration, and I'll provide some more commentary below.
Sample:
https://github.com/doctor-g/wct-firebase-demo
In that project, there are two suites of tests that work fine. The simplest is offline-test, which doesn't use web components at all. It simply shows that it's possible to use the Firebase database's offline mode to run some unit tests. The heart of this trick is in the suiteSetup method shown below, a trick I picked up from nfarina's work on firebase-server.
suiteSetup(function() {
  app = firebase.initializeApp({
    apiKey: 'fake',
    authDomain: 'fake',
    databaseURL: 'https://fakeserver.firebaseio.com',
    storageBucket: 'fake'
  });
  db = app.database();
  db.goOffline();
});
All the tests in offline-test pass.
The next suite is wct-firebase-demo-app_test.html, which tests the eponymous web component. This suite contains a series of unit tests that are set up like offline-test and that pass. Following the idea of dependency injection, the wct-firebase-demo-app component has a database attribute into which the Firebase database reference is passed, and this is used to make all the Firebase calls. Here's an example from the suite:
test('offline set string from web component attribute', function(done) {
  element.database = db;
  element.database.ref('foo').set('bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val(), 'bar');
    done();
  });
});
I have some very simple methods in the component as well, in my attempt to triangulate toward the broken pieces I'll talk about in a moment. Suffice it to say that this test passes:
test('offline push string from web component function', function(done) {
  element.database = db;
  let resultRef = element.pushIt('foo', 'bar');
  element.database.ref('foo').once('value', function(snapshot) {
    assert.equal(snapshot.val()[resultRef.key], 'bar');
    done();
  });
});
and is backed by this implementation in wct-firebase-demo-app:
pushIt: function(at, value) {
  return this.database.ref(at).push(value);
},
Once again, these all pass. Now we get to the real quandary. There's a suite of tests for another element, x-element, which has a method pushData:
pushData: function(at, data) {
  this.database.ref(at).push(data);
}
The test for this method is the only test in its suite:
test('pushData has an effect', function(done) {
  element.database = db;
  element.pushData('foo', 'xyz');
  db.ref('foo').once('value', function(snapshot) {
    expect(snapshot.val()).not.to.be.empty;
    done();
  });
});
This test does not pass. While this test is running, the console comes up with an error message:
Your API key is invalid, please check you have copied it correctly.
By setting some breakpoints and walking through the execution, it seems to me that this error comes up after the call to once but before the callback is triggered. Note, again, this doesn't happen with the same test structure described above that's in wct-firebase-demo-app.
That's where I'm stuck. Why do the offline-test and wct-firebase-demo-app_test suites work fine, but I get this API key error in x-element_test? The only other clue I have is that if I copy a valid API key into my initializeApp configuration, then I get a test timeout instead.
UPDATE:
Here is a (patched-together) image of my console log when running the tests:
To illustrate the issue brought up by tony19 below, here's the console log with just the 'pushData has an effect' test in x-element_test commented out:
The offline-test results are apparently false positives. If you check the Chrome console, offline-test actually throws the same error.
The error doesn't affect the test results most likely because the API key validation occurs asynchronously, after the test has already completed. If you could somehow hook into that validation, you'd be able to catch the error in your tests.
Commenting out all tests except for 'offline firebase is ok' shows the error still occurring, which points to suiteSetup(). Narrowing the problem down further by commenting out 2 of the 3 function calls in the setup, we see the error is caused by the call to firebase.initializeApp() (and is not necessarily related to once() as you had suspected).
One workaround to consider is wrapping the Firebase library in a class/interface, and mocking that for unit tests.

Unit Testing in Nancy causing Routebuilder exception using TinyIoc

Getting a System.MissingMethodException, Method not found: 'Void RouteBuilder.set_Item()' on the following route:
Get["/foo"] = parameters => { return Bar(Request);};
This runs fine when calling from a browser, but fails when testing with this setup:
var browser = new Browser(with =>
{
    with.Module<Foobar>();
});
var response = browser.Get("/foo", with => { with.HttpRequest(); });
Any clue why the RouteBuilder used for testing won't pick up this route?
Turns out I had created the test project using a pre-release version of Nancy.Testing. This in turn made TinyIoC unhappy when trying to build routes/dependencies. So, if you see this mysterious message, check that your working code and test code are referencing the same packages.

Why do my test functions appear in code coverage? (or how to make them 100%?)

I'm using xUnit to test my C# code and I'm using Visual Studio Premium 2012.
In my solution I have my main project that I'm testing and a second project that contains all of my tests. I'm supposed to be at 100% code coverage, but there are some functions in my test project that I cannot get to 100%. Can I just exclude that project from appearing in the code coverage results?
Or... does anyone know how to get a test function to 100% when you have a test where you are expecting an exception to be thrown? Here are some of the ways I've tried to write a test for a method that should throw an exception, and what isn't being covered. MyBusinessLogic has a function named GenerateNameLine that accepts an object of type MyViewModel. If the Name property of MyViewModel is an empty string, it should throw an exception of type RequiredInformationMissingException.
[Fact]
public void TestMethod1()
{
    var vm = new MyViewModel();
    vm.Name = string.Empty;
    Assert.Throws<RequiredInformationMissingException>(delegate { MyBusinessLogic.GenerateNameLine(vm); });
}
This test passes, but code coverage with color highlighting is showing me that MyBusinessLogic.GenerateNameLine(vm); is not getting hit.
I've also tried:
[Fact]
public void TestMethod1()
{
    bool fRequiredInfoExceptionThrown = false;
    var vm = new MyViewModel();
    vm.Name = string.Empty;
    try
    {
        MyBusinessLogic.GenerateNameLine(vm);
    }
    catch (Exception ex)
    {
        if (ex.GetType() == typeof(RequiredInformationMissingException))
            fRequiredInfoExceptionThrown = true;
    }
    Assert.True(fRequiredInfoExceptionThrown, "RequiredInformationMissingException was not thrown.");
}
This test also passes. But code coverage says the } right before my catch is never hit.
I don't know how to write a test for an exception that gets 100%. I know it doesn't even really matter, but at work 100% code coverage is part of our definition of done, so I don't know what to do here.
The answer is Yes
We provide filters to customize what you want to include/exclude via the .runsettings file. You can filter out pretty much anything that you do not find useful.
The [ExcludeFromCodeCoverage] attribute can also be used in code.
See: http://blogs.msdn.com/b/sudhakan/archive/2012/05/11/customizing-code-coverage-in-visual-studio-11.aspx
Are you seeing the second issue in VS2012RTM+Update1 as well?
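For the [ExcludeFromCodeCoverage] route, a minimal sketch; marking a test class this way keeps it out of the coverage numbers while the tests still execute:

    using System.Diagnostics.CodeAnalysis;

    [ExcludeFromCodeCoverage] // the coverage analyzer skips this type and its members
    public class MyBusinessLogicTests
    {
        // ... test methods run as usual; they just don't count toward coverage ...
    }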
I would exclude the tests, but still keep an eye on their coverage rate, because coverage below 99% would suggest some of them did not run at all.
BTW: 100% is an ideal and cannot be achieved in real-life projects. At the least, the effort to actually reach 100%, as opposed to something like 90%, is disproportionately high. Also, exact coverage rates depend on the manner of counting hit lines.