I ran into some difficulty trying to unit test against live Azure Storage Queues, and kept writing simpler and simpler examples to try to isolate the problem. In a nutshell, here is what seems to be happening:
Queue access is clearly (and appropriately) lazy-loaded. In my MVC app, though, when I actually need to access the queue (in my case, when I call the CloudQueue.Exists method), it is pretty fast: less than one tenth of a second. However, the very same code, when run in the context of a unit test, takes about 25 seconds.
I don't understand why there should be this difference, so I made a simple console app that writes something and then reads it from an Azure queue. The console app also takes 25 seconds the first time it is run -- on subsequent runs it takes about 2.5 seconds.
And now for the really weird behavior. I created a Visual Studio 2012 solution with three projects -- one MVC app, one Console app, and one Unit Test project. All three call the same static method which checks for the existence of a queue, creates it if it doesn't exist, writes some data to it and reads some data from it. I have put a timer on the call to CloudQueue.Exists in that method. And here is the deal. When the method is called from the MVC app, the CloudQueue.Exists method consistently completes in about one tenth of a second, whether or not the queue actually does exist. When the method is called from the console app, the first time it is called it takes 25 seconds, and subsequent times it takes about 2.5 seconds. When the method is called from the Unit Test, it consistently takes 25 seconds.
More info: when I created this dummy solution, I happened to put my static method (QueueTest) in the console app. Here is what is weird -- if I set the default startup project in Visual Studio to the console app, then the unit test suddenly takes 2.5 seconds. But if I set the startup project to the MVC app (or to the unit test project), then the unit test takes 25 seconds!
So.... does anyone have a theory of what is going on here? I am baffled.
Code follows below:
Console App:
using System;
using System.Configuration;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(QueueTest("my-console-queue", "Console Test"));
    }

    public static string QueueTest(string queueName, string message)
    {
        string connectionString = ConfigurationManager.ConnectionStrings["StorageConnectionString"].ConnectionString;
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
        CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference(queueName);

        // Time the first real network call: the existence check.
        DateTime beforeTime = DateTime.Now;
        bool doesExist = queue.Exists();
        DateTime afterTime = DateTime.Now;
        TimeSpan ts = afterTime - beforeTime;

        if (!doesExist)
        {
            queue.Create();
        }

        CloudQueueMessage qAddMessage = new CloudQueueMessage(message);
        queue.AddMessage(qAddMessage);
        CloudQueueMessage qGetMessage = queue.GetMessage();
        string response = String.Format("{0} ({1} seconds)", qGetMessage.AsString, ts.TotalSeconds);
        return response;
    }
}
MVC App (Home Controller):
public class HomeController : Controller
{
    public ActionResult Index()
    {
        return Content(Program.QueueTest("my-mvc-queue", "Mvc Test"));
    }
}
Unit Test Method: (Note, currently expected to fail!)
[TestClass]
public class QueueUnitTests
{
    [TestMethod]
    public void CanWriteToAndReadFromQueue()
    {
        // Arrange
        string qName = "my-unit-queue";
        string message = "test message";

        // Act
        string result = Program.QueueTest(qName, message);

        // Assert
        Assert.IsTrue(String.CompareOrdinal(result, message) == 0);
    }
}
Any insight is greatly appreciated.
I suspect this has nothing to do with Azure queues, but rather with .NET trying to determine your proxy settings. What happens if you make some other random System.Net call instead of the call to queue storage?
Try this line of code at the beginning of your app:
System.Net.WebRequest.DefaultWebProxy = null;
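For example, in the console app above this would go at the top of Main, before the first storage call. A minimal sketch (the proxy-detection explanation is an educated guess, not confirmed):

static void Main(string[] args)
{
    // Guess: the 25-second first call is .NET's automatic proxy discovery.
    // Setting DefaultWebProxy to null makes System.Net use no proxy at all,
    // skipping that discovery step for this and every subsequent request.
    System.Net.WebRequest.DefaultWebProxy = null;

    Console.WriteLine(QueueTest("my-console-queue", "Console Test"));
}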
Related
I recently returned to ReSharper, using a trial of the newest version, 2022.2.3. I was quite surprised when one of my NUnit tests failed in a weird way when run by ReSharper's built-in unit test runner -- something that has never happened to me with Test Explorer.
As long as the asserts pass, it's all fine: green, all tests listed. However, when an assert fails, it says "One or more child tests had errors. Exception doesn't have a stacktrace".
Not only is there no mention of the actual values that weren't correct, but the whole failing test seems to be gone!
This happens only when I use the 'modern' approach with Assert.That. So
Assert.That(httpContext.Response.StatusCode, Is.EqualTo(200));
is causing issues, meanwhile, the more classic:
Assert.AreEqual(200, httpContext.Response.StatusCode);
works as expected. Is this a known bug, or are some attributes required? JetBrains claims full support for NUnit out of the box, so this is a bit surprising.
NOTE: the test methods are async, awaiting results and returning Task; besides that, nothing unusual.
EDIT: The test code is as follows; ApiKeyMiddleware is any middleware that returns a 200 response here.
[TestFixture]
public class ApiKeyMiddlewareTests
{
    [Test]
    public async Task Invoke_ActiveKey_Authorized()
    {
        var httpContext = new DefaultHttpContext();
        httpContext.Request.Headers.Add("XXXXX", "xxxx");
        var configuration = Options.Create(new AccessConfiguration { ActiveApiKeys = new List<string> { "xxxx" } });
        var middleware = new ApiKeyMiddleware(GetEmptyRequest(), configuration);

        await middleware.Invoke(httpContext);

        Assert.That(httpContext.Response.StatusCode, Is.EqualTo(200)); // change to anything other than 200 and it fails + vanishes
    }
}
So let me start by saying I've seen all the threads over the wars between creating a wrapper vs. mocking the HttpMessageHandler. In the past, I've done the wrapper method with great success, but this time I thought I'd go down the path of mocking the HttpMessageHandler.
For starters, here is an example of the debate: Mocking HttpClient in unit tests. I want to add that that's not what this is about.
What I've found is that I have tests upon tests that inject an HttpClient. I've been doing a lot of serverless AWS Lambdas, and the basic flow is like so:
// some pseudo code
public class Functions
{
    private readonly HttpClient _client;

    public Functions(HttpClient client)
    {
        _client = client;
    }

    public async Task<APIGatewayResponse> GetData(ApiGatewayRequest request, ILambdaContext context)
    {
        var result = await _client.GetAsync("http://example.com");
        return new APIGatewayResponse
        {
            StatusCode = (int)result.StatusCode,
            Body = await result.Content.ReadAsStringAsync()
        };
    }
}
...
[Fact]
public async Task ShouldDoCall()
{
    var expectedUri = new Uri("http://example.com");
    var expectedResponse = "some response";
    var mockResponse = new HttpResponseMessage(HttpStatusCode.OK) { Content = new StringContent(expectedResponse) };

    var mockHandler = new Mock<HttpClientHandler>();
    mockHandler
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>())
        .ReturnsAsync(mockResponse);

    var f = new Functions(new HttpClient(mockHandler.Object));
    var result = await f.GetData(new ApiGatewayRequest(), null);

    mockHandler.Protected().Verify(
        "SendAsync",
        Times.Exactly(1), // we expect a single external request
        ItExpr.Is<HttpRequestMessage>(req =>
            req.Method == HttpMethod.Get &&
            req.RequestUri == expectedUri), // to this uri
        ItExpr.IsAny<CancellationToken>());

    Assert.Equal(200, result.StatusCode);
}
So here's where I have the problem!
When all my tests run in NCrunch, they pass, and pass fast!
When I run them all manually with ReSharper 2018, they fail.
Equally, when they run within the CI/CD platform, which is a Docker container with the .NET Core 2.1 SDK on a Linux distro, they also fail.
These tests should not be running in parallel (as I read it, that's the default for these tests). I have about 30 tests around these methods combined, and each one randomly fails on the Moq Verify portion. Sometimes they pass, sometimes they fail. If I break the tests down per test class and run the groups that way, instead of all in one go, then they all pass in chunks. I'll also add that I have gone as far as isolating the variables per test method to make sure there is no overlap. (A sketch of forcing parallelization off entirely follows below.)
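For completeness: xUnit 2.x runs tests within one class sequentially but runs different test classes (collections) in parallel by default, which would match the passes-in-per-class-chunks behavior. A sketch, assuming xUnit 2.x (as the [Fact] attribute suggests), of switching parallelization off assembly-wide:

// Assumption: xUnit 2.x. Put this in any source file (e.g. AssemblyInfo.cs)
// to disable parallel execution of test collections across the assembly.
using Xunit;

[assembly: CollectionBehavior(DisableTestParallelization = true)]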
So I'm really at a loss for how to handle this and make sure it is testable.
Are there different ways to approach HttpClient so that the tests pass consistently?
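For reference, the wrapper approach I mentioned can be as thin as a one-method interface, so tests mock the interface instead of the handler. A sketch (the IHttpGetter/HttpGetter names are made up for illustration):

using System.Net.Http;
using System.Threading.Tasks;

// Illustrative wrapper: tests mock IHttpGetter directly, with no
// Protected() plumbing and no shared HttpMessageHandler state.
public interface IHttpGetter
{
    Task<HttpResponseMessage> GetAsync(string url);
}

public class HttpGetter : IHttpGetter
{
    private readonly HttpClient _client;

    public HttpGetter(HttpClient client)
    {
        _client = client;
    }

    public Task<HttpResponseMessage> GetAsync(string url)
    {
        return _client.GetAsync(url);
    }
}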
After lots of back and forth, I found two things going on here.
I couldn't get parallel processing disabled within the Docker setup, which is where I thought the issue was (I even added thread sleeps between tests to slow things down, which felt really icky to me).
All the tests I ran locally through the test runners reported passing, while about half of them failed on the Docker test runner. The actual issue ended up being a magic string used when setting and getting environment variables.
Small caveat to call out: Amazon updated their .NET Core Lambda tools to install via the dotnet CLI, so this was updated in our Docker image.
From time to time I have these annoying tests with intermittent issues that I need to run many times to expose. I was looking for a convenient way to set a repeat count or an "endless loop" from IntelliJ, but I did not find one.
Is there a plugin, or did I miss something, that would allow me to do this from the UI (instead of changing code for it)?
EDIT: As I found out, support for such a feature is per test-runner plugin. For example, it already exists for JUnit, but there is no such option for Go Test. My instinct says this functionality should be provided generically for all test plugins, but there may be technical reasons for the per-plugin approach.
In the Run Configuration of the test there is a "Repeat:" dropdown where you can specify the number of repeats or, for example, repeat until the test fails. I believe this has been available since IntelliJ IDEA 15.
You can use the JDK's ScheduledExecutorService to schedule the test suite to run periodically until you shut the service down.
Have a look at the Oracle doc:
https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ScheduledExecutorService.html
Sample
import static java.util.concurrent.TimeUnit.*;
import java.util.concurrent.*;

class BeeperControl {
    private final ScheduledExecutorService scheduler =
        Executors.newScheduledThreadPool(1);

    public void beepForAnHour() {
        final Runnable beeper = new Runnable() {
            public void run() { System.out.println("beep"); }
        };
        // run the task every 10 seconds, starting after a 10-second delay
        final ScheduledFuture<?> beeperHandle =
            scheduler.scheduleAtFixedRate(beeper, 10, 10, SECONDS);
        // cancel the periodic task after one hour
        scheduler.schedule(new Runnable() {
            public void run() { beeperHandle.cancel(true); }
        }, 60 * 60, SECONDS);
    }
}
I am writing a series of unit tests for a Gtk Application in Vala, and I have encountered a problem with instantiating and running a Gtk Application more than once inside the test program.
The first time the application is instantiated and run, everything works as expected, but subsequent times it fails with the message:
Failed to register: An object is already exported for the interface org.gtk.Application at /org/valarade/testtools
From what I understand of the Gtk Application life cycle, the application registers itself on the DBus session bus as a single-instance application, which prevents additional instances from being run.
Using the d-feet application, I am able to watch the application register itself on the session bus the first time it runs, and it appears to deregister itself when the running test function terminates. There is no trace of it when the subsequent test function instantiates and runs a new instance, yet the above error is returned nonetheless.
I have tried several things, including making sure that all the objects referenced by the application, as well as the application object itself, are destroyed between test functions. I have tried calling connection.close_sync in the class destructor and setting register_session to false, but neither had any effect.
The sample code for the Test Program
static void main (string[] args) {
    Gtk.test_init (ref args);
    TestSuite.get_root ().add_suite (new FileLoaderPluginTests ().get_suite ());

    Idle.add (() => {
        Test.run ();
        Gtk.main_quit ();
        return true;
    });

    Gtk.main ();
}
and the code for the Test Suite
public FileLoaderPluginTests () {
    add_test ("method on_open_activate ()", file_loader_on_open_activate);
    add_test ("method on_open_response_ok ()", file_loader_on_open_response_ok);
}

public void file_loader_on_open_activate () {
    var app = new MockApplication ();
    app.activate.connect ((a) => {
        var action = app.shell.lookup_action ("file_open");
        action.activate (null);
        app.quit ();
    });
    app.run ();
    app = null;
}

public void file_loader_on_open_response_ok () {
    var app = new MockApplication ();
    app.activate.connect ((a) => {
        var action = app.shell.lookup_action ("file_open");
        action.activate (null);
        app.quit ();
    });
    app.run ();
}
It appears to me that the DBus session registration lasts for the life of the running test program, not the Application object itself. I've been through the precious little documentation there is, and I can't seem to grok anything that would let me deregister the application after each test.
Although I can work around this by setting up a distinct test program for each unit test, that seems like a lot of unnecessary duplication. Ideally I would like one test program per logical unit in the whole application, whereas this way it could result in quite a number of programs by the time there's any significant code coverage.
My question, then, is: is there any way I can create, run, and destroy a Gtk Application multiple times within a test program? Alternatively, is there a better way to test Gtk Applications that obviates this problem?
Since DBus communication is asynchronous, I would guess that your application name hasn't deregistered from the bus yet by the time the next test starts up the next application.
Some tips for testing Application classes:
You can append a unique identifier (e.g. the PID plus the system clock time) to the application ID when you create each application; that way, the applications will never clash with each other. This is especially advisable if you might be running test programs in parallel (which Automake does by default these days.)
If possible, keep your logic out of your application class and in smaller units so that you don't have to create an application instance for every test. Starting up and shutting down an application instance in each test makes them very slow.
PS. I believe Test.run() already runs a main loop for you, so you don't need to start the test suite in an idle function.
I am unit testing a class with a property whose value changes often, depending on communication it receives from another component. If the class does not receive any communication for 5 seconds, the property reverts to a default value.
It is easy for me to stub and mock out the communicating component in order to trigger the values I want to test for. The problem is that if I run my unit tests on a machine that is busy (like a build machine) and there is a significant enough delay, the property reverts to its default and my unit test fails.
How would you test to be sure that this property has the proper value when simulating various communication conditions?
One idea is to restructure my code so that I can stub the part of the class which controls the timeout. Another is to write my unit test so that it can detect whether it failed due to a timeout and indicate that in the test results.
I'd try a different approach. Game developers often need a way to control game time, e.g. for fast-forward functionality or to synchronize frame rates. They introduce a Timer object which reads ticks either from a hardware clock or from a simulated clock.
In your case, you could provide a controllable timer for your unit tests and a timer which delegates to system time in production mode. That way, you can control the time which passes for your test case and thus how the class-under-test has to react under certain timeout conditions.
Pseudo-code:
public void testTimeout() throws Exception {
    MockTimerClock clock = new MockTimerClock();
    ClassUnderTest cut = new ClassUnderTest();
    cut.setTimerClock(clock);

    cut.beginWaitingForCommunication();
    assertTrue(cut.hasDefaultValues());

    cut.receiveOtherValues();
    assertFalse(cut.hasDefaultValues());

    clock.tick(5, TimeUnit.SECONDS);
    assertTrue(cut.hasDefaultValues());

    cut.shutdown();
}
You could make the timeout property configurable, then set it to a high enough value in your unit tests (or low enough, if you want to unit test the reset behaviour).
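A sketch of that idea (the Receiver name and constructor are hypothetical, since the original class isn't shown):

// Hypothetical sketch: inject the revert timeout instead of hard-coding 5 seconds.
public class Receiver
{
    private readonly TimeSpan _revertTimeout;

    public Receiver(TimeSpan revertTimeout)
    {
        _revertTimeout = revertTimeout;
    }

    // ... revert the property to its default after _revertTimeout of silence ...
}

// Production: new Receiver(TimeSpan.FromSeconds(5))
// Unit tests on a busy build machine: new Receiver(TimeSpan.FromMinutes(10))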
There is a similar problem when using DateTime.Now.
Ayende described a trick to deal with it that I liked:
public static class SystemTime
{
    public static Func<DateTime> Now = () => DateTime.Now;
}
and then in your test:
[Test]
public void Should_calculate_length_of_stay_from_today_when_still_occupied()
{
    var startDate = new DateTime(2008, 10, 1);
    SystemTime.Now = () => new DateTime(2008, 10, 5);

    var occupation = new Occupation { StartDate = startDate };

    occupation.LengthOfStay().ShouldEqual(4);
}
Maybe you can use the same kind of trick for your timeout?
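For instance, the class under test could read the clock through SystemTime, so a test can jump past the 5-second window without real waiting. A hypothetical sketch (Receiver and its members are made-up names, not from the question):

// Hypothetical sketch: route the timeout check through SystemTime.Now
// so tests control the clock.
public class Receiver
{
    private DateTime _lastMessageAt = SystemTime.Now();

    public void OnMessageReceived()
    {
        _lastMessageAt = SystemTime.Now();
    }

    public bool HasDefaultValues
    {
        get { return SystemTime.Now() - _lastMessageAt > TimeSpan.FromSeconds(5); }
    }
}

// In a test: SystemTime.Now = () => DateTime.Now.AddSeconds(6);
// HasDefaultValues then reports true without any real delay.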