I'm trying to run Gatling tests, but my REST service has to be running first. How can I run one project before running the tests in another?
lazy val root = project.in(file("."))
  .aggregate(cep, gatlingTest)

lazy val cep = Project("cep", file("cep"))
  .settings(version := "1.0") // ...

lazy val gatlingTest = Project("gatlingTest", file("gatling"))
  .enablePlugins(GatlingPlugin)
  .settings(libraryDependencies ++= Seq( // ...
I tried adding something like this (dependsOn):
lazy val gatlingTest = Project("gatlingTest", file("gatling")).dependsOn(cep)
But it's not what I need.
Maybe something like:
lazy val gatlingTest = Project("gatlingTest", file("gatling"))
  .settings(test in Test <<= test.dependsOn(getProjectRunningTask))
where getProjectRunningTask is a task that starts my service, but I don't really know how to implement such an idea.
What are you using to run your REST service? Is it a Spray app using sbt-revolver?
If that's the case, I guess that:
.settings(test in Gatling <<= reStop.dependsOn(test in Gatling).dependsOn(reStart))
could be sufficient.
This would mean that you would:
Start your app in the background using sbt-revolver
Then start running your Gatling simulations
And finally stop the server after your tests have run
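Putting those steps together, a rough sketch of what the gatlingTest project definition could look like (untested; it assumes sbt 0.13's <<= syntax, the Gatling plugin's Gatling configuration, and sbt-revolver's reStart/reStop tasks scoped to the cep project, where the service is assumed to live; reStart is an input task, so it needs toTask("") when used as a plain dependency):

lazy val gatlingTest = Project("gatlingTest", file("gatling"))
  .enablePlugins(GatlingPlugin)
  .dependsOn(cep)
  .settings(
    // intended order: start the service, run the simulations, stop the service
    test in Gatling <<= (reStop in cep)
      .dependsOn(test in Gatling)
      .dependsOn((reStart in cep).toTask(""))
  )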
Related
I have a simple Kafka Streams application built with Ktor. The Application.kt looks like
fun main(args: Array<String>): Unit = io.ktor.server.netty.EngineMain.main(args)

fun Application.module() {
    install(Routing) {
        healthController()
    }

    val stream = createStream()
    stream.start()
}
where
fun Route.healthController() {
    get("/health") {
        call.respond("I'm alive")
    }
}
I would like to write unit tests to test the endpoints of my application (i.e. /health). I have created the following unit test
@Test
fun `Should get answer from health endpoint`() = testApplication {
    val response = client.get("/health")
    assertEquals(HttpStatusCode.OK, response.status)
    assertEquals("I'm alive", response.bodyAsText())
}
This unit test works fine as long as the stream started in Application.kt does not use a KTable (at least a global KTable - I have not tested with a local KTable). If the stream uses a KTable, the unit test will never end as the stream will run indefinitely. This causes trouble in the GitLab pipeline where all the unit tests are executed.
Is there a "best practice" for testing the endpoints of a Kafka streams application built with Ktor? Especially, if the stream topology includes a KTable?
In unit tests, you should be using Kafka Streams' TopologyTestDriver.
If you want to run integration tests for an RPC layer over Kafka Streams' Interactive Queries, it shouldn't matter that the stream runs indefinitely (that's the point of running Streams). The topology should be running in a background thread, not blocking your tests or your HTTP/RPC server.
Ideally, you have a way to inject a Topology or StreamsBuilder into the Application rather than creating and starting it in your main method.
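For the topology itself, a minimal TopologyTestDriver sketch (assuming kafka-streams-test-utils is on the test classpath; buildTopology() is a hypothetical factory extracted from your createStream(), and the topic names are placeholders for a topology that simply forwards records):

import org.apache.kafka.common.serialization.Serdes
import org.apache.kafka.streams.StreamsConfig
import org.apache.kafka.streams.TopologyTestDriver
import java.util.Properties
import kotlin.test.Test
import kotlin.test.assertEquals

class StreamTopologyTest {
    @Test
    fun `Should process a record through the topology`() {
        val props = Properties().apply {
            put(StreamsConfig.APPLICATION_ID_CONFIG, "topology-test")
            put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "dummy:1234") // never contacted
        }
        // buildTopology() is a hypothetical factory returning the Topology under test
        TopologyTestDriver(buildTopology(), props).use { driver ->
            val input = driver.createInputTopic(
                "input-topic",
                Serdes.String().serializer(),
                Serdes.String().serializer()
            )
            val output = driver.createOutputTopic(
                "output-topic",
                Serdes.String().deserializer(),
                Serdes.String().deserializer()
            )

            input.pipeInput("key", "value")

            assertEquals("value", output.readValue())
        }
    }
}

With the topology injected and tested this way, the Ktor testApplication test only needs the routing module and never has to start the real stream.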
So let me start by saying I've seen all the threads about the wars between creating a wrapper vs mocking the HttpMessageHandler. In the past, I've done the wrapper method with great success, but I thought I'd go down the path of mocking the HttpMessageHandler.
For starters, here is an example of the debate: Mocking HttpClient in unit tests. I want to add that that's not what this is about.
What I've found is that I have tests upon tests that inject an HttpClient. I've been doing a lot of serverless AWS Lambdas, and the basic flow is like so:
// some pseudo code
public class Functions
{
    private readonly HttpClient _client;

    public Functions(HttpClient client)
    {
        _client = client;
    }

    public async Task<APIGatewayResponse> GetData(APIGatewayRequest request, ILambdaContext context)
    {
        var result = await _client.GetAsync("http://example.com");
        return new APIGatewayResponse
        {
            StatusCode = (int)result.StatusCode,
            Body = await result.Content.ReadAsStringAsync()
        };
    }
}
...
[Fact]
public async Task ShouldDoCall()
{
    var expectedUri = new Uri("http://example.com");
    var expectedResponse = "expected body";
    var mockResponse = new HttpResponseMessage(HttpStatusCode.OK)
    {
        Content = new StringContent(expectedResponse)
    };

    var handlerMock = new Mock<HttpClientHandler>();
    handlerMock
        .Protected()
        .Setup<Task<HttpResponseMessage>>(
            "SendAsync",
            ItExpr.IsAny<HttpRequestMessage>(),
            ItExpr.IsAny<CancellationToken>())
        .ReturnsAsync(mockResponse);

    var f = new Functions(new HttpClient(handlerMock.Object));
    var result = await f.GetData(new APIGatewayRequest(), null);

    handlerMock.Protected().Verify(
        "SendAsync",
        Times.Exactly(1), // we expected a single external request
        ItExpr.Is<HttpRequestMessage>(req =>
            req.Method == HttpMethod.Get &&
            req.RequestUri == expectedUri), // to this uri
        ItExpr.IsAny<CancellationToken>());

    Assert.Equal(200, result.StatusCode);
}
So here's where I have the problem!
When all my tests run in NCrunch they pass, and pass fast!
When I run them all manually with ReSharper 2018, they fail.
Equally, when they run on the CI/CD platform, which is a Docker container with the .NET Core 2.1 SDK on a Linux distro, they also fail.
These tests should not be run in parallel (read: the tests default this way). I have about 30 tests around these methods combined, and each one randomly fails on the Moq Verify portion. Sometimes they pass, sometimes they fail. If I break the tests down per test class and run the groups that way, instead of all in one go, then they all pass in chunks. I'll also add that I have even gone through isolating the variables per test method to make sure there is no overlap.
So, I'm really lost with trying to handle this through here and make sure this is testable.
Are there different ways to approach the HttpClient so that the tests pass consistently?
After lots of back and forth, I found two things.
I couldn't get parallel processing disabled within the Docker setup, which is where I thought the issue was (I even added a thread sleep between tests to slow them down, which felt really icky to me).
I found that all the tests I ran locally through the test runners were reported as passing, while about half of them failed on the Docker test runner. What ended up being the issue was a magic string used when setting and getting environment variables.
A small caveat to call out: Amazon updated their .NET Core Lambda tools to install via the dotnet CLI, so this was updated in our Docker image.
When I write a unit test, it properly appears under "Test Runner" > "EditMode", but not under "Test Runner" > "PlayMode". I have enabled PlayMode testing, but it seems not to recognize my script.
The script is in a folder named "Editor".
The script can be run in EditMode, resulting in "EditMode test can only yield null" (which makes sense, as this script yields WaitForFixedUpdate()).
I tried it in a new project, with the same result: the unit tests cannot be run in Play Mode.
This is the basic unit test code from the Unity docs: https://docs.google.com/document/d/1SeNOAVYaq9HUjsKAC2ZvRwKLD2MCNyV4LwcsP3BXm0s/edit
[UnityTest]
public IEnumerator GameObject_WithRigidBody_WillBeAffectedByPhysics()
{
    var go = new GameObject();
    go.AddComponent<Rigidbody>();
    var originalPosition = go.transform.position.y;

    yield return new WaitForFixedUpdate();

    Assert.AreNotEqual(originalPosition, go.transform.position.y);
}
Unity version : 5.6.0f3
Has anyone met this problem before?
Did I miss a step in creating the unit test?
Thanks
When you build on a TFS build server, failed unit tests cause the build to show an orange alert state but they still "succeed". Is there any way to tag a unit test as critical such that if it fails, the whole build will fail?
I've Googled for it and didn't find anything, and I don't see any attribute in the framework, so I'm guessing the answer is no. But maybe I'm just looking in the wrong place.
There is a way to do this, but you need to create multiple test runs and then filter your tests. On your tests, set a TestCategory attribute:
[TestCategory("Critical")]
[TestMethod]
public void MyCriticalTest {}
For NUnit you should be able to use [Category("Critical")]. There are multiple attributes of a test you can filter on, including the name.
Name = TestMethodDisplayName
FullyQualifiedName = FullyQualifiedTestMethodName
Priority = PriorityAttributeValue
TestCategory = TestCategoryAttributeValue
ClassName = ClassName
And these operators:
= (equals)
!= (not equals)
~ (contains or substring only for string values)
& (and)
| (or)
( ) (parentheses for grouping)
xUnit.net currently does not support test case filters.
Then in your build definition you can create two test runs, one that runs Critical tests, one that runs everything else. You can use the Filter option of the Test Run.
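For example (verify the exact syntax against your TFS/VSTS version; this is just an illustration using the properties and operators listed above), the first run's test case filter could be TestCategory=Critical and the second run's TestCategory!=Critical.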
Open the Test Runs window using this hard to find button:
Create 2 test runs:
On your first run set the options as follows:
On your second run set the options as follows:
This way, Team Build will run any test with the "Critical" category in the first run and will fail the build if one of them fails. If the first run succeeds, it will kick off the non-critical tests and the build will Partially Succeed, even when a test fails.
Update
The same process explained for Azure DevOps Pipelines.
Yes.
Using the TFS2013 Default Template:
Under the "Process" tab, go to section 2, "Basic".
Expand the Automated Tests section.
For "Test Source", click the ellipsis ("...").
This will open a new window that has a "Fail build when tests fail" check box.
I've got a big Windows service application. It performs actions on a time-bound basis. Sometimes I need to be able to use some of its functionality in isolation from the rest of the application. Currently I've got a battery of 'unit tests' which call into various sources and perform the desired functionality. My problem is that these are not unit tests; they are the way we're exposing the API. If we run all the unit tests in the project, we'll damage some of our production data.
My question is: how do I go about accessing some of the functionality of the application without unit testing? I was thinking of perhaps something like an interpreter over the top of it, where you can call various parts of the functionality, but I'm not really sure where to start.
An example of a unit test in our code will be:
[TestMethod]
public void TransferFunds()
{
    int accountNumberTo = 123456;
    int accountNumberFrom = 654321;
    var accountFrom = Store.GetAccount(accountNumberFrom);
    var accountTo = Store.GetAccount(accountNumberTo);
    double amountToTransfer = 1000;
    DateTime transactionDate = new DateTime(2010, 01, 01);

    Store.TransferFunds(accountFrom, accountTo, amountToTransfer, transactionDate);

    var client = BankAccountService.Client();
    client.Contribute(accountNumberTo, amountToTransfer, transactionDate);
    client.Contribute(accountNumberFrom, amountToTransfer, transactionDate);
}
How can we move this out of unit tests, but still have the ability to run code like this?
Your setup sounds very dangerous. I would create separate console applications for your different needs. I would also recommend that you remove all unit tests that endanger your production data. Having that sort of unit test is just downright bad!
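A minimal sketch of what such a console application could look like, reusing the Store and BankAccountService types from the example test (those names, and the assumption that the operation can be driven purely from command-line arguments, come from the question and may not match the real API):

using System;

// Hypothetical command-line tool replacing the TransferFunds 'test'.
// Usage: TransferFundsTool <fromAccount> <toAccount> <amount> <yyyy-MM-dd>
public static class TransferFundsTool
{
    public static void Main(string[] args)
    {
        int accountNumberFrom = int.Parse(args[0]);
        int accountNumberTo = int.Parse(args[1]);
        double amountToTransfer = double.Parse(args[2]);
        DateTime transactionDate = DateTime.Parse(args[3]);

        var accountFrom = Store.GetAccount(accountNumberFrom);
        var accountTo = Store.GetAccount(accountNumberTo);

        Store.TransferFunds(accountFrom, accountTo, amountToTransfer, transactionDate);

        var client = BankAccountService.Client();
        client.Contribute(accountNumberTo, amountToTransfer, transactionDate);
        client.Contribute(accountNumberFrom, amountToTransfer, transactionDate);

        Console.WriteLine("Transfer completed.");
    }
}

That keeps the ad-hoc operations runnable on demand while the real unit tests stay side-effect free.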