Assertions for non-deterministic behavior - unit-testing

Does AssertJ (or JUnit) have a way to chain, in a single (fluent) expression, several assertions on the same unit under test, where one of the assertions may throw an exception? Essentially, I'm trying to assert that:
If the unit under test (X) doesn't result in a particular exception, which it may, then assert that a particular property on the unit under test doesn't hold. Otherwise, assert that the exception is of a certain type.
For example, is there a way to express the assertion that the following erroneous code could EITHER result in an Exception or in a situation where strings.size() != 10000:
@Test /*(expected = ArrayIndexOutOfBoundsException.class)*/
public void raceConditions() throws Exception {
    List<String> strings = new ArrayList<>(); // not thread-safe
    Stream.iterate("+", s -> s + "+")
          .parallel()
          .limit(10000)
          //.peek(e -> System.out.println(e + " is processed by " + Thread.currentThread().getName()))
          .forEach(e -> strings.add(e));
    System.out.println("# of elems: " + strings.size());
}
AssertJ has a concept of soft assertions; are those meant to be used in scenarios like this? I'd appreciate some code samples if so.
Or perhaps there are better frameworks specifically designed for this type of scenario?
Thanks.

I'm not sure if that is what you are really looking for, but you can try using assumptions.
After executing the code under test, perform an assumption on the result; the following code/assertions will only be executed if the assumptions were correct.
Since 3.9.0, AssertJ provides assumptions out of the box. Example:
List<String> strings = new ArrayList<>(); // not thread-safe
int requestedSize = 10_000;
Throwable thrown = catchThrowable(() -> {
    Stream.iterate("+", s -> s + "+")
          .parallel()
          .limit(requestedSize)
          .forEach(strings::add);
});
// thrown is null if there was no exception raised
assumeThat(thrown).isNotNull();
// only executed if thrown was not null otherwise the test is skipped.
assertThat(thrown).isInstanceOf(ArrayIndexOutOfBoundsException.class);
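If you are not on AssertJ yet, the catchThrowable idea is easy to approximate in plain Java. The sketch below is a minimal stand-in; the `catchAny` helper and `Action` interface are hypothetical names for illustration, not AssertJ API:

```java
// Minimal plain-Java stand-in for AssertJ's catchThrowable: run the
// action and return whatever it threw, or null if it completed normally.
// (catchAny and Action are hypothetical names for illustration.)
public class CatchAnyDemo {

    @FunctionalInterface
    interface Action {
        void run() throws Throwable;
    }

    static Throwable catchAny(Action action) {
        try {
            action.run();
            return null; // no exception raised
        } catch (Throwable t) {
            return t;    // hand the exception back for inspection
        }
    }

    public static void main(String[] args) {
        Throwable thrown = catchAny(() -> { throw new IllegalStateException("boom"); });
        System.out.println(thrown.getClass().getSimpleName()); // IllegalStateException
        System.out.println(catchAny(() -> {}));                // null
    }
}
```

The returned `Throwable` can then be inspected with plain conditionals or fed into whatever assertion style the test suite already uses.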
You should also have a look at https://github.com/awaitility/awaitility if you are testing asynchronous code.

I came up with the following:
@Test
public void testRaceConditions() {
    List<String> strings = new ArrayList<>(); // not thread-safe
    int requestedSize = 10_000;
    Throwable thrown = catchThrowable(() -> {
        Stream.iterate("+", s -> s + "+")
              .parallel()
              .limit(requestedSize)
              //.peek(e -> System.out.println(e + " is processed by " + Thread.currentThread().getName()))
              .forEach(e -> strings.add(e));
    });
    SoftAssertions.assertSoftly(softly -> {
        softly.assertThat(strings.size()).isNotEqualTo(requestedSize);
        softly.assertThat(thrown).isInstanceOf(ArrayIndexOutOfBoundsException.class);
    });
}
If somebody with more of AssertJ under their belt or some other tools know of a better way, I'd gladly accept their solutions. Thanks!
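One thing worth noting about the soft-assertion version: assertSoftly fails if either assertion fails, so it effectively requires both misbehaviors (wrong size and the exception) to occur in the same run. If what you want is OR semantics (pass when at least one misbehavior shows up), the predicate can be made explicit. Here is a plain-Java sketch; the `misbehaved` helper is a hypothetical name:

```java
// Sketch: with OR semantics, the test should fail only when the racy
// code behaved "correctly", i.e. no exception AND the expected size.
public class EitherOrCheck {

    static boolean misbehaved(Throwable thrown, int actualSize, int requestedSize) {
        return thrown != null || actualSize != requestedSize;
    }

    public static void main(String[] args) {
        System.out.println(misbehaved(new ArrayIndexOutOfBoundsException(), 9_876, 10_000)); // true
        System.out.println(misbehaved(null, 9_876, 10_000));   // true: wrong size alone suffices
        System.out.println(misbehaved(null, 10_000, 10_000));  // false: the race did not show up
    }
}
```

In the test you could then assert `misbehaved(thrown, strings.size(), requestedSize)` is true; AssertJ 3.12+ also offers satisfiesAnyOf for expressing this kind of either/or condition fluently.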

Related

assertEquals fails for Error implementation but passes for Success one

I have this sealed interface:
sealed interface Result<out T> {
    data class Success<T>(val data: T) : Result<T>
    data class Error(val exception: Throwable? = null) : Result<Nothing>
}
When I try to assertEquals the Success one, it passes. But when it comes to the Error one, it fails even though the content is identical. Here is a simple example:
@Test
fun testSuccess() = runTest {
    whenever(repository.login("email", "password"))
        .thenReturn(someValue)
    val expected = Result.Success(data = someValue)
    val actual = loginUseCase(LoginRequest("email", "password"))
    verify(repository).login("email", "password")
    assertEquals(expected, actual) // this will pass
}

@Test
fun testError() = runTest {
    val exception = RuntimeException("HTTP Error")
    whenever(repository.login("", ""))
        .thenThrow(exception)
    val expected = Result.Error(exception = exception)
    val actual = loginUseCase(LoginRequest("", ""))
    verify(repository).login("", "")
    assertEquals(expected, actual) // this will fail
    assertEquals(expected.toString(), actual.toString()) // this will pass
}
What is causing this, and what is a possible solution? I have read that equals() needs to be overridden, but I'm still confused as to why it only happens in the Error case and how to properly override the equals method.
Data classes in Kotlin have an implicitly generated equals function automatically derived from all their properties.
The problem you are facing is probably due to the fact that the type of your someValue has a proper equals function, so the equals works for your Success and its property value. But Throwable does not have an equals function which means that two Throwables are only equal if they are the same instance, which is obviously not the case for expected and actual in your test assertion. I can only guess that in loginUseCase, the exception is wrapped inside another exception, or a new exception is created based on the one thrown by the repository?
Kotlin already has a built-in Result type, and I strongly recommend using that one instead of defining your own.
Nonetheless, if you use the built-in type, you will probably face the same problem, since the equals check still fails for the different exception instances.
There are several ways to solve that:
Define your own exception type and override the equals function to return true if they are both of the same type and have the same message.
Check for expected is Error (or with the default Result type that expected.isFailure), and then check that the messages are the same.
Make sure that loginUseCase throws exactly the same exception instance as is thrown by the repository.
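The Throwable behavior described above is easy to see in isolation; here is a minimal Java demonstration (no test framework needed):

```java
// Throwable does not override equals, so two distinct instances are
// never equal, even with identical class and message. This is why
// a data class's generated equals fails for Error while toString matches.
public class ThrowableEqualsDemo {
    public static void main(String[] args) {
        Throwable a = new RuntimeException("HTTP Error");
        Throwable b = new RuntimeException("HTTP Error");
        System.out.println(a.equals(b));                       // false: identity comparison only
        System.out.println(a.equals(a));                       // true: same instance
        System.out.println(a.toString().equals(b.toString())); // true: identical rendering
    }
}
```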

How to throw an exception when trying to mock a Gremlin query?

I'm trying to write a unit test for a case where an exception is thrown, but somehow the mock yields null instead of throwing the exception.
The service call that I'm trying to mock:
private List<Vertex> getVertexList(final String vertexId, GraphTraversalSource graphTraversalSource, final int indexToLoop) {
    return graphTraversalSource.V(vertexId)
            .repeat(in().dedup().simplePath())
            .until(loops().is(indexToLoop))
            .toList();
}
I wrote the following mock to throw an Exception:
@Mock(answer = RETURNS_DEEP_STUBS)
private GraphTraversalSource gts;

Mockito.when(gts.V(anyString()).repeat(any()).until((Predicate<Traverser<Vertex>>) any()).toList())
       .thenThrow(Exception.class);
Is there any way to mock this so that it throws exception? Thanks in advance.
Assuming your gts and a call to MockitoAnnotations.initMocks(this) in the @Before of the test, this style worked for me:
GraphTraversal v = mock(GraphTraversal.class);
GraphTraversal repeat = mock(GraphTraversal.class);
GraphTraversal until = mock(GraphTraversal.class);
when(gts.V(anyString())).thenReturn(v);
when(v.repeat(any())).thenReturn(repeat);
when(repeat.until((Predicate<Traverser<Vertex>>) any())).thenReturn(until);
when(until.toList()).thenThrow(RuntimeException.class);
gts.V("test-id").repeat(out()).until(__.loops().is(1)).toList();
Depending on what you are doing you might consider avoiding the mock and just throwing an exception in the traversal itself:
GraphTraversalSource g = EmptyGraph.instance().traversal();
g.inject("test-id").sideEffect(x -> {
throw new RuntimeException();
}).toList();
Obviously, that's a little different from what your mock is doing, and it does require the traversal to actually have data passing through it (hence my use of inject() to start the traversal rather than V(), as g is bound to an EmptyGraph in this case).

Asserting a property has been set in a mocked class

I'm using the MockingContainer<T> to automatically set up my dependencies. How do I assert that a property on one of those dependencies gets set?
[SetUp]
public void SetUp()
{
    //arrange
    _baseUrl = "http://baseUrl";
    _container = new MockingContainer<ApiInteractionService>();
    _container.Arrange<IConfigService>(m => m.BaseUrl).Returns(_baseUrl);
    _uut = _container.Instance;
}
The following fails with 0 calls, which makes sense since I believe it's looking at the Getter, not the Setter. So how do I assert that the Setter was called by the unit under test?
[Test]
public void BaseUrlSet()
{
    //act
    var _ = _uut.MakeRequest((InitialRequest) Arg.AnyObject);
    //assert
    _container.Assert<IRestService>(m => m.BaseUrl, Occurs.Once());
}
Per the documentation (located at JustMock Docs for anyone who isn't familiar but wishes to try assisting) it appears I should be using Mock.ArrangeSet(lambda), however I cannot seem to figure out how to get that syntax to work in relation to MockingContainer<T>.
If worst comes to worst, I can just NOT use MockingContainer<T>, but I'd prefer not to have to refactor my test suite just to accommodate one specific unit test.
Not that it's really relevant to the question, but in the off chance anyone needs it, here is a stub of ApiInteractionService
public ApiInteractionService(IRestService restService, IConfigService configService)
{
    _restService = restService;
    _restService.BaseUrl = configService.BaseUrl;
}

public string MakeRequest(InitialRequest initialRequest)
{
    return _restService.Post(initialRequest);
}
Why not simply assert that BaseUrl has the correct value at the end of the test?
var baseUrl = _container.Get<IRestService>().BaseUrl;
Assert.AreEqual(baseUrl, _baseUrl);
As suggested in the comments, _container.Assert<IRestService>(m => m.BaseUrl == _baseUrl) will not work. MockingContainer<T>.Assert asserts an expectation, it's not just asserting truth like regular asserts. The correct syntax would have been:
_container.AssertSet<IRestService>(restService => restService.BaseUrl = _baseUrl, Occurs.Once());
but, oddly, there is no AssertSet method on the container.

How to avoid multiple asserts in a JUnit test?

I have a DTO which I'm populating from the request object, and the request object has many fields. I want to write a test to check whether the populateDTO() method is putting values in the right places. If I follow the rule of one assert per test, I would have to write a large number of tests, one per field. The other approach would be to write multiple asserts in a single test. Is it really recommended to follow the one-assert-per-test rule, or can we relax it in cases like this? How do I approach this problem?
Keep them separate. A unit test is supposed to tell you which unit failed. Keeping them separate also allows you to isolate the problem quickly w/o requiring you to go through a lengthy debug cycle.
Is it really recommended to have only one assert per unit test? Yes it is, there are people who make that recommendation. Are they right? I don't think so. I find it hard to believe such people have actually worked on real code for a long time.
So, imagine you have a mutator method you want to unit test. The mutator has some kind of effect, or effects, which you want to check. Typically the expected effects of a mutator are few in number, because many effects would suggest an overly complicated design for the mutator. With one assert per effect and one test case per assert, you will not need many test cases per mutator, so the recommendation does not seem so bad.
But the flaw in this reasoning is that those tests are looking at only the expected effects of the mutator. But if the mutator has a bug in it, it might have unexpected faulty side effects. The tests are making the foolish assumption that the code does not have a whole class of bugs, and that no future refactoring will introduce such bugs. When the method was originally written it might be obvious to the author that particular side effects were impossible, but refactoring and addition of new functionality might make such side effects possible.
The only safe way to test long-lived code is to check that the mutators do not have unexpected side effects. But how can you test for those? Most classes have some invariants: things that no mutator can ever change. The size method of a container will never return a negative value, for example. Each invariant is, in effect, a post condition for every mutator (and also the constructor). Each mutator also typically has a set of invariants that describe what kind of changes it does not make. A sort method does not change the length of the container, for example. The class and mutator invariants are, in effect, post conditions for every mutator call. Adding assertions for all of them is the only way of checking for unexpected side effects.
So, just add more test cases? In practice the number of invariants multiplied by the number of mutators to test is large, so one assertion per test leads to many test cases. And the information about your invariants is scattered over many test cases. A design change to tweak one invariant will require alteration of many test cases. It becomes impractical. It's better to have parameterised test cases for a mutator, which check several invariants for the mutator, using several assertions.
And the authors of JUnit5 seem to agree. They provide an assertAll for checking several assertions in one test-case.
This construction helps you have one big assert (with small asserts inside):
import static org.junit.jupiter.api.Assertions.assertAll;

assertAll(
    () -> assertThat(actual1, is(expected)),
    () -> assertThat(actual2, is(expected))
);
You can have a parameterized test where the first parameter is the property name and the second the expected value.
Is that rule extended to being in a loop? Consider this:
Collection expectedValues = // populate expected values
populateDTO();
for (DTO dto : myDtoContainer)
    assert_equal(dto, expectedValues.get(someIndexRelatedToDto))
Now I'm not so big on the exact syntax, but this is just the notion I'm looking at.
EDIT:
After the comments...
The answer is ... Nope!
The reason the principle exists is so you can identify which parts of the object fail. If you have them all in one method, the test stops at the first failing assertion, so you never see the results of the later ones.
So you can have it one of two ways:
One method, less boilerplate code.
Many methods, better reporting on the test run
It's up to you, both have ups and downs.
[caveat: I'm very "unfluent" in Java/JUnit, so beware of errors in the details below]
There's a couple of ways to do this:
1) Write multiple assertions in the same test. This should be ok if you are only testing the DTO generation once. You could start here, and move to another solution when this starts to hurt.
2) Write a helper assertion, e.g. assertDtoFieldsEqual, passing in the expected and actual DTO. Inside the helper assertion you assert each field separately. This at least gives you the illusion of only one assert per test and will make things clearer if you test DTO generation for multiple scenarios.
3) Implement equals for the object so that it checks each property, and implement toString so that you can at least inspect the assertion result manually to find out which part is incorrect.
4) For each scenario where the DTO is generated, create a separate test fixture that generates the DTO and initializes the expected properties in the setUp method. Then create a separate test for each of the properties. This also results in a lot of tests, but they will at least be one-liners only. Example in pseudo-code:
public class WithDtoGeneratedFromXxx : TestFixture
{
    DTO dto = null;

    public void setUp()
    {
        dto = GenerateDtoFromXxx();
        expectedProp1 = "";
        ...
    }

    void testProp1IsGeneratedCorrectly()
    {
        assertEqual(expectedProp1, dto.prop1);
    }
    ...
}
If you need to test the DTO generation under different scenarios and choose this last method, it could soon become tedious to write all those tests. If this is the case, you could implement an abstract base fixture that leaves the details of how to create the DTO and set up the expected properties to derived classes. Pseudo-code:
abstract class AbstractDtoTest : TestFixture
{
    DTO dto;
    SomeType expectedProp1;

    abstract DTO createDto();
    abstract SomeType getExpectedProp1();

    void setUp()
    {
        dto = createDto();
        ...
    }

    void testProp1IsGeneratedCorrectly()
    {
        assertEqual(getExpectedProp1(), dto.prop1);
    }
    ...
}

class WithDtoGeneratedFromXxx : AbstractDtoTest
{
    DTO createDto() { return GenerateDtoFromXxx(); }
    SomeType getExpectedProp1() { return new SomeType(); }
    ...
}
Or you can do some workaround.
import junit.framework.Assert;
import org.junit.After;
import org.junit.AfterClass;
import org.junit.Before;
import org.junit.BeforeClass;
import org.junit.Test;

public class NewEmptyJUnitTest {

    public NewEmptyJUnitTest() {
    }

    @BeforeClass
    public static void setUpClass() throws Exception {
    }

    @AfterClass
    public static void tearDownClass() throws Exception {
    }

    @Before
    public void setUp() {
    }

    @After
    public void tearDown() {
    }

    @Test
    public void checkMultipleValues() {
        String errMessages = "";
        try {
            this.checkProperty1("someActualResult", "someExpectedResult");
        } catch (Exception e) {
            errMessages += e.getMessage();
        }
        try {
            this.checkProperty2("someActualResult", "someExpectedResult");
        } catch (Exception e) {
            errMessages += e.getMessage();
        }
        try {
            this.checkProperty3("someActualResult", "someExpectedResult");
        } catch (Exception e) {
            errMessages += e.getMessage();
        }
        Assert.assertTrue(errMessages, errMessages.isEmpty());
    }

    private boolean checkProperty1(String propertyValue, String expectedValue) throws Exception {
        if (propertyValue.equals(expectedValue)) { // use equals, not ==, to compare Strings
            return true;
        } else {
            throw new Exception("Property1 has value: " + propertyValue + ", expected: " + expectedValue);
        }
    }

    private boolean checkProperty2(String propertyValue, String expectedValue) throws Exception {
        if (propertyValue.equals(expectedValue)) {
            return true;
        } else {
            throw new Exception("Property2 has value: " + propertyValue + ", expected: " + expectedValue);
        }
    }

    private boolean checkProperty3(String propertyValue, String expectedValue) throws Exception {
        if (propertyValue.equals(expectedValue)) {
            return true;
        } else {
            throw new Exception("Property3 has value: " + propertyValue + ", expected: " + expectedValue);
        }
    }
}
Maybe not the best approach, and if overused it can confuse... but it is a possibility.
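The same collect-don't-stop idea can be sketched without using exceptions for control flow. In this plain-Java sketch, `checkEqual` is a hypothetical helper, not JUnit API:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: accumulate failure messages instead of throwing on the first
// mismatch, then fail once at the end with the full list.
public class CollectingChecks {

    static void checkEqual(String name, Object expected, Object actual, List<String> errors) {
        boolean equal = (expected == null) ? actual == null : expected.equals(actual);
        if (!equal) {
            errors.add(name + ": expected <" + expected + "> but was <" + actual + ">");
        }
    }

    public static void main(String[] args) {
        List<String> errors = new ArrayList<>();
        checkEqual("prop1", "a", "a", errors); // matches, records nothing
        checkEqual("prop2", "b", "c", errors); // mismatch, recorded
        checkEqual("prop3", 1, 2, errors);     // mismatch, recorded
        System.out.println(errors.size() + " failure(s): " + errors);
        // In a real test you would end with something like:
        // Assert.assertTrue(String.join("\n", errors), errors.isEmpty());
    }
}
```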

JUnit for Functions with Void Return Values

I've been working on a Java application where I have to use JUnit for testing. I am learning it as I go. So far I find it to be useful, especially when used in conjunction with the Eclipse JUnit plugin.
After playing around a bit, I developed a consistent method for building my unit tests for functions with no return values. I wanted to share it here and ask others to comment. Do you have any suggested improvements or alternative ways to accomplish the same goal?
Common Return Values
First, there's an enumeration which is used to store values representing test outcomes.
public enum UnitTestReturnValues
{
    noException,
    unexpectedException
    // etc...
}
Generalized Test
Let's say a unit test is being written for:
public class SomeClass
{
    public void targetFunction (int x, int y)
    {
        // ...
    }
}
The JUnit test class would be created:
import junit.framework.TestCase;

public class TestSomeClass extends TestCase
{
    // ...
}
Within this class, I create a function which is used for every call to the target function being tested. It catches all exceptions and returns a message based on the outcome. For example:
public class TestSomeClass extends TestCase
{
    private UnitTestReturnValues callTargetFunction (int x, int y)
    {
        UnitTestReturnValues outcome = UnitTestReturnValues.noException;
        SomeClass testObj = new SomeClass ();
        try
        {
            testObj.targetFunction (x, y);
        }
        catch (Exception e)
        {
            outcome = UnitTestReturnValues.unexpectedException;
        }
        return outcome;
    }
}
JUnit Tests
Functions called by JUnit begin with a lowercase "test" in the function name, and they fail at the first failed assertion. To run multiple tests on the targetFunction above, it would be written as:
public class TestSomeClass extends TestCase
{
    public void testTargetFunctionNegatives ()
    {
        assertEquals (
            callTargetFunction (-1, -1),
            UnitTestReturnValues.noException);
    }

    public void testTargetFunctionZeros ()
    {
        assertEquals (
            callTargetFunction (0, 0),
            UnitTestReturnValues.noException);
    }

    // and so on...
}
Please let me know if you have any suggestions or improvements. Keep in mind that I am in the process of learning how to use JUnit, so I'm sure there are existing tools available that might make this process easier. Thanks!
It is true that if you are using JUnit 3, and you are testing whether a particular exception is thrown or not thrown within a method, you will need to use something like the try-catch pattern you define above.
However:
1) I'd argue that there is a lot more to testing a method with a void return value than checking for exceptions: is your method making the correct calls to (presumably mocked) dependencies; does it behave differently when the class is initialized with a different context or different sets of dependencies; etc. By wrapping all calls to that method, you make it hard to change other aspects of your test.
I'm also generally opposed to adding code and adding complexity if it can be avoided; I don't think it's a burden to have to put a try/catch in a given test when it's checking for exceptions.
2) Switch to JUnit 4! It makes it easy to check for expected exceptions:
@Test(expected = IndexOutOfBoundsException.class)
public void testIndexOutOfBoundsException() {
    ArrayList emptyList = new ArrayList();
    Object o = emptyList.get(0);
}
If you have the possibility, you should upgrade to JUnit 4.x.
Then your first example can be rewritten to:
@Test(expected = RuntimeException.class)
public void testTargetFunction() {
    testObj.targetFunction (x, y);
}
The advantage here is that you can remove the private UnitTestReturnValues callTargetFunction (int x, int y) method and use JUnit's built-in support for expecting exceptions.
You should also test for specific exceptions instead.
Looks like you reimplemented most of JUnit :) In general you don't need to do it. You just call the function you want to call and compare results. If it throws an exception, JUnit will catch it for you and fail the test. If you expect an exception, either you can use the explicit annotation if you are using JUnit 4, or you can use the following pattern:
public void testThrows()
{
    try {
        obj.DoSth(); // this should throw MyException
        fail("Expected exception");
    } catch (MyException e) {
        // assert the message etc.
    }
}
again, if obj.DoSth() throws a different exception JUnit will fail the test.
So to sum up, I am afraid I believe your approach is overcomplicated, sorry.
Please correct me if I am wrong. As I understood from the provided code, you're only checking whether an exception occurs while executing the function. But you're not actually verifying that the called function "works" correctly, unless the only way it can fail is by throwing an exception. I suggest writing additional tests like this:
public void testTargetFunctionSomeValue() {
    int someValue = 0;
    callTargetFunction(someValue, someValue);
    assertTrue(verifyTargetFunction(someValue, someValue));
}

public boolean verifyTargetFunction(int x, int y) {
    // verify that execution of targetFunction made the expected changes
    // ...
}
and verifyTargetFunction would actually check whether calling targetFunction made the expected changes (say, to a database table) by returning true or false.
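That state-verification idea (call the void method, then assert on its observable effect) can be shown with a minimal self-contained example; the Counter class here is invented for illustration:

```java
// Sketch: a void method is tested through the state change it causes,
// not through a return value. Counter is a made-up example class.
public class VoidEffectDemo {

    static class Counter {
        private int value;
        void increment() { value++; }  // void: nothing to assert on directly
        int value() { return value; }  // observable effect to verify
    }

    public static void main(String[] args) {
        Counter c = new Counter();
        c.increment();
        c.increment();
        System.out.println(c.value()); // 2
        // In JUnit this would be: assertEquals(2, c.value());
    }
}
```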
Hope that helps.
Cheers,
Markus