Flink Job Testing with MiniClusterWithClientResource - unit-testing

I've written a @Test method in order to test the execution of a Flink job.
This is the method:
@Test
void testFlinkJob() throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    env.setParallelism(2);
    MyJob.buildJob(env, new MySourceFunction(), new MySinkFunction());
    env.execute();
    // asserts
}
Implementation details of MyJob.buildJob(), MySourceFunction and MySinkFunction are not important. Please focus on env.setParallelism(2).
If I run this test, everything is ok. Fine!
However, the official Flink documentation (https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/datastream/testing/#junit-rule-miniclusterwithclientresource) describes MiniClusterWithClientResource.
So I added this snippet to my test class, as shown in the documentation.
@ClassRule
public static MiniClusterWithClientResource flinkCluster =
        new MiniClusterWithClientResource(
                new MiniClusterResourceConfiguration.Builder()
                        .setNumberSlotsPerTaskManager(2)
                        .setNumberTaskManagers(1)
                        .build());
I run my test again and it still passes. Perfect!
Then I started to play with the above snippet. The first thing I changed is the value of the setNumberSlotsPerTaskManager() parameter from 2 to 1.
I launched my test one more time. This time I expected a test failure, because the parallelism value (2) is higher than numberOfTaskManagers * numberOfSlotsPerTaskManager (1).
Instead, my test continues to pass.
Same thing if I write setNumberTaskManagers(0) (no TaskManagers): the test continues to pass.
It seems that MiniClusterWithClientResource is a dummy. Can you help me understand how it works, please?

If you are working with JUnit 5, the @ClassRule annotation will be ignored. You need to use extensions instead:
@ExtendWith(MiniClusterExtension.class)
public class MyTest {

    @RegisterExtension
    public static final MiniClusterExtension MINI_CLUSTER_RESOURCE = new MiniClusterExtension(
            new MiniClusterResourceConfiguration.Builder()
                    .setNumberSlotsPerTaskManager(2)
                    .setNumberTaskManagers(1)
                    .build());
}
P.S.: MiniClusterExtension.class is still marked as @Experimental as of v1.16 and is therefore subject to change.
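For completeness, here is a minimal sketch of how the whole JUnit 5 test class from the question might look with the extension registered. The imports, the class name and the use of @RegisterExtension on its own are assumptions for this sketch; MyJob, MySourceFunction and MySinkFunction are the ones from the question.

import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.test.junit5.MiniClusterExtension;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.RegisterExtension;

public class MyJobTest {

    // The extension starts a MiniCluster with 1 TaskManager x 2 slots before the tests
    // and reuses it for every @Test method in this class.
    @RegisterExtension
    public static final MiniClusterExtension MINI_CLUSTER_RESOURCE =
            new MiniClusterExtension(
                    new MiniClusterResourceConfiguration.Builder()
                            .setNumberSlotsPerTaskManager(2)
                            .setNumberTaskManagers(1)
                            .build());

    @Test
    void testFlinkJob() throws Exception {
        // With the extension active, getExecutionEnvironment() should return an
        // environment that submits the job to the shared MiniCluster.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.setParallelism(2);
        MyJob.buildJob(env, new MySourceFunction(), new MySinkFunction());
        env.execute();
        // asserts
    }
}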

Related

Unit Testing Lambda Expressions

I have a method that is shown below, and this in turn calls multiple private methods that I won't be posting here.
@Bean
public CommandLineRunner registerStartersAndReaders(final Vertx vertx, final SpringVerticleFactory springVerticleFactory,
        final SpringUtil springUtil, final GslConfig gslConfig) {
    return args -> {
        // Scan all the beans annotated with the @ElasticsearchBatchDataListener annotation.
        List<Pair<Object, Method>> listenerMethods = springUtil.getListenerMethods();
        // Deploy the starters per listener.
        deployVerticle(listenerMethods, jsonConfig -> deployStarterVerticle(vertx, springVerticleFactory, jsonConfig), config);
        // Deploy the reader verticles.
        deployVerticle(listenerMethods, jsonConfig -> deployReaderVerticle(vertx, springVerticleFactory, jsonConfig), config);
        setupTriggers(vertx, listenerMethods, config);
    };
}
Then I have a test method for it:
@Test
public void registerStartersAndReadersTest() {
    when(springUtil.getListenerMethods()).thenReturn(value);
    CommandLineRunner runner = config.registerStartersAndReaders(vertx, springVerticleFactory, springUtil, config);
    assertNotNull(runner);
}
Here, all the parameters passed into the method call are mocks. The problem is that when I run this test, it passes, but it never gets into the private methods; it just returns the 'args' lambda.
Can someone please guide me as to how I can make my test cover all the possible code? I am not supposed to change my code for the test.
I think you got confused by the lambda expression, and believe me, it is very confusing in the beginning. But once you are fluent with it, it will be a breeze.
So here you get an instance of CommandLineRunner from the registerStartersAndReaders method call, and your assertNotNull passes because you have a non-null instance, but until you call the run method of the functional interface, nothing inside the lambda will be executed.
Add runner.run(args) to execute the method(s) in your test case.
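As a rough sketch of what that looks like in your test (the empty args array and the verify call are just illustrations; adjust them to your mocks):

@Test
public void registerStartersAndReadersTest() throws Exception {
    when(springUtil.getListenerMethods()).thenReturn(value);

    CommandLineRunner runner = config.registerStartersAndReaders(vertx, springVerticleFactory, springUtil, config);
    assertNotNull(runner);

    // Only this call actually executes the lambda body and reaches the private methods.
    runner.run(new String[0]);

    // Now you can assert on the side effects, e.g. verify interactions on the mocks.
    verify(springUtil).getListenerMethods();
}

Note that CommandLineRunner.run(String...) declares throws Exception, so the test method has to declare or handle it.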

How to unit test netty handler

I implemented a handler which extends SimpleChannelHandler and overrides some methods such as channelConnected and messageReceived. However, I am wondering how to unit test it.
I searched for "netty unit test" and found one article which suggested considering CodecEmbedder, but I am still not sure how to begin. Do you have any example or advice on how to unit test Netty code?
Thanks a lot.
In Netty, there are different ways to test your networking stack.
Testing ChannelHandlers
You can use Netty's EmbeddedChannel to mock a Netty connection for testing; an example of this would be:
@Test
public void nettyTest() {
    EmbeddedChannel channel = new EmbeddedChannel(new StringDecoder(StandardCharsets.UTF_8));
    channel.writeInbound(Unpooled.wrappedBuffer(new byte[]{(byte) 0xE2, (byte) 0x98, (byte) 0xA2}));
    String myObject = channel.readInbound();
    // Perform checks on your object
    assertEquals("☢", myObject);
}
The test above checks the StringDecoder's ability to decode Unicode correctly (example from this bug posted by me).
You can also test the encoder direction using EmbeddedChannel; for this you should use writeOutbound and readOutbound.
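For instance, here is a minimal sketch of an encoder-direction test using Netty's StringEncoder (the charset and the sample string are just for illustration):

@Test
public void encoderTest() {
    EmbeddedChannel channel = new EmbeddedChannel(new StringEncoder(StandardCharsets.UTF_8));

    // writeOutbound pushes the message through the outbound (encoder) pipeline.
    channel.writeOutbound("☢");

    // readOutbound returns the encoded bytes that would have been written to the wire.
    ByteBuf encoded = channel.readOutbound();
    assertEquals("☢", encoded.toString(StandardCharsets.UTF_8));
    encoded.release();
    channel.finish();
}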
More Examples:
DelimiterBasedFrameDecoderTest.java:
@Test
public void testIncompleteLinesStrippedDelimiters() {
    EmbeddedChannel ch = new EmbeddedChannel(new DelimiterBasedFrameDecoder(8192, true,
            Delimiters.lineDelimiter()));
    ch.writeInbound(Unpooled.copiedBuffer("Test", Charset.defaultCharset()));
    assertNull(ch.readInbound());
    ch.writeInbound(Unpooled.copiedBuffer("Line\r\ng\r\n", Charset.defaultCharset()));
    assertEquals("TestLine", releaseLater((ByteBuf) ch.readInbound()).toString(Charset.defaultCharset()));
    assertEquals("g", releaseLater((ByteBuf) ch.readInbound()).toString(Charset.defaultCharset()));
    assertNull(ch.readInbound());
    ch.finish();
}
More examples on github.
ByteBuf
To check whether you leak ByteBufs, you can set a JVM parameter that checks for leaked ByteBufs: add -Dio.netty.leakDetectionLevel=PARANOID to the startup parameters, or call the method ResourceLeakDetector.setLevel(PARANOID).
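If you prefer to set it up in code, a small sketch of doing this once per test class might look like the following (the JUnit 4 @BeforeClass placement and the class name are just one option):

import io.netty.util.ResourceLeakDetector;
import org.junit.BeforeClass;

public class LeakCheckedNettyTest {

    @BeforeClass
    public static void enableParanoidLeakDetection() {
        // Equivalent to -Dio.netty.leakDetectionLevel=PARANOID:
        // all ByteBufs are tracked so leaks can be reported.
        ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.PARANOID);
    }

    // ... EmbeddedChannel tests as above ...
}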

Grails pollution between integration and unit tests

I know there's a lot out there about this particular topic, however I can't quite find anyone who has stumbled across my issue, and hopefully someone can explain this to me.
I have a Domain where I use the injected grailsApplication's dynamic method 'isDomainClass' in the equals method:
@Override
public boolean equals(Object obj) {
    if (!grailsApplication.isDomainClass(obj.getClass())) { return false }
    ...
}
This works fine, and to unit test this I do:
@Mock([MyDomain])
...
def mockGApp
void setUp() {
    mockGApp = new Object()
    mockGApp.metaClass.isDomainClass = { obj -> true }
}
...
void testSomething() {
    def myDomain = new MyDomain()
    myDomain.grailsApplication = mockGApp
    ....
}
And when I run this with test-app -unit (on command line or in STS) it passes just fine.
I then have an integration test that uses that domain (no mocking this time), and that again runs fine when run with test-app -integration (either on the command line or in STS).
However, if I run 'test-app' so it does both at once, I get a MissingMethodException: no method signature isDomainClass exists with parameters (java.lang.Class) ... and all that jazz.
On investigating it with println's in the service I'm testing, in the integration portion of the testing, before the equals method of my domain class is called, I can quite happily call grailsApplication.isDomainClass() and get the desired effect. However, when the code steps into the domain's equals function, the isDomainClass() method no longer exists, despite the grailsApplication object referring to the same object which is referenced in the service and has the dynamically added method.
It appears that the dynamic methods that Grails adds to this class are not being injected when it's called within the domain's methods but are getting injected within the service. And more bizarrely, this only happens if the integration tests follow the unit tests. If done separately, no problemo...
Where does this pollution stem from? IS there any way to solve it?
P.S. using Grails 2.1.0
You have to remove the class you modified from the metaClassRegistry in the destroy method (i.e. after the test case runs). See below:
@After
void destroy() {
    GroovySystem.metaClassRegistry.removeMetaClass(MyDomain.class)
}

Running TestNG test sequentially with time-gap

I have a couple of DAO unit test classes that I want to run together using TestNG; however, TestNG tries to run them in parallel, which results in some rollbacks failing. While I would like my unit test classes to run sequentially, I also want to be able to specify a minimum time that TestNG must wait before it runs the next test. Is this achievable?
P.S. I understand that TestNG can be told to run all the tests in a test class in a single thread, and I am able to specify the sequence of method calls using groups anyway, so that's perhaps not an issue.
What about a hard dependency between the 2 tests? If you write that:
@Test
public void test1() { ... }

@Test(dependsOnMethods = "test1", alwaysRun = true)
public void test2() { ... }
then test2 will always be run after test1.
Do not forget alwaysRun = true, otherwise if test1 fails, test2 will be skipped!
If you do not want to run your classes in parallel, you need to specify the parallel attribute of your suite as false. By default, it's false. So I would think that it should run sequentially by default, unless you have some change in the way you invoke your tests.
For adding a bit of delay between your classes, you can probably add your delay logic in a method annotated with @AfterClass. AFAIK TestNG does not have a way to specify that in a testng.xml or on the command line. There is a timeout attribute, but that is more for timing out tests and is probably not what you are looking for.
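For example, a tiny sketch of that idea (the 2-second pause is an arbitrary value):

@AfterClass
public void waitBeforeNextClass() throws InterruptedException {
    // Give the previous class's work time to settle before the next class starts.
    Thread.sleep(2000);
}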
For adding a delay between your tests, i.e. <test> tags in the XML, you can try implementing the ITestListener onFinish() method, wherein you can add your delay code; it runs after every <test>. If a delay is required after every test case, then implement the delay code in IInvokedMethodListener afterInvocation(), which runs after every test method. You would then need to register the listener when you invoke your suite.
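For illustration, a minimal sketch of the IInvokedMethodListener approach (the 500 ms pause and the class name are arbitrary choices for the example):

import org.testng.IInvokedMethod;
import org.testng.IInvokedMethodListener;
import org.testng.ITestResult;

public class DelayListener implements IInvokedMethodListener {

    @Override
    public void beforeInvocation(IInvokedMethod method, ITestResult testResult) {
        // no delay before a test method
    }

    @Override
    public void afterInvocation(IInvokedMethod method, ITestResult testResult) {
        try {
            // Pause after every test method so the next one starts with a time gap.
            Thread.sleep(500);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

The listener can then be registered via a <listeners> element in the suite XML or with the @Listeners annotation on the test classes.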
Hope it helps..
Following is what I used in some tests.
First, define utility methods like this:
// Make the thread sleep a while, to reduce the effect on subsequent operations if any shared resource is involved.
private void delay(long milliseconds) throws InterruptedException {
    Thread.sleep(milliseconds);
}

private void delay() throws InterruptedException {
    delay(500);
}
Then, call the method inside the test methods, at the end or at the beginning, e.g.:
@Test
public void testCopyViaTransfer() throws IOException, InterruptedException {
    copyViaTransfer(new File(sourcePath), new File(targetPath));
    delay();
}

EJB repository testing with OpenEJB - how to rollback changes

I am trying to test my EJB-based repositories using OpenEJB. Every time a new unit test is run, I'd like to have my DB in an "initial" state. After the test, all changes should be rolled back (no matter whether the test succeeded or not). How can I accomplish this in a simple way? I tried using UserTransaction: beginning it when the test starts and rolling back the changes when it finishes (as you can see below). I don't know why, but with this code all changes in the DB (which were made during the unit test) remain after the line rolling the changes back has been executed.
As I wrote, I'd like to accomplish it in the simplest way, without any external DB schema and so on.
Thanks in advance for any hints!
Piotr
public class MyRepositoryTest {

    private Context initialContext;
    private UserTransaction tx;
    private MyRepository repository; // class under the test

    @Before
    public void setUp() throws Exception {
        this.initialContext = OpenEjbContextFactory.getInitialContext();
        this.repository = (MyRepository) initialContext.lookup(
                "MyRepositoryLocal");
        TransactionManager tm = (TransactionManager) initialContext.lookup(
                "java:comp/TransactionManager");
        tx = new CoreUserTransaction(tm);
        tx.begin();
    }

    @After
    public void tearDown() throws Exception {
        tx.rollback();
        this.initialContext = null;
    }

    @Test
    public void test() throws Exception {
        // do some test stuff
    }
}
There's an example called 'transaction-rollback' in the examples zip for 3.1.4.
Check that out, as it shows several ways to roll back in a unit test. One of the techniques includes a trick to get a new in-memory database for each test.
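As a rough sketch of that last trick (the property names below are modelled on the OpenEJB examples and are assumptions, not your OpenEjbContextFactory): you can declare an in-memory HSQLDB DataSource through the InitialContext properties and give it a fresh database name per test, so each test starts with an empty database:

import java.util.Properties;
import javax.naming.Context;
import javax.naming.InitialContext;
import org.junit.Before;

public class MyRepositoryInMemoryDbTest {

    private InitialContext initialContext;

    @Before
    public void setUp() throws Exception {
        Properties p = new Properties();
        p.put(Context.INITIAL_CONTEXT_FACTORY,
              "org.apache.openejb.client.LocalInitialContextFactory");

        // Declare a DataSource backed by an in-memory HSQLDB database.
        // A unique database name per test run means no state survives between tests.
        String dbName = "testdb" + System.nanoTime();
        p.put("testDataSource", "new://Resource?type=DataSource");
        p.put("testDataSource.JdbcDriver", "org.hsqldb.jdbcDriver");
        p.put("testDataSource.JdbcUrl", "jdbc:hsqldb:mem:" + dbName);

        initialContext = new InitialContext(p);
        // ... look up MyRepository as in the original setUp() ...
    }
}

The resource id ("testDataSource") would of course have to match whatever name your persistence unit or repository expects.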