How to run one or a group of tests repeatedly in Go Test in IntelliJ - unit-testing

From time to time I have these annoying tests with intermittent issues that I need to run many times to expose. I was looking for a convenient way to set a repeat count or an "endless loop" from IntelliJ, but I did not find one.
Is there a plugin, or did I miss something, that would allow me to do this from the UI (instead of changing code for it)?
EDIT: As I found out, support for such a feature is implemented per test-framework plugin. For example, it already exists for JUnit, but there is no such option for Go Test. My instinct suggests that such functionality should be provided generically for all test plugins, but there might be technical reasons for the per-plugin approach.

In the Run Configuration of the test there is a "Repeat:" dropdown where you can specify the number of repeats or, for example, repeat until the test fails. I believe this has been available since IntelliJ IDEA 15.
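For completeness, when that dropdown is not available for your plugin version, the Go toolchain itself can repeat tests from the command line. A minimal sketch, with the test name as a placeholder pattern (-count also bypasses test result caching):

go test -run '^TestSomething$' -count=100 -failfast

The -failfast flag (Go 1.10+) stops at the first failure, which is exactly what you want when hunting intermittent issues; on older toolchains, drop it and inspect the output.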

You can use the JDK to create an executor service which schedules the running/execution of the test suite periodically until you shut the service down.
Please have a look at the Oracle doc below:
https://docs.oracle.com/javase/7/docs/api/java/util/concurrent/ScheduledExecutorService.html
Sample:
import java.util.concurrent.*;
import static java.util.concurrent.TimeUnit.*;

class BeeperControl {
    private final ScheduledExecutorService scheduler =
        Executors.newScheduledThreadPool(1);

    public void beepForAnHour() {
        // beep every 10 seconds, starting after an initial 10-second delay
        final Runnable beeper = new Runnable() {
            public void run() { System.out.println("beep"); }
        };
        final ScheduledFuture<?> beeperHandle =
            scheduler.scheduleAtFixedRate(beeper, 10, 10, SECONDS);
        // cancel the beeping after one hour
        scheduler.schedule(new Runnable() {
            public void run() { beeperHandle.cancel(true); }
        }, 60 * 60, SECONDS);
    }
}

Related

How to make CppUnit logging more verbose?

I'm using CppUnit to write unit tests for a C++ library. By default it prints a single "." character to the console for each test. I'd like to log the name of each test on a separate line, before the test runs.
I've looked into the CppUnit API, but it's not at all obvious how to customize the output. Instead of offering customization options, it's more of a framework that you can plug new handlers into. (The tutorial hasn't helped, either.) I could probably spend a day figuring out how to do this, but I can't afford to lose the time. Could someone provide a quick snippet that can customize the per-test log output?
It is simple enough to define and install a custom progress listener to emit the name of each test before it's performed. Here's one I wrote today:
#include <cppunit/Test.h>
#include <cppunit/TextTestProgressListener.h>
#include <cstdio>

class MyCustomProgressTestListener : public CppUnit::TextTestProgressListener {
public:
    virtual void startTest(CppUnit::Test *test) {
        fprintf(stderr, "starting test %s\n", test->getName().c_str());
    }
};
Install it on a test runner like this:
CppUnit::TextUi::TestRunner runner;
MyCustomProgressTestListener progress;
runner.eventManager().addListener(&progress);

Why is azure storage queue access so slow when unit testing?

I ran into some difficulty trying to use unit testing with live Azure Storage Queues, and kept writing simpler and simpler examples to try and isolate the problem. In a nutshell, here is what seems to be happening:
Queue access is clearly (and appropriately) lazy-loaded. In my MVC app, though, when I actually need to access the queue (in my case, when I call the CloudQueue.Exists method) it is pretty fast: less than one tenth of a second. However, the very same code, when run in the context of a unit test, takes about 25 seconds.
I don't understand why there should be this difference, so I made a simple console app that writes something and then reads it from an Azure queue. The console app also takes 25 seconds the first time it is run -- on subsequent runs it takes about 2.5 seconds.
And now for the really weird behavior. I created a Visual Studio 2012 solution with three projects -- one MVC app, one Console app, and one Unit Test project. All three call the same static method which checks for the existence of a queue, creates it if it doesn't exist, writes some data to it and reads some data from it. I have put a timer on the call to CloudQueue.Exists in that method. And here is the deal. When the method is called from the MVC app, the CloudQueue.Exists method consistently completes in about one tenth of a second, whether or not the queue actually does exist. When the method is called from the console app, the first time it is called it takes 25 seconds, and subsequent times it takes about 2.5 seconds. When the method is called from the Unit Test, it consistently takes 25 seconds.
More info: when I created this dummy solution, I happened to put my static method (QueueTest) in the console app. Here is what is weird: if I set the default startup project in Visual Studio to the Console App, then the Unit Test suddenly takes 2.5 seconds. But if I set the startup project to the MVC app (or to the Unit Test project), then the Unit Test takes 25 seconds!
So.... does anyone have a theory of what is going on here? I am baffled.
Code follows below:
Console App:
using System;
using System.Configuration;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class Program
{
    static void Main(string[] args)
    {
        Console.WriteLine(QueueTest("my-console-queue", "Console Test"));
    }

    public static string QueueTest(string queueName, string message)
    {
        string connectionString = ConfigurationManager.ConnectionStrings["StorageConnectionString"].ConnectionString;
        CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString);
        CloudQueueClient queueClient = storageAccount.CreateCloudQueueClient();
        CloudQueue queue = queueClient.GetQueueReference(queueName);

        // time the first real network call
        DateTime beforeTime = DateTime.Now;
        bool doesExist = queue.Exists();
        DateTime afterTime = DateTime.Now;
        TimeSpan ts = afterTime - beforeTime;

        if (!doesExist)
        {
            queue.Create();
        }

        CloudQueueMessage qAddMessage = new CloudQueueMessage(message);
        queue.AddMessage(qAddMessage);
        CloudQueueMessage qGetmessage = queue.GetMessage();

        string response = String.Format("{0} ({1} seconds)", qGetmessage.AsString, ts.TotalSeconds);
        return response;
    }
}
MVC App (Home Controller):
using System.Web.Mvc;

public class HomeController : Controller
{
    public ActionResult Index()
    {
        return Content(Program.QueueTest("my-mvc-queue", "Mvc Test"));
    }
}
Unit Test Method: (Note, currently expected to fail!)
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class QueueUnitTests
{
    [TestMethod]
    public void CanWriteToAndReadFromQueue()
    {
        // Arrange
        string qName = "my-unit-queue";
        string message = "test message";

        // Act
        string result = Program.QueueTest(qName, message);

        // Assert
        Assert.IsTrue(String.CompareOrdinal(result, message) == 0);
    }
}
Of course insight is greatly appreciated.
I suspect this has nothing to do with Azure queues, but rather with .NET trying to determine your proxy settings. What happens if you make some other random System.Net call instead of the call to queue storage?
Try this line of code at the beginning of your app:
System.Net.WebRequest.DefaultWebProxy = null;

Schedule JPA query and access result in a CDI-bean?

Every x minutes I want to query for new instances and cache the results. I currently only need a simple cache solution, so I would like to update a Set in my @ApplicationScoped CacheBean.
I tried:
ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
ScheduledFuture<?> sf = scheduler.scheduleAtFixedRate(new Runnable() {
    public void run() {
        //.................
But the thread created couldn't access any contextual instances (InvocationException).
So how to do this the CDI/JPA way?
Using Tomcat 7, Weld, JPA2 - Hibernate.
My recommendation would be to try the version of Tomcat with CDI and JPA already integrated (TomEE). It comes with OpenJPA but you can use Hibernate. Then do your caching with a class like this:
@Singleton
@Startup
public class CachingBean {
    @Resource
    private BeanManager beanManager;

    @Schedule(minute = "*/10", hour = "*")
    private void run() {
        // cache things
    }
}
That component would automatically start when the app starts and would run the above method every ten minutes. See the Schedule docs for details.
UPDATE
Hacked up an example for you. Uses a nice CDI/EJB combination to schedule CDI Events.
Effectively this is a simple wrapper around the BeanManager.fireEvent(Object, Annotation...) method that adds a ScheduleExpression into the mix.
@Singleton
@Lock(LockType.READ)
public class Scheduler {
    @Resource
    private TimerService timerService;

    @Resource
    private BeanManager beanManager;

    public void scheduleEvent(ScheduleExpression schedule, Object event, Annotation... qualifiers) {
        timerService.createCalendarTimer(schedule, new TimerConfig(new EventConfig(event, qualifiers), false));
    }

    @Timeout
    private void timeout(Timer timer) {
        final EventConfig config = (EventConfig) timer.getInfo();
        beanManager.fireEvent(config.getEvent(), config.getQualifiers());
    }

    // Doesn't actually need to be serializable, just has to implement it
    private final class EventConfig implements Serializable {
        private final Object event;
        private final Annotation[] qualifiers;

        private EventConfig(Object event, Annotation[] qualifiers) {
            this.event = event;
            this.qualifiers = qualifiers;
        }

        public Object getEvent() {
            return event;
        }

        public Annotation[] getQualifiers() {
            return qualifiers;
        }
    }
}
Then to use it, have Scheduler injected as an EJB and schedule away.
public class SomeBean {
    @EJB
    private Scheduler scheduler;

    public void doit() throws Exception {
        // every five seconds
        final ScheduleExpression schedule = new ScheduleExpression()
            .hour("*")
            .minute("*")
            .second("*/5");
        scheduler.scheduleEvent(schedule, new TestEvent("five"));
    }

    /**
     * Event will fire every five seconds
     */
    public void observe(@Observes TestEvent event) {
        // process the event
    }
}
Full source code and working example, here.
You must know:
CDI events are not multi-threaded.
If there are 10 observers and each of them takes 7 minutes to execute, then the total execution time for the one event is 70 minutes. It would do you absolutely no good to schedule that event to fire more frequently than every 70 minutes.
What would happen if you did? It depends on the @Singleton @Lock policy:
@Lock(WRITE) is the default. In this mode the timeout method would essentially be locked until the previous invocation completes. Having it fire every 5 minutes even though you can only process one invocation every 70 minutes would eventually cause all the pooled timer threads to be waiting on your Singleton.
@Lock(READ) allows parallel execution of the timeout method. Events will fire in parallel for a while. However, since they actually take 70 minutes each, within an hour or so we'll run out of threads in the timer pool, just like above.
The elegant solution is to use @Lock(WRITE) and then specify some short timeout like @AccessTimeout(value = 1, unit = TimeUnit.MINUTES) on the timeout method. When the next 5-minute invocation is triggered, it will wait up to 1 minute to get access to the Singleton before giving up. This keeps your timer pool from filling up with backed-up jobs -- the "overflow" is simply discarded.
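A minimal sketch of that combination (the class name is illustrative; the timer setup from the Scheduler above is assumed):

import java.util.concurrent.TimeUnit;
import javax.ejb.AccessTimeout;
import javax.ejb.Lock;
import javax.ejb.LockType;
import javax.ejb.Singleton;
import javax.ejb.Timeout;
import javax.ejb.Timer;

@Singleton
@Lock(LockType.WRITE)
public class ThrottledScheduler {

    @Timeout
    @AccessTimeout(value = 1, unit = TimeUnit.MINUTES)
    private void timeout(Timer timer) {
        // if the previous invocation still holds the write lock, this call
        // waits up to one minute for access and then gives up
        // ... long-running work ...
    }
}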
Instead of passing new Runnable() {....} into scheduler.scheduleAtFixedRate, rather create a CDI bean that implements Runnable, @Inject that bean, and then pass it to scheduler.scheduleAtFixedRate; a sketch follows below.
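A minimal sketch of that idea, assuming hypothetical CacheRefresher and CacheBean names (they are illustrative, not from the original question):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

@ApplicationScoped
public class CacheRefresher implements Runnable {
    @Inject
    private CacheBean cache; // resolved by the container when this bean is created

    @Override
    public void run() {
        cache.refresh(); // run the JPA query and update the cached Set
    }
}

@ApplicationScoped
class CachePoller {
    @Inject
    private CacheRefresher refresher;

    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);

    public void start() {
        // the injected Runnable carries its dependencies into the otherwise unmanaged thread
        scheduler.scheduleAtFixedRate(refresher, 0, 10, TimeUnit.MINUTES);
    }
}

Note the scheduled thread itself is still unmanaged, so scopes that need an active context (request, session) remain unavailable inside run(); this only helps for dependencies that can be resolved at injection time.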
After chatting with David Blevins for a good while I can acknowledge his answer as a great one that I voted up. Big thanks for all that. Although, David, you forgot to announce your involvement in TomEE, which I know always bothers someone.
Anyway, the solution I went for was suggested by Mark Struberg in #deltaspike (freenode).
As a DeltaSpike user I was pleased to do it with DeltaSpike. The solution is outlined in this blog post:
http://struberg.wordpress.com/2012/03/17/controlling-cdi-containers-in-se-and-ee/
I had to switch to OWB, see https://issues.apache.org/jira/browse/DELTASPIKE-284
Cheers

Running TestNG test sequentially with time-gap

I have a couple of DAO unit test classes that I want to run together using TestNG; however, TestNG tries to run them in parallel, which results in some rollbacks failing. While I would like my unit test classes to run sequentially, I also want to be able to specify a minimum time that TestNG must wait before it runs the next test. Is this achievable?
P.S. I understand that TestNG can be told to run all the tests in a test class in a single thread, and I am able to specify the sequence of method calls anyway using groups, so that's not an issue perhaps.
What about a hard dependency between the two tests? If you write this:
@Test
public void test1() { ... }

@Test(dependsOnMethods = "test1", alwaysRun = true)
public void test2() { ... }
then test2 will always be run after test1.
Do not forget alwaysRun = true, otherwise if test1 fails, test2 will be skipped!
If you do not want to run your classes in parallel, you need to specify the parallel attribute of your suite as false; it is false by default, so I would think your tests should run sequentially already, unless you have changed something in the way you invoke them.
For adding a bit of delay between your classes, you can probably add your delay logic in a method annotated with @AfterClass. AFAIK TestNG does not have a way to specify that in testng.xml or on the command line. There is a timeout attribute, but that is for timing out tests and is probably not what you are looking for.
For adding delay between your tests, i.e. the <test> tags in the XML, you can try implementing the ITestListener.onFinish method and adding your delay code there; it runs after every <test>. If a delay is required after every test case, then implement the delay code in IInvokedMethodListener.afterInvocation(), which runs after every test method. You would then need to specify the listener when you invoke your suite. A sketch of the listener variant follows below.
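For illustration, a minimal sketch of the onFinish variant, assuming a fixed 5-second gap (the class name and delay value are illustrative):

import org.testng.ITestContext;
import org.testng.TestListenerAdapter;

public class DelayBetweenTestsListener extends TestListenerAdapter {
    @Override
    public void onFinish(ITestContext context) {
        // called after each <test> completes; pause before the next one starts
        try {
            Thread.sleep(5000);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

Register it with a <listeners> element in testng.xml or via the -listener command-line option.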
Hope it helps..
Following is what I used in some tests.
First, define utility methods like this:
// make the thread sleep for a while, to reduce interference with
// subsequent operations on any shared resource
private void delay(long milliseconds) throws InterruptedException {
    Thread.sleep(milliseconds);
}

private void delay() throws InterruptedException {
    delay(500);
}
Then call the method inside test methods, at the end or the beginning, e.g.:
@Test
public void testCopyViaTransfer() throws IOException, InterruptedException {
    copyViaTransfer(new File(sourcePath), new File(targetPath));
    delay();
}

How to run concurrency unit test?

How can I use JUnit to run a concurrency test?
Let's say I have a class:
public class MessageBoard
{
    public synchronized void postMessage(String message)
    {
        ....
    }

    public void updateMessage(Long id, String message)
    {
        ....
    }
}
I want to test multiple concurrent accesses to this postMessage method.
Any advice on this? I wish to run this kind of concurrency test against all my setter functions (or any method that involves a create/update/delete operation).
Unfortunately I don't believe you can definitively prove that your code is thread-safe by using run-time testing. You can throw as many threads as you like against it, and it may/may not pass depending on the scheduling.
Perhaps you should look at some static analysis tools, such as PMD, that can determine how you're using synchronisation and identify usage problems.
I would recommend using MultithreadedTC - Written by the concurrency master himself Bill Pugh (and Nat Ayewah). Quote from their overview:
MultithreadedTC is a framework for testing concurrent applications. It features a metronome that is used to provide fine control over the sequence of activities in multiple threads.
This framework allows you to deterministically test every thread interleaving in separate tests; a sketch of a typical test follows below.
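For a flavor of the framework, here is a minimal sketch against the MessageBoard class from the question. The waitForTick call is the metronome at work; treat the method bodies and assertions as illustrative placeholders:

import edu.umd.cs.mtc.MultithreadedTestCase;
import edu.umd.cs.mtc.TestFramework;

public class MessageBoardInterleavingTest extends MultithreadedTestCase {
    private MessageBoard board;

    @Override
    public void initialize() {
        board = new MessageBoard();
    }

    // every public method whose name starts with "thread" runs in its own thread
    public void thread1() throws InterruptedException {
        board.postMessage("first");
    }

    public void thread2() throws InterruptedException {
        waitForTick(1); // the metronome advances a tick once all threads are blocked
        board.postMessage("second");
    }

    @Override
    public void finish() {
        // assert on the final state of the board here
    }

    public static void main(String[] args) throws Throwable {
        TestFramework.runOnce(new MessageBoardInterleavingTest());
    }
}

Normally you would call TestFramework.runOnce from a JUnit test method rather than main; it runs initialize, all the thread* methods in separate threads, and then finish.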
You can only prove the presence of concurrency bugs, not their absence.
However, you can write a specialized test runner that spawns several concurrent threads and then calls your @Test annotated methods.
In .NET, there are tools like TypeMock Racer or Microsoft CHESS that are designed specifically for unit testing concurrency. These tools not only find multithreading bugs like deadlocks, but also give you the set of thread interleavings that reproduce the errors.
I'd imagine there's something similar for the Java world.
Running tests concurrently can lead to unexpected results. For instance, I just discovered that while my test suite of 200 tests passes when the tests are executed one by one, it fails under concurrent execution. I dug into it, and it wasn't a thread-safety problem but a test depending on another test, which is a bad thing; once found, the problem was easy to solve.
Mycila's work on the JUnit ConcurrentJunitRunner and ConcurrentSuite is very interesting. The article seems a little outdated compared to the latest GA release; in my examples I will show the updated usage.
Annotating a test class like the following will cause its test methods to execute concurrently, with a concurrency level of 6:
import org.junit.runner.RunWith;
import com.mycila.junit.concurrent.ConcurrentJunitRunner;
import com.mycila.junit.concurrent.Concurrency;

@RunWith(ConcurrentJunitRunner.class)
@Concurrency(6)
public final class ATest {
    ...
You can also run all the test classes concurrently:
import org.junit.runner.RunWith;
import org.junit.runners.Suite;
import com.mycila.junit.concurrent.ConcurrentSuiteRunner;

@RunWith(ConcurrentSuiteRunner.class)
@Suite.SuiteClasses({ATest.class, ATest2.class, ATest3.class})
public class MySuite {
}
The Maven dependency is:
<dependency>
    <groupId>com.mycila</groupId>
    <artifactId>mycila-junit</artifactId>
    <version>1.4.ga</version>
</dependency>
I am currently investigating how to run methods multiple times and concurrently with this package. It may be possible already; if anyone has an example, let me know. Below is my homebrewed solution.
@Test
public final void runConcurrentMethod() throws InterruptedException {
    ExecutorService exec = Executors.newFixedThreadPool(16);
    for (int i = 0; i < 10000; i++) {
        exec.execute(new Runnable() {
            @Override
            public void run() {
                concurrentMethod();
            }
        });
    }
    exec.shutdown();
    exec.awaitTermination(50, TimeUnit.SECONDS);
}

private void concurrentMethod() {
    // do and assert something
}
As others noted, it is true that you can never be sure whether a concurrency bug will show up or not, but with tens of thousands, or hundreds of thousands, of executions at a concurrency of, say, 16, statistics are on your side.
Try looking at ActiveTestSuite which ships with JUnit. It can concurrently launch multiple JUnit tests:
public static Test suite()
{
    TestSuite suite = new ActiveTestSuite();
    suite.addTestSuite(PostMessageTest.class);
    suite.addTestSuite(PostMessageTest.class);
    suite.addTestSuite(PostMessageTest.class);
    suite.addTestSuite(PostMessageTest.class);
    suite.addTestSuite(PostMessageTest.class);
    return suite;
}
The above will run the same JUnit test class 5 times in parallel. If you wanted variation in your parallel tests, just create a different class.
In your example the postMessage() method is synchronized, so you won't actually see any concurrency effects from within a single VM, but you might be able to evaluate the performance of the synchronized version.
You will need to run multiple copies of the test program at the same time in different VMs. If you can't get your test framework to do it, you can launch some VMs yourself.
The process builder stuff is a pain with paths and whatnot, but here is the general sketch:
Process[] running = new Process[5];
for (int i = 0; i < 5; i++) {
    // note: each argument must be passed as a separate string,
    // not as one concatenated command line
    ProcessBuilder b = new ProcessBuilder("java", "-cp", getCP(), "MyTestRunner");
    running[i] = b.start();
}
for (int i = 0; i < 5; i++) {
    running[i].waitFor();
}
I usually do something like this for simple threaded tests. As others have posted, testing is not a proof of correctness, but it usually shakes out silly bugs in practice. It helps to test for a long time under a variety of different conditions -- sometimes concurrency bugs take a while to manifest in a test.
public void testMessageBoard() throws InterruptedException {
    final MessageBoard b = new MessageBoard();
    int n = 5;
    Thread[] T = new Thread[n];
    for (int i = 0; i < n; i++) {
        T[i] = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int j = 0; j < maxIterations; j++) {
                        Thread.sleep(random.nextInt(50));
                        b.postMessage(generateMessage(j));
                        verifyContent(j); // put some assertions here
                    }
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
    }
    PerfTimer.start();
    for (Thread t : T) {
        t.start();
    }
    for (Thread t : T) {
        t.join();
    }
    PerfTimer.stop();
    log("took: " + PerfTimer.elapsed());
}
Exhaustively testing for concurrency bugs is impossible: you don't just have to validate input/output pairs, you have to validate state in situations which may or may not occur during your tests. Unfortunately JUnit is not equipped to do this.
You can use the tempus-fugit library to run test methods in parallel and multiple times to simulate a load-testing type environment. Although a previous comment points out that the post method is synchronised and so protected, associated members or methods may be involved which themselves are not protected, so it's possible that a load/soak type test could catch these. I'd suggest you set up a fairly coarse-grained, end-to-end-like test to give you the best chance of catching any loopholes.
See the JUnit integration section of the documentation.
BTW, I am a developer on said project :)
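For reference, a minimal sketch of what that JUnit integration looks like using tempus-fugit's rules; the import paths and attribute names here are from memory of the 1.x releases, so verify them against the linked documentation:

import com.google.code.tempusfugit.concurrency.ConcurrentRule;
import com.google.code.tempusfugit.concurrency.RepeatingRule;
import com.google.code.tempusfugit.concurrency.annotations.Concurrent;
import com.google.code.tempusfugit.concurrency.annotations.Repeating;
import org.junit.Rule;
import org.junit.Test;

public class MessageBoardLoadTest {
    @Rule
    public ConcurrentRule concurrently = new ConcurrentRule();
    @Rule
    public RepeatingRule repeatedly = new RepeatingRule();

    private final MessageBoard board = new MessageBoard();

    @Test
    @Concurrent(count = 10)      // run the method from 10 threads at once
    @Repeating(repetition = 100) // repeat it 100 times on each thread
    public void postsMessagesUnderLoad() {
        board.postMessage("hello");
    }
}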
TestNG has support for concurrency testing in Java. This article describes how it can be used, and there are docs on the TestNG site.
Not sure if you can make the same test run at the same time though.
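In fact TestNG can invoke the same test method concurrently via its documented threadPoolSize and invocationCount attributes; a minimal sketch (the numbers are illustrative):

import org.testng.annotations.Test;

public class MessageBoardConcurrencyTest {
    private final MessageBoard board = new MessageBoard();

    // invoke the method 100 times total, spread across a pool of 10 threads
    @Test(threadPoolSize = 10, invocationCount = 100, timeOut = 10000)
    public void postMessageConcurrently() {
        board.postMessage("hello");
    }
}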
The best approach here is to use system tests to blast your code with requests to see if it falls over, then use the unit tests to check logical correctness. The way I would approach this is to create a proxy for the asynchronous call and have it made synchronously under test.
However, if you wanted to do this at a level other than a full install and complete environment, you could do this in JUnit by creating your object in a separate thread, then creating lots of threads that fire requests at your object, blocking the main thread until they complete. This approach can make tests fail intermittently if you don't get it right; a sketch follows below.
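A minimal sketch of that approach using two CountDownLatches, one to release all workers at once and one to block the main thread until they complete (the thread count is illustrative):

import java.util.concurrent.CountDownLatch;
import org.junit.Test;

public class MessageBoardStressTest {
    @Test
    public void hammerPostMessage() throws InterruptedException {
        final MessageBoard board = new MessageBoard();
        final int threads = 20;
        final CountDownLatch startGate = new CountDownLatch(1);
        final CountDownLatch endGate = new CountDownLatch(threads);

        for (int i = 0; i < threads; i++) {
            new Thread(new Runnable() {
                public void run() {
                    try {
                        startGate.await(); // hold until every thread is ready
                        board.postMessage("hello");
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    } finally {
                        endGate.countDown();
                    }
                }
            }).start();
        }

        startGate.countDown(); // release all threads at once
        endGate.await();       // block the main thread until all are done
        // assert on the final state of the board here
    }
}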
You can also try HavaRunner. It runs tests in parallel by default.