I'm trying to figure out how to launch a JavaExec task that spawns a Jetty server without blocking subsequent tasks. I also need to terminate this server once the build completes. Any idea how I can do this?
I know the thread is from 2011, but I still stumbled across this problem, so here's a solution that works with Gradle 2.14:
import java.util.concurrent.Callable
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors
class RunAsyncTask extends DefaultTask {
    String taskToExecute = '<YourTask>'

    @TaskAction
    def startAsync() {
        ExecutorService es = Executors.newSingleThreadExecutor()
        // look the task up by name and run it (Task.execute() is internal API)
        es.submit({ project.tasks.getByName(taskToExecute).execute() } as Callable)
    }
}
task runRegistry(type: RunAsyncTask, dependsOn: build) {
    taskToExecute = '<NameOfYourTaskHere>'
}
I updated the solution from @chrishuen because you can no longer call execute() on a task. Here is my working build.gradle:
import java.time.LocalDateTime
import java.util.concurrent.Callable
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors
group 'sk.bsmk'
version '1.0-SNAPSHOT'
apply plugin: 'java'
task wrapper(type: Wrapper) {
    gradleVersion = '3.4'
}

class RunAsyncTask extends DefaultTask {
    @TaskAction
    def startAsync() {
        ExecutorService es = Executors.newSingleThreadExecutor()
        es.submit({
            project.javaexec {
                classpath = project.sourceSets.main.runtimeClasspath
                main = "Main"
            }
        } as Callable)
    }
}
task helloAsync(type: RunAsyncTask, dependsOn: compileJava) {
    doLast {
        println LocalDateTime.now().toString() + ' sleeping'
        sleep(2 * 1000)
    }
}
Hope this snippet will give you some insight on how it can be done.
You can use build listener closures to run code on build start/finish. However, for some reason, the gradle.buildStarted closure does not work in milestone-3, so I have replaced it with gradle.taskGraph.whenReady, which does the trick.
Then you can call the runJetty task using Task#execute() (note that this API is not official and may disappear) and, additionally, run it from an ExecutorService to get some asynchronous behaviour.
import java.util.concurrent.*
task myTask << {
    println "Do usual tasks here"
}

task runJetty << {
    print "Pretend we are running Jetty ..."
    while (!stopJetty) {
        Thread.sleep(100)
    }
    println "Jetty Stopped."
}

stopJetty = false
es = Executors.newSingleThreadExecutor()
jettyFuture = null

//gradle.buildStarted { ... }
gradle.taskGraph.whenReady { g ->
    jettyFuture = es.submit({ runJetty.execute() } as Callable)
}

gradle.buildFinished {
    println "Stopping Jetty ... "
    stopJetty = true

    //This is optional. Could be useful when debugging.
    try {
        jettyFuture?.get()
    } catch (ExecutionException e) {
        println "Error during Jetty execution: "
        e.printStackTrace()
    }
}
You can't do it with JavaExec; you'll have to write your own task.
Based on previous answers, here is my take:
import java.util.concurrent.Callable
import java.util.concurrent.ExecutorService
import java.util.concurrent.Executors

abstract class RunAsyncTask extends DefaultTask {
    @Input
    abstract Property<FileCollection> getClasspath()

    @Input
    abstract Property<String> getMain()

    @Input
    abstract ListProperty<String> getArgs()

    @TaskAction
    def startAsync() {
        // resolve all the parameters before going async, otherwise it sometimes blocks
        def cp = classpath.get().asPath
        def m = main.get()
        def a = args.get()

        ExecutorService es = Executors.newSingleThreadExecutor()
        es.submit({
            def command = ["java", "-cp", cp, m] + a
            ProcessBuilder builder = new ProcessBuilder(command.toList())
            builder.redirectErrorStream(true)
            builder.directory(project.projectDir)
            Process process = builder.start()
            InputStream stdout = process.getInputStream()
            BufferedReader reader = new BufferedReader(new InputStreamReader(stdout))
            def line
            while ((line = reader.readLine()) != null) {
                println line
            }
        } as Callable)
    }
}
task startServer(type: RunAsyncTask) {
    classpath = ...
    main = '...'
    args = [...]

    doLast {
        // sleep 3 seconds to give the server time to startup
        Thread.sleep(3000)
    }
}
Related
I have the below Typesafe config in the file application-typed.conf:
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "DEBUG"
  logging-filter = "akka.event.slf4j.Slf4jLoggingFilter"
  actor {
    provider = "local"
  }
}

custom-thread-pool {
  type = Dispatcher
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 40
  }
  throughput = 2
}
Below is the Akka Typed actor code.
import akka.actor.typed.{ActorSystem, Behavior, DispatcherSelector, PostStop, Signal}
import akka.actor.typed.scaladsl.AbstractBehavior
import akka.actor.typed.scaladsl.ActorContext
import akka.actor.typed.scaladsl.Behaviors
import com.typesafe.config.ConfigFactory
import scala.concurrent.ExecutionContext

trait PrintMessage
case class PrintMessageAny(x: Any) extends PrintMessage

object PrintMeActor {
  def apply(): Behavior[PrintMessage] =
    Behaviors.setup[PrintMessage](context => new PrintMeActor(context))
}

class PrintMeActor(context: ActorContext[PrintMessage]) extends AbstractBehavior[PrintMessage](context) {

  val dispatcherSelector: DispatcherSelector = DispatcherSelector.fromConfig("custom-thread-pool")
  implicit val executionContext: ExecutionContext = context.system.dispatchers.lookup(dispatcherSelector)

  println(s"PrintMeActor Application started in Thread ${Thread.currentThread().getName}")

  override def onMessage(msg: PrintMessage): Behavior[PrintMessage] = {
    println(s"Got $msg in Thread ${Thread.currentThread().getName}")
    Behaviors.same
  }

  override def onSignal: PartialFunction[Signal, Behavior[PrintMessage]] = {
    case PostStop =>
      context.log.info("PrintMeActor Application stopped")
      this
  }
}
object TestTypedActorApp extends App {
  val config = ConfigFactory.load("application-typed.conf")
  val as: ActorSystem[PrintMessage] = ActorSystem(PrintMeActor(), "PrintAnyTypeMessage", config)
  as.tell(PrintMessageAny("test"))
  Thread.sleep(2000)
}
When I run the code, I get the below output.
PrintMeActor Application started in Thread PrintAnyTypeMessage-akka.actor.default-dispatcher-6
Got PrintMessageAny(test) in Thread PrintAnyTypeMessage-akka.actor.default-dispatcher-6
I want this actor to run on the custom-thread-pool, but that is not happening. How can I achieve this?
You associate the dispatcher with the actor when you spawn it, by passing an akka.actor.typed.DispatcherSelector (which extends akka.actor.typed.Props) corresponding to the desired dispatcher.
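For example, here is a minimal sketch of spawning the PrintMeActor from the question onto the custom-thread-pool dispatcher; the surrounding parent behavior and the child name "printer" are only illustrative:

import akka.actor.typed.{Behavior, DispatcherSelector}
import akka.actor.typed.scaladsl.Behaviors

// Sketch: a parent behavior that spawns PrintMeActor on the custom dispatcher
// by passing a DispatcherSelector (which is a Props) to context.spawn.
val parent: Behavior[String] = Behaviors.setup { context =>
  val printer = context.spawn(
    PrintMeActor(),
    "printer",
    DispatcherSelector.fromConfig("custom-thread-pool"))
  printer ! PrintMessageAny("test")
  Behaviors.empty
}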
When running the guardian behavior of the ActorSystem on a custom dispatcher, you can only pass Props through the ActorSystem overloads that take either a Config or an ActorSystemSetup.
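Applied to the code in the question, that would look roughly like this (a sketch, assuming the ActorSystem.apply overload that accepts both a Config and the guardian Props):

import akka.actor.typed.{ActorSystem, DispatcherSelector}

// DispatcherSelector extends Props, so it can be passed as the guardian's Props
val as: ActorSystem[PrintMessage] =
  ActorSystem(
    PrintMeActor(),
    "PrintAnyTypeMessage",
    config,
    DispatcherSelector.fromConfig("custom-thread-pool"))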
If you want to override the dispatcher for the user guardian actor (the actor with the behavior you passed into the ActorSystem), it may make more sense to make that dispatcher the default dispatcher:
akka.actor.default-dispatcher {
  executor = "thread-pool-executor"
  thread-pool-executor {
    fixed-pool-size = 40
  }
  throughput = 2
}
Given the following, how do I mock processMessage() using Spock, so that I can check that processBulkMessage() calls processMessage() n times, where n is the number of messages within a BulkMessage?
class BulkMessage {
    List messages
}

class MyService {
    def processBulkMessage(BulkMessage msg) {
        msg.messages.each { subMsg ->
            processMessage(subMsg)
        }
    }

    def processMessage(Message message) {
    }
}
You can use spies and partial mocks (requires Spock 0.7 or newer).
After creating a spy, you can listen in on the conversation between the caller and the real object underlying the spy:
def subscriber = Spy(SubscriberImpl, constructorArgs: ["Fred"])
subscriber.receive(_) >> "ok"
Sometimes, it is desirable to both execute some code and delegate to the real method:
subscriber.receive(_) >> { String message -> callRealMethod(); message.size() > 3 ? "ok" : "fail" }
In my opinion this is not a well-designed solution. Tests and design walk hand in hand - I recommend this talk to investigate it further. If there's a need to check whether another method was invoked on the object under test, it seems that method should be moved to another object with a different responsibility.
Here's how I would do it. I know how visibility works in Groovy, so mind the comments.
@Grab('org.spockframework:spock-core:0.7-groovy-2.0')
@Grab('cglib:cglib-nodep:3.1')

import spock.lang.*

class MessageServiceSpec extends Specification {

    def 'test'() {
        given:
        def service = new MessageService()
        def sender = GroovyMock(MessageSender)

        and:
        service.sender = sender

        when:
        service.sendMessages(['1', '2', '3'])

        then:
        3 * sender.sendMessage(_)
    }
}

class MessageSender { //package access - low level
    def sendMessage(String message) {
        //whatever
    }
}

class MessageService {
    MessageSender sender //package access - low level

    def sendMessages(Iterable<String> messages) {
        messages.each { m -> sender.sendMessage(m) }
    }
}
It does not use Spock's built-in mocking API (I'm not sure how to partially mock an object), but this should do the trick:
class FooSpec extends Specification {
    void "Test message processing"() {
        given: "A Bulk Message"
        BulkMessage bulk = new BulkMessage(messages: ['a', 'b', 'c'])

        when: "Service is called"
        def processMessageCount = 0
        MyService.metaClass.processMessage { message -> processMessageCount++ }
        def service = new MyService()
        service.processBulkMessage(bulk)

        then: "Each message is processed separately"
        processMessageCount == bulk.messages.size()
    }
}
For Java Spring folks testing in Spock:
constructorArgs is the way to go, but use constructor injection. Spy() will not let you set autowired fields directly.
// **Java Spring**
class A {

    private ARepository aRepository;

    @Autowired
    public A(ARepository aRepository) {
        this.aRepository = aRepository;
    }

    public String getOne(String id) {
        tryStubMe(id); // STUBBED. WILL RETURN "XXX"
        ...
    }

    public String tryStubMe(String id) {
        return aRepository.findOne(id);
    }

    public void tryStubVoid(String id) {
        aRepository.findOne(id);
    }
}
// **Groovy Spock**
class ATest extends Specification {

    def 'lets stub that sucker'() {
        setup:
        ARepository aRepository = Mock()
        A a = Spy(A, constructorArgs: [aRepository])

        when:
        a.getOne()

        then:
        // Stub tryStubMe() on a spy
        // Make it return "XXX"
        // Verify it was called once
        1 * a.tryStubMe("1") >> "XXX"
    }
}
Spock - stubbing void method on Spy object
// **Groovy Spock**
class ATest extends Specification {

    def 'lets stub that sucker'() {
        setup:
        ARepository aRepository = Mock()
        A a = Spy(A, constructorArgs: [aRepository]) {
            1 * tryStubVoid(_) >> {}
        }

        when:
        ...

        then:
        ...
    }
}
I have a trait for overriding actorOf in tests:
trait ActorRefFactory {
  this: Actor =>

  def actorOf(props: Props) = context.actorOf(props)
}
And I have a worker actor, which stops itself when it receives any message:
class WorkerActor extends Actor {
  override def receive: Actor.Receive = {
    case _ => context.stop(self)
  }
}
I also have a master actor, which creates workers and holds them in a set:
class MasterActor extends Actor with ActorRefFactory {
  var workers = Set.empty[ActorRef]

  override val supervisorStrategy = SupervisorStrategy.stoppingStrategy

  def createWorker() = {
    val worker = context watch actorOf(Props(classOf[WorkerActor]))
    workers += worker
    worker
  }

  override def receive: Receive = {
    case m: String =>
      createWorker()
    case Terminated(ref) =>
      workers -= ref
      createWorker()
  }
}
And this test, which fails:
class ActorTest(val _system: ActorSystem) extends akka.testkit.TestKit(_system)
  with ImplicitSender
  with Matchers
  with FlatSpecLike {

  def this() = this(ActorSystem("test"))

  def fixture = new {
    val master = TestActorRef(new MasterActor() {
      override def actorOf(props: Props) = TestProbe().ref
    })
  }

  it should "NOT FAILED" in {
    val f = fixture
    f.master ! "create"
    f.master ! "create"
    f.master.underlyingActor.workers.size shouldBe 2

    val worker = f.master.underlyingActor.workers.head
    system.stop(worker)
    Thread.sleep(100)

    f.master.underlyingActor.workers.size shouldBe 2
  }
}
After the Thread.sleep in the test, I get the error "1 was not equal to 2". I have no idea what is happening, but if I had to guess, I would assume that the TestProbe() cannot be created in time. What can I do?
This basically boils down to an asynchronicity issue that you want to avoid in unit tests for Akka. You are correctly using a TestActorRef to get hooked into the CallingThreadDispatcher for the master actor. But when you call system.stop(worker), the system is still using the default async dispatcher, which introduces this race condition on the stopping and then re-creating of a worker. The simplest way I can see to fix this issue consistently is to stop the worker like so:
master.underlyingActor.context.stop(worker)
Because you are using the context of master, and that actor is using the CallingThreadDispatcher, I believe this will remove the async issue that you are seeing. It worked for me when I tried it.
I have the following code:
//TestActor receives some message
class TestActor extends Actor {
  def receive = {
    case string: String => //....
  }
}

//TestReg is created with an ActorRef; when I call the `pass` method, it should pass the text to that ActorRef
class TestReg(val actorRef: ActorRef) {
  def pass(text: String) {
    actorRef ! text
  }
}
When I wrote the test:
class TestActorReg extends TestKit(ActorSystem("system")) with ImplicitSender
  with FlatSpecLike with MustMatchers with BeforeAndAfterAll {

  override def afterAll() {
    system.shutdown()
  }

  "actorReg" should "pass text to actorRef" in {
    val probe = TestProbe()
    val testActor = system.actorOf(Props[TestActor])
    probe watch testActor

    val testReg = new TestReg(testActor)
    testReg.pass("test")
    probe.expectMsg("test")
  }
}
I get this error:
java.lang.AssertionError: assertion failed: timeout (3 seconds) during expectMsg while waiting for test
How can I check that the actor received the text?
probe.expectMsg() calls the assertion on the probe, but you passed the testActor into your TestReg class.
Change it to the following line and it will work:
val testReg = new TestReg(probe.ref)
You have to call .ref to turn the probe into an ActorRef. And you want to do it here, not at the instantiation of the variable, to avoid certain bugs that are outside the scope of this response.
The error in the logic, as I see it, is that you are assuming the watch method lets the probe see what the test actor receives. But that is death watch, not message watch, which is different.
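To illustrate the difference with a small sketch (reusing probe, testActor, and TestReg from the question; expectTerminated is the TestProbe assertion that pairs with death watch):

// Death watch only tells the probe when the watched actor dies:
probe watch testActor
system.stop(testActor)
probe.expectTerminated(testActor) // a Terminated notification, not the actor's messages

// To assert on the messages themselves, the probe must be the receiver:
val testReg = new TestReg(probe.ref)
testReg.pass("test")
probe.expectMsg("test")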
Create an application.conf file with this:
akka {
  test {
    timefactor = 1.0
    filter-leeway = 999s
    single-expect-default = 999s
    default-timeout = 999s
    calling-thread-dispatcher {
      type = akka.testkit.CallingThreadDispatcherConfigurator
    }
  }
  actor {
    serializers {
      test-message-serializer = "akka.testkit.TestMessageSerializer"
    }
    serialization-identifiers {
      "akka.testkit.TestMessageSerializer" = 23
    }
    serialization-bindings {
      "akka.testkit.JavaSerializable" = java
    }
  }
}
I have a simple spray client:
val pipeline = sendReceive ~> unmarshal[GoogleApiResult[Elevation]]

val responseFuture = pipeline {
  Get("http://maps.googleapis.com/maps/api/elevation/json?locations=27.988056,86.925278&sensor=false")
}

responseFuture onComplete {
  case Success(GoogleApiResult(_, Elevation(_, elevation) :: _)) =>
    log.info("The elevation of Mt. Everest is: {} m", elevation)
    shutdown()

  case Failure(error) =>
    log.error(error, "Couldn't get elevation")
    shutdown()
}
Full code can be found here.
I want to mock the response of the server to test the logic in the Success and Failure cases. The only relevant information I found was here, but I haven't been able to use the cake pattern to mock the sendReceive method.
Any suggestion or example would be greatly appreciated.
Here's an example of one way to mock it using specs2 for the test spec and mockito for the mocking. First, the Main object refactored into a class setup for mocking:
class ElevationClient {
  // we need an ActorSystem to host our application in
  implicit val system = ActorSystem("simple-spray-client")
  import system.dispatcher // execution context for futures below

  val log = Logging(system, getClass)

  log.info("Requesting the elevation of Mt. Everest from Googles Elevation API...")

  import ElevationJsonProtocol._
  import SprayJsonSupport._

  def sendAndReceive = sendReceive

  def elavation = {
    val pipeline = sendAndReceive ~> unmarshal[GoogleApiResult[Elevation]]

    pipeline {
      Get("http://maps.googleapis.com/maps/api/elevation/json?locations=27.988056,86.925278&sensor=false")
    }
  }

  def shutdown(): Unit = {
    IO(Http).ask(Http.CloseAll)(1.second).await
    system.shutdown()
  }
}
Then, the test spec:
class ElevationClientSpec extends Specification with Mockito {

  val mockResponse = mock[HttpResponse]
  val mockStatus = mock[StatusCode]
  mockResponse.status returns mockStatus
  mockStatus.isSuccess returns true

  val json = """
    {
      "results" : [
        {
          "elevation" : 8815.71582031250,
          "location" : {
            "lat" : 27.9880560,
            "lng" : 86.92527800000001
          },
          "resolution" : 152.7032318115234
        }
      ],
      "status" : "OK"
    }
  """

  val body = HttpEntity(ContentType.`application/json`, json.getBytes())
  mockResponse.entity returns body

  val client = new ElevationClient {
    override def sendAndReceive = {
      (req: HttpRequest) => Promise.successful(mockResponse).future
    }
  }

  "A request to get an elevation" should {
    "return an elevation result" in {
      val fut = client.elavation
      val el = Await.result(fut, Duration(2, TimeUnit.SECONDS))
      val expected = GoogleApiResult("OK", List(Elevation(Location(27.988056, 86.925278), 8815.7158203125)))
      el mustEqual expected
    }
  }
}
So my approach here was to first define an overridable function in the ElevationClient called sendAndReceive that just delegates to the spray sendReceive function. Then, in the test spec, I override that sendAndReceive function to return a function that returns a completed Future wrapping a mock HttpResponse. This is one approach for doing what you want to do. I hope this helps.
There's no need to introduce mocking in this case, as you can simply build an HttpResponse much more easily using the existing API:
val mockResponse = HttpResponse(StatusCodes.OK, HttpEntity(ContentTypes.`application/json`, json.getBytes))
(Sorry for posting this as another answer, but don't have enough karma to comment)