Play WS standalone for 2.5.x

I want to create a Play web service client outside a Play application. For Play WS version 2.4.x it is easy to find that it is done like this:
val config = new NingAsyncHttpClientConfigBuilder().build()
val builder = new AsyncHttpClientConfig.Builder(config)
val client = new NingWSClient(builder.build)
However, in 2.5.x NingWSClient is deprecated; AhcWSClient should be used instead.
Unfortunately, I didn't find a complete example that explains the creation and usage of an AhcWSClient outside of Play. Currently I go with this:
import play.api.libs.ws.ahc.AhcWSClient
import akka.stream.ActorMaterializer
import akka.actor.ActorSystem
implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()
val ws = AhcWSClient()
val req = ws.url("http://example.com").get().map { resp =>
  resp.body
}(system.dispatcher)
Is this the correct way of creating an AhcWSClient? And is there a way of creating an AhcWSClient without an ActorSystem?

You are probably using compile-time dependency injection, otherwise you would just use @Inject() (ws: WSClient), right?
There is one example in the docs: https://www.playframework.com/documentation/2.5.x/ScalaWS#using-wsclient
So you could write something like this in your application loader:
lazy val ws = {
  import com.typesafe.config.ConfigFactory
  import play.api._
  import play.api.libs.ws._
  import play.api.libs.ws.ahc.{AhcWSClient, AhcWSClientConfig}
  import play.api.libs.ws.ahc.AhcConfigBuilder
  import org.asynchttpclient.AsyncHttpClientConfig

  val configuration = Configuration.reference ++ Configuration(ConfigFactory.parseString(
    """
      |ws.followRedirects = true
    """.stripMargin))

  // environment comes from the ApplicationLoader context
  val parser = new WSConfigParser(configuration, environment)
  val config = new AhcWSClientConfig(wsClientConfig = parser.parse())
  val builder = new AhcConfigBuilder(config)

  // Optional: log every request/response at the Netty channel level
  val logging = new AsyncHttpClientConfig.AdditionalChannelInitializer() {
    override def initChannel(channel: io.netty.channel.Channel): Unit = {
      channel.pipeline.addFirst("log", new io.netty.handler.logging.LoggingHandler("debug"))
    }
  }
  val ahcBuilder = builder.configure()
  ahcBuilder.setHttpAdditionalChannelInitializer(logging)
  val ahcConfig = ahcBuilder.build()
  new AhcWSClient(ahcConfig)
}
// Make sure the client is closed when the application shuts down
applicationLifecycle.addStopHook(() => Future.successful(ws.close()))
And then inject ws into your controllers. I'm not 100% sure about this approach; I would be happy if some Play guru could validate it.
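For example, a controller can then take the client as a plain constructor parameter (a sketch; HomeController and its action are illustrative names):
import play.api.mvc._
import play.api.libs.ws.WSClient
import play.api.libs.concurrent.Execution.Implicits.defaultContext

// Receives the WSClient built in the application loader above
class HomeController(ws: WSClient) extends Controller {
  def proxy = Action.async {
    ws.url("http://example.com").get().map(r => Ok(r.body))
  }
}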
Regarding an ActorSystem, you need it only to get a thread pool for resolving that Future. You can also just import or inject the default execution context:
play.api.libs.concurrent.Execution.Implicits.defaultContext.
Or you can use your own:
implicit val wsContext: ExecutionContext = actorSystem.dispatchers.lookup("contexts.your-special-ws-config").

AFAIK this is the proper way to create the AhcWSClient - at least in 2.5.0 and 2.5.1 - as seen in the Scala API.
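As far as I can tell, you cannot avoid the ActorSystem: AhcWSClient needs an implicit Materializer, and a Materializer needs an ActorSystem. A complete minimal sketch (StandaloneWsExample is an illustrative name), including the cleanup that is easy to forget:
import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import play.api.libs.ws.ahc.AhcWSClient
import scala.concurrent.Await
import scala.concurrent.duration._

object StandaloneWsExample extends App {
  implicit val system = ActorSystem("ws-client")  // AhcWSClient needs a Materializer...
  implicit val materializer = ActorMaterializer() // ...and a Materializer needs an ActorSystem
  import system.dispatcher                        // ExecutionContext for the Future

  val ws = AhcWSClient()
  try {
    val body = Await.result(ws.url("http://example.com").get().map(_.body), 10.seconds)
    println(body)
  } finally {
    ws.close()          // otherwise AHC's threads keep the JVM alive
    system.terminate()
  }
}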
You can, of course, always take another HTTP client - there are many available for Scala - like Newman, Spray client, etc. (although Spray is also based on Akka, so you would have to create an actor system as well).

Related

actix web test doesn't seem to be routing requests as expected

I recently updated to actix-web 4, and some tests that used the actix-web test module stopped working as expected in the process. I'm sure it's something simple, but I'm having trouble figuring out what changed. Here is a minimal example of the issue:
use actix_web::{test, web, App, HttpResponse, HttpRequest};

#[actix_rt::test]
async fn says_hello() {
    let req = test::TestRequest::get().uri("/index.html").to_request();
    let mut server = test::init_service(
        App::new().service(web::scope("/").route(
            "index.html",
            web::get().to(|_req: HttpRequest| async {
                println!("Hello?");
                HttpResponse::Ok()
            }),
        )),
    )
    .await;
    let _resp = test::call_and_read_body(&mut server, req).await;
}
Running this test, I would expect to see "Hello?" printed to my console; however, the request handler I have defined at "/index.html" doesn't seem to be called, and I receive no output.
To be clear, the real tests are more complicated and have assertions etc.; this is just a working example of the main issue I am trying to resolve.
actix-web = { version = "4.1.0", default-features = false }
Note: if I change all paths to the root path, the handler is called, i.e.
let req = test::TestRequest::get().uri("/").to_request();
let mut server = test::init_service(
    App::new().service(web::scope("/").route(
        "/",
        web::get().to(|_req: HttpRequest| async {
            println!("Hello?");
            HttpResponse::Ok()
        }),
    )),
)
.await;
let _resp = test::call_and_read_body(&mut server, req).await;
// prints "Hello?" to the console
However, no other route combination I have tried calls the request handler.
Rust tests capture output and only show it for failed tests.
If you want to see output from all tests, you have to say so explicitly, with either testbinary --nocapture or cargo test -- --nocapture.
I was able to make things work by changing the path in the scope to an empty string:
let req = test::TestRequest::get().uri("/index.html").to_request();
let mut server = test::init_service(
    App::new().service(web::scope("").route(
        "index.html",
        web::get().to(|_req: HttpRequest| async {
            println!("Hello?");
            HttpResponse::Ok()
        }),
    )),
)
.await;
let _resp = test::call_and_read_body(&mut server, req).await;
// prints "Hello?"

MediatorLiveData doesn't work in JUnit tests?

So I've tried using MediatorLiveData for the rather simple use-case of converting an ISO country code (e.g. "US") to a country calling code (e.g. "+1") through libphonenumber. The resulting screen works fine, but the JUnit tests fail, even when InstantTaskExecutorRule is used.
Example minimal unit test (in Kotlin) that I believe should pass, but fails instead:
import android.arch.core.executor.testing.InstantTaskExecutorRule
import android.arch.lifecycle.MediatorLiveData
import android.arch.lifecycle.MutableLiveData
import org.junit.Assert.assertEquals
import org.junit.Rule
import org.junit.Test
class MediatorLiveData_metaTest {
    @get:Rule
    val instantTaskExecutorRule = InstantTaskExecutorRule()

    @Test
    fun mediatorLiveData_metaTest() {
        val sourceInt = MutableLiveData<Int>()
        val mediatedStr = MediatorLiveData<String>()
        mediatedStr.addSource(sourceInt) {
            mediatedStr.value = it.toString()
        }
        sourceInt.value = 123
        assertEquals("123", mediatedStr.value) // says mediatedStr.value is null
    }
}
Thanks to Reddit user matejdro, the answer was that, like Schrödinger's proverbial cat, MediatorLiveData won't update itself unless observed, so I needed a mediatedStr.observeForever {} to force it to update itself.
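For reference, the test body above passes once that single line is added before the source emits (a sketch of the fix):
mediatedStr.addSource(sourceInt) {
    mediatedStr.value = it.toString()
}
mediatedStr.observeForever { } // activates the mediator so it reacts to its sources
sourceInt.value = 123
assertEquals("123", mediatedStr.value) // passes now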

How to check out and check in a document outside Alfresco using the REST API?

I have created a web application using Servlets and JSP, and through it I have connected to the Alfresco repository. I am also able to upload documents to Alfresco and view them in the external web application.
Now my requirement is to provide check-in and check-out options for those documents.
I found the REST APIs below for this purpose.
But I do not understand how to use these APIs in servlets to fulfill my requirement.
POST /alfresco/service/slingshot/doclib/action/cancel-checkout/site/{site}/{container}/{path}
POST /alfresco/service/slingshot/doclib/action/cancel-checkout/node/{store_type}/{store_id}/{id}
Can anyone please provide the simple steps or a piece of code to do this task?
Thanks in advance.
Please do not use the internal slingshot URLs for this. Instead, use OpenCMIS from Apache Chemistry. It will save you a lot of time and headaches, and it is more portable to other repositories besides Alfresco.
The example below grabs an existing document by path, performs a checkout, then checks in a new major version of the plain text document.
package com.someco.cmis.examples;

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

import org.apache.chemistry.opencmis.client.api.Document;
import org.apache.chemistry.opencmis.client.api.ObjectId;
import org.apache.chemistry.opencmis.client.api.Repository;
import org.apache.chemistry.opencmis.client.api.Session;
import org.apache.chemistry.opencmis.client.api.SessionFactory;
import org.apache.chemistry.opencmis.client.runtime.SessionFactoryImpl;
import org.apache.chemistry.opencmis.commons.SessionParameter;
import org.apache.chemistry.opencmis.commons.data.ContentStream;
import org.apache.chemistry.opencmis.commons.enums.BindingType;

public class CheckoutCheckinExample {
    private String serviceUrl = "http://localhost:8080/alfresco/api/-default-/public/cmis/versions/1.1/atom"; // Atom Pub binding
    private Session session = null;

    public static void main(String[] args) {
        CheckoutCheckinExample cce = new CheckoutCheckinExample();
        cce.doExample();
    }

    public void doExample() {
        Document doc = (Document) getSession().getObjectByPath("/test/test-plain-1.txt");
        String fileName = doc.getName();

        ObjectId pwcId = doc.checkOut(); // Check out the document
        Document pwc = (Document) getSession().getObject(pwcId); // Get the working copy

        // Set up an updated content stream
        String docText = "This is a new major version.";
        byte[] content = docText.getBytes();
        InputStream stream = new ByteArrayInputStream(content);
        ContentStream contentStream = session.getObjectFactory().createContentStream(
                fileName, Long.valueOf(content.length), "text/plain", stream);

        // Check in the working copy as a major version with a comment
        ObjectId updatedId = pwc.checkIn(true, null, contentStream, "My new version comment");
        doc = (Document) getSession().getObject(updatedId);
        System.out.println("Doc is now version: " + doc.getProperty("cmis:versionLabel").getValueAsString());
    }

    public Session getSession() {
        if (session == null) {
            // default factory implementation
            SessionFactory factory = SessionFactoryImpl.newInstance();
            Map<String, String> parameter = new HashMap<String, String>();

            // user credentials
            parameter.put(SessionParameter.USER, "admin"); // <-- Replace
            parameter.put(SessionParameter.PASSWORD, "admin"); // <-- Replace

            // connection settings
            parameter.put(SessionParameter.ATOMPUB_URL, this.serviceUrl);
            parameter.put(SessionParameter.BINDING_TYPE, BindingType.ATOMPUB.value());

            List<Repository> repositories = factory.getRepositories(parameter);
            this.session = repositories.get(0).createSession();
        }
        return this.session;
    }
}
Note that on the version of Alfresco I tested with (5.1.e), the document must already have the versionable aspect applied for the version label to be incremented; otherwise the check-in will simply overwrite the original.
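If you want to apply the aspect from code, one option with the CMIS 1.1 binding used above is to set it as a secondary type before checking out (a sketch; P:cm:versionable is the standard Alfresco aspect name, java.util.Arrays must be imported, and note that this replaces the document's existing secondary-type list):
// Alfresco exposes aspects as CMIS 1.1 secondary types
Map<String, Object> props = new HashMap<String, Object>();
props.put("cmis:secondaryObjectTypeIds", Arrays.asList("P:cm:versionable"));
doc.updateProperties(props);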

Using Thread.sleep() inside a foreach in Scala

I have a list of URLs in a List.
I want to fetch each one by calling WS.url(currurl).get(), but I want to add a delay between requests. Can I add Thread.sleep(), or is there another way of doing this?
import play.api.libs.ws.WS
import scala.util.{Success, Failure}
// an implicit ExecutionContext is needed for onComplete, e.g.:
import play.api.libs.concurrent.Execution.Implicits.defaultContext

one.foreach { currurl =>
  println("using " + currurl)
  val p = WS.url(currurl).get()
  p.onComplete {
    case Success(s) =>
      // do something
    case Failure(f) =>
      println("failed")
  }
}
Sure, you can call Thread.sleep inside your foreach function, and it will do what you expect.
That will tie up a thread, though. If this is just some utility that you need to run sometimes, then who cares; but if it's part of a server you are trying to write, you might tie up many threads, and then you probably want to do better. One way is to use Akka (it looks like you are using Play, so you are already using Akka) to implement the delay: write an actor that uses scheduler.schedule to arrange to receive a message periodically, and handle one request each time the message is read. Note that Akka's scheduler itself ties up a thread, but it can then send periodic messages to an arbitrary number of actors.
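A sketch of that approach (UrlThrottler, Tick, and fetch are illustrative names; fetch would wrap the WS.url(currurl).get() call and its onComplete handling from the question):
import akka.actor._
import scala.collection.mutable
import scala.concurrent.duration._

case object Tick

// Sends itself a Tick every second and fetches one URL per Tick,
// spacing out the requests without blocking any thread.
class UrlThrottler(urls: Seq[String], fetch: String => Unit) extends Actor {
  import context.dispatcher // ExecutionContext for the scheduler
  private val queue = mutable.Queue(urls: _*)
  private val ticker = context.system.scheduler.schedule(0.seconds, 1.second, self, Tick)

  def receive = {
    case Tick =>
      if (queue.nonEmpty) fetch(queue.dequeue())
      else {
        ticker.cancel()
        context.stop(self) // all URLs done
      }
  }
}
You would start it with something like system.actorOf(Props(new UrlThrottler(one, fetch))).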
You can do it with scalaz-stream
import org.joda.time.format.DateTimeFormat
import scala.concurrent.duration._
import scalaz.stream._
import scalaz.stream.io._
import scalaz.concurrent.Task
type URL = String
type Fetched = String
val format = DateTimeFormat.mediumTime()
val urls: Seq[URL] =
  "http://google.com" :: "http://amazon.com" :: "http://yahoo.com" :: Nil

val fetchUrl = channel[URL, Fetched] { url =>
  Task.delay(s"Fetched " +
    s"url:$url " +
    s"at: ${format.print(System.currentTimeMillis())}")
}

val P = Process
val process =
  (P.awakeEvery(1.second) zipWith P.emitAll(urls))((b, url) => url).
    through(fetchUrl)

val fetched = process.runLog.run
fetched.foreach(println)
Output:
Fetched url:http://google.com at: 1:04:25 PM
Fetched url:http://amazon.com at: 1:04:26 PM
Fetched url:http://yahoo.com at: 1:04:27 PM

Akka 2.1 Remote: sharing actor across systems

I'm learning about remote actors in Akka 2.1, and I tried to adapt the counter example provided by Typesafe.
I implemented a quick'n'dirty console UI to send ticks, and to quit after asking for (and showing) the current count.
The idea is to start a master node that will run the Counter actor, and some client nodes that will send messages to it through remoting. However, I'd like to achieve this through configuration and minimal code changes, so that by merely changing the configuration, local actors could be used.
I found this blog entry about a similar problem, where it was necessary that all API calls go through one actor even though there were many instances running.
I wrote a similar configuration, but I can't get it to work. My current code does use remoting, but it creates a new actor on the master for each new node, and I can't get it to connect to the existing actor without explicitly giving it the path (defeating the point of the configuration). This is not what I want, since state cannot be shared between JVMs this way.
Full runnable code is available through a git repo.
This is my config file:
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
    deployment {
      /counter {
        remote = "akka://ticker@127.0.0.1:2552"
      }
    }
  }
  remote {
    transport = "akka.remote.netty.NettyRemoteTransport"
    log-sent-messages = on
    netty {
      hostname = "127.0.0.1"
    }
  }
}
And full source
import akka.actor._
import akka.pattern.ask
import scala.concurrent.duration._
import akka.util.Timeout
import scala.util._

case object Tick
case object Get

class Counter extends Actor {
  var count = 0
  val id = math.random.toString.substring(2)
  println(s"\nmy name is $id\ni'm at ${self.path}\n")

  def log(s: String) = println(s"$id: $s")

  def receive = {
    case Tick =>
      count += 1
      log(s"got a tick, now at $count")
    case Get =>
      sender ! count
      log(s"asked for count, replied with $count")
  }
}

object AkkaProjectInScala extends App {
  val system = ActorSystem("ticker")
  implicit val ec = system.dispatcher

  val counter = system.actorOf(Props[Counter], "counter")

  def step {
    print("tick or quit? ")
    readLine() match {
      case "tick" => counter ! Tick
      case "quit" => return
      case _ =>
    }
    step
  }
  step

  implicit val timeout = Timeout(5.seconds)
  val f = counter ? Get
  f onComplete {
    case Failure(e) => throw e
    case Success(count) => println("Count is " + count)
  }

  system.shutdown()
}
I used sbt run and in another window sbt run -Dakka.remote.netty.port=0 to run it.
I found out I can use some sort of pattern. Akka remoting only allows deploying on remote systems (I can't find a way to make it look up an actor on a remote system purely through configuration... am I mistaken here?).
So I can deploy a "scout" that will pass back the ActorRef. Runnable code is available in the original repo under the branch "scout-hack", because this feels like a hack. I would still appreciate a configuration-based solution.
The actor
case object Fetch

class Scout extends Actor {
  def receive = {
    case Fetch => sender ! AkkaProjectInScala._counter
  }
}
Creation of the Counter actor is now lazy:
lazy val _counter = system.actorOf(Props[Counter], "counter")
So it only executes on the master (determined by the port) and can be fetched like this:
val counter: ActorRef = {
  val scout = system.actorOf(Props[Scout], "scout")
  val ref = Await.result(scout ? Fetch, timeout.duration) match {
    case r: ActorRef => r
  }
  scout ! PoisonPill
  ref
}
And full config
akka {
  actor {
    provider = "akka.remote.RemoteActorRefProvider"
    deployment {
      /scout {
        remote = "akka://ticker@127.0.0.1:2552"
      }
    }
  }
  remote {
    transport = "akka.remote.netty.NettyRemoteTransport"
    log-sent-messages = on
    netty {
      hostname = "127.0.0.1"
    }
  }
}
EDIT: I also found a clean-ish way: check the configuration for "counterPath" and, if present, use actorFor(path); otherwise create the actor. This lets you inject the master at runtime, and the code is much cleaner than with the "scout", but it still has to decide whether to look up or create an actor. I guess this cannot be avoided.
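That check might look roughly like this (a sketch; counterPath is the illustrative config key, and actorFor is the Akka 2.1 lookup API):
val counter: ActorRef = {
  val config = system.settings.config
  if (config.hasPath("counterPath"))
    system.actorFor(config.getString("counterPath")) // look up the existing actor on the master
  else
    system.actorOf(Props[Counter], "counter") // no path configured: create locally
}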
I tried your git project and it actually works fine, aside from a compilation error, and the fact that you must pass -Dakka.remote.netty.port=0 to the JVM when starting the sbt session, not as a parameter to run.
You should also understand that you don't have to start the Counter actor in both processes. In this example it is intended to be created from the client and deployed on the server (port 2552), so you don't have to start it on the server; creating the actor system on the server should be enough for this example.