How to make Play Framework WS use my SSLContext - web-services

I'm using Play 2.3.7 with Scala 2.11.4 and Java 7. I want to use Play WS to connect to an HTTPS endpoint that requires the client to present its certificate. To do that I create my own SSLContext:
val sslContext = {
  val keyStore = KeyStore.getInstance("pkcs12")
  keyStore.load(new FileInputStream(clientKey), clientKeyPass.to[Array])
  val kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm)
  kmf.init(keyStore, clientKeyPass.to[Array])
  val trustStore = KeyStore.getInstance("jks")
  trustStore.load(new FileInputStream(trustStoreFile), trustStorePass.to[Array])
  val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm)
  tmf.init(trustStore)
  val ctx = SSLContext.getInstance("TLSv1.2")
  ctx.init(kmf.getKeyManagers, tmf.getTrustManagers, new SecureRandom())
  ctx
}
I know that the SSLContext is valid, because I can use it with URLConnection successfully:
def urlConnection = Action {
  val conn = new URL(url).openConnection()
  conn.asInstanceOf[HttpsURLConnection].setSSLSocketFactory(sslContext.getSocketFactory)
  conn.connect()
  Ok(scala.io.Source.fromInputStream(conn.getInputStream).getLines().mkString("\n"))
}
But when I try either of the two approaches below, I get java.nio.channels.ClosedChannelException.
def ning = Action.async {
  val builder = new AsyncHttpClientConfig.Builder()
  builder.setSSLContext(sslContext)
  val client = new NingWSClient(builder.build())
  client.url(url).get() map { _ => Ok("ok") }
}
def asyncHttpClient = Action {
  val builder = new AsyncHttpClientConfig.Builder()
  builder.setSSLContext(sslContext)
  val httpClient = new AsyncHttpClient(builder.build())
  httpClient.prepareGet(url).execute().get(10, TimeUnit.SECONDS)
  Ok("ok")
}
I also get the same exception when I follow Will Sargent's suggestion and use NingAsyncHttpClientConfigBuilder with a parsed config (note that the config references exactly the same values as the hand-crafted sslContext does).
def ningFromConfig = Action.async {
  val config = play.api.Configuration(ConfigFactory.parseString(
    s"""
       |ws.ssl {
       |  keyManager = {
       |    stores = [
       |      { type: "PKCS12", path: "$clientKey", password: "$clientKeyPass" }
       |    ]
       |  }
       |  trustManager = {
       |    stores = [
       |      { type: "JKS", path: "$trustStoreFile", password: "$trustStorePass" },
       |    ]
       |  }
       |}
       |# Without this one I get InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
       |ws.ssl.disabledKeyAlgorithms="RSA keySize < 1024"
    """.stripMargin))
  val parser = new DefaultWSConfigParser(config, play.api.Play.application.classloader)
  val builder = new NingAsyncHttpClientConfigBuilder(parser.parse())
  val client = new NingWSClient(builder.build())
  client.url(url).get() map { _ => Ok("ok") }
}
How can I make this work with Play WS?

You should use a client certificate from application.conf as defined in https://www.playframework.com/documentation/2.3.x/KeyStores and https://www.playframework.com/documentation/2.3.x/ExampleSSLConfig -- if you need to use a custom configuration, you should use the parser directly:
https://github.com/playframework/playframework/blob/2.3.x/framework/src/play-ws/src/test/scala/play/api/libs/ws/DefaultWSConfigParserSpec.scala#L14
to create a config, then the Ning builder:
https://github.com/playframework/playframework/blob/2.3.x/framework/src/play-ws/src/test/scala/play/api/libs/ws/ning/NingAsyncHttpClientConfigBuilderSpec.scala#L33
which will give you the AHCConfig object that you can pass in. There are examples in the "Using WSClient" section of https://www.playframework.com/documentation/2.3.x/ScalaWS.
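For the first option, a minimal sketch of the application.conf entries, following the KeyStores/ExampleSSLConfig pages (the store paths and passwords below are placeholders, not values from the question), would be:
ws.ssl {
  keyManager = {
    stores = [
      { type: "PKCS12", path: "conf/client.p12", password: "changeit" }
    ]
  }
  trustManager = {
    stores = [
      { type: "JKS", path: "conf/truststore.jks", password: "changeit" }
    ]
  }
}
With that in place the default client picks up the configured key and trust managers, so a plain WS.url(url).get() (with import play.api.Play.current) should present the client certificate without any hand-built SSLContext.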

Related

Kotlin/Micronaut - Client '/': Connect Error: Connection refused: localhost/127.0.0.1:9999

I'm writing some unit tests for a REST controller using HttpClient, but I'm getting the error in the title in the exchange call. We are using DynamoDBEmbeddedClient to provide a fake database instead of the real one for testing. Before adding the DynamoDBEmbeddedClient the test was working, so maybe I'm missing something in the test or the controller to make it work?
The endpoint is just a PATCH to save or update an entry. The app itself actually works; it's only the test that seems to be broken.
application-test.yml:
micronaut:
  server:
    port: -1
  application:
    name: app
endpoints:
  all:
    port: 9999
dynamodb:
  e-note-table-name: dummy-table-name
environment: test
Controller:
@Validated
@Controller("/api/v1/")
@ExecuteOn(TaskExecutors.IO)
class ENotesController(private val service: ENoteService) {

    @Patch("/")
    fun createNoteRecord(@Body eNote: ENote): Mono<HttpResponse<ENote>> {
        return service.save(eNote)
            .map { HttpResponse.ok(it) }
    }
}
Test:
@MicronautTest(startApplication = false)
@ExtendWith(DynamoDBExtension::class)
@TestInstance(TestInstance.Lifecycle.PER_CLASS)
class ENoteControllerTest {

    @Inject
    @field:Client("/")
    lateinit var client: HttpClient

    @Test
    fun `PATCH eNote - should create a new eNote`() {
        val newENote = eNote()
        val request = HttpRequest
            .PATCH(
                "/api/v1/",
                newENote
            )
        val response = client
            .toBlocking()
            .exchange(request, ENote::class.java)

        assertEquals(HttpStatus.OK, response.status)
        assertEquals(newENote.noteKey, response.body()!!.noteKey)
        assertEquals(newENote.noteEntries, response.body()!!.noteEntries)
    }

    private fun eNote() = ENote(
        studentId = UUID.randomUUID().toString(),
        enrollmentId = UUID.randomUUID().toString(),
        courseId = UUID.randomUUID().toString(),
        activityId = UUID.randomUUID().toString(),
        noteEntries = listOf("Test the", "Patch method"),
        audit = Audit(
            createdBy = UUID.randomUUID().toString(),
            updatedBy = UUID.randomUUID().toString(),
            createdAt = Instant.now(),
            updatedAt = Instant.now()
        )
    )
}
And this is the ReplacementBeanFactory where we replace the real DynamoDB clients with fake ones for the unit tests:
@Factory
class ReplacementBeanFactory {

    @Singleton
    @Replaces(DynamoDbAsyncClient::class)
    fun dynamoDbAsyncClient(): DynamoDbAsyncClient {
        val dynamoDbLocal = DynamoDBEmbedded.create()
        return dynamoDbLocal.dynamoDbAsyncClient()
    }

    @Singleton
    @Replaces(DynamoDbEnhancedAsyncClient::class)
    fun dynamoDbEnhancedAsyncClient(): DynamoDbEnhancedAsyncClient {
        return DynamoDbEnhancedAsyncClient.builder()
            .dynamoDbClient(dynamoDbAsyncClient())
            .build()
    }
}

How to access metrics of Alpakka CommittableSource with back off?

Accessing the metrics of an Alpakka PlainSource seems fairly straightforward, but how can I do the same thing with a CommittableSource?
I currently have a simple consumer, something like this:
class Consumer(implicit val ma: ActorMaterializer, implicit val ec: ExecutionContext) extends Actor {

  private val settings = ConsumerSettings(
    context.system,
    new ByteArrayDeserializer,
    new StringDeserializer)
    .withProperties(...)

  override def receive: Receive = Actor.emptyBehavior

  RestartSource
    .withBackoff(minBackoff = 2.seconds, maxBackoff = 20.seconds, randomFactor = 0.2)(consumer)
    .runForeach { handleMessage }

  private def consumer() = {
    AkkaConsumer
      .committableSource(settings, Subscriptions.topics(Set(topic)))
      .log(getClass.getSimpleName)
      .withAttributes(ActorAttributes.supervisionStrategy(_ => Supervision.Resume))
  }

  private def handleMessage(message: CommittableMessage[Array[Byte], String]): Unit = {
    ...
  }
}
How can I get access to the consumer metrics in this case?
We are using the Java Prometheus client, and I solved my issue with a custom collector that fetches its metrics directly from JMX:
import java.lang.management.ManagementFactory
import java.util
import io.prometheus.client.Collector
import io.prometheus.client.Collector.MetricFamilySamples
import io.prometheus.client.CounterMetricFamily
import io.prometheus.client.GaugeMetricFamily
import javax.management.ObjectName
import scala.collection.JavaConverters._
import scala.collection.mutable
class ConsumerMetricsCollector(val labels: Map[String, String] = Map.empty) extends Collector {
val metrics: mutable.Map[String, MetricFamilySamples] = mutable.Map.empty
def collect: util.List[MetricFamilySamples] = {
val server = ManagementFactory.getPlatformMBeanServer
for {
attrType <- List("consumer-metrics", "consumer-coordinator-metrics", "consumer-fetch-manager-metrics")
name <- server.queryNames(new ObjectName(s"kafka.consumer:type=$attrType,client-id=*"), null).asScala
attrInfo <- server.getMBeanInfo(name).getAttributes.filter { _.getType == "double" }
} yield {
val attrName = attrInfo.getName
val metricLabels = attrName.split(",").map(_.split("=").toList).collect {
case "client-id" :: (id: String) :: Nil => ("client-id", id)
}.toList ++ labels
val metricName = "kafka_consumer_" + attrName.replaceAll(raw"""[^\p{Alnum}]+""", "_")
val labelKeys = metricLabels.map(_._1).asJava
val metric = metrics.getOrElseUpdate(metricName,
if(metricName.endsWith("_total") || metricName.endsWith("_sum")) {
new CounterMetricFamily(metricName, attrInfo.getDescription, labelKeys)
} else {
new GaugeMetricFamily(metricName, attrInfo.getDescription, labelKeys)
}: MetricFamilySamples
)
val metricValue = server.getAttribute(name, attrName).asInstanceOf[Double]
val labelValues = metricLabels.map(_._2).asJava
metric match {
case f: CounterMetricFamily => f.addMetric(labelValues, metricValue)
case f: GaugeMetricFamily => f.addMetric(labelValues, metricValue)
case _ =>
}
}
metrics.values.toList.asJava
}
}
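For completeness, the collector is registered like any other custom collector with the Prometheus Java client. A small usage sketch (the label values here are just examples, not from the original post):
// register the JMX-backed collector with the default registry
val collector = new ConsumerMetricsCollector(Map("app" -> "my-consumer"))
CollectorRegistry.defaultRegistry.register(collector)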

scala elasticsearch using high level rest java client: regexp and scroll

I am using Scala (2.12) to query Elasticsearch (6.5) using the high-level Java REST client (6.5).
libraryDependencies += "org.elasticsearch" % "elasticsearch" % "6.5.4"
libraryDependencies += "org.elasticsearch.client" % "elasticsearch-rest-high-level-client" % "6.5.4"
A basic query (matchQuery) and scroll work fine for me.
val credentialsProvider = new BasicCredentialsProvider
credentialsProvider.setCredentials(AuthScope.ANY, new UsernamePasswordCredentials("user", "password"))

val builder = RestClient.builder(new HttpHost("host", port, "https"))
builder.setHttpClientConfigCallback(new HttpClientConfigCallback() {
  override def customizeHttpClient(httpClientBuilder: HttpAsyncClientBuilder): HttpAsyncClientBuilder =
    httpClientBuilder.setDefaultCredentialsProvider(credentialsProvider)
})
var client = new RestHighLevelClient(builder)

val scroll = new Scroll(TimeValue.timeValueMinutes(1L))

// Index
val searchRequest = new SearchRequest("index")
val searchSourceBuilder = new SearchSourceBuilder()
searchSourceBuilder.query(QueryBuilders.matchQuery("name.field", "value"))
searchSourceBuilder.size(500)
searchRequest.source(searchSourceBuilder)
searchRequest.scroll(scroll)

var searchResponse = client.search(searchRequest, RequestOptions.DEFAULT)
var scrollId = searchResponse.getScrollId
var searchHits = searchResponse.getHits().getHits

while (searchHits != null && searchHits.length > 0) {
  val scrollRequest = new SearchScrollRequest(scrollId)
  scrollRequest.scroll(scroll)
  searchResponse = client.scroll(scrollRequest, RequestOptions.DEFAULT)
  println(searchResponse.getHits.getHits.toList)
  scrollId = searchResponse.getScrollId
  searchHits = searchResponse.getHits.getHits
}
Basically, the above query works as expected, with all the data returned.
Now, in my case I need a regex (and not matchQuery).
So I just changed that line, but now it does not fetch anything:
searchSourceBuilder.query(QueryBuilders.regexpQuery("name.field", "value"))
What am I missing here? How do I use regexpQuery?
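One likely explanation, not stated in the original post: an Elasticsearch regexp query is not a substring match. The pattern has to match an entire indexed term, and it is evaluated against the analyzed terms of name.field rather than the original string. A sketch that behaves more like a "contains" match (field names here are assumptions based on the question) would be:
searchSourceBuilder.query(QueryBuilders.regexpQuery("name.field", ".*value.*"))
// or, if the mapping defines an untokenized sub-field, anchor against the whole stored value:
// searchSourceBuilder.query(QueryBuilders.regexpQuery("name.field.keyword", "value.*"))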

Is it possible to create an "infinite" stream from a database table using Akka Stream

I'm playing with Akka Streams 2.4.2 and am wondering if it's possible to set up a stream that uses a database table as a source, so that whenever a record is added to the table it is materialized and pushed downstream?
UPDATE: 2/23/16
I've implemented the solution from @PH88. Here's my table definition:
case class Record(id: Int, value: String)

class Records(tag: Tag) extends Table[Record](tag, "my_stream") {
  def id = column[Int]("id")
  def value = column[String]("value")
  def * = (id, value) <> (Record.tupled, Record.unapply)
}
Here's the implementation:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
try {
  val newRecStream = Source.unfold((0, List[Record]())) { n =>
    try {
      val q = for (r <- TableQuery[Records].filter(row => row.id > n._1)) yield (r)
      val r = Source.fromPublisher(db.stream(q.result)).collect {
        case rec => println(s"${rec.id}, ${rec.value}"); rec
      }.runFold((n._1, List[Record]())) {
        case ((id, xs), current) => (current.id, current :: xs)
      }
      val answer: (Int, List[Record]) = Await.result(r, 5.seconds)
      Option(answer, None)
    }
    catch { case e: Exception => println(e); Option(n, e) }
  }
  Await.ready(newRecStream.throttle(1, 1.second, 1, ThrottleMode.shaping).runForeach(_ => ()), Duration.Inf)
}
finally {
  system.shutdown
  db.close
}
But my problem is that when I attempt to call flatMapConcat, the element type I get is Serializable (presumably because the success branch emits a None while the catch branch emits the Exception, so the compiler infers their common supertype for the unfolded element).
UPDATE: 2/24/16
Updated to try the db.run suggestion from @PH88:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
val disableAutoCommit = SimpleDBIO(_.connection.setAutoCommit(false))
val queryLimit = 1
try {
  val newRecStream = Source.unfoldAsync(0) { n =>
    val q = TableQuery[Records].filter(row => row.id > n).take(queryLimit)
    db.run(q.result).map { recs =>
      Some(recs.last.id, recs)
    }
  }
    .throttle(1, 1.second, 1, ThrottleMode.shaping)
    .flatMapConcat { recs =>
      Source.fromIterator(() => recs.iterator)
    }
    .runForeach { rec =>
      println(s"${rec.id}, ${rec.value}")
    }
  Await.ready(newRecStream, Duration.Inf)
}
catch {
  case ex: Throwable => println(ex)
}
finally {
  system.shutdown
  db.close
}
This works (I changed the query limit to 1 since I only have a couple of items in my database table currently), except that once it prints the last row in the table, the program exits. Here's my log output:
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/Users/xxxxxxx/dev/src/scratch/scala/fpp-in-scala/target/scala-2.11/classes/logback.xml]
17:09:28,062 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
17:09:28,064 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
17:09:28,079 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
17:09:28,102 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [application] to DEBUG
17:09:28,103 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
17:09:28,103 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
17:09:28,103 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
17:09:28,104 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator#4278284b - Registering current configuration as safe fallback point
17:09:28.117 [main] INFO com.zaxxer.hikari.HikariDataSource - pg-postgres - is starting.
1, WASSSAAAAAAAP!
2, WHAAAAT?!?
3, booyah!
4, what!
5, This rocks!
6, Again!
7, Again!2
8, I love this!
9, Akka Streams rock
10, Tuning jdbc
17:09:39.000 [main] INFO com.zaxxer.hikari.pool.HikariPool - pg-postgres - is closing down.
Process finished with exit code 0
Found the missing piece - need to replace this:
Some(recs.last.id, recs)
with this:
val lastId = if(recs.isEmpty) n else recs.last.id
Some(lastId, recs)
The call to recs.last.id was throwing java.lang.UnsupportedOperationException: empty.last when the result set was empty.
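For reference, the same guard can be written with lastOption (a small sketch, not part of the original post):
Some(recs.lastOption.map(_.id).getOrElse(n), recs)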
In general, a SQL database is a 'passive' construct and does not actively push changes like you described. You can only 'simulate' the 'push' with periodic polling, like:
val newRecStream = Source
  // Query for table changes
  .unfold(initState) { lastState =>
    // query for new data since lastState and save the current state into newState...
    Some((newState, newRecords))
  }
  // Throttle to limit the poll frequency
  .throttle(...)
  // breaks down into individual records...
  .flatMapConcat { newRecords =>
    Source.unfold(newRecords) { pendingRecords =>
      if (records is empty) {
        None
      } else {
        // take one record from pendingRecords and save to newRec. Save the rest into remainingRecords.
        Some(remainingRecords, newRec)
      }
    }
  }
Updated: 2/24/2016
Pseudo code example based on the 2/23/2016 updates of the question:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
val queryLimit = 10
try {
val completion = Source
.unfoldAsync(0) { lastRowId =>
val q = TableQuery[Records].filter(row => row.id > lastRowId).take(queryLimit)
db.run(q.result).map { recs =>
Some(recs.last.id, recs)
}
}
.throttle(1, 1.second, 1, ThrottleMode.shaping)
.flatMapConcat { recs =>
Source.fromIterator(() => recs.iterator)
}
.runForeach { rec =>
println(s"${rec.id}, ${rec.value}")
}
// Block forever
Await.ready(completion, Duration.Inf)
} catch {
case ex: Throwable => println(ex)
} finally {
system.shutdown
db.close
}
It will repeatedly execute the query in unfoldAsync against the DB, retrieving at most 10 (queryLimit) records at a time and sending the records downstream (-> throttle -> flatMapConcat -> runForeach). The Await at the end will actually block forever.
Updated: 2/25/2016
Executable 'proof-of-concept' code:
import akka.actor.ActorSystem
import akka.stream.{ThrottleMode, ActorMaterializer}
import akka.stream.scaladsl.Source
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._
object Infinite extends App{
implicit val system = ActorSystem("Publisher")
implicit val ec = system.dispatcher
implicit val materializer = ActorMaterializer()
case class Record(id: Int, value: String)
try {
val completion = Source
.unfoldAsync(0) { lastRowId =>
Future {
val recs = (lastRowId to lastRowId + 10).map(i => Record(i, s"rec#$i"))
Some(recs.last.id, recs)
}
}
.throttle(1, 1.second, 1, ThrottleMode.Shaping)
.flatMapConcat { recs =>
Source.fromIterator(() => recs.iterator)
}
.runForeach { rec =>
println(rec)
}
Await.ready(completion, Duration.Inf)
} catch {
case ex: Throwable => println(ex)
} finally {
system.shutdown
}
}
Here is working code for infinite database streaming. This has been tested with millions of records being inserted into a PostgreSQL database while the streaming app is running:
package infinite.streams.db
import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.alpakka.slick.scaladsl.SlickSession
import akka.stream.scaladsl.{Flow, Sink, Source}
import akka.stream.{ActorMaterializer, ThrottleMode}
import org.slf4j.LoggerFactory
import slick.basic.DatabaseConfig
import slick.jdbc.JdbcProfile
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContextExecutor}
case class Record(id: Int, value: String) {
val content = s"<ROW><ID>$id</ID><VALUE>$value</VALUE></ROW>"
}
object InfiniteStreamingApp extends App {
println("Starting app...")
implicit val system: ActorSystem = ActorSystem("Publisher")
implicit val ec: ExecutionContextExecutor = system.dispatcher
implicit val materializer: ActorMaterializer = ActorMaterializer()
println("Initializing database configuration...")
val databaseConfig: DatabaseConfig[JdbcProfile] = DatabaseConfig.forConfig[JdbcProfile]("postgres3")
implicit val session: SlickSession = SlickSession.forConfig(databaseConfig)
import databaseConfig.profile.api._
class Records(tag: Tag) extends Table[Record](tag, "test2") {
def id = column[Int]("c1")
def value = column[String]("c2")
def * = (id, value) <> (Record.tupled, Record.unapply)
}
val db = databaseConfig.db
println("Prime for streaming...")
val logic: Flow[(Int, String), (Int, String), NotUsed] = Flow[(Int, String)].map {
case (id, value) => (id, value.toUpperCase)
}
val fetchSize = 5
try {
val done = Source
.unfoldAsync(0) {
lastId =>
println(s"Fetching next: $fetchSize records with id > $lastId")
val query = TableQuery[Records].filter(_.id > lastId).take(fetchSize)
db.run(query.result.withPinnedSession)
.map {
recs => Some(recs.last.id, recs)
}
}
.throttle(5, 1.second, 1, ThrottleMode.shaping)
.flatMapConcat {
recs => Source.fromIterator(() => recs.iterator)
}
.map(x => (x.id, x.content))
.via(logic)
.log("*******Post Transformation******")
// .runWith(Sink.foreach(r => println("SINK: " + r._2)))
// Use runForeach or runWith(Sink)
.runForeach(rec => println("REC: " + rec))
println("Waiting for result....")
Await.ready(done, Duration.Inf)
} catch {
case ex: Throwable => println(ex.getMessage)
} finally {
println("Streaming end successfully")
db.close()
system.terminate()
}
}
application.conf
akka {
loggers = ["akka.event.slf4j.Slf4jLogger"]
loglevel = "INFO"
}
# Load using SlickSession.forConfig("slick-postgres")
postgres3 {
profile = "slick.jdbc.PostgresProfile$"
db {
dataSourceClass = "slick.jdbc.DriverDataSource"
properties = {
driver = "org.postgresql.Driver"
url = "jdbc:postgresql://localhost/testdb"
user = "postgres"
password = "postgres"
}
numThreads = 2
}
}

akka-http file upload does not upload entire file?

I have the following server
object FileUploadServer {

  implicit val system = ActorSystem("fileUploadServer")
  implicit val materializer = ActorMaterializer()
  implicit val ec = system.dispatcher

  val route =
    path("upload") {
      post {
        extractRequest { request =>
          println(request)
          val file = File.createTempFile("debug", ".zip")
          val futureDone: Future[Long] = request.entity.dataBytes.runWith(Sink.file(file))
          onComplete(futureDone) { result =>
            complete(s"path:${file.getAbsolutePath}, size:${result.get}")
          }
        }
      }
    }

  def main(args: Array[String]) {
    val serverFuture: Future[ServerBinding] = Http().bindAndHandle(route, "localhost", 8080)
    println("FileUpload server is running at http://localhost:8080\nPress RETURN to stop ...")
    readLine()
    serverFuture
      .flatMap(_.unbind())
      .onComplete(_ => system.terminate())
  }
}
and I have the following property in application.conf:
akka.http.parsing.max-content-length = 1000m
But when I try to upload a 300MB zip file, only about 159MB arrives:
$ time curl -X POST -H 'Content-Type: application/octet-stream' -d @bluecoat_proxy_big.zip http://localhost:8080/upload
path:/var/folders/_2/q9xz_8ks73d0h6hq80c7439s8_x7qx/T/debug3960545903381659311.zip, size:155858257
real 0m1.426s
user 0m0.375s
sys 0m0.324s
What am I missing?
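One detail worth checking, independent of the akka-http settings and offered only as an observation about the curl invocation above: curl's -d/--data option strips carriage returns and newlines from the posted file, which shrinks binary payloads, whereas --data-binary @bluecoat_proxy_big.zip sends the bytes unmodified.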