Listing folders on S3 - amazon-web-services

I'm just trying to list and display a whole bucket, but I can't get rid of the NoSuchKeyException, which is odd.
The code below connects and prints to standard out:
import software.amazon.awssdk.auth.credentials.ProfileCredentialsProvider
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.s3.S3Client
import software.amazon.awssdk.services.s3.model.EncodingType
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request
import software.amazon.awssdk.services.s3.model.ListObjectsV2Response
import software.amazon.awssdk.services.s3.paginators.ListObjectsV2Iterable
import java.net.URI

internal class S3ObjectsOps {
    companion object {
        private val BUCKET: String = "mybucket"
        private val REGION_IRE = Region.EU_WEST_1

        val s3c = S3Client.builder()
            .region(REGION_IRE)
            .endpointOverride(URI("https://${BUCKET}.s3-eu-west-1.amazonaws.com/"))
            .credentialsProvider(ProfileCredentialsProvider.builder()
                .profileName("default")
                .build())
            .build()

        fun lsfolders(bucket: String) {
            val listReq = ListObjectsV2Request.builder()
                .bucket(bucket)
                .delimiter("/")
                .prefix("")
                .maxKeys(10_000)
                .build()
            val listRes = s3c.listObjectsV2Paginator(listReq)
            listRes.contents().onEach { f ->
                if (!f.key().isNullOrBlank()) {
                    println(">> ${f.key()}")
                } else {
                    println(" = ")
                }
            }
        }

        @JvmStatic
        fun main(args: Array<String>) {
            val bucket: String = if (args.isNotEmpty() && args[0].isNotBlank()) args[0] else BUCKET
            println("s3 ls test for bucket ${bucket}")
            lsfolders(bucket)
        }
    }
}
Which raises:
software.amazon.awssdk.services.s3.model.NoSuchKeyException: The specified key does not exist. (Service: S3Client; Status Code: 404; Request ID: C8AF9CB788D77F74)
Exception in thread "main" software.amazon.awssdk.services.s3.model.NoSuchKeyException: The specified key does not exist. (Service: S3Client; Status Code: 404; Request ID: C8AF9CB788D77F74)
at software.amazon.awssdk.core.http.pipeline.stages.HandleResponseStage.handleErrorResponse(HandleResponseStage.java:114)
The same exception is raised when using a stream:
listRes.contents().stream().forEach { content -> println(" Key: " + content.key() + " | ") }
Thanks, folks!

The NoSuchKeyException most likely comes from the endpointOverride: pointing the client at https://mybucket.s3-eu-west-1.amazonaws.com/ makes the SDK add the bucket to the request path a second time, so the listing resolves against a key that does not exist. The right ListObjectsV2Request is simply:
val listReq = ListObjectsV2Request.builder()
    .bucket(bucket)
    .delimiter("/")
    .build()
val listRes = s3c.listObjectsV2Paginator(listReq)
listRes.contents().stream().forEach { content -> println(" Key: " + content.key() + " | ") }
Then the S3Client must be initialized with minimal parameters, without the endpoint override:
val s3c = S3Client.builder()
    .credentialsProvider(ProfileCredentialsProvider.builder().profileName("default").build())
    .region(REGION_IRE)
    .build()
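Note that with a "/" delimiter, folder-like entries are returned through commonPrefixes() rather than contents(). A minimal sketch of reading them (shown in Scala against the same Java SDK v2 paginator API; the minimal S3Client above is assumed as s3c, and "mybucket" is a placeholder):
import software.amazon.awssdk.services.s3.model.ListObjectsV2Request

// Sketch: print the top-level "folders" (common prefixes) of a bucket.
val req = ListObjectsV2Request.builder()
  .bucket("mybucket") // placeholder bucket name
  .delimiter("/")
  .build()
s3c.listObjectsV2Paginator(req)
  .commonPrefixes()   // folder-like prefixes, e.g. "photos/"
  .forEach(p => println(s"dir: ${p.prefix()}"))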

Related

S3 bucket GetObject throws error Method not found: 'Amazon.Runtime.Internal.Transform.UnmarshallerContext

I am trying to read an object from my S3 bucket in an EKS cluster application method, as listed below.
[HttpGet("ReadFromS3")]
[ProducesResponseType(typeof(ApiResponse<string>), 200)]
[ProducesResponseType(500)]
public async Task<string> ReadFromS3()
{
    AmazonS3Config config = new AmazonS3Config { RegionEndpoint = RegionEndpoint.EUWest2 };
    string aws_tokenFile = Environment.GetEnvironmentVariable("AWS_WEB_IDENTITY_TOKEN_FILE");
    string aws_role_arn = Environment.GetEnvironmentVariable("AWS_ROLE_ARN");
    string logs = string.Empty;
    using (var stsClient = new AmazonSecurityTokenServiceClient(new AnonymousAWSCredentials()))
    {
        logs += "AmazonSecurityTokenServiceClient";
        var assumeRoleResult = await stsClient.AssumeRoleWithWebIdentityAsync(new AssumeRoleWithWebIdentityRequest
        {
            WebIdentityToken = System.IO.File.ReadAllText(aws_tokenFile),
            RoleArn = aws_role_arn,
            RoleSessionName = "S3Session",
            DurationSeconds = 900
        });
        if (assumeRoleResult != null)
        {
            logs += " -> assumeRoleResult";
            var credentials = assumeRoleResult.Credentials;
            var basicCredentials = new BasicAWSCredentials(credentials.AccessKeyId,
                credentials.SecretAccessKey);
            logs += " -> BasicAWSCredentials";
            using (var client = new AmazonS3Client(basicCredentials, RegionEndpoint.EUWest2))
            {
                logs += " -> AmazonS3Client";
                try
                {
                    GetObjectRequest request = new GetObjectRequest
                    {
                        BucketName = bucketName,
                        Key = keyName
                    };
                    GetObjectResponse response = await client.GetObjectAsync(request);
                    logs += " -> GetObjectAsync";
                    Stream responseStream = response.ResponseStream;
                    StreamReader reader = new StreamReader(responseStream);
                    logs += " -> ResponseStream";
                    return await reader.ReadToEndAsync();
                }
                catch (Exception ex)
                {
                    return "Exception in ReadFromS3 :" + ex.Message + logs;
                }
            }
        }
        else
        {
            return "Unable to Connect";
        }
    }
}
I am getting an error at the line GetObjectResponse response = await client.GetObjectAsync(request);
Method not found: 'Amazon.Runtime.Internal.Transform.UnmarshallerContext Amazon.Runtime.Internal.Transform.ResponseUnmarshaller.CreateContext

Get IAM Role information programmatically using Scala

I want to retrieve the role information for a given role name, for example to get the exact ARN identifier.
Somehow the code below is not working. Sadly, there is no error message in CloudWatch.
import software.amazon.awssdk.services.iam.*;
import com.amazonaws.services.identitymanagement.model._
import com.amazonaws.services.identitymanagement.{AmazonIdentityManagementClient, AmazonIdentityManagement, AmazonIdentityManagementClientBuilder}

// ....
val iamClient = AmazonIdentityManagementClient
  .builder()
  .withRegion("eu-central-1")
  .build()

val roleRequest = new GetRoleRequest();
roleRequest.setRoleName("InfrastructureStack-StandardRoleD-HBLE12VPTWQ")

val result = iamClient.getRole(roleRequest) // <-- Nothing happens after this line
println("wont execute this println statement")
Other services like CognitoIdentityProvider are working perfectly fine.
I also tried the builder pattern for the GetRoleRequest and IamClient.
I got this IAM V2 code working fine. As stated in my comment, set up your dev environment to use the AWS SDK for Java V2.
package com.example.iam;

import software.amazon.awssdk.services.iam.model.*;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.iam.IamClient;

public class GetRole {

    public static void main(String[] args) {

        final String USAGE = "\n" +
            "Usage:\n" +
            "    <roleName> \n\n" +
            "Where:\n" +
            "    roleName - the name of the IAM role to look up. \n\n";

        // if (args.length != 1) {
        //     System.out.println(USAGE);
        //     System.exit(1);
        // }

        String roleName = "DynamoDBAutoscaleRole"; //args[0];
        Region region = Region.AWS_GLOBAL;
        IamClient iam = IamClient.builder()
            .region(region)
            .build();

        getRoleInformation(iam, roleName);
        System.out.println("Done");
        iam.close();
    }

    public static void getRoleInformation(IamClient iam, String roleName) {
        try {
            GetRoleRequest roleRequest = GetRoleRequest.builder()
                .roleName(roleName)
                .build();

            GetRoleResponse response = iam.getRole(roleRequest);
            System.out.println("The ARN of the role is " + response.role().arn());
        } catch (IamException e) {
            System.err.println(e.awsErrorDetails().errorMessage());
            System.exit(1);
        }
    }
}
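Since the question is in Scala, here is a minimal sketch of the same V2 call from Scala (assuming the same software.amazon.awssdk:iam dependency; the role name is the one from the question):
import software.amazon.awssdk.regions.Region
import software.amazon.awssdk.services.iam.IamClient
import software.amazon.awssdk.services.iam.model.GetRoleRequest

object GetRoleArn extends App {
  // IAM is a global service, hence Region.AWS_GLOBAL rather than a regular region.
  val iam = IamClient.builder().region(Region.AWS_GLOBAL).build()
  val request = GetRoleRequest.builder()
    .roleName("InfrastructureStack-StandardRoleD-HBLE12VPTWQ")
    .build()
  println(s"The ARN of the role is ${iam.getRole(request).role().arn()}")
  iam.close()
}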

How to access metrics of Alpakka CommittableSource with back off?

Accessing the metrics of an Alpakka PlainSource seems fairly straightforward, but how can I do the same thing with a CommittableSource?
I currently have a simple consumer, something like this:
class Consumer(implicit val ma: ActorMaterializer, implicit val ec: ExecutionContext) extends Actor {

  private val settings = ConsumerSettings(
    context.system,
    new ByteArrayDeserializer,
    new StringDeserializer)
    .withProperties(...)

  override def receive: Receive = Actor.emptyBehavior

  RestartSource
    .withBackoff(minBackoff = 2.seconds, maxBackoff = 20.seconds, randomFactor = 0.2)(consumer)
    .runForeach { handleMessage }

  private def consumer() = {
    AkkaConsumer
      .committableSource(settings, Subscriptions.topics(Set(topic)))
      .log(getClass.getSimpleName)
      .withAttributes(ActorAttributes.supervisionStrategy(_ => Supervision.Resume))
  }

  private def handleMessage(message: CommittableMessage[Array[Byte], String]): Unit = {
    ...
  }
}
How can I get access to the consumer metrics in this case?
We are using the Prometheus Java client, and I solved my issue with a custom collector that fetches its metrics directly from JMX:
import java.lang.management.ManagementFactory
import java.util

import io.prometheus.client.Collector
import io.prometheus.client.Collector.MetricFamilySamples
import io.prometheus.client.CounterMetricFamily
import io.prometheus.client.GaugeMetricFamily
import javax.management.ObjectName

import scala.collection.JavaConverters._
import scala.collection.mutable

class ConsumerMetricsCollector(val labels: Map[String, String] = Map.empty) extends Collector {

  val metrics: mutable.Map[String, MetricFamilySamples] = mutable.Map.empty

  def collect: util.List[MetricFamilySamples] = {
    val server = ManagementFactory.getPlatformMBeanServer
    for {
      attrType <- List("consumer-metrics", "consumer-coordinator-metrics", "consumer-fetch-manager-metrics")
      name     <- server.queryNames(new ObjectName(s"kafka.consumer:type=$attrType,client-id=*"), null).asScala
      attrInfo <- server.getMBeanInfo(name).getAttributes.filter { _.getType == "double" }
    } yield {
      val attrName = attrInfo.getName
      // Pull the client-id out of the MBean ObjectName so it becomes a metric label.
      val metricLabels = name.toString.split(",").map(_.split("=").toList).collect {
        case "client-id" :: (id: String) :: Nil => ("client-id", id)
      }.toList ++ labels
      val metricName = "kafka_consumer_" + attrName.replaceAll(raw"""[^\p{Alnum}]+""", "_")
      val labelKeys = metricLabels.map(_._1).asJava
      val metric = metrics.getOrElseUpdate(metricName,
        if (metricName.endsWith("_total") || metricName.endsWith("_sum")) {
          new CounterMetricFamily(metricName, attrInfo.getDescription, labelKeys)
        } else {
          new GaugeMetricFamily(metricName, attrInfo.getDescription, labelKeys)
        }: MetricFamilySamples
      )
      val metricValue = server.getAttribute(name, attrName).asInstanceOf[Double]
      val labelValues = metricLabels.map(_._2).asJava
      metric match {
        case f: CounterMetricFamily => f.addMetric(labelValues, metricValue)
        case f: GaugeMetricFamily   => f.addMetric(labelValues, metricValue)
        case _ =>
      }
    }
    metrics.values.toList.asJava
  }
}
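For completeness, a usage sketch, assuming the Prometheus Java client's default registry (the label map is illustrative):
// Register the collector once at application startup; every subsequent
// Prometheus scrape will then include the Kafka consumer JMX metrics.
val kafkaCollector: ConsumerMetricsCollector =
  new ConsumerMetricsCollector(Map("service" -> "my-consumer")).register()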

Is it possible to create an "infinite" stream from a database table using Akka Stream

I'm playing with Akka Streams 2.4.2 and am wondering if it's possible to set up a stream that uses a database table as a source, where any record added to the table is materialized and pushed downstream?
UPDATE: 2/23/16
I've implemented the solution from @PH88. Here's my table definition:
case class Record(id: Int, value: String)

class Records(tag: Tag) extends Table[Record](tag, "my_stream") {
  def id = column[Int]("id")
  def value = column[String]("value")
  def * = (id, value) <> (Record.tupled, Record.unapply)
}
Here's the implementation:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")

try {
  val newRecStream = Source.unfold((0, List[Record]())) { n =>
    try {
      val q = for (r <- TableQuery[Records].filter(row => row.id > n._1)) yield (r)
      val r = Source.fromPublisher(db.stream(q.result)).collect {
        case rec => println(s"${rec.id}, ${rec.value}"); rec
      }.runFold((n._1, List[Record]())) {
        case ((id, xs), current) => (current.id, current :: xs)
      }
      val answer: (Int, List[Record]) = Await.result(r, 5.seconds)
      Option(answer, None)
    }
    catch { case e: Exception => println(e); Option(n, e) }
  }

  Await.ready(newRecStream.throttle(1, 1.second, 1, ThrottleMode.shaping).runForeach(_ => ()), Duration.Inf)
}
finally {
  system.shutdown
  db.close
}
But my problem is that when I attempt to call flatMapConcat, the type I get is Serializable (the try branch produces Option((answer, None)) while the catch produces Option((n, e)), so the unfold element type is inferred as their common supertype, Serializable).
UPDATE: 2/24/16
Updated to try the db.run suggestion from @PH88:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
val disableAutoCommit = SimpleDBIO(_.connection.setAutoCommit(false))
val queryLimit = 1

try {
  val newRecStream = Source.unfoldAsync(0) { n =>
    val q = TableQuery[Records].filter(row => row.id > n).take(queryLimit)
    db.run(q.result).map { recs =>
      Some(recs.last.id, recs)
    }
  }
  .throttle(1, 1.second, 1, ThrottleMode.shaping)
  .flatMapConcat { recs =>
    Source.fromIterator(() => recs.iterator)
  }
  .runForeach { rec =>
    println(s"${rec.id}, ${rec.value}")
  }

  Await.ready(newRecStream, Duration.Inf)
}
catch {
  case ex: Throwable => println(ex)
}
finally {
  system.shutdown
  db.close
}
This works (I changed the query limit to 1 since I only have a couple of items in my database table currently), except that once it prints the last row in the table, the program exits. Here's my log output:
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/Users/xxxxxxx/dev/src/scratch/scala/fpp-in-scala/target/scala-2.11/classes/logback.xml]
17:09:28,062 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
17:09:28,064 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
17:09:28,079 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
17:09:28,102 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [application] to DEBUG
17:09:28,103 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
17:09:28,103 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
17:09:28,103 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
17:09:28,104 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator#4278284b - Registering current configuration as safe fallback point
17:09:28.117 [main] INFO com.zaxxer.hikari.HikariDataSource - pg-postgres - is starting.
1, WASSSAAAAAAAP!
2, WHAAAAT?!?
3, booyah!
4, what!
5, This rocks!
6, Again!
7, Again!2
8, I love this!
9, Akka Streams rock
10, Tuning jdbc
17:09:39.000 [main] INFO com.zaxxer.hikari.pool.HikariPool - pg-postgres - is closing down.
Process finished with exit code 0
Found the missing piece - need to replace this:
Some(recs.last.id, recs)
with this:
val lastId = if(recs.isEmpty) n else recs.last.id
Some(lastId, recs)
The call to recs.last.id was throwing java.lang.UnsupportedOperationException: empty.last when the result set was empty.
In general, a SQL database is a 'passive' construct and does not actively push changes the way you described. You can only 'simulate' the 'push' with periodic polling, like:
val newRecStream = Source
  // Query for table changes
  .unfold(initState) { lastState =>
    // query for new data since lastState and save the current state into newState...
    Some((newState, newRecords))
  }
  // Throttle to limit the poll frequency
  .throttle(...)
  // break the batch down into individual records...
  .flatMapConcat { newRecords =>
    Source.unfold(newRecords) { pendingRecords =>
      if (pendingRecords.isEmpty) {
        None
      } else {
        // take one record from pendingRecords as newRec; keep the rest as remainingRecords
        Some((remainingRecords, newRec))
      }
    }
  }
Updated: 2/24/2016
Pseudo-code example based on the 2/23/2016 updates of the question:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
val queryLimit = 10

try {
  val completion = Source
    .unfoldAsync(0) { lastRowId =>
      val q = TableQuery[Records].filter(row => row.id > lastRowId).take(queryLimit)
      db.run(q.result).map { recs =>
        Some(recs.last.id, recs)
      }
    }
    .throttle(1, 1.second, 1, ThrottleMode.shaping)
    .flatMapConcat { recs =>
      Source.fromIterator(() => recs.iterator)
    }
    .runForeach { rec =>
      println(s"${rec.id}, ${rec.value}")
    }

  // Block forever
  Await.ready(completion, Duration.Inf)
} catch {
  case ex: Throwable => println(ex)
} finally {
  system.shutdown
  db.close
}
It will repeatedly execute the query in unfoldAsync against the DB, retrieving at most 10 (queryLimit) records at a time, and send the records downstream (-> throttle -> flatMapConcat -> runForeach). The Await at the end will actually block forever.
Updated: 2/25/2016
Executable 'proof-of-concept' code:
import akka.actor.ActorSystem
import akka.stream.{ThrottleMode, ActorMaterializer}
import akka.stream.scaladsl.Source

import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

object Infinite extends App {
  implicit val system = ActorSystem("Publisher")
  implicit val ec = system.dispatcher
  implicit val materializer = ActorMaterializer()

  case class Record(id: Int, value: String)

  try {
    val completion = Source
      .unfoldAsync(0) { lastRowId =>
        Future {
          val recs = (lastRowId to lastRowId + 10).map(i => Record(i, s"rec#$i"))
          Some(recs.last.id, recs)
        }
      }
      .throttle(1, 1.second, 1, ThrottleMode.Shaping)
      .flatMapConcat { recs =>
        Source.fromIterator(() => recs.iterator)
      }
      .runForeach { rec =>
        println(rec)
      }

    Await.ready(completion, Duration.Inf)
  } catch {
    case ex: Throwable => println(ex)
  } finally {
    system.shutdown
  }
}
Here is working code for infinite database streaming. It has been tested with millions of records being inserted into a PostgreSQL database while the streaming app was running:
package infinite.streams.db

import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.alpakka.slick.scaladsl.SlickSession
import akka.stream.scaladsl.{Flow, Sink, Source}
import akka.stream.{ActorMaterializer, ThrottleMode}
import org.slf4j.LoggerFactory
import slick.basic.DatabaseConfig
import slick.jdbc.JdbcProfile

import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContextExecutor}

case class Record(id: Int, value: String) {
  val content = s"<ROW><ID>$id</ID><VALUE>$value</VALUE></ROW>"
}

object InfiniteStreamingApp extends App {

  println("Starting app...")
  implicit val system: ActorSystem = ActorSystem("Publisher")
  implicit val ec: ExecutionContextExecutor = system.dispatcher
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  println("Initializing database configuration...")
  val databaseConfig: DatabaseConfig[JdbcProfile] = DatabaseConfig.forConfig[JdbcProfile]("postgres3")
  implicit val session: SlickSession = SlickSession.forConfig(databaseConfig)

  import databaseConfig.profile.api._

  class Records(tag: Tag) extends Table[Record](tag, "test2") {
    def id = column[Int]("c1")
    def value = column[String]("c2")
    def * = (id, value) <> (Record.tupled, Record.unapply)
  }

  val db = databaseConfig.db

  println("Prime for streaming...")
  val logic: Flow[(Int, String), (Int, String), NotUsed] = Flow[(Int, String)].map {
    case (id, value) => (id, value.toUpperCase)
  }

  val fetchSize = 5

  try {
    val done = Source
      .unfoldAsync(0) { lastId =>
        println(s"Fetching next: $fetchSize records with id > $lastId")
        val query = TableQuery[Records].filter(_.id > lastId).take(fetchSize)
        db.run(query.result.withPinnedSession)
          .map { recs =>
            // Keep the previous id when the result set is empty; recs.last
            // would otherwise throw empty.last, as noted earlier in the thread.
            val nextId = if (recs.isEmpty) lastId else recs.last.id
            Some(nextId, recs)
          }
      }
      .throttle(5, 1.second, 1, ThrottleMode.shaping)
      .flatMapConcat { recs =>
        Source.fromIterator(() => recs.iterator)
      }
      .map(x => (x.id, x.content))
      .via(logic)
      .log("*******Post Transformation******")
      // .runWith(Sink.foreach(r => println("SINK: " + r._2)))
      // Use runForeach or runWith(Sink)
      .runForeach(rec => println("REC: " + rec))

    println("Waiting for result....")
    Await.ready(done, Duration.Inf)
  } catch {
    case ex: Throwable => println(ex.getMessage)
  } finally {
    println("Streaming end successfully")
    db.close()
    system.terminate()
  }
}
application.conf
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "INFO"
}

# Load using SlickSession.forConfig("postgres3")
postgres3 {
  profile = "slick.jdbc.PostgresProfile$"
  db {
    dataSourceClass = "slick.jdbc.DriverDataSource"
    properties = {
      driver = "org.postgresql.Driver"
      url = "jdbc:postgresql://localhost/testdb"
      user = "postgres"
      password = "postgres"
    }
    numThreads = 2
  }
}

How to make Play Framework WS use my SSLContext

I'm using Play 2.3.7 with Scala 2.11.4 and Java 7. I want to use Play WS to connect to an HTTPS endpoint that requires the client to present its certificate. To do that, I create my own SSLContext:
val sslContext = {
  val keyStore = KeyStore.getInstance("pkcs12")
  keyStore.load(new FileInputStream(clientKey), clientKeyPass.to[Array])
  val kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm)
  kmf.init(keyStore, clientKeyPass.to[Array])

  val trustStore = KeyStore.getInstance("jks")
  trustStore.load(new FileInputStream(trustStoreFile), trustStorePass.to[Array])
  val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm)
  tmf.init(trustStore)

  val ctx = SSLContext.getInstance("TLSv1.2")
  ctx.init(kmf.getKeyManagers, tmf.getTrustManagers, new SecureRandom())
  ctx
}
I know that the SSLContext is valid, because I can use it with URLConnection successfully:
def urlConnection = Action {
  val conn = new URL(url).openConnection()
  conn.asInstanceOf[HttpsURLConnection].setSSLSocketFactory(sslContext.getSocketFactory)
  conn.connect()
  Ok(scala.io.Source.fromInputStream(conn.getInputStream).getLines().mkString("\n"))
}
But when I try either of the two ways below, I get java.nio.channels.ClosedChannelException.
def ning = Action.async {
  val builder = new AsyncHttpClientConfig.Builder()
  builder.setSSLContext(sslContext)
  val client = new NingWSClient(builder.build())
  client.url(url).get() map { _ => Ok("ok") }
}

def asyncHttpClient = Action {
  val builder = new AsyncHttpClientConfig.Builder()
  builder.setSSLContext(sslContext)
  val httpClient = new AsyncHttpClient(builder.build())
  httpClient.prepareGet(url).execute().get(10, TimeUnit.SECONDS)
  Ok("ok")
}
I also get the same exception when I follow the suggestion of Will Sargent and use NingAsyncHttpClientConfigBuilder with a parsed config (note that the config references exactly the same values as the hand-crafted sslContext does):
def ningFromConfig = Action.async {
  val config = play.api.Configuration(ConfigFactory.parseString(
    s"""
       |ws.ssl {
       |  keyManager = {
       |    stores = [
       |      { type: "PKCS12", path: "$clientKey", password: "$clientKeyPass" }
       |    ]
       |  }
       |  trustManager = {
       |    stores = [
       |      { type: "JKS", path: "$trustStoreFile", password: "$trustStorePass" },
       |    ]
       |  }
       |}
       |# Without this one I get InvalidAlgorithmParameterException: the trustAnchors parameter must be non-empty
       |ws.ssl.disabledKeyAlgorithms="RSA keySize < 1024"
    """.stripMargin))
  val parser = new DefaultWSConfigParser(config, play.api.Play.application.classloader)
  val builder = new NingAsyncHttpClientConfigBuilder(parser.parse())
  val client = new NingWSClient(builder.build())
  client.url(url).get() map { _ => Ok("ok") }
}
How to make it work with Play WS?
You should use a client certificate from application.conf as defined in https://www.playframework.com/documentation/2.3.x/KeyStores and https://www.playframework.com/documentation/2.3.x/ExampleSSLConfig. If you need to use a custom configuration, you should use the parser directly:
https://github.com/playframework/playframework/blob/2.3.x/framework/src/play-ws/src/test/scala/play/api/libs/ws/DefaultWSConfigParserSpec.scala#L14
to create a config, then the Ning builder:
https://github.com/playframework/playframework/blob/2.3.x/framework/src/play-ws/src/test/scala/play/api/libs/ws/ning/NingAsyncHttpClientConfigBuilderSpec.scala#L33
which will give you the AHCConfig object that you can pass in. There are examples in the "Using WSClient" section of https://www.playframework.com/documentation/2.3.x/ScalaWS.
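For the configuration-file route, here is a minimal sketch of the application.conf entries (paths and passwords are placeholders; see the KeyStores page above for the authoritative format). The default WS client picks these up, so no hand-built SSLContext is needed:
# application.conf sketch: client key store and trust store for Play WS (Play 2.3.x)
ws.ssl {
  keyManager = {
    stores = [
      { type: "PKCS12", path: "conf/client-key.p12", password: "changeit" }
    ]
  }
  trustManager = {
    stores = [
      { type: "JKS", path: "conf/truststore.jks", password: "changeit" }
    ]
  }
}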