How to access metrics of Alpakka CommittableSource with backoff? - akka

Accessing the metrics of an Alpakka PlainSource seems fairly straightforward, but how can I do the same thing with a CommittableSource?
I currently have a simple consumer, something like this:
class Consumer(implicit val ma: ActorMaterializer, implicit val ec: ExecutionContext) extends Actor {

  private val settings = ConsumerSettings(
    context.system,
    new ByteArrayDeserializer,
    new StringDeserializer)
    .withProperties(...)

  override def receive: Receive = Actor.emptyBehavior

  RestartSource
    .withBackoff(minBackoff = 2.seconds, maxBackoff = 20.seconds, randomFactor = 0.2)(consumer)
    .runForeach { handleMessage }

  private def consumer() = {
    AkkaConsumer
      .committableSource(settings, Subscriptions.topics(Set(topic)))
      .log(getClass.getSimpleName)
      .withAttributes(ActorAttributes.supervisionStrategy(_ => Supervision.Resume))
  }

  private def handleMessage(message: CommittableMessage[Array[Byte], String]): Unit = {
    ...
  }
}
How can I get access to the consumer metrics in this case?
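For context, what makes the plain case straightforward is that a non-wrapped source materializes a Consumer.Control, which exposes the Kafka client's metrics; RestartSource discards that materialized value, which is exactly what makes this hard. A minimal sketch of the non-restarting variant (assuming the same settings, topic and handleMessage as above, plus akka.stream.scaladsl.{Keep, Sink} imports):

// Without the RestartSource wrapper the materialized Consumer.Control
// is available and exposes the underlying Kafka client metrics.
val (control, done) =
  AkkaConsumer
    .committableSource(settings, Subscriptions.topics(Set(topic)))
    .toMat(Sink.foreach(handleMessage))(Keep.both)
    .run()

control.metrics.foreach(m => println(m.keys)) // Future[Map[MetricName, Metric]]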

We are using the Java Prometheus client, and I solved my issue with a custom collector that fetches its metrics directly from JMX:
import java.lang.management.ManagementFactory
import java.util

import io.prometheus.client.Collector
import io.prometheus.client.Collector.MetricFamilySamples
import io.prometheus.client.CounterMetricFamily
import io.prometheus.client.GaugeMetricFamily
import javax.management.ObjectName

import scala.collection.JavaConverters._
import scala.collection.mutable

class ConsumerMetricsCollector(val labels: Map[String, String] = Map.empty) extends Collector {

  val metrics: mutable.Map[String, MetricFamilySamples] = mutable.Map.empty

  def collect: util.List[MetricFamilySamples] = {
    val server = ManagementFactory.getPlatformMBeanServer
    for {
      attrType <- List("consumer-metrics", "consumer-coordinator-metrics", "consumer-fetch-manager-metrics")
      name <- server.queryNames(new ObjectName(s"kafka.consumer:type=$attrType,client-id=*"), null).asScala
      attrInfo <- server.getMBeanInfo(name).getAttributes.filter { _.getType == "double" }
    } yield {
      val attrName = attrInfo.getName
      // The client-id label comes from the MBean's ObjectName
      // (e.g. "type=consumer-metrics,client-id=..."), not from the attribute name.
      val metricLabels = name.getKeyPropertyListString.split(",").map(_.split("=").toList).collect {
        case "client-id" :: (id: String) :: Nil => ("client-id", id)
      }.toList ++ labels
      val metricName = "kafka_consumer_" + attrName.replaceAll(raw"""[^\p{Alnum}]+""", "_")
      val labelKeys = metricLabels.map(_._1).asJava
      val metric = metrics.getOrElseUpdate(metricName,
        if (metricName.endsWith("_total") || metricName.endsWith("_sum")) {
          new CounterMetricFamily(metricName, attrInfo.getDescription, labelKeys)
        } else {
          new GaugeMetricFamily(metricName, attrInfo.getDescription, labelKeys)
        }: MetricFamilySamples
      )
      val metricValue = server.getAttribute(name, attrName).asInstanceOf[Double]
      val labelValues = metricLabels.map(_._2).asJava
      metric match {
        case f: CounterMetricFamily => f.addMetric(labelValues, metricValue)
        case f: GaugeMetricFamily => f.addMetric(labelValues, metricValue)
        case _ =>
      }
    }
    metrics.values.toList.asJava
  }
}
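Registering the collector once at startup is enough; each Prometheus scrape then calls collect() and reads the current values from JMX. A minimal sketch (the label map here is illustrative):

import io.prometheus.client.CollectorRegistry

// Register once, e.g. during application startup.
val collector = new ConsumerMetricsCollector(Map("service" -> "my-consumer"))
collector.register(CollectorRegistry.defaultRegistry)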

Related

I want to test the ViewModel for my Android app. The ViewModel is for displaying posts in the app

The app is written in Kotlin and Compose. I am testing the app using Espresso (JUnit 4). I have tested the activities, but I'm facing issues testing the ViewModels. I have attached the code written for the activities and the ViewModel. Please help me test the project.
The code for FeedScreenViewModel is given below:
import androidx.lifecycle.ViewModel
import androidx.lifecycle.viewModelScope
import androidx.paging.cachedIn
import com.amplitude.android.Amplitude
import com.kotlang.auth.login.UserProfileProto
import com.navachar.neptune.data.respository.FeedRepository
import com.navachar.neptune.data.db.entities.Post
import dagger.hilt.android.lifecycle.HiltViewModel
import kotlinx.coroutines.flow.SharingStarted
import kotlinx.coroutines.flow.stateIn
import kotlinx.coroutines.launch
import javax.inject.Inject
@HiltViewModel
class FeedScreenViewModel @Inject constructor(
    // TODO: Make this private
    val amplitude: Amplitude,
    private val feedRepository: FeedRepository,
) : ViewModel() {

    companion object {
        private const val TAG = "FeedScreenViewModel"
    }

    val profile = feedRepository.getProfile().stateIn(
        scope = viewModelScope,
        started = SharingStarted.Eagerly,
        initialValue = UserProfileProto.getDefaultInstance()
    )

    val feedPagingFlow = feedRepository.feedPagingFlow()
        .cachedIn(viewModelScope)

    /**
     * Adds or removes a like on the given post
     *
     * @param post the post to update the like for
     * @param isLiked the updated like state
     */
    fun togglePostLike(post: Post, isLiked: Boolean) {
        viewModelScope.launch {
            feedRepository.setPostLike(post, isLiked)
        }
    }
}
The code for FeedScreen is:
import android.content.Context
import android.content.Intent
import android.net.Uri
import androidx.compose.foundation.Image
import androidx.compose.foundation.clickable
import androidx.compose.foundation.layout.*
import androidx.compose.foundation.lazy.LazyColumn
import androidx.compose.foundation.shape.CircleShape
import androidx.compose.material.*
import androidx.compose.runtime.Composable
import androidx.compose.runtime.CompositionLocalProvider
import androidx.compose.runtime.collectAsState
import androidx.compose.runtime.getValue
import androidx.compose.ui.Alignment
import androidx.compose.ui.Modifier
import androidx.compose.ui.draw.clip
import androidx.compose.ui.layout.ContentScale
import androidx.compose.ui.platform.LocalContext
import androidx.compose.ui.platform.testTag
import androidx.compose.ui.res.painterResource
import androidx.compose.ui.res.stringResource
import androidx.compose.ui.text.style.TextAlign
import androidx.compose.ui.unit.dp
import androidx.hilt.navigation.compose.hiltViewModel
import androidx.paging.LoadState
import androidx.paging.compose.LazyPagingItems
import androidx.paging.compose.collectAsLazyPagingItems
import androidx.paging.compose.items
import com.amplitude.android.Amplitude
import com.kotlang.auth.login.UserProfileProto
import com.navachar.neptune.R
import com.navachar.neptune.presentation.screens.main.feed.actionscreens.post.PostActivity
import com.navachar.neptune.presentation.screens.main.feed.components.ErrorMessage
import com.navachar.neptune.presentation.screens.main.feed.components.PagingFooter
import com.navachar.neptune.presentation.screens.main.feed.components.Post
import com.navachar.neptune.presentation.screens.main.profile.ProfileActivity
import com.navachar.neptune.data.db.entities.Post
import com.navachar.neptune.data.db.entities.User
import com.navachar.neptune.data.db.entities.PostFull
const val TAG8 = "feed"

@Composable
fun FeedScreen(
    modifier: Modifier = Modifier,
    viewModel: FeedScreenViewModel = hiltViewModel()
) {
    val context = LocalContext.current
    val profile by viewModel.profile.collectAsState()
    val pagingItems = viewModel.feedPagingFlow.collectAsLazyPagingItems()

    FeedScreen(
        modifier = modifier,
        pagingItems = pagingItems,
        openCreatePostScreen = {
            context.startActivity(Intent(context, PostActivity::class.java))
        },
        toggleLike = viewModel::togglePostLike,
        openPostDetails = { post ->
            Intent(
                context,
                CommentActivity::class.java,
            ).apply {
                putExtra("activity", "MainActivity")
                putExtra("postId", post.id)
                context.startActivity(this)
            }
        },
        sharePost = {
            shareContent(
                it,
                profile,
                viewModel.amplitude,
                context,
            )
        },
        openProfile = { user ->
            context.startActivity(
                Intent(
                    context,
                    ProfileActivity::class.java
                ).apply {
                    putExtra("UserId", user.id)
                }
            )
        },
        onRequestOpenUrl = { url ->
            context.startActivity(Intent(Intent.ACTION_VIEW, Uri.parse(url)))
        }
    )
}
@Composable
private fun FeedScreen(
    modifier: Modifier,
    pagingItems: LazyPagingItems<PostFull>,
    openCreatePostScreen: () -> Unit,
    toggleLike: (post: Post, isLiked: Boolean) -> Unit,
    openPostDetails: (post: Post) -> Unit,
    sharePost: (post: PostFull) -> Unit,
    openProfile: (user: User) -> Unit,
    onRequestOpenUrl: (url: String) -> Unit,
) {
    Scaffold(
        modifier = modifier,
        content = {
            Feed(
                modifier = Modifier
                    .fillMaxSize()
                    .padding(it),
                pagingItems = pagingItems,
                toggleLike = toggleLike,
                openPostDetails = openPostDetails,
                sharePost = sharePost,
                openProfile = openProfile,
                onRequestOpenUrl = onRequestOpenUrl,
            )
        },
        bottomBar = {
            BottomBar(onClick = openCreatePostScreen)
        }
    )
}
@Composable
private fun Feed(
    modifier: Modifier,
    pagingItems: LazyPagingItems<PostFull>,
    toggleLike: (post: Post, isLiked: Boolean) -> Unit,
    openPostDetails: (post: Post) -> Unit,
    sharePost: (post: PostFull) -> Unit,
    openProfile: (user: User) -> Unit,
    onRequestOpenUrl: (url: String) -> Unit,
) {
    when (val refreshState = pagingItems.loadState.refresh) {
        is LoadState.NotLoading -> {
            FeedLazyList(
                modifier = modifier.testTag(TAG8),
                pagingItems = pagingItems,
                openPostDetails = openPostDetails,
                openProfile = openProfile,
                sharePost = sharePost,
                toggleLike = toggleLike,
                onRequestOpenUrl = onRequestOpenUrl,
            )
        }
        is LoadState.Loading -> {
            Box(
                modifier = modifier,
                contentAlignment = Alignment.Center
            ) {
                CircularProgressIndicator()
            }
        }
        is LoadState.Error -> {
            ErrorMessage(
                modifier = modifier,
                error = refreshState.error,
                onRetry = pagingItems::retry,
            )
        }
    }
}
@Composable
private fun FeedLazyList(
    modifier: Modifier,
    pagingItems: LazyPagingItems<PostFull>,
    openPostDetails: (post: Post) -> Unit,
    openProfile: (user: User) -> Unit,
    sharePost: (post: PostFull) -> Unit,
    toggleLike: (post: Post, isLiked: Boolean) -> Unit,
    onRequestOpenUrl: (url: String) -> Unit,
) {
    LazyColumn(
        modifier = modifier
    ) {
        items(pagingItems) { post ->
            if (post != null) {
                Post(
                    post = post,
                    openPostDetails = { openPostDetails(post.post) },
                    openProfile = {
                        if (post.author != null) {
                            openProfile(post.author)
                        }
                    },
                    onSharePost = { sharePost(post) },
                    onToggleLike = {
                        toggleLike(post.post, it)
                    },
                    onRequestOpenUrl = onRequestOpenUrl,
                )
            } else {
                // TODO: Show a shimmer placeholder for post
            }
        }
        item {
            PagingFooter(
                modifier = Modifier
                    .fillMaxWidth()
                    .padding(16.dp),
                pagingItems = pagingItems
            )
        }
    }
}
@Composable
private fun BottomBar(
    onClick: () -> Unit
) {
    Row(
        Modifier
            .fillMaxWidth()
            .clickable(onClick = onClick)
            .padding(16.dp),
        verticalAlignment = Alignment.CenterVertically,
    ) {
        CompositionLocalProvider(
            LocalContentAlpha provides ContentAlpha.medium
        ) {
            Image(
                modifier = Modifier
                    .size(24.dp)
                    .clip(CircleShape),
                painter = painterResource(id = R.drawable.ic_account),
                contentDescription = "",
                contentScale = ContentScale.FillBounds
            )
            Spacer(modifier = Modifier.width(8.dp))
            Text(
                modifier = Modifier.fillMaxWidth(),
                text = stringResource(R.string.post_something),
                style = MaterialTheme.typography.body2,
                textAlign = TextAlign.Start,
            )
        }
    }
}
private fun shareContent(
    post: PostFull,
    profile: UserProfileProto,
    amplitude: Amplitude,
    context: Context
) {
    // TODO: Separate the amplitude logic from UI
    val eventName = "CTA_Post_Share"
    val eventTime = System.currentTimeMillis()
    val userId = profile.loginId
    val loadTime = ""
    val postTag = post.post.tags.joinToString()
    val postContentType = if (post.media.isEmpty()) "Text" else "Media"
    val postId = post.post.id
    amplitude.track(
        eventType = eventName,
        eventProperties = mapOf(
            Pair("Event_Time", eventTime),
            Pair("Screen_Load_Time", loadTime),
            Pair("User_ID", userId),
            Pair("Post_Tag", postTag),
            Pair("Post_Content_Type", postContentType),
            Pair("Post_ID", postId)
        )
    )
    // TODO: Localise the share content
    val shareContent = "${post.author?.name} Posted at Urvar:\n\n" +
        "${post.post.body.take(200)}...\n\n" +
        "Read More: http://www.urvar.com/post/${post.post.id}"
    val sendIntent: Intent = Intent().apply {
        action = Intent.ACTION_SEND
        putExtra(
            Intent.EXTRA_TEXT,
            shareContent
        )
        type = "text/plain"
    }
    val shareIntent = Intent.createChooser(sendIntent, null)
    context.startActivity(shareIntent)
}
Please help me test this ViewModel.
The test class generated for this ViewModel is:
class FeedScreenViewModelTest {

    @Before
    fun setUp() {
    }

    @After
    fun tearDown() {
    }

    @Test
    fun getProfile() {
    }

    @Test
    fun getFeedPagingFlow() {
    }

    @Test
    fun togglePostLike() {
    }

    @Test
    fun getAmplitude() {
    }
}
Please tell me what to write inside these functions.

Kotlin Mockito NullPointerException

There are classes:
@Singleton
class Exchange(
    @Client("\${exchange.rest.url}") @Inject val httpClient: RxHttpClient
) : Exchange {

    override suspend fun getSymbols(): List<String> {
        val response = httpClient.retrieve(GET<String>("/someurl/symbols"), ExchangeInfo::class.java).awaitFirst()
        return response.data
            .map { it.symbol }.toList()
    }
}

@Introspected
@JsonIgnoreProperties(ignoreUnknown = true)
class ExchangeInfo(
    val data: List<Symbol>
)

@Introspected
@JsonIgnoreProperties(ignoreUnknown = true)
data class Symbol(
    val symbol: String
)
And I want to test the function getSymbols().
I am writing a test class:
import new.project.ExchangeInfo
import new.project.SpotSymbol
import io.micronaut.http.HttpRequest
import io.micronaut.http.client.RxHttpClient
import io.reactivex.Emitter
import io.reactivex.Flowable
import kotlinx.coroutines.Dispatchers
import kotlinx.coroutines.reactive.awaitFirst
import kotlinx.coroutines.runBlocking
import kotlinx.coroutines.withContext
import okhttp3.internal.immutableListOf
import org.junit.jupiter.api.Test
import org.mockito.Mockito.mock
import org.mockito.kotlin.whenever

class ExchangeTest {

    @Test
    fun getSymbolsTest() = runBlocking {
        // the answer I want to receive
        val response: Flowable<ExchangeInfo> = Flowable.generate<ExchangeInfo, String>(
            java.util.concurrent.Callable<String> { -> "symbol" },
            io.reactivex.functions.BiConsumer<String, Emitter<ExchangeInfo>> { t1, t2 -> }
        )
        val httpClient = mock(RxHttpClient::class.java)
        whenever(httpClient.retrieve(HttpRequest.GET<String>("/someurl/symbols"), ExchangeInfo::class.java))
            .thenReturn(response)
        val exchange = Exchange(httpClient)

        // calling the tested method
        val result = exchange.getSymbols()

        assert(immutableListOf("symbol") == result)
    }
}
When running the test, I get:
java.lang.NullPointerException: httpClient.retrieve(GET<… ExchangeInfo::class.java) must not be null
Can you please tell me how to properly mock httpClient?
It is necessary to do it like this:
doReturn(Flowable.just(ExchangeInfo)).when(httpClient).retrieve(any<HttpRequest<String>>(), any<Class<Any>>())
The whenever(...) stub only matches a request that equals the exact HttpRequest built in the test, which is not the same instance the production code constructs, so the un-stubbed mock returns null and Kotlin throws the NullPointerException for the non-null return type. Stubbing with any() argument matchers makes the mock answer regardless of the concrete request.

Akka Kafka stream supervison strategy not working

I am running an Akka Streams Kafka application and I want to incorporate the supervision strategy on the stream consumer such that if the broker goes down, and the stream consumer dies after a stop timeout, the supervisor can restart the consumer.
Here is my complete code:
UserEventStream:
import akka.actor.{Actor, PoisonPill, Props}
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink
import akka.util.Timeout
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.{ByteArrayDeserializer, StringDeserializer}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}
import akka.pattern.ask
import akka.stream.ActorMaterializer

class UserEventStream extends Actor {

  val settings = Settings(context.system).KafkaConsumers
  implicit val timeout: Timeout = Timeout(10 seconds)
  implicit val materializer = ActorMaterializer()

  override def preStart(): Unit = {
    super.preStart()
    println("Starting UserEventStream....s")
  }

  override def receive = {
    case "start" =>
      val consumerConfig = settings.KafkaConsumerInfo
      println(s"ConsumerConfig with $consumerConfig")
      startStreamConsumer(consumerConfig("UserEventMessage" + ".c" + 1))
  }

  def startStreamConsumer(config: Map[String, String]) = {
    println(s"startStreamConsumer with config $config")
    val consumerSource = createConsumerSource(config)
    val consumerSink = createConsumerSink()
    val messageProcessor = context.actorOf(Props[MessageProcessor], "messageprocessor")
    println("START: The UserEventStream processing")
    val future =
      consumerSource
        .mapAsync(parallelism = 50) { message =>
          val m = s"${message.record.value()}"
          messageProcessor ? m
        }
        .runWith(consumerSink)
    future.onComplete {
      case Failure(ex) =>
        println("FAILURE : The UserEventStream processing, stopping the actor.")
        self ! PoisonPill
      case Success(ex) =>
    }
  }

  def createConsumerSource(config: Map[String, String]) = {
    val kafkaMBAddress = config("bootstrap-servers")
    val groupID = config("groupId")
    val topicSubscription = config("subscription-topic").split(',').toList
    println(s"Subscriptiontopics $topicSubscription")
    val consumerSettings = ConsumerSettings(context.system, new ByteArrayDeserializer, new StringDeserializer)
      .withBootstrapServers(kafkaMBAddress)
      .withGroupId(groupID)
      .withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
      .withProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true")
    Consumer.committableSource(consumerSettings, Subscriptions.topics(topicSubscription: _*))
  }

  def createConsumerSink() = {
    Sink.foreach(println)
  }
}
StreamProcessorSupervisor (this is the supervisor class of the UserEventStream class):
import akka.actor.{Actor, Props}
import akka.pattern.{Backoff, BackoffSupervisor}
import akka.stream.ActorMaterializer
import stream.StreamProcessorSupervisor.StartClient
import scala.concurrent.duration._

object StreamProcessorSupervisor {
  final case object StartSimulator
  final case class StartClient(id: String)

  def props(implicit materializer: ActorMaterializer) =
    Props(classOf[StreamProcessorSupervisor], materializer)
}

class StreamProcessorSupervisor(implicit materializer: ActorMaterializer) extends Actor {

  override def preStart(): Unit = {
    self ! StartClient(self.path.name)
  }

  def receive: Receive = {
    case StartClient(id) =>
      println(s"startCLient with id $id")
      val childProps = Props(classOf[UserEventStream])
      val supervisor = BackoffSupervisor.props(
        Backoff.onFailure(
          childProps,
          childName = "usereventstream",
          minBackoff = 1.second,
          maxBackoff = 1.minutes,
          randomFactor = 0.2
        )
      )
      context.actorOf(supervisor, name = s"$id-backoff-supervisor")
      val userEventStrean = context.actorOf(Props(classOf[UserEventStream]), "usereventstream")
      userEventStrean ! "start"
  }
}
App (the main application class):
import akka.actor.{ActorSystem, Props}
import akka.stream.ActorMaterializer

object App extends App {
  implicit val system = ActorSystem("stream-test")
  implicit val materializer = ActorMaterializer()
  system.actorOf(StreamProcessorSupervisor.props, "StreamProcessorSupervisor")
}
application.conf:
kafka {
  consumer {
    num-consumers = "1"
    c1 {
      bootstrap-servers = "localhost:9092"
      bootstrap-servers = ${?KAFKA_CONSUMER_ENDPOINT1}
      groupId = "localakkagroup1"
      subscription-topic = "test"
      subscription-topic = ${?SUBSCRIPTION_TOPIC1}
      message-type = "UserEventMessage"
      poll-interval = 50ms
      poll-timeout = 50ms
      stop-timeout = 30s
      close-timeout = 20s
      commit-timeout = 15s
      wakeup-timeout = 10s
      max-wakeups = 10
      use-dispatcher = "akka.kafka.default-dispatcher"
      kafka-clients {
        enable.auto.commit = true
      }
    }
  }
}
After running the application, I purposely killed the Kafka broker and found that, after 30 seconds, the actor stops itself by sending a PoisonPill. But strangely it doesn't restart as prescribed by the BackoffSupervisor strategy.
What could be the issue here?
There are two instances of UserEventStream in your code: one is the child actor that the BackoffSupervisor internally creates with the Props that you pass to it, and the other is the val userEventStrean that is a child of StreamProcessorSupervisor. You're sending the "start" message to the latter, when you should be sending that message to the former.
You don't need val userEventStrean, because the BackoffSupervisor creates the child actor. Messages sent to the BackoffSupervisor are forwarded to the child, so to send a "start" message to the child, send it to the BackoffSupervisor:
class StreamProcessorSupervisor(implicit materializer: ActorMaterializer) extends Actor {

  override def preStart(): Unit = {
    self ! StartClient(self.path.name)
  }

  def receive: Receive = {
    case StartClient(id) =>
      println(s"startCLient with id $id")
      val childProps = Props[UserEventStream]
      val supervisorProps = BackoffSupervisor.props(...)
      val supervisor = context.actorOf(supervisorProps, name = s"$id-backoff-supervisor")
      supervisor ! "start"
  }
}
The other issue is that when an actor receives a PoisonPill, that's not the same thing as that actor throwing an exception. Therefore, Backoff.onFailure won't be triggered when UserEventStream sends itself a PoisonPill. A PoisonPill stops the actor, so use Backoff.onStop instead:
val supervisorProps = BackoffSupervisor.props(
  Backoff.onStop( // <--- use onStop
    childProps,
    ...
  )
)
val supervisor = context.actorOf(supervisorProps, name = s"$id-backoff-supervisor")
supervisor ! "start"
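Putting both fixes together, a minimal sketch of the corrected receive block (the backoff parameters are carried over from the original Backoff.onFailure call):

def receive: Receive = {
  case StartClient(id) =>
    val childProps = Props[UserEventStream]
    val supervisorProps = BackoffSupervisor.props(
      Backoff.onStop(                 // react to the PoisonPill-induced stop
        childProps,
        childName = "usereventstream",
        minBackoff = 1.second,
        maxBackoff = 1.minutes,
        randomFactor = 0.2
      )
    )
    // The BackoffSupervisor creates (and re-creates) the child itself;
    // messages sent to it are forwarded to the current child incarnation.
    val supervisor = context.actorOf(supervisorProps, name = s"$id-backoff-supervisor")
    supervisor ! "start"
}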

Is it possible to create an "infinite" stream from a database table using Akka Stream

I'm playing with Akka Streams 2.4.2 and am wondering if it's possible to set up a stream which uses a database table as a source, so that whenever a record is added to the table that record is materialized and pushed downstream?
UPDATE: 2/23/16
I've implemented the solution from @PH88. Here's my table definition:
case class Record(id: Int, value: String)

class Records(tag: Tag) extends Table[Record](tag, "my_stream") {
  def id = column[Int]("id")
  def value = column[String]("value")
  def * = (id, value) <> (Record.tupled, Record.unapply)
}
Here's the implementation:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")

try {
  val newRecStream = Source.unfold((0, List[Record]())) { n =>
    try {
      val q = for (r <- TableQuery[Records].filter(row => row.id > n._1)) yield (r)
      val r = Source.fromPublisher(db.stream(q.result)).collect {
        case rec => println(s"${rec.id}, ${rec.value}"); rec
      }.runFold((n._1, List[Record]())) {
        case ((id, xs), current) => (current.id, current :: xs)
      }
      val answer: (Int, List[Record]) = Await.result(r, 5.seconds)
      Option(answer, None)
    }
    catch { case e: Exception => println(e); Option(n, e) }
  }
  Await.ready(newRecStream.throttle(1, 1.second, 1, ThrottleMode.shaping).runForeach(_ => ()), Duration.Inf)
}
finally {
  system.shutdown
  db.close
}
But my problem is that when I attempt to call flatMapConcat the type I get is Serializable.
UPDATE: 2/24/16
Updated to try the db.run suggestion from @PH88:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
val disableAutoCommit = SimpleDBIO(_.connection.setAutoCommit(false))
val queryLimit = 1

try {
  val newRecStream = Source.unfoldAsync(0) { n =>
    val q = TableQuery[Records].filter(row => row.id > n).take(queryLimit)
    db.run(q.result).map { recs =>
      Some(recs.last.id, recs)
    }
  }
  .throttle(1, 1.second, 1, ThrottleMode.shaping)
  .flatMapConcat { recs =>
    Source.fromIterator(() => recs.iterator)
  }
  .runForeach { rec =>
    println(s"${rec.id}, ${rec.value}")
  }
  Await.ready(newRecStream, Duration.Inf)
}
catch {
  case ex: Throwable => println(ex)
}
finally {
  system.shutdown
  db.close
}
Which works (I changed the query limit to 1 since I only have a couple of items in my database table currently), except that once it prints the last row in the table the program exits. Here's my log output:
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback.groovy]
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Could NOT find resource [logback-test.xml]
17:09:27,982 |-INFO in ch.qos.logback.classic.LoggerContext[default] - Found resource [logback.xml] at [file:/Users/xxxxxxx/dev/src/scratch/scala/fpp-in-scala/target/scala-2.11/classes/logback.xml]
17:09:28,062 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - About to instantiate appender of type [ch.qos.logback.core.ConsoleAppender]
17:09:28,064 |-INFO in ch.qos.logback.core.joran.action.AppenderAction - Naming appender as [STDOUT]
17:09:28,079 |-INFO in ch.qos.logback.core.joran.action.NestedComplexPropertyIA - Assuming default type [ch.qos.logback.classic.encoder.PatternLayoutEncoder] for [encoder] property
17:09:28,102 |-INFO in ch.qos.logback.classic.joran.action.LoggerAction - Setting level of logger [application] to DEBUG
17:09:28,103 |-INFO in ch.qos.logback.classic.joran.action.RootLoggerAction - Setting level of ROOT logger to INFO
17:09:28,103 |-INFO in ch.qos.logback.core.joran.action.AppenderRefAction - Attaching appender named [STDOUT] to Logger[ROOT]
17:09:28,103 |-INFO in ch.qos.logback.classic.joran.action.ConfigurationAction - End of configuration.
17:09:28,104 |-INFO in ch.qos.logback.classic.joran.JoranConfigurator#4278284b - Registering current configuration as safe fallback point
17:09:28.117 [main] INFO com.zaxxer.hikari.HikariDataSource - pg-postgres - is starting.
1, WASSSAAAAAAAP!
2, WHAAAAT?!?
3, booyah!
4, what!
5, This rocks!
6, Again!
7, Again!2
8, I love this!
9, Akka Streams rock
10, Tuning jdbc
17:09:39.000 [main] INFO com.zaxxer.hikari.pool.HikariPool - pg-postgres - is closing down.
Process finished with exit code 0
Found the missing piece - need to replace this:
Some(recs.last.id, recs)
with this:
val lastId = if(recs.isEmpty) n else recs.last.id
Some(lastId, recs)
The call to recs.last.id was throwing java.lang.UnsupportedOperationException: empty.last when the result set was empty.
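Folding that guard into the unfoldAsync block from the 2/24/16 update gives:

val newRecStream = Source.unfoldAsync(0) { n =>
  val q = TableQuery[Records].filter(row => row.id > n).take(queryLimit)
  db.run(q.result).map { recs =>
    // When a poll returns no rows, keep the previous offset instead of
    // calling recs.last on an empty result; the stream then simply polls
    // again on the next throttle tick, which keeps it infinite.
    val lastId = if (recs.isEmpty) n else recs.last.id
    Some(lastId, recs)
  }
}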
In general, a SQL database is a 'passive' construct and does not actively push changes like what you described. You can only 'simulate' the 'push' with periodic polling, like:
val newRecStream = Source
  // Query for table changes
  .unfold(initState) { lastState =>
    // query for new data since lastState and save the current state into newState...
    Some((newState, newRecords))
  }
  // Throttle to limit the poll frequency
  .throttle(...)
  // break each batch down into individual records...
  .flatMapConcat { newRecords =>
    Source.unfold(newRecords) { pendingRecords =>
      if (pendingRecords.isEmpty) {
        None
      } else {
        // take one record from pendingRecords as newRec and keep the rest in remainingRecords
        Some((remainingRecords, newRec))
      }
    }
  }
Updated: 2/24/2016
Pseudo code example based on the 2/23/2016 updates of the question:
implicit val system = ActorSystem("Publisher")
implicit val materializer = ActorMaterializer()
val db = Database.forConfig("pg-postgres")
val queryLimit = 10

try {
  val completion = Source
    .unfoldAsync(0) { lastRowId =>
      val q = TableQuery[Records].filter(row => row.id > lastRowId).take(queryLimit)
      db.run(q.result).map { recs =>
        Some(recs.last.id, recs)
      }
    }
    .throttle(1, 1.second, 1, ThrottleMode.shaping)
    .flatMapConcat { recs =>
      Source.fromIterator(() => recs.iterator)
    }
    .runForeach { rec =>
      println(s"${rec.id}, ${rec.value}")
    }
  // Block forever
  Await.ready(completion, Duration.Inf)
} catch {
  case ex: Throwable => println(ex)
} finally {
  system.shutdown
  db.close
}
It will repeatedly execute the query in unfoldAsync against the DB, retrieving at most 10 (queryLimit) records at a time, and send the records downstream (-> throttle -> flatMapConcat -> runForeach). The Await at the end actually blocks forever.
Updated: 2/25/2016
Executable 'proof-of-concept' code:
import akka.actor.ActorSystem
import akka.stream.{ThrottleMode, ActorMaterializer}
import akka.stream.scaladsl.Source
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}
import scala.concurrent.duration._

object Infinite extends App {
  implicit val system = ActorSystem("Publisher")
  implicit val ec = system.dispatcher
  implicit val materializer = ActorMaterializer()

  case class Record(id: Int, value: String)

  try {
    val completion = Source
      .unfoldAsync(0) { lastRowId =>
        Future {
          val recs = (lastRowId to lastRowId + 10).map(i => Record(i, s"rec#$i"))
          Some(recs.last.id, recs)
        }
      }
      .throttle(1, 1.second, 1, ThrottleMode.Shaping)
      .flatMapConcat { recs =>
        Source.fromIterator(() => recs.iterator)
      }
      .runForeach { rec =>
        println(rec)
      }
    Await.ready(completion, Duration.Inf)
  } catch {
    case ex: Throwable => println(ex)
  } finally {
    system.shutdown
  }
}
Here is working code for infinite streaming from a database. It has been tested with millions of records being inserted into a PostgreSQL database while the streaming app was running:
package infinite.streams.db

import akka.NotUsed
import akka.actor.ActorSystem
import akka.stream.alpakka.slick.scaladsl.SlickSession
import akka.stream.scaladsl.{Flow, Sink, Source}
import akka.stream.{ActorMaterializer, ThrottleMode}
import org.slf4j.LoggerFactory
import slick.basic.DatabaseConfig
import slick.jdbc.JdbcProfile
import scala.concurrent.duration._
import scala.concurrent.{Await, ExecutionContextExecutor}

case class Record(id: Int, value: String) {
  val content = s"<ROW><ID>$id</ID><VALUE>$value</VALUE></ROW>"
}

object InfiniteStreamingApp extends App {

  println("Starting app...")
  implicit val system: ActorSystem = ActorSystem("Publisher")
  implicit val ec: ExecutionContextExecutor = system.dispatcher
  implicit val materializer: ActorMaterializer = ActorMaterializer()

  println("Initializing database configuration...")
  val databaseConfig: DatabaseConfig[JdbcProfile] = DatabaseConfig.forConfig[JdbcProfile]("postgres3")
  implicit val session: SlickSession = SlickSession.forConfig(databaseConfig)

  import databaseConfig.profile.api._

  class Records(tag: Tag) extends Table[Record](tag, "test2") {
    def id = column[Int]("c1")
    def value = column[String]("c2")
    def * = (id, value) <> (Record.tupled, Record.unapply)
  }

  val db = databaseConfig.db

  println("Prime for streaming...")
  val logic: Flow[(Int, String), (Int, String), NotUsed] = Flow[(Int, String)].map {
    case (id, value) => (id, value.toUpperCase)
  }

  val fetchSize = 5
  try {
    val done = Source
      .unfoldAsync(0) { lastId =>
        println(s"Fetching next: $fetchSize records with id > $lastId")
        val query = TableQuery[Records].filter(_.id > lastId).take(fetchSize)
        db.run(query.result.withPinnedSession)
          .map { recs => Some(recs.last.id, recs) }
      }
      .throttle(5, 1.second, 1, ThrottleMode.shaping)
      .flatMapConcat { recs =>
        Source.fromIterator(() => recs.iterator)
      }
      .map(x => (x.id, x.content))
      .via(logic)
      .log("*******Post Transformation******")
      // .runWith(Sink.foreach(r => println("SINK: " + r._2)))
      // Use runForeach or runWith(Sink)
      .runForeach(rec => println("REC: " + rec))

    println("Waiting for result....")
    Await.ready(done, Duration.Inf)
  } catch {
    case ex: Throwable => println(ex.getMessage)
  } finally {
    println("Streaming end successfully")
    db.close()
    system.terminate()
  }
}
application.conf
akka {
  loggers = ["akka.event.slf4j.Slf4jLogger"]
  loglevel = "INFO"
}

# Load using SlickSession.forConfig("slick-postgres")
postgres3 {
  profile = "slick.jdbc.PostgresProfile$"
  db {
    dataSourceClass = "slick.jdbc.DriverDataSource"
    properties = {
      driver = "org.postgresql.Driver"
      url = "jdbc:postgresql://localhost/testdb"
      user = "postgres"
      password = "postgres"
    }
    numThreads = 2
  }
}

unmarshall nested json in spray-json

Using spray-json (as I'm using spray-client), in order to get a latitude/longitude object from the Google Maps API I need to have the whole response structure set up:
case class AddrComponent(long_name: String, short_name: String, types: List[String])
case class Location(lat: Double, lng: Double)
case class ViewPort(northeast: Location, southwest: Location)
case class Geometry(location: Location, location_type: String, viewport: ViewPort)
case class EachResult(address_components: List[AddrComponent],
                      formatted_address: String,
                      geometry: Geometry,
                      types: List[String])
case class GoogleApiResult[T](status: String, results: List[T])

object AddressProtocol extends DefaultJsonProtocol {
  implicit val addrFormat = jsonFormat3(AddrComponent)
  implicit val locFormat = jsonFormat2(Location)
  implicit val viewPortFormat = jsonFormat2(ViewPort)
  implicit val geomFormat = jsonFormat3(Geometry)
  implicit val eachResFormat = jsonFormat4(EachResult)
  implicit def GoogleApiFormat[T: JsonFormat] = jsonFormat2(GoogleApiResult.apply[T])
}

import AddressProtocol._
import AddressProtocol._
Is there any way I can just get Location from the json in the response and avoid all this gumph?
The spray-client code:
implicit val system = ActorSystem("test-system")
import system.dispatcher

private val pipeline = sendReceive ~> unmarshal[GoogleApiResult[EachResult]]

def getPostcode(postcode: String): Point = {
  val url = s"http://maps.googleapis.com/maps/api/geocode/json?address=$postcode,+UK&sensor=true"
  val future = pipeline(Get(url))
  val result = Await.result(future, 10 seconds)
  result.results.size match {
    case 0 => throw new PostcodeNotFoundException(postcode)
    case x if x > 1 => throw new MultipleResultsException(postcode)
    case _ => {
      val location = result.results(0).geometry.location
      new Point(location.lng, location.lat)
    }
  }
}
Or alternatively how can I use jackson with spray-client?
Following jrudolph's advice to use json-lenses, I also did quite a bit of fiddling but finally got things to work. I found it quite difficult (as a newbie) and I am sure this solution is far from the most elegant; nevertheless, I think it might help people or inspire improvements.
Given JSON:
{
  "status": 200,
  "code": 0,
  "message": "",
  "payload": {
    "statuses": {
      "emailConfirmation": "PENDING",
      "phoneConfirmation": "DONE"
    }
  }
}
And a case class for unmarshalling the statuses only:
case class UserStatus(emailConfirmation: String, phoneConfirmation: String)
One can do this to unmarshal the response:
import scala.concurrent.Future
import spray.http.HttpResponse
import spray.httpx.unmarshalling.{FromResponseUnmarshaller, MalformedContent}
import spray.json.DefaultJsonProtocol
import spray.json.lenses.JsonLenses._
import spray.client.pipelining._

object UserStatusJsonProtocol extends DefaultJsonProtocol {
  implicit val userStatusUnmarshaller = new FromResponseUnmarshaller[UserStatus] {
    implicit val userStatusJsonFormat = jsonFormat2(UserStatus)

    def apply(response: HttpResponse) = try {
      Right(response.entity.asString.extract[UserStatus]('payload / 'statuses))
    } catch { case x: Throwable =>
      Left(MalformedContent("Could not unmarshal user status.", x))
    }
  }
}

import UserStatusJsonProtocol._

def userStatus(userId: String): Future[UserStatus] = {
  val pipeline = sendReceive ~> unmarshal[UserStatus]
  pipeline(Get(s"/api/user/${userId}/status"))
}
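Applied back to the original Google geocode question, the same lens approach might look roughly like the following untested sketch; the lens path mirrors the response structure from the question, and element(0) selects the first result:

import spray.json.DefaultJsonProtocol
import spray.json.lenses.JsonLenses._

object LocationLensExample extends DefaultJsonProtocol {
  case class Location(lat: Double, lng: Double)
  implicit val locFormat = jsonFormat2(Location)

  // Extract only results[0].geometry.location, skipping the
  // AddrComponent/ViewPort/Geometry/EachResult boilerplate.
  def firstLocation(jsonBody: String): Location =
    jsonBody.extract[Location]('results / element(0) / 'geometry / 'location)
}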