Akka Cluster sharding not able to register to Coordinator - akka

I am trying to create an Akka Sharding cluster. I want to use proxy-only mode on one of the nodes just to route messages to the shard regions. I am getting the following warning:
[WARN] [02/11/2019 17:04:17.819] [ClusterSystem-akka.actor.default-dispatcher-21] [akka.tcp://ClusterSystem#127.0.0.1:2555/system/sharding/ShardnameProxy] Trying to register to coordinator at [Some(ActorSelection[Anchor(akka.tcp://ClusterSystem#127.0.0.1:2551/), Path(/system/sharding/ShardnameCoordinator/singleton/coordinator)])], but no acknowledgement. Total [1] buffered messages.
**Main.scala:** Starts the cluster using the configuration from application.conf (code added later).
object Main {
val shardName = "Shardname"
val role = "Master"
var shardingProbeLocalRegin: Option[ActorRef] = None
def main(args: Array[String]): Unit = {
val conf = ConfigFactory.load()
val system = ActorSystem("ClusterSystem",conf.getConfig("main"))
ClusterSharding(system).start(shardName,Test.props,ClusterShardingSettings(system),ShardDetails.extractEntityId,ShardDetails.extractShardId)
}
}
Test.scala: Entity for the sharding cluster
object Test {
def props: Props = Props(classOf[Test])
class Test extends Actor {
val log = Logger.getLogger(getClass.getName)
override def receive = {
case msg: String =>
log.info("Message from " + sender().path.toString + " Message is " + msg)
sender() ! "Done"
}
}
}
MessageProducer.scala (proxy-only mode): The message producer sends a message to the shard every second.
object MessageProducer {
var shardingProbeLocalRegin: Option[ActorRef] = None
object DoSharding
def prop:Props = Props(classOf[MessageProducer])
var numeric : Long = 0
def main(args: Array[String]): Unit = {
val conf = ConfigFactory.load
val system = ActorSystem("ClusterSystem",conf.getConfig("messageProducer"))
ClusterSharding(system).startProxy(Main.shardName,None,ShardDetails.extractEntityId,ShardDetails.extractShardId)
shardingProbeLocalRegin = Some(ClusterSharding(system).shardRegion(Main.shardName))
val actor = system.actorOf(Props[MessageProducer],"message")
}
}
class RemoteAddressExtensionImpl(system: ExtendedActorSystem) extends Extension {
def address = system.provider.getDefaultAddress
}
object RemoteAddressExtension extends ExtensionKey[RemoteAddressExtensionImpl]
class MessageProducer extends Actor{
import MessageProducer._ // brings DoSharding into scope
val log = Logger.getLogger(getClass.getName)
override def preStart(): Unit = {
println("Starting " + self.path.address)
import context.dispatcher // execution context needed by the scheduler
context.system.scheduler.schedule(10.seconds, 1.second, self, DoSharding)
}
override def receive = {
case DoSharding =>
log.info("sending message" + MessageProducer.numeric)
MessageProducer.shardingProbeLocalRegin.foreach(_ ! "" + (MessageProducer.numeric))
MessageProducer.numeric += 1
}
}
**application.conf:** Configuration file
main {
akka {
actor {
provider = "akka.cluster.ClusterActorRefProvider"
}
remote {
log-remote-lifecycle-events = on
netty.tcp {
hostname = "127.0.0.1"
port = 2551
}
}
cluster {
seed-nodes = [
"akka.tcp://ClusterSystem#127.0.0.1:2551"
]
sharding.state-store-mode = ddata
auto-down-unreachable-after = 1s
}
akka.extensions = ["akka.cluster.metrics.ClusterMetricsExtension", "akka.cluster.ddata.DistributedData"]
}
}
messageProducer {
akka {
actor {
provider = "akka.cluster.ClusterActorRefProvider"
}
remote {
log-remote-lifecycle-events = on
netty.tcp {
hostname = "192.168.2.96"
port = 2554
}
}
cluster {
seed-nodes = [
"akka.tcp://ClusterSystem#127.0.0.1:2551"
//, "akka.tcp://ClusterSystem#127.0.0.1:2552"
]
sharding.state-store-mode = ddata
auto-down-unreachable-after = 1s
}
akka.extensions = ["akka.cluster.metrics.ClusterMetricsExtension", "akka.cluster.ddata.DistributedData"]
}
}
Am I doing anything wrong? Is there any other way to implement this approach? My main aim is to avoid a single point of failure for the cluster: if any node goes down, it should not affect any other node's state. Can anyone help me with this?

Is it solved?
If not, please check your akka.cluster configuration.
You have to set the config like this; it works for me.
For the proxy:
akka.cluster {
roles = ["Proxy"]
sharding {
role = "Master"
}
}
For the master:
akka.cluster {
roles = ["Master"]
sharding {
role = "Master"
}
}
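For reference, the programmatic equivalent of the role settings above might look roughly like this (a sketch only, reusing the shard name and extractors from the question):
// On the Master node: host the shard region, restricted to the "Master" role.
ClusterSharding(system).start(
  Main.shardName,
  Test.props,
  ClusterShardingSettings(system).withRole("Master"),
  ShardDetails.extractEntityId,
  ShardDetails.extractShardId)
// On the proxy-only node: route to shard regions hosted on the "Master" role.
ClusterSharding(system).startProxy(
  Main.shardName,
  Some("Master"),
  ShardDetails.extractEntityId,
  ShardDetails.extractShardId)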

Related

How to access metrics of Alpakka CommittableSource with backoff?

Accessing the metrics of an Alpakka PlainSource seems fairly straightforward, but how can I do the same thing with a CommittableSource?
I currently have a simple consumer, something like this:
class Consumer(implicit val ma: ActorMaterializer, implicit val ec: ExecutionContext) extends Actor {
private val settings = ConsumerSettings(
context.system,
new ByteArrayDeserializer,
new StringDeserializer)
.withProperties(...)
override def receive: Receive = Actor.emptyBehavior
RestartSource
.withBackoff(minBackoff = 2.seconds, maxBackoff = 20.seconds, randomFactor = 0.2)(consumer)
.runForeach { handleMessage }
private def consumer() = {
AkkaConsumer
.committableSource(settings, Subscriptions.topics(Set(topic)))
.log(getClass.getSimpleName)
.withAttributes(ActorAttributes.supervisionStrategy(_ => Supervision.Resume))
}
private def handleMessage(message: CommittableMessage[Array[Byte], String]): Unit = {
...
}
}
How can I get access to the consumer metrics in this case?
We are using the Java Prometheus client, and I solved my issue with a custom collector that fetches its metrics directly from JMX:
import java.lang.management.ManagementFactory
import java.util
import io.prometheus.client.Collector
import io.prometheus.client.Collector.MetricFamilySamples
import io.prometheus.client.CounterMetricFamily
import io.prometheus.client.GaugeMetricFamily
import javax.management.ObjectName
import scala.collection.JavaConverters._
import scala.collection.mutable
class ConsumerMetricsCollector(val labels: Map[String, String] = Map.empty) extends Collector {
val metrics: mutable.Map[String, MetricFamilySamples] = mutable.Map.empty
def collect: util.List[MetricFamilySamples] = {
val server = ManagementFactory.getPlatformMBeanServer
for {
attrType <- List("consumer-metrics", "consumer-coordinator-metrics", "consumer-fetch-manager-metrics")
name <- server.queryNames(new ObjectName(s"kafka.consumer:type=$attrType,client-id=*"), null).asScala
attrInfo <- server.getMBeanInfo(name).getAttributes.filter { _.getType == "double" }
} yield {
val attrName = attrInfo.getName
val metricLabels = attrName.split(",").map(_.split("=").toList).collect {
case "client-id" :: (id: String) :: Nil => ("client-id", id)
}.toList ++ labels
val metricName = "kafka_consumer_" + attrName.replaceAll(raw"""[^\p{Alnum}]+""", "_")
val labelKeys = metricLabels.map(_._1).asJava
val metric = metrics.getOrElseUpdate(metricName,
if(metricName.endsWith("_total") || metricName.endsWith("_sum")) {
new CounterMetricFamily(metricName, attrInfo.getDescription, labelKeys)
} else {
new GaugeMetricFamily(metricName, attrInfo.getDescription, labelKeys)
}: MetricFamilySamples
)
val metricValue = server.getAttribute(name, attrName).asInstanceOf[Double]
val labelValues = metricLabels.map(_._2).asJava
metric match {
case f: CounterMetricFamily => f.addMetric(labelValues, metricValue)
case f: GaugeMetricFamily => f.addMetric(labelValues, metricValue)
case _ =>
}
}
metrics.values.toList.asJava
}
}
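For completeness, the collector still has to be registered with the Prometheus client before it shows up in a scrape; a minimal usage sketch (the label values are only an example):
import io.prometheus.client.CollectorRegistry

// Register the JMX-backed collector with the default Prometheus registry.
val collector = new ConsumerMetricsCollector(Map("service" -> "my-consumer"))
CollectorRegistry.defaultRegistry.register(collector)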

Akka Kafka stream supervision strategy not working

I am running an Akka Streams Kafka application and I want to incorporate the supervision strategy on the stream consumer such that if the broker goes down, and the stream consumer dies after a stop timeout, the supervisor can restart the consumer.
Here is my complete code:
UserEventStream:
import akka.actor.{Actor, PoisonPill, Props}
import akka.kafka.{ConsumerSettings, Subscriptions}
import akka.kafka.scaladsl.Consumer
import akka.stream.scaladsl.Sink
import akka.util.Timeout
import org.apache.kafka.clients.consumer.ConsumerConfig
import org.apache.kafka.common.serialization.{ByteArrayDeserializer, StringDeserializer}
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
import scala.util.{Failure, Success}
import akka.pattern.ask
import akka.stream.ActorMaterializer
class UserEventStream extends Actor {
val settings = Settings(context.system).KafkaConsumers
implicit val timeout: Timeout = Timeout(10 seconds)
implicit val materializer = ActorMaterializer()
override def preStart(): Unit = {
super.preStart()
println("Starting UserEventStream....s")
}
override def receive = {
case "start" =>
val consumerConfig = settings.KafkaConsumerInfo
println(s"ConsumerConfig with $consumerConfig")
startStreamConsumer(consumerConfig("UserEventMessage" + ".c" + 1))
}
def startStreamConsumer(config: Map[String, String]) = {
println(s"startStreamConsumer with config $config")
val consumerSource = createConsumerSource(config)
val consumerSink = createConsumerSink()
val messageProcessor = context.actorOf(Props[MessageProcessor], "messageprocessor")
println("START: The UserEventStream processing")
val future =
consumerSource
.mapAsync(parallelism = 50) { message =>
val m = s"${message.record.value()}"
messageProcessor ? m
}
.runWith(consumerSink)
future.onComplete {
case Failure(ex) =>
println("FAILURE : The UserEventStream processing, stopping the actor.")
self ! PoisonPill
case Success(ex) =>
}
}
def createConsumerSource(config: Map[String, String]) = {
val kafkaMBAddress = config("bootstrap-servers")
val groupID = config("groupId")
val topicSubscription = config("subscription-topic").split(',').toList
println(s"Subscriptiontopics $topicSubscription")
val consumerSettings = ConsumerSettings(context.system, new ByteArrayDeserializer, new StringDeserializer)
.withBootstrapServers(kafkaMBAddress)
.withGroupId(groupID)
.withProperty(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest")
.withProperty(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true")
Consumer.committableSource(consumerSettings, Subscriptions.topics(topicSubscription: _*))
}
def createConsumerSink() = {
Sink.foreach(println)
}
}
StreamProcessorSupervisor (this is the supervisor class of the UserEventStream class):
import akka.actor.{Actor, Props}
import akka.pattern.{Backoff, BackoffSupervisor}
import akka.stream.ActorMaterializer
import stream.StreamProcessorSupervisor.StartClient
import scala.concurrent.duration._
object StreamProcessorSupervisor {
final case object StartSimulator
final case class StartClient(id: String)
def props(implicit materializer: ActorMaterializer) =
Props(classOf[StreamProcessorSupervisor], materializer)
}
class StreamProcessorSupervisor(implicit materializer: ActorMaterializer) extends Actor {
override def preStart(): Unit = {
self ! StartClient(self.path.name)
}
def receive: Receive = {
case StartClient(id) =>
println(s"startCLient with id $id")
val childProps = Props(classOf[UserEventStream])
val supervisor = BackoffSupervisor.props(
Backoff.onFailure(
childProps,
childName = "usereventstream",
minBackoff = 1.second,
maxBackoff = 1.minutes,
randomFactor = 0.2
)
)
context.actorOf(supervisor, name = s"$id-backoff-supervisor")
val userEventStrean = context.actorOf(Props(classOf[UserEventStream]),"usereventstream")
userEventStrean ! "start"
}
}
App (the main application class):
import akka.actor.{ActorSystem, Props}
import akka.stream.ActorMaterializer
object App extends App {
implicit val system = ActorSystem("stream-test")
implicit val materializer = ActorMaterializer()
system.actorOf(StreamProcessorSupervisor.props,"StreamProcessorSupervisor")
}
application.conf:
kafka {
consumer {
num-consumers = "1"
c1 {
bootstrap-servers = "localhost:9092"
bootstrap-servers = ${?KAFKA_CONSUMER_ENDPOINT1}
groupId = "localakkagroup1"
subscription-topic = "test"
subscription-topic = ${?SUBSCRIPTION_TOPIC1}
message-type = "UserEventMessage"
poll-interval = 50ms
poll-timeout = 50ms
stop-timeout = 30s
close-timeout = 20s
commit-timeout = 15s
wakeup-timeout = 10s
max-wakeups = 10
use-dispatcher = "akka.kafka.default-dispatcher"
kafka-clients {
enable.auto.commit = true
}
}
}
}
After running the application, I purposely killed the Kafka broker and found that after 30 seconds the actor stops itself by sending a poison pill. But strangely, it doesn't restart as specified by the BackoffSupervisor strategy.
What could be the issue here?
There are two instances of UserEventStream in your code: one is the child actor that the BackoffSupervisor internally creates with the Props that you pass to it, and the other is the val userEventStrean that is a child of StreamProcessorSupervisor. You're sending the "start" message to the latter, when you should be sending that message to the former.
You don't need val userEventStrean, because the BackoffSupervisor creates the child actor. Messages sent to the BackoffSupervisor are forwarded to the child, so to send a "start" message to the child, send it to the BackoffSupervisor:
class StreamProcessorSupervisor(implicit materializer: ActorMaterializer) extends Actor {
override def preStart(): Unit = {
self ! StartClient(self.path.name)
}
def receive: Receive = {
case StartClient(id) =>
println(s"startCLient with id $id")
val childProps = Props[UserEventStream]
val supervisorProps = BackoffSupervisor.props(...)
val supervisor = context.actorOf(supervisorProps, name = s"$id-backoff-supervisor")
supervisor ! "start"
}
}
The other issue is that when an actor receives a PoisonPill, that's not the same thing as that actor throwing an exception. Therefore, Backoff.onFailure won't be triggered when UserEventStream sends itself a PoisonPill. A PoisonPill stops the actor, so use Backoff.onStop instead:
val supervisorProps = BackoffSupervisor.props(
Backoff.onStop( // <--- use onStop
childProps,
...
)
)
val supervisor = context.actorOf(supervisorProps, name = s"$id-backoff-supervisor")
supervisor ! "start"
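For completeness, here is a consolidated sketch of the corrected handler body, filling in the backoff parameters from the question's original code:
val supervisorProps = BackoffSupervisor.props(
  Backoff.onStop(                  // triggers on child stop (e.g. PoisonPill), unlike onFailure
    Props[UserEventStream],
    childName = "usereventstream",
    minBackoff = 1.second,
    maxBackoff = 1.minute,
    randomFactor = 0.2))
val supervisor = context.actorOf(supervisorProps, name = s"$id-backoff-supervisor")
supervisor ! "start"               // the BackoffSupervisor forwards this to its child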

I want to create a simple singleton cluster and send a message from a remote node

Here I have created a singleton actor. The master and the seed node are the same. From a different project I am trying to join the cluster and send a message. I am able to join the cluster but am not able to send a message.
My master and seed node:
package Demo
import akka.actor._
import akka.cluster.singleton.{ClusterSingletonManager, ClusterSingletonManagerSettings, ClusterSingletonProxy, ClusterSingletonProxySettings}
import com.typesafe.config.ConfigFactory
import scala.concurrent.duration._
object MainObject1 extends App{
DemoMain1.start(8888)
}
object DemoMain1 {
val role = "test"
val singletonname = "Ruuner"
val mastername = "Master"
def start(port: Int): ActorSystem = {
val conf = ConfigFactory.parseString(
s"""
|akka.actor.provider = "akka.cluster.ClusterActorRefProvider"
|
|akka.remote.netty.tcp.port = $port
|akka.remote.netty.tcp.hostname = 127.0.0.1
|akka.cluster.roles = ["$role"]
|akka.cluster.seed-nodes = ["akka.tcp://DemoMain1#127.0.0.1:8888"]
""".stripMargin
)
val system = ActorSystem("DemoMain1", conf)
val settings = ClusterSingletonManagerSettings(system).withRole(role)
val manager = ClusterSingletonManager.props(Props[DemoMain1], PoisonPill, settings)
val actor=system.actorOf(manager, mastername)
system
}
class DemoMain1 extends Actor with Identification {
import context._
override def preStart(): Unit = {
println(s"Master is created with id $id in $system")
println(self.path.address.host)
system.scheduler.scheduleOnce(100.seconds, self, 'exit)
}
override def receive : Receive={
case 'exit => println(s"$id is exiting")
context stop self
//SupervisorStrategy.Restart
case msg => println(s"message from $system is $msg ")
sender() ! 'how
}
}
}
The other node, which is trying to join the cluster and send a message:
import akka.actor._
import akka.cluster.singleton.{ClusterSingletonProxy, ClusterSingletonProxySettings}
import com.typesafe.config.ConfigFactory
import scala.concurrent.duration._
object Ping extends App{
def ping: ActorSystem = {
val conf = ConfigFactory.parseString(
s"""
|akka.actor.provider = "akka.cluster.ClusterActorRefProvider"
|
|akka.remote.netty.tcp.port = 0
|akka.remote.netty.tcp.hostname = 127.0.0.1
|akka.cluster.roles = ["slave"]
|akka.cluster.seed-nodes = ["akka.tcp://DemoMain1#127.0.0.1:8888"]
|akka.cluster.auto-down-unreachable-after = 10s
""".stripMargin
)
val system = ActorSystem("DemoMain1", conf)
system.actorOf(Props[Ping])
system
}
class Ping extends Actor {
import context._
val path = "akka.tcp://DemoMain1#127.0.0.1:8888/DemoMain1/user/Master/actor"
val settings = ClusterSingletonProxySettings(system).withRole("slave")
val actor = context.actorOf(ClusterSingletonProxy.props(path, settings))
val pingInterval = 1.seconds
override def preStart(): Unit = {
system.scheduler.schedule(pingInterval, pingInterval) {
println(s"Locate Ping $system")
actor ! 'hi
}
}
override def receive: Receive = {
case msg => println(s"The message is $msg")
}
}
Ping.ping
}
Even if I give the IP addresses of the systems, the message is still not sent.
It appears the role in your ClusterSingletonProxySettings(system).withRole("slave") settings for your Ping actor doesn't match that of your ClusterSingletonManagerSettings(system).withRole(role) where role = "test".
ClusterSingletonProxy is supposed to be present on all nodes with the specified role on which the cluster is set up, hence its role settings should match ClusterSingletonManager's. Here's a sample configuration.
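Applied to the question's Ping actor, that would mean something like the following sketch (it assumes the manager is registered at /user/Master, as in DemoMain1, and uses role = "test"; the proxy actor name is arbitrary):
// The proxy's role must match the ClusterSingletonManager's role ("test"),
// not the role of the node the proxy happens to run on ("slave").
val settings = ClusterSingletonProxySettings(context.system).withRole("test")
val proxy = context.actorOf(
  ClusterSingletonProxy.props(singletonManagerPath = "/user/Master", settings = settings),
  name = "masterProxy")
proxy ! 'hi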

Akka-stream and delegating processing to an actor

I have the following case, where I'm trying to delegate processing to an actor. What I want to happen is that whenever my flow processes a message, it sends it to the actor, and the actor will uppercase it and write it to the stream as a response.
So I should be able to connect to port 8000, type in "hello", have the flow send it to the actor, and have the actor publish it back to the stream so it's echoed back to me uppercased. The actor itself is pretty basic, from the ActorPublisher example in the docs.
I know this code doesn't work; I cleaned up my experiments just enough to get it to compile. Right now, it's just two separate streams. I tried to experiment with merging the sources or the sinks, to no avail.
object Sample {
def main(args: Array[String]): Unit = {
implicit val system = ActorSystem("sample")
implicit val materializer = ActorMaterializer()
val connections: Source[IncomingConnection,
Future[ServerBinding]] = Tcp().bind("localhost", 8000)
val filter = Source.actorPublisher[ByteString](Props[Filter])
val filterRef = Flow[ByteString]
.to(Sink.ignore)
.runWith(filter)
connections runForeach { conn =>
val echo = Flow[ByteString] .map {
// would like to send 'p' to the actor,
// and have it publish to the stream
case p:ByteString => filterRef ! p
}
}
}
}
// this actor is supposed to simply uppercase all
// input and write it to the stream
class Filter extends ActorPublisher[ByteString] with Actor
{
var buf = Vector.empty[ByteString]
val delay = 0
def receive = {
case p: ByteString =>
if (buf.isEmpty && totalDemand > 0)
onNext(p)
else {
buf :+= ByteString(p.utf8String.toUpperCase)
deliverBuf()
}
case Request(_) =>
deliverBuf()
case Cancel =>
context.stop(self)
}
@tailrec final def deliverBuf(): Unit =
if (totalDemand > 0) {
if (totalDemand <= Int.MaxValue) {
val (use, keep) = buf.splitAt(totalDemand.toInt)
buf = keep
use foreach onNext
} else {
val (use, keep) = buf.splitAt(Int.MaxValue)
buf = keep
use foreach onNext
deliverBuf()
}
}
}
I've had this problem too; I solved it in a bit of a roundabout way, and hopefully you're OK with this approach. Essentially, it involves creating a sink that immediately forwards the messages it gets to the source actor.
Of course, you can use a direct flow (commented it out), but I guess that's not the point of this exercise :)
object Sample {
def main(args: Array[String]): Unit = {
implicit val system = ActorSystem("sample")
implicit val materializer = ActorMaterializer()
val connections: Source[IncomingConnection,
Future[ServerBinding]] = Tcp().bind("localhost", 8000)
def filterProps = Props[Filter]
connections runForeach { conn =>
val actorRef = system.actorOf(filterProps)
val snk = Sink.foreach[ByteString]{s => actorRef ! s}
val src = Source.fromPublisher(ActorPublisher[ByteString](actorRef))
conn.handleWith(Flow.fromSinkAndSource(snk, src))
// conn.handleWith(Flow[ByteString].map(s => ByteString(s.utf8String.toUpperCase())))
}
}
}
// this actor is supposed to simply uppercase all
// input and write it to the stream
class Filter extends ActorPublisher[ByteString]
{
import akka.stream.actor.ActorPublisherMessage._
import scala.collection.mutable
var buf = mutable.Queue.empty[String]
val delay = 0
def receive = {
case p: ByteString =>
buf += p.utf8String.toUpperCase
deliverBuf()
case Request(n) =>
deliverBuf()
case Cancel =>
context.stop(self)
}
def deliverBuf(): Unit = {
while (totalDemand > 0 && buf.nonEmpty) {
val s = ByteString(buf.dequeue() + "\n")
onNext(s)
}
}
}

akka cluster singleton proxy cannot identify actor on non-leader node

I was trying to build a two-node Akka cluster, and both of the nodes are configured as seed nodes. On each node I create an actor and locate it in the ways shown below:
akka {
//loglevel = "DEBUG"
log-config-on-start = on
actor {
provider = "akka.cluster.ClusterActorRefProvider"
}
remote {
log-remote-lifecycle-events = off
netty.tcp {
hostname = "127.0.0.1"
port = 0
}
}
cluster {
seed-nodes = [
"akka.tcp://ClusterSystem#127.0.0.1:2551",
"akka.tcp://ClusterSystem#127.0.0.1:2552"
]
roles = [
"transport"
]
auto-down = on
}
}
final String SINGLETON_GROUP = "client";
getContext().system().actorOf(
ClusterSingletonManager.defaultProps(
Props.create(ClientActor.class, channelActive.getCtx()),
loginMessage.getId(),
PoisonPill.getInstance(),
"transport"
), SINGLETON_GROUP
);
private ActorRef getActor(String id) {
ActorRef remoteActor = getContext().system().actorOf(
ClusterSingletonProxy.defaultProps("user/" + SINGLETON_GROUP + "/" + id,
"transport"));
return remoteActor;
}
What I expected is that I can create an actor on any node and locate it from anywhere as long as I have its singleton path.
However, the result is that getActor() only works on the "leader" node and cannot identify the actor from the other nodes.
Have I misunderstood how ClusterSingleton works?