akka getContext().become() HotSwap not receiving messages

I have this code in Akka and am playing with become(). I need to understand why it receives only the first message and then ignores the rest:
package ping_pong;

import akka.actor.AbstractActor;
import akka.actor.ActorRef;
import akka.actor.ActorSystem;
import akka.actor.Props;

public class PingPongActor extends AbstractActor {
    public static void main(String[] args) throws Exception {
        ActorSystem _system = ActorSystem.create("PingPongActorApp");
        ActorRef masterpp = _system.actorOf(Props.create(PingPongActor.class), "pp");
        masterpp.tell(PING, masterpp);
        System.out.println("after first msg");
        masterpp.tell(PING, masterpp);
        masterpp.tell(PING, masterpp);
        masterpp.tell(PING, masterpp);
        masterpp.tell(PING, masterpp);
        masterpp.tell(PING, masterpp);
        System.out.println("last msg");
    }

    static String PING = "PING";
    static String PONG = "PONG";
    int count = 0;

    @Override
    public Receive createReceive() {
        return receiveBuilder().match(String.class, ua -> {
            if (ua.matches(PING)) {
                System.out.println("PING" + count);
                count += 1;
                Thread.sleep(100);
                if (count <= 10) {
                    getSelf().tell(PONG, getSelf());
                }
                getContext().become(receiveBuilder().match(String.class, ua1 -> {
                    if (ua1.matches(PONG)) {
                        System.out.println("PONG" + count);
                        count += 1;
                        Thread.sleep(100);
                        getContext().unbecome();
                    }
                }).build());
                if (count > 10) {
                    System.out.println("DONE" + count);
                    getContext().stop(getSelf());
                }
            }
        }).build();
    }
}
It gives this result:
21:36:34.098 [PingPongActorApp-akka.actor.default-dispatcher-4] INFO akka.event.slf4j.Slf4jLogger - Slf4jLogger started
after first msg
last msg
PING0
PONG1
The question is: why does it ignore the other PING (and also PONG) messages?

When you call getContext().become(), you are replacing the entire Receive for that actor. So with this:
getContext().become(receiveBuilder().match(String.class, ua1 -> {
    if (ua1.matches(PONG)) {
        System.out.println("PONG" + count);
        count += 1;
        Thread.sleep(100);
        getContext().unbecome();
    }
}).build());
you are installing a Receive that will only respond to messages matching PONG.
As an aside: Thread.sleep is basically the single worst thing you can do in an actor, as it prevents the actor from doing anything and also consumes a dispatcher thread. It would be a much better idea to schedule a message to yourself in 100 millis which would then trigger the unbecome.
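A minimal sketch of that idea, assuming a recent Akka classic version whose Scheduler accepts a java.time.Duration (older releases take a scala.concurrent.duration.FiniteDuration instead):
// instead of Thread.sleep(100) followed by an immediate tell:
getContext().getSystem().scheduler().scheduleOnce(
        java.time.Duration.ofMillis(100), // delay
        getSelf(),                        // receiver
        PONG,                             // message to deliver
        getContext().getDispatcher(),     // executor to schedule on
        getSelf());                       // sender
The actor stays responsive and the dispatcher thread is free during those 100 ms.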

Related

Sending messages and roadside to roadside (R2R) communication in Veins

I'm new to OMNeT++/Veins and I am trying to create my own application. So first of all, I have done this in the existing TraCIDemo11p files (I have just kept the file names and modified the code).
In the first step, I want to make all nodes send a HelloMsg (a new packet for which I have created the .msg, .h and .cc files).
To understand how messages are exchanged between nodes, I launched the simulation and everything runs, but I cannot tell whether the messages are received by the nodes or not.
This is a screenshot of what I have: [screenshot omitted]
I followed the transmission of the message through the application, MAC and PHY layers. I can see that the message is successfully transmitted by node1, for example. But does the message on node[0], "packet was not detected by the card. power was under sensitivity threshold", mean that the packet was not received by node[0]? If that is the case, how can I fix it? Also, I cannot find the source file of this message (apparently PhyLayer80211p.cc or BasePhyLayer.cc, but I cannot find it there).
In the second step, I want to use two RSUs. Nodes broadcast a HelloMsg and then each RSU repeats the received signal. To clarify, this is exactly what I have:
First of all, I add another RSU to the Veins example as follows:
##########################################################
# RSU SETTINGS #
# #
# #
##########################################################
*.rsu[0].mobility.x = 6490
*.rsu[0].mobility.y = 1000
*.rsu[0].mobility.z = 3
*.rsu[1].mobility.x = 7491
*.rsu[1].mobility.y = 1000
*.rsu[1].mobility.z = 3
*.rsu[*].applType = "TraCIDemoRSU11p"
*.rsu[*].appl.headerLength = 80 bit
*.rsu[*].appl.sendBeacons = false
*.rsu[*].appl.dataOnSch = false
*.rsu[*].appl.beaconInterval = 1s
*.rsu[*].appl.beaconUserPriority = 7
*.rsu[*].appl.dataUserPriority = 5
Also, I made two maxInterferenceDistance values, one for the nodes and the other for the RSUs:
##########################################################
# 11p specific parameters #
# #
# NIC-Settings #
##########################################################
*.connectionManager.sendDirect = true
*.connectionManager.maxInterfDist = 1000m #2600m
*.connectionManager.drawMaxIntfDist = false #false
*.connectionManager.maxInterfDistNodes = 300m
*.connectionManager.drawMaxIntfDistNodes = false
*.**.nic.mac1609_4.useServiceChannel = false
*.**.nic.mac1609_4.txPower = 20mW
*.**.nic.mac1609_4.bitrate = 6Mbps
*.**.nic.phy80211p.sensitivity = -89dBm
*.**.nic.phy80211p.useThermalNoise = true
*.**.nic.phy80211p.thermalNoise = -110dBm
*.**.nic.phy80211p.decider = xmldoc("config.xml")
*.**.nic.phy80211p.analogueModels = xmldoc("config.xml")
*.**.nic.phy80211p.usePropagationDelay = true
*.**.nic.phy80211p.antenna = xmldoc("antenna.xml", "/root/Antenna[@id='monopole']")
To make the transmission range of the RSUs different from that of the nodes, I made this change in the isInRange function of BaseConnectionManager:
bool BaseConnectionManager::isInRange(BaseConnectionManager::NicEntries::mapped_type pFromNic, BaseConnectionManager::NicEntries::mapped_type pToNic)
{
    double dDistance = 0.0;
    if ((pFromNic->hostId == 7) || (pFromNic->hostId == 8)) {
        EV << "RSU In range from: " << pFromNic->getName() << " " << pFromNic->hostId << " to: " << pToNic->getName() << " " << pToNic->hostId << "\n";
        if (useTorus) {
            dDistance = sqrTorusDist(pFromNic->pos, pToNic->pos, *playgroundSize);
        } else {
            dDistance = pFromNic->pos.sqrdist(pToNic->pos);
        }
        return (dDistance <= maxDistSquared);
    } else {
        if (useTorus) {
            dDistance = sqrTorusDist(pFromNic->pos, pToNic->pos, *playgroundSize);
        } else {
            dDistance = pFromNic->pos.sqrdist(pToNic->pos);
        }
        return (dDistance <= maxDistSquaredNodes);
    }
}
Where node IDs 7 and 8 are the RSUs in the scenario I run.
In addition, I have TraCIDemo11p (for nodes) and TraCIDemoRSU11p (for RSUs) modified as follows:
- In TraCIDemo11p, nodes broadcast a Hello message to all their neighbors when they enter the network. The code is:
void TraCIDemo11p::initialize(int stage) {
    BaseWaveApplLayer::initialize(stage);
    if (stage == 0) {
        HelloMsg *msg = createMsg();
        SendHello(msg);
    }
}

HelloMsg* TraCIDemo11p::createMsg() {
    int source_id = myId;
    double t0 = 0;
    int port = 0;
    char msgName[20];
    sprintf(msgName, "send Hello from %d at %f from gate %d", source_id, t0, port);
    HelloMsg* msg = new HelloMsg(msgName);
    populateWSM(msg);
    return msg;
}

void TraCIDemo11p::SendHello(HelloMsg* msg) {
    findHost()->getDisplayString().updateWith("r=16,green");
    msg->setSource_id(myId);
    cMessage* mm = dynamic_cast<cMessage*>(msg);
    scheduleAt(simTime() + 10 + uniform(0.01, 0.02), mm);
}

void TraCIDemo11p::handleSelfMsg(cMessage* msg) {
    if (dynamic_cast<HelloMsg*>(msg)) {
        HelloMsg* recv = dynamic_cast<HelloMsg*>(msg);
        ASSERT(recv);
        int sender = recv->getSource_id();
        if (sender == myId) {
            EV << myId << " broadcasting Hello Message \n";
            recv->setT0(SIMTIME_DBL(simTime()));
            sendDown(recv->dup());
        }
    }
    else {
        BaseWaveApplLayer::handleSelfMsg(msg);
    }
}

void TraCIDemo11p::onHelloMsg(HelloMsg* hmsg) {
    if ((hmsg->getSource_id() == 7) || (hmsg->getSource_id() == 8)) {
        EV << "Node: " << myId << " receiving HelloMsg from rsu: " << hmsg->getSource_id() << "\n";
    } else {
        EV << "Node: " << myId << " receiving HelloMsg " << hmsg->getKind() << " from node: " << hmsg->getSource_id() << "\n";
        NBneighbors++;
        neighbors.push_back(hmsg->getSource_id());
        EV << "Node: " << myId << " neighbors list: ";
        list<int>::iterator it = neighbors.begin();
        while (it != neighbors.end()) {
            EV << *it << " ";
            it++;
        }
    }
}

void TraCIDemo11p::handlePositionUpdate(cObject* obj) {
    BaseWaveApplLayer::handlePositionUpdate(obj);
}
On the other hand, the RSUs just repeat the messages they receive from nodes. So I have this in TraCIDemoRSU11p:
void TraCIDemoRSU11p::onHelloMsg(HelloMsg* hmsg) {
    if ((hmsg->getSource_id() != 7) && (hmsg->getSource_id() != 8)) {
        EV << "RSU: " << myId << " receiving HelloMsg " << hmsg->getKind() << " from node: " << hmsg->getSource_id() << " at: " << SIMTIME_DBL(simTime()) << " \n";
        //HelloMsg *msg = createMsg();
        //SendHello(msg);
        hmsg->setSenderAddress(myId);
        hmsg->setSource_id(myId);
        sendDelayedDown(hmsg->dup(), 2 + uniform(0.01, 0.2));
    }
    else {
        EV << "Successful connection between RSUs \n";
        EV << "RSU: " << myId << " receiving HelloMsg " << hmsg->getKind() << " from node: " << hmsg->getSource_id() << "\n";
    }
}
After the execution of this code, I can see that:
- a few vehicles receive the hello message from their neighbors;
- just a few messages are received by the two RSUs;
- each RSU repeats the signal it receives, but there is no communication between the two RSUs, which are supposed to be within transmission range of one another.
And I always have a lot of the message "packet was not detected by the card. power was under sensitivity threshold" printed on my screen.
Is there a problem with the transmission range, or is it a question of interference? I would also like to mention that in the analysis there is no packet loss.
Thanks in advance.
Please help.

Simple program with Callable Future never terminates

I was playing around with Callable and Future and stumbled upon an issue.
This is a piece of code that never terminates and times out, even though the IDE allows 5 seconds to run and the code does not need more than 3 seconds (it gives a Time Limit Exceeded error): https://ideone.com/NcL0YV
/* package whatever; // don't place package name! */

import java.util.*;
import java.util.concurrent.*;
import java.lang.*;
import java.io.*;

/* Name of the class has to be "Main" only if the class is public. */
class Ideone
{
    public static void main(String[] args) throws java.lang.Exception
    {
        Ideone obj = new Ideone();
        Future<Integer> res = obj.doCallable();
        System.out.println(res.get());
    }

    public Future<Integer> calculate(Integer input) {
        ExecutorService executor = Executors.newFixedThreadPool(1);
        return executor.submit(() -> {
            long start = System.currentTimeMillis();
            Thread.sleep(2000);
            System.out.println("Sleep time in ms = " + (System.currentTimeMillis() - start));
            return input * input;
        });
    }

    public Future<Integer> doCallable() {
        int value = 99;
        try {
            Callable<Future> callable = () -> calculate(value);
            Future<Integer> future = callable.call();
            return future;
        } catch (final Exception e) {
            e.printStackTrace();
            throw new RuntimeException(e);
        }
    }
}
This is a similar piece of code that terminates because I added System.exit(0) (which is not advisable): https://ideone.com/HDvl7y
/* package whatever; // don't place package name! */

import java.util.*;
import java.util.concurrent.*;
import java.lang.*;
import java.io.*;

/* Name of the class has to be "Main" only if the class is public. */
class Ideone
{
    public static void main(String[] args) throws java.lang.Exception
    {
        Ideone obj = new Ideone();
        Future<Integer> res = obj.doCallable();
        System.out.println(res.get());
        System.exit(0);
    }

    public Future<Integer> calculate(Integer input) {
        ExecutorService executor = Executors.newFixedThreadPool(1);
        return executor.submit(() -> {
            long start = System.currentTimeMillis();
            Thread.sleep(2000);
            System.out.println("Sleep time in ms = " + (System.currentTimeMillis() - start));
            return input * input;
        });
    }

    public Future<Integer> doCallable() {
        int value = 99;
        try {
            Callable<Future> callable = () -> calculate(value);
            Future<Integer> future = callable.call();
            return future;
        } catch (final Exception e) {
            e.printStackTrace();
            throw new RuntimeException(e);
        }
    }
}
Please help me understand why we need System.exit(0) or shutdown() even though the callable task is complete (the future.get() call is a blocking call).
EDIT:
I did all of the above to solve the main issue of ever-increasing threads in my application, caused by the code snippet below. I am not sure how to complete this future automatically after a certain timeout without involving the main thread (which exits right away).
@Override
public void publish(@NonNull final String message,
                    @NonNull final String topicArn) throws PublishingException {
    if (!publishAsync(message, topicArn)) {
        throw new PublishingException("Publish attempt failed for the message:"
                + message);
    }
}

private boolean publishAsync(final String message,
                             final String topicArn) {
    Callable<Future> publishCallable = () -> snsClient.publishAsync(topicArn, message);
    try {
        Future<PublishResult> result = publishCallable.call();
        log.debug("Asynchronously published message {} to SNS topic {}.", message, topicArn);
        return !result.isDone() || result.get().getMessageId() != null;
    } catch (final Exception e) {
        return false;
    }
}
From the ExecutorService Javadoc:
void shutdown()
Initiates an orderly shutdown in which previously submitted tasks are executed, but no new tasks will be accepted. Invocation has no additional effect if already shut down.
This method does not wait for previously submitted tasks to complete execution. Use awaitTermination to do that.
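The underlying cause: Executors.newFixedThreadPool creates non-daemon worker threads, and the JVM cannot exit while any non-daemon thread is alive, even after the submitted task has finished. A minimal sketch of one fix, applied to the calculate method from the question: call shutdown() right after submitting. The already-submitted task still runs to completion, the Future still completes, and the worker thread then dies, so the JVM can exit without System.exit(0):
public Future<Integer> calculate(Integer input) {
    ExecutorService executor = Executors.newFixedThreadPool(1);
    Future<Integer> future = executor.submit(() -> {
        Thread.sleep(2000);
        return input * input;
    });
    // shutdown() does not cancel the task above; it only stops the pool from
    // accepting new work. Once the task finishes, the worker thread terminates
    // and no longer keeps the JVM alive.
    executor.shutdown();
    return future;
}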

vert.x: publish and consume messages from event bus

I wrote the following code:
public class VertxApp {
    public static void main(String[] args) { // This is OK
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(new ReceiveVerticle()); // line A
        vertx.deployVerticle(new SendVerticle());    // line B
    }
}
public class ReceiveVerticle extends AbstractVerticle {
    @Override
    public void start(Future<Void> startFuture) {
        vertx.eventBus().consumer("address", message -> {
            System.out.println("message received by receiver");
            System.out.println(message.body());
        });
    }
}
public class SendVerticle extends AbstractVerticle {
    @Override
    public void start(Future<Void> startFuture) throws InterruptedException {
        System.out.println("SendVerticle started!");
        int i = 0;
        for (i = 0; i < 5; i++) {
            System.out.println("Sender sends a message " + i);
            vertx.eventBus().publish("address", "message" + i);
        }
    }
}
This code behaves inconsistently; there is a race condition. If I run it several times, sometimes all 5 messages sent are consumed, and sometimes none of them is.
Can you please explain why there is a race condition here and how it can be solved?
There is no race condition: deploying a verticle is an asynchronous operation, and your receiver verticle may register its consumer after the sender verticle has already sent the messages.
To make sure the operations happen in order, use the deploy method that takes a completion handler:
Vertx vertx = Vertx.vertx();
vertx.deployVerticle(new ReceiveVerticle(), ar -> {
    if (ar.succeeded()) {
        vertx.deployVerticle(new SendVerticle());
    } else {
        // handle the problem -> ar.cause()
    }
});
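Note also that both verticles here override start(Future<Void>) without ever calling startFuture.complete(), so the deployment is never reported as complete; either complete that future at the end of start or override the no-argument start() instead. And if you are on Vert.x 4, deployVerticle returns a Future<String> (the deployment ID), so the same ordering can be written by composition. A small sketch, assuming the Vert.x 4 API:
Vertx vertx = Vertx.vertx();
vertx.deployVerticle(new ReceiveVerticle())
    // runs only after ReceiveVerticle's start() has completed
    .compose(id -> vertx.deployVerticle(new SendVerticle()))
    .onFailure(Throwable::printStackTrace);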

Akka infinite loop dead letter

I have two UntypedActors which will exchange messages in an infinite loop.
class ActorA extends UntypedActor {
    @Override
    public void onReceive(Object message) throws Throwable {
        if (message instanceof String) {
            ActorRef actorb = getContext().actorOf(Props.create(ActorB.class));
            actorb.tell(0, getSelf());
        }
        if (message instanceof Integer) {
            int count = (Integer) message;
            System.out.println("ActorA: " + count++);
            getSender().tell(count, getSelf());
        }
    }
}

class ActorB extends UntypedActor {
    @Override
    public void onReceive(Object message) throws Throwable {
        if (message instanceof Integer) {
            int count = (Integer) message;
            System.out.println("ActorB: " + count++);
            getSender().tell(count, getSelf());
        }
    }
}
Below is the code which executes the above actors.
ActorSystem system = ActorSystem.create("bb");
ActorRef master = system.actorOf(Props.create(ActorA.class), "ddd");
master.tell("Msg", master);
system.shutdown();
After many iterations, I suddenly receive:
from Actor[akka://bb/user/ddd/$a#-1708720414] to Actor[akka://bb/user/ddd#250962293] was not delivered. [1] dead letters encountered.
My question is: why, after multiple loops between the two actors, do I finally receive a dead letter? I know that it's connected with memory allocation, but I got only a little over 100 loops before the dead letter appeared.
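One detail worth noting in the driver code above: system.shutdown() is called immediately after the first tell, so the actor system begins terminating while the two actors are still ping-ponging. Once the actors are stopped, any message still in flight between them is routed to dead letters, which would explain the dead letter appearing after an arbitrary number of iterations (however many fit in before shutdown completes). A minimal sketch of a driver that gives the exchange time to finish (demo only; a real application would coordinate shutdown explicitly):
ActorSystem system = ActorSystem.create("bb");
ActorRef master = system.actorOf(Props.create(ActorA.class), "ddd");
master.tell("Msg", master);
Thread.sleep(5000); // demo only: let the actors exchange messages first
system.shutdown(); // terminate the system once the exchange is done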

How to use storm Trident for batching tuples?

I was using Storm previously, and I need more batching capabilities, so I searched for batching in Storm.
And I found Trident, which does micro-batching in real time.
But somehow I cannot figure out how Trident handles micro-batching (flow, batch size, batch interval) to know whether it really has what I need.
What I would like to do is collect/save the tuples emitted by a spout over an interval and re-emit them to a downstream component/bolt/function at another interval.
(For example, the spout emits one tuple per second, and the next Trident function collects/saves the tuples and emits 50 tuples per minute to the next function.)
Can somebody guide me on how I can apply Trident in this case?
Or is there any other applicable way using Storm features?
Excellent question! But sadly this kind of micro-batching is not supported out of the Trident box.
But you can try implementing your own frequency-driven micro-batching. Something like this skeleton example:
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.LinkedBlockingQueue;

import org.apache.storm.Constants;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Tuple;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
public class MicroBatchingBolt extends BaseRichBolt {

    private static final long serialVersionUID = 8500984730263268589L;
    private static final Logger LOG = LoggerFactory.getLogger(MicroBatchingBolt.class);

    protected LinkedBlockingQueue<Tuple> queue = new LinkedBlockingQueue<Tuple>();

    /** The threshold after which the batch should be flushed out. */
    int batchSize = 100;

    /**
     * The batch interval in sec. Minimum time between flushes if the batch sizes
     * are not met. This should typically be equal to
     * topology.tick.tuple.freq.secs and half of topology.message.timeout.secs
     */
    int batchIntervalInSec = 45;

    /** The last batch process time in seconds. Used for tracking purposes. */
    long lastBatchProcessTimeSeconds = 0;

    private OutputCollector collector;

    @Override
    @SuppressWarnings("rawtypes")
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }
    @Override
    public void execute(Tuple tuple) {
        // Check if the tuple is of type Tick Tuple
        if (isTickTuple(tuple)) {
            // If so, it is an indication for a batch flush. But don't flush if the
            // previous flush was done very recently (either because the batch size
            // threshold was crossed or because of another tick tuple).
            if ((System.currentTimeMillis() / 1000 - lastBatchProcessTimeSeconds) >= batchIntervalInSec) {
                LOG.debug("Current queue size is " + this.queue.size()
                        + ". But received tick tuple so executing the batch");
                finishBatch();
            } else {
                LOG.debug("Current queue size is " + this.queue.size()
                        + ". Received tick tuple but last batch was executed "
                        + (System.currentTimeMillis() / 1000 - lastBatchProcessTimeSeconds)
                        + " seconds back that is less than " + batchIntervalInSec
                        + " so ignoring the tick tuple");
            }
        } else {
            // Add the tuple to the queue. But don't ack it yet.
            this.queue.add(tuple);
            int queueSize = this.queue.size();
            LOG.debug("current queue size is " + queueSize);
            if (queueSize >= batchSize) {
                LOG.debug("Current queue size is >= " + batchSize
                        + " executing the batch");
                finishBatch();
            }
        }
    }
    private boolean isTickTuple(Tuple tuple) {
        // Standard tick-tuple check: tick tuples come from the system component
        // on the dedicated system tick stream.
        return tuple.getSourceComponent().equals(Constants.SYSTEM_COMPONENT_ID)
                && tuple.getSourceStreamId().equals(Constants.SYSTEM_TICK_STREAM_ID);
    }
    /**
     * Finish batch.
     */
    public void finishBatch() {
        LOG.debug("Finishing batch of size " + queue.size());
        lastBatchProcessTimeSeconds = System.currentTimeMillis() / 1000;
        List<Tuple> tuples = new ArrayList<Tuple>();
        queue.drainTo(tuples);
        for (Tuple tuple : tuples) {
            // Prepare your batch here (may it be JDBC, HBase, ElasticSearch, Solr or
            // anything else).
            // List<Response> responses = externalApi.get("...");
        }
        try {
            // Execute your batch here and ack or fail the tuples
            LOG.debug("Executed the batch. Processing responses.");
            // for (int counter = 0; counter < responses.length; counter++) {
            //     if (response.isFailed()) {
            //         LOG.error("Failed to process tuple # " + counter);
            //         this.collector.fail(tuples.get(counter));
            //     } else {
            //         LOG.debug("Successfully processed tuple # " + counter);
            //         this.collector.ack(tuples.get(counter));
            //     }
            // }
        } catch (Exception e) {
            LOG.error("Unable to process " + tuples.size() + " tuples", e);
            // Fail the entire batch
            for (Tuple tuple : tuples) {
                this.collector.fail(tuple);
            }
        }
    }
    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // ...
    }
}
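For the tick tuples to arrive at all, the bolt has to request them. A minimal sketch of the usual way, relying on the standard Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS setting (add this method to MicroBatchingBolt, plus an import of org.apache.storm.Config):
    // Ask Storm to send this bolt a tick tuple every batchIntervalInSec seconds,
    // so finishBatch() also runs when traffic is too slow to reach batchSize.
    @Override
    public Map<String, Object> getComponentConfiguration() {
        Config conf = new Config();
        conf.put(Config.TOPOLOGY_TICK_TUPLE_FREQ_SECS, batchIntervalInSec);
        return conf;
    }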
Source: http://hortonworks.com/blog/apache-storm-design-pattern-micro-batching/ and "Using tick tuples with trident in storm".