My problem is that my controller finishes successfully, but Twig/the view then renders a blank page. The request returns no HTML.
My controller ends fine with:
$logger->info('Now sending song [' . $song->getId() . ' to twig');
return array('song'=>$song);
}
As I have no error, it's difficult to locate the problem. All I have is the dev.log from monolog:
[2011-10-04 07:41:03] app.INFO: Now sending song [672] to twig [] []
[2011-10-04 07:41:03] event.DEBUG: Notified event "kernel.view" to listener "Sensio\Bundle\FrameworkExtraBundle\EventListener\TemplateListener::onKernelView". [] []
[2011-10-04 07:41:03] event.DEBUG: Listener "Sensio\Bundle\FrameworkExtraBundle\EventListener\TemplateListener::onKernelView" stopped propagation of the event "kernel.view". [] []
[2011-10-04 07:41:03] event.DEBUG: Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\ResponseListener::onKernelResponse". [] []
[2011-10-04 07:41:03] event.DEBUG: Notified event "kernel.response" to listener "Symfony\Bundle\SecurityBundle\EventListener\ResponseListener::onKernelResponse". [] []
[2011-10-04 07:41:03] event.DEBUG: Notified event "kernel.response" to listener "Symfony\Bridge\Monolog\Handler\FirePHPHandler::onKernelResponse". [] []
[2011-10-04 07:41:03] event.DEBUG: Notified event "kernel.response" to listener "Sensio\Bundle\FrameworkExtraBundle\EventListener\CacheListener::onKernelResponse". [] []
[2011-10-04 07:41:03] event.DEBUG: Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\ProfilerListener::onKernelResponse". [] []
[2011-10-04 07:41:03] event.DEBUG: Notified event "kernel.response" to listener "Symfony\Bundle\WebProfilerBundle\EventListener\WebDebugToolbarListener::onKernelResponse". [] []
I think the third log line (the TemplateListener stopping propagation of kernel.view) must be the problem, but I don't have a clue what it means or how to resolve it.
EDIT:
If I change my controller:
- remove the @Template() annotation
- the end now looks like:
$logger->info('Now sending song [' . $song->getId() . '] to twig');
$content = $this->renderView('MyPadBundle:Play:index.html.twig', array('song'=>$song));
$logger->info('content rendered');
return new Response($content);
Then my dev.log is:
[2011-10-04 09:42:52] app.INFO: Now sending song [189] to twig [] []
[2011-10-04 09:42:53] app.INFO: content rendered [] []
[2011-10-04 09:42:53] event.DEBUG: Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\ResponseListener::onKernelResponse". [] []
[2011-10-04 09:42:53] event.DEBUG: Notified event "kernel.response" to listener "Symfony\Bundle\SecurityBundle\EventListener\ResponseListener::onKernelResponse". [] []
[2011-10-04 09:42:53] event.DEBUG: Notified event "kernel.response" to listener "Symfony\Bridge\Monolog\Handler\FirePHPHandler::onKernelResponse". [] []
[2011-10-04 09:42:53] event.DEBUG: Notified event "kernel.response" to listener "Sensio\Bundle\FrameworkExtraBundle\EventListener\CacheListener::onKernelResponse". [] []
[2011-10-04 09:42:53] event.DEBUG: Notified event "kernel.response" to listener "Symfony\Component\HttpKernel\EventListener\ProfilerListener::onKernelResponse". [] []
[2011-10-04 09:42:53] event.DEBUG: Notified event "kernel.response" to listener "Symfony\Bundle\WebProfilerBundle\EventListener\WebDebugToolbarListener::onKernelResponse". [] []
It's like the kernel just can't return the response?
I don't know what's wrong, but this does solve it:
$logger->info('Now sending song [' . $song->getId() . '] to twig');
$em->clear();
return array('song'=>$song);
Should've known it would be Doctrine, given the lack of error reporting. Anyway, the table is only 1000 rows, so I'm not sure how it can be too much for Twig to handle, especially since I only pass one row/entity.
Have to admit I'm disappointed with Doctrine; or maybe the problem lies in bad design of Symfony's templating?
I am facing a strange issue in SQS. Let me simplify my use case: I have 7 messages in a FIFO queue, and my standalone app should keep polling the messages in the same sequence infinitely for my business case. For instance, my app reads message1, and after some business processing the app deletes it and reposts the same message into the same queue (at the tail of the queue); these steps continue endlessly for the next set of messages. My expectation is that my app keeps polling continuously and performs the operations based on the messages in the queue in the same sequence, but that's where the problem arises. When the message is read from the queue for the very first time, deleted, and the same message is reposted into the same queue, the reposted message is not present in the queue even after a successful SendMessageResult.
I have included the code below to simulate the issue. Briefly: a Test_Queue.fifo queue is created with Test_Queue_DLQ.fifo configured via its RedrivePolicy. Right after creating the queue, the message "Test_Message" is posted into Test_Queue.fifo (getting a MessageId in the response), the queue is long-polled to read the message, and after iterating ReceiveMessageResult#getMessages the message is deleted (getting a MessageId in the response). After the successful deletion, the same message is reposted to the tail of the same queue (again getting a MessageId in the response). But the reposted message is not present in the queue. When I checked the AWS admin console, the count is 0 in both the "Messages available" and "Messages in flight" sections, and the reposted message is not present in the Test_Queue_DLQ.fifo queue either. As per the SQS docs, deleting a message removes it even if it is in flight, so reposting the same message should not be an issue. I suspect that on the SQS side some equality comparison is performed and the same message is skipped during the visibility-timeout interval to avoid duplication of the same message in a distributed environment, but I couldn't get a clear picture.
Code snippet to simulate the above issue
public class SQSIssue {

    @Test
    void sqsMessageAbsenceIssueTest() {
        AmazonSQS amazonSQS = AmazonSQSClientBuilder.standard()
                .withEndpointConfiguration(new AwsClientBuilder
                        .EndpointConfiguration("https://sqs.us-east-2.amazonaws.com", "us-east-2"))
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(
                        "<accessKey>", "<secretKey>"))).build();

        //create queue
        String queueUrl = createQueues(amazonSQS);

        String message = "Test_Message";
        String groupId = "Group1";

        //Sending message -> "Test_Message"
        sendMessage(amazonSQS, queueUrl, message, groupId);

        //Reading the message and deleting using message.getReceiptHandle()
        readAndDeleteMessage(amazonSQS, queueUrl);

        //Reposting the same message into the queue -> "Test_Message"
        sendMessage(amazonSQS, queueUrl, message, groupId);

        ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest()
                .withQueueUrl(queueUrl)
                .withWaitTimeSeconds(5)
                .withMessageAttributeNames("All")
                .withVisibilityTimeout(30)
                .withMaxNumberOfMessages(10);
        ReceiveMessageResult receiveMessageResult = amazonSQS.receiveMessage(receiveMessageRequest);

        //Here I am expecting the message presence in the queue as I recently reposted the same message into the same queue after the message deletion
        Assertions.assertFalse(receiveMessageResult.getMessages().isEmpty());
    }

    private void readAndDeleteMessage(AmazonSQS amazonSQS, String queueUrl) {
        ReceiveMessageRequest receiveMessageRequest = new ReceiveMessageRequest()
                .withQueueUrl(queueUrl)
                .withWaitTimeSeconds(5)
                .withMessageAttributeNames("All")
                .withVisibilityTimeout(30)
                .withMaxNumberOfMessages(10);
        ReceiveMessageResult receiveMessageResult = amazonSQS.receiveMessage(receiveMessageRequest);
        receiveMessageResult.getMessages()
                .forEach(message -> amazonSQS.deleteMessage(queueUrl, message.getReceiptHandle()));
    }

    private String createQueues(AmazonSQS amazonSQS) {
        String queueName = "Test_Queue.fifo";
        String deadLetterQueueName = "Test_Queue_DLQ.fifo";

        //Creating DeadLetterQueue
        CreateQueueRequest createDeadLetterQueueRequest = new CreateQueueRequest()
                .addAttributesEntry("FifoQueue", "true")
                .addAttributesEntry("ContentBasedDeduplication", "true")
                .addAttributesEntry("VisibilityTimeout", "600")
                .addAttributesEntry("MessageRetentionPeriod", "262144");
        createDeadLetterQueueRequest.withQueueName(deadLetterQueueName);
        CreateQueueResult createDeadLetterQueueResult = amazonSQS.createQueue(createDeadLetterQueueRequest);
        GetQueueAttributesResult getQueueAttributesResult = amazonSQS.getQueueAttributes(
                new GetQueueAttributesRequest(createDeadLetterQueueResult.getQueueUrl())
                        .withAttributeNames("QueueArn"));
        String deadLetterQueueArn = getQueueAttributesResult.getAttributes().get("QueueArn");

        //Creating Actual Queue with DeadLetterQueue configured
        CreateQueueRequest createQueueRequest = new CreateQueueRequest()
                .addAttributesEntry("FifoQueue", "true")
                .addAttributesEntry("ContentBasedDeduplication", "true")
                .addAttributesEntry("VisibilityTimeout", "600")
                .addAttributesEntry("MessageRetentionPeriod", "262144");
        createQueueRequest.withQueueName(queueName);
        String reDrivePolicy = "{\"maxReceiveCount\":\"5\", \"deadLetterTargetArn\":\""
                + deadLetterQueueArn + "\"}";
        createQueueRequest.addAttributesEntry("RedrivePolicy", reDrivePolicy);
        CreateQueueResult createQueueResult = amazonSQS.createQueue(createQueueRequest);
        return createQueueResult.getQueueUrl();
    }

    private void sendMessage(AmazonSQS amazonSQS, String queueUrl, String message, String groupId) {
        SendMessageRequest sendMessageRequest = new SendMessageRequest()
                .withQueueUrl(queueUrl)
                .withMessageBody(message)
                .withMessageGroupId(groupId);
        SendMessageResult sendMessageResult = amazonSQS.sendMessage(sendMessageRequest);
        Assertions.assertNotNull(sendMessageResult.getMessageId());
    }
}
From Using the Amazon SQS message deduplication ID:
The message deduplication ID is the token used for deduplication of sent messages. If a message with a particular message deduplication ID is sent successfully, any messages sent with the same message deduplication ID are accepted successfully but aren't delivered during the 5-minute deduplication interval.
Therefore, you should supply a different Deduplication ID each time the message is placed back onto the queue.
https://stackoverflow.com/a/65844632/3303074 is fitting; I should have added SendMessageRequest#withMessageDeduplicationId. But I would like to add a few more points to the answer. The technical reason behind the message disappearance is that I have enabled ContentBasedDeduplication for the queue. If MessageDeduplicationId is not specified explicitly when sending the message, Amazon SQS generates one using an SHA-256 hash of the message body (but not the attributes of the message). When ContentBasedDeduplication is in effect, messages with identical content sent within the deduplication interval are treated as duplicates and only one copy of the message is delivered. So even adding different attributes to the same message and reposting it into the same queue will not work as expected. Adding an explicit MessageDeduplicationId solves the issue because, even when the queue has ContentBasedDeduplication set, the explicit MessageDeduplicationId overrides the generated one.
Code Snippet
SendMessageRequest sendMessageRequest = new SendMessageRequest()
        .withQueueUrl(queueUrl)
        .withMessageBody(message)
        .withMessageGroupId(groupId)
        // Adding explicit MessageDeduplicationId
        .withMessageDeduplicationId(UUID.randomUUID().toString());
SendMessageResult sendMessageResult = amazonSQS.sendMessage(sendMessageRequest);
In a ZMQ proxy we have 2 types of sockets, DEALER and ROUTER. I've also tried to use the capture socket, but it didn't do exactly what I was looking for.
I'm looking for a way to log what message my proxy server receives.
Q : a way to log what message my proxy server receives.
The simplest way is to make use of the API (v4+), which directly supports logging via a man-in-the-middle "capture" socket:
// [ROUTER]--------------------------------------+++++++
// |||||||
// [DEALER]---------------*vvvvvvvv *vvvvvvv
int zmq_proxy (const void *frontend, const void *backend, const void *capture);
// [?]---------------------------------------------------------------*^^^^^^^
where the capture socket ought to be one of { ZMQ_PUB | ZMQ_DEALER | ZMQ_PUSH | ZMQ_PAIR }:
If the capture socket is not NULL, the proxy shall send all messages, received on both frontend and backend, to the capture socket.
If this ZeroMQ-API-granted capability does not meet your expectations, feel free to express those expectations in as much detail as needed, and either implement "external" filtering of the capture-socket payload (based on { message-content | socket_monitor() }), or design a brand new, user-defined logging proxy, where the features you need are implemented in your own application-specific code, re-using the clean and plain ZeroMQ API for all the DEALER-inbound/outbound-ROUTER message-passing and the log-filtering/processing logic.
There is no other way I can imagine to solve the task.
It also works with a pair of PAIR sockets. As soon as one end of the pair is connected as the capture socket, messages are sent to the capture socket AND to the other end of the proxy.
http://zguide.zeromq.org/page:all#ZeroMQ-s-Built-In-Proxy-Function
and
http://api.zeromq.org/3-2:zmq-proxy
and
http://zguide.zeromq.org/page:all#Pub-Sub-Tracing-Espresso-Pattern
helped me.
This Python code demonstrates it:
import zmq, threading, time

def peer_run(ctx):
    """ this is the run method of the PAIR thread that logs the messages
    going through the broker """
    sock = ctx.socket(zmq.PAIR)
    sock.connect("inproc://peer")  # connect to the caller
    sock.send(b"")  # signal the caller that we are ready
    while True:
        try:
            topic = sock.recv_string()
            obj = sock.recv_pyobj()
        except Exception:
            topic = None
            obj = sock.recv()
        print(f"\n !!! peer_run captured message with topic {topic}, obj {obj}. !!!\n")

def proxyrun():
    """ zmq broker run method in separate thread because zmq.proxy blocks """
    xpub = ctx.socket(zmq.XPUB)
    xpub.bind(xpub_url)
    xsub = ctx.socket(zmq.XSUB)
    xsub.bind(xsub_url)
    zmq.proxy(xpub, xsub, cap)

def pubrun():
    """ publisher run method in a separate thread, publishes 5 messages with topic 'Hello' """
    socket = ctx.socket(zmq.PUB)
    socket.connect(xsub_url)
    for i in range(5):
        socket.send_string(f"Hello {i}", zmq.SNDMORE)
        socket.send_pyobj({'a': 123})
        time.sleep(0.01)

ctx = zmq.Context()
xpub_url = "ipc://xpub"
xsub_url = "ipc://xsub"
#xpub_url = "tcp://127.0.0.1:5567"
#xsub_url = "tcp://127.0.0.1:5568"

# set up the capture socket pair
cap = ctx.socket(zmq.PAIR)
cap.bind("inproc://peer")
cap_th = threading.Thread(target=peer_run, args=(ctx,), daemon=True)
cap_th.start()
cap.recv()  # wait for signal from peer thread
print("cap received message from peer, proceeding.")

# start the proxy
th_proxy = threading.Thread(target=proxyrun, daemon=True)
th_proxy.start()

# create req/rep socket just to prove that pub/sub can run alongside it
zmq_rep_sock = ctx.socket(zmq.REP)
zmq_rep_sock.bind("ipc://ghi")

# create sub socket and connect it to proxy's pub socket
zmq_sub_sock = ctx.socket(zmq.SUB)
zmq_sub_sock.connect(xpub_url)
zmq_sub_sock.setsockopt(zmq.SUBSCRIBE, b"Hello")

# create the poller
poller = zmq.Poller()
poller.register(zmq_rep_sock, zmq.POLLIN)
poller.register(zmq_sub_sock, zmq.POLLIN)

# create publisher thread and start it
th_pub = threading.Thread(target=pubrun, daemon=True)
th_pub.start()

# receive publisher's messages ordinarily
while True:
    events = dict(poller.poll())
    print(f"received events: {events}")
    if zmq_rep_sock in events:
        message = zmq_rep_sock.recv_pyobj()
        print(f"received zmq_rep_sock {message}")
    elif zmq_sub_sock in events:
        topic = zmq_sub_sock.recv_string()
        message = zmq_sub_sock.recv_pyobj()
        print(f"received zmq_sub_sock {topic} , {message}")
output
cap received message from peer, proceeding.
!!! peer_run captured message with topic None, obj b'\x80\x03}q\x00X\x01\x00\x00\x00aq\x01K{s.'. !!!
received events: {<zmq.sugar.socket.Socket object at 0x76310f70>: 1}
received zmq_sub_sock Hello 1 , {'a': 123}
!!! peer_run captured message with topic Hello 2, obj {'a': 123}. !!!
received events: {<zmq.sugar.socket.Socket object at 0x76310f70>: 1}
received zmq_sub_sock Hello 2 , {'a': 123}
!!! peer_run captured message with topic Hello 3, obj {'a': 123}. !!!
received events: {<zmq.sugar.socket.Socket object at 0x76310f70>: 1}
received zmq_sub_sock Hello 3 , {'a': 123}
!!! peer_run captured message with topic Hello 4, obj {'a': 123}. !!!
received events: {<zmq.sugar.socket.Socket object at 0x76310f70>: 1}
received zmq_sub_sock Hello 4 , {'a': 123}
Be aware of the slow joiner problem, hence the sleep call in the publisher.
I want to send notifications to clients via WebSockets. These notifications are generated by actors, hence I'm trying to create a stream of actor messages at server startup and subscribe WebSocket connections to this stream (sending only those notifications emitted since subscription).
With Source.actorRef we can create a Source of actor messages.
val ref = Source.actorRef[Weather](Int.MaxValue, fail)
  .filter(!_.raining)
  .to(Sink foreach println)
  .run()

ref ! Weather("02139", 32.0, true)
But how can I subscribe (Akka HTTP*) WebSocket connections to this source if it has already been materialized?
*WebSocket connections in Akka HTTP require a Flow[Message, Message, Any]
What I'm trying to do is something like
// at server startup
val notifications: Source[Notification, ActorRef] = Source.actorRef[Notification](5, OverflowStrategy.fail)
val ref = notifications.to(Sink.foreach(println(_))).run()
val notificationActor = system.actorOf(NotificationActor.props(ref))

// on ws connection
val notificationsWS = path("notificationsWS") {
  parameter('name) { name ⇒
    get {
      onComplete(flow(name)) {
        case Success(f) => handleWebSocketMessages(f)
        case Failure(e) => throw e
      }
    }
  }
}

def flow(name: String) = {
  val messages = notifications filter { n => n.name equals name } map { n => TextMessage.Strict(n.data) }
  Flow.fromSinkAndSource(Sink.ignore, notifications)
}
This doesn't work because the notifications source is not the one that is materialized, hence it doesn't emit any element.
Note: I was using Source.actorPublisher and it was working, but ktoso discourages its usage, and I was also getting this error:
java.lang.IllegalStateException: onNext is not allowed when the stream has not requested elements, totalDemand was 0.
You could expose the materialised actorRef to some external router actor using mapMaterializedValue.
Flow.fromSinkAndSourceMat(Sink.ignore, notifications)(Keep.right)
  .mapMaterializedValue(srcRef => router ! srcRef)
The router can keep track of your sources' ActorRefs (DeathWatch can help tidy things up) and forward messages to them.
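A minimal sketch of what such a router could look like, assuming the Notification type from the question (the NotificationRouter name and its message handling are illustrative, not an existing API):
import akka.actor.{Actor, ActorRef, Terminated}

// assumed shape of the question's Notification type
case class Notification(name: String, data: String)

class NotificationRouter extends Actor {
  private var sources = Set.empty[ActorRef]

  def receive = {
    // sent by mapMaterializedValue(srcRef => router ! srcRef) on each WS connection
    case srcRef: ActorRef =>
      context.watch(srcRef)        // DeathWatch: tidy up when the stream terminates
      sources += srcRef

    case Terminated(srcRef) =>
      sources -= srcRef

    case n: Notification =>        // fan out to every live connection's source
      sources.foreach(_ ! n)
  }
}
Whatever produces the notifications would then send them to this router instead of to a single materialized ref.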
NB: you're probably already aware, but note that by using Source.actorRef to feed your flow, your flow will not be backpressure aware (with the strategy you chose it will just crash under load).
I am trying to find out what message delivery guarantees Akka supports. I came to the following conclusion:
At-most-once : Supported by default
At-least-once : Supported with Akka Persistence
Exactly-once : ?
Does Akka support exactly-once? How would I be able to achieve this if it doesn't?
Akka out of the box provides At-Most-Once delivery, as you've discovered. At-Least-Once is available in some libraries such as Akka Persistence, and you can create it yourself fairly easily by creating an ACK-RETRY protocol in your actors. The Sender keeps periodically sending the message until the receiver acknowledges receipt of it.
Put simply, for At-Least-Once the responsibility is with the sender. E.g. in Scala:
import akka.actor.{Actor, ActorRef}
import scala.concurrent.duration._

class Sender(receiver: ActorRef) extends Actor {
  import context.dispatcher // ExecutionContext for the scheduler

  var acknowledged = false

  override def preStart() {
    receiver ! "Do Work"
    context.system.scheduler.scheduleOnce(50.milliseconds, self, "Retry")
  }

  def receive = {
    case "Retry" =>
      if (!acknowledged) {
        receiver ! "Do Work"
        context.system.scheduler.scheduleOnce(50.milliseconds, self, "Retry")
      }
    case "Ack" => acknowledged = true
  }
}

class Receiver extends Actor {
  def receive = {
    case "Do Work" =>
      doWork()
      sender ! "Ack"
  }

  def doWork() = { ... }
}
But for Exactly-Once processing, the receiver has to ensure that repeated instances of the same message only result in the work being done once. This can be achieved by making the work done by the receiver idempotent, so it can safely be applied repeatedly, or by having the receiver keep a record of what it has already processed. For Exactly-Once the responsibility is with the receiver:
class ExactlyOnceReceiver extends Actor {
  var workDone = false

  def receive = {
    case "Do Work" =>
      if (!workDone) {
        doWork()
        workDone = true
      }
      sender ! "Ack"
  }
}
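For more than one message, the same idea extends to keeping a record of processed message ids. Here is a hedged sketch, not from the original answer (the DoWork/Ack case classes are made up for illustration):
import akka.actor.Actor

case class DoWork(id: Long, payload: String)
case class Ack(id: Long)

class DedupingReceiver extends Actor {
  // record of message ids whose work has already been done
  private var processed = Set.empty[Long]

  def receive = {
    case DoWork(id, payload) =>
      if (!processed.contains(id)) {
        doWork(payload)
        processed += id
      }
      // always acknowledge, even for duplicates, so the sender stops retrying
      sender ! Ack(id)
  }

  def doWork(payload: String): Unit = { /* ... */ }
}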
Hi folks!
I'm using Akka 2.2.3 and developing a simple TCP server application.
The workflow is:
1. client connects to the server
2. server accepts the connection
3. server sends the message "Hello!" to the client
On the page http://doc.akka.io/docs/akka/2.2.3/scala/io-tcp.html I can see how to send a response message to a request. But how can I send a message before any data has been received?
How can I send a message to a client without receiving an init.Event first?
Code from the documentation page:
class AkkaSslHandler(init: Init[WithinActorContext, String, String])
  extends Actor with ActorLogging {

  def receive = {
    case init.Event(data) ⇒
      val input = data.dropRight(1)
      log.debug("akka-io Server received {} from {}", input, sender)
      val response = serverResponse(input)
      sender ! init.Command(response)
      log.debug("akka-io Server sent: {}", response.dropRight(1))
    case _: Tcp.ConnectionClosed ⇒ context.stop(self)
  }
}
You use the init for creating the TcpPipelineHandler as well, and you can of course always send commands to that actor. For this you will need to pass its ActorRef to your handler actor besides the Init.
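A rough sketch of what that could look like (hedged: the GreetingHandler name, the greeting payload, and the wiring comments are illustrative, not taken from the documentation):
import akka.actor.{Actor, ActorLogging, ActorRef}
import akka.io.Tcp
import akka.io.TcpPipelineHandler.{Init, WithinActorContext}

class GreetingHandler(init: Init[WithinActorContext, String, String])
  extends Actor with ActorLogging {

  def receive = {
    case pipeline: ActorRef =>
      pipeline ! init.Command("Hello!\n")   // greet before any init.Event arrives
      context.become(connected(pipeline))
  }

  def connected(pipeline: ActorRef): Receive = {
    case init.Event(data) =>
      pipeline ! init.Command(data)         // placeholder echo logic
    case _: Tcp.ConnectionClosed =>
      context.stop(self)
  }
}

// Wiring inside the actor that handles Tcp.Connected (sketch):
//   val handler  = context.actorOf(Props(new GreetingHandler(init)))
//   val pipeline = context.actorOf(TcpPipelineHandler.props(init, connection, handler))
//   connection ! Tcp.Register(pipeline)
//   handler ! pipeline   // hand the pipeline ActorRef to the handler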