I'm trying to add SSE functionality to my server application using Redis PubSub, guided by several articles, e.g.:
how-to-use-actioncontollerlive-along-with-resque-redis.
The server is hosted on Heroku, so heartbeating is necessary as well.
...
sse = SSE.new(response.stream)
begin
  redis = Redis.new(:url => ENV['REDISCLOUD_URL'])
  redis.subscribe(<UUID>, HEARTBEAT_CHANNEL) do |on|
    on.message do |channel, data|
      begin
        if channel == HEARTBEAT_CHANNEL
          sse.write('', event: "hb")
        else
          sse.write(data, event: "offer_update")
        end
      rescue StandardError => e # I'll call this section the "internal rescue"
        puts "Internal: #{$!}"
        redis.quit
        sse.close
        puts "SSE was closed"
      end
    end
  end
rescue StandardError => e # I'll call this section the "external rescue"
  puts "External: #{$!}"
ensure
  redis.quit
  sse.close
  puts "sse was closed"
end
The questions:
I haven't seen this "internal rescue" anywhere on the net in discussions of SSE, but who else could catch an exception raised by sse.write? The common scenario is that a heartbeat is sent while the client is no longer connected, which makes this section critical ("Internal: client disconnected" appears). Am I right?
In which cases will the "external rescue" be triggered? Does a client disconnection cause sse.write to raise an exception in the inner block (inside the on.message body)? It was never caught by the external rescue when I tried to simulate it tens of times.
This code has another problem: redis.quit in the internal rescue raises a further exception, which is caught by the external rescue: External: undefined method 'disconnect' for #<Redis::SubscribedClient:0x007fa8cd54b820>. So how should it be done? How can I detect a client disconnection as soon as possible in order to free the memory and socket?
How can it be that exceptions raised by sse.write have NOT been caught by the external rescue (as they should have been from the beginning), while the other error (described in my third question) was caught? All of this code (outer and inner sections) is running in the same thread, right? I'd be happy for a deep explanation.
You catch the exception inside the subscribe block, so redis doesn't know about it and will not stop its inner loop properly. redis.quit then causes it to crash and stop, since it can no longer keep waiting for a message. This is obviously not a good way to do it.
If your code throws an exception inside the subscribe block, it causes redis to gracefully unsubscribe, and your exception can be rescued outside, as in your "external rescue".
Another point: you shouldn't catch exceptions without fully handling them, and you should never catch generic exceptions without re-raising them. In this case you can safely let the ClientDisconnected exception bubble up to Rails' code.
Here's how your controller's code should look:
def subscribe
  sse = SSE.new(response.stream)
  redis = Redis.new(:url => ENV['REDISCLOUD_URL'])
  redis.subscribe(<UUID>, HEARTBEAT_CHANNEL) do |on|
    on.message do |channel, data|
      if channel == HEARTBEAT_CHANNEL
        sse.write('', event: "hb")
      else
        sse.write(data, event: "offer_update")
      end
    end
  end
ensure
  redis.quit if redis # might not have had a chance to initialize yet
  sse.close if sse # might not have had a chance to initialize yet
  puts "sse was closed"
end
The new support for Event Hub Riders in 7.0, plus the existing InMemoryRepository backing for Sagas, looks like it could provide a straightforward means of creating aggregate states based on a stream of correlated messages (e.g. across all sensors in a Building). In this scenario, the Building's identifier would be used as the CorrelationId of the Messages and of the Saga, and as the PartitionKey of the EventData messages sent to the Event Hub, ensuring the same consuming service instance receives all messages for that Building at a given time. Given the way Event Hub's rebalancing works, it can be assumed that at some point while this service is running, the service instance managing messages for a Partition will shift to a new host, which will start reading the messages sent by the sensors in the building. At that moment:
The new host does not know anything about the old host's processing. It just knows that it is now receiving messages for the Event Hub partition that includes that Building's messages.
The devices sending the messages do not know anything about the transition in state aggregation responsibility "downstream of them" - they are still happily reporting new measurements as always.
The challenge this creates is: on the new service instance, a new Saga needs to be created to take over for the previous Saga, but the only component that knows whether a Saga already exists for a given entity is MassTransit: nothing on the new instance knows that a sensor reading from Building A is the first one from Building A since this service instance took over tracking the aggregate Building A state. We thought this could be handled by marking the same Message (DataCollected) with both InitiatedBy and Orchestrates:
public class BuildingAggregator :
    ISaga,
    InitiatedBy<DataCollected>, // init saga on first DataCollected with a given CorrelationId seen
    Orchestrates<DataCollected> // then keep handling those in that saga
{
    // saga Consume methods
}
However, this throws the following exception when the BuildingAggregator receives its second DataCollected message with a given Guid:
Saga exception on receipt of MassTransitFW_POC.Program+DataCollected: The message cannot be accepted by an existing saga
at MassTransit.Saga.Policies.NewSagaPolicy`2.MassTransit.Saga.ISagaPolicy<TSaga,TMessage>.Existing(SagaConsumeContext`2 context, IPipe`1 next)
at MassTransit.Saga.SendSagaPipe`2.<Send>d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.ExceptionServices.ExceptionDispatchInfo.Throw()
at MassTransit.Saga.SendSagaPipe`2.<Send>d__5.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
at System.Runtime.CompilerServices.TaskAwaiter.ThrowForNonSuccess(Task task)
at System.Runtime.CompilerServices.TaskAwaiter.HandleNonSuccessAndDebuggerNotification(Task task)
at MassTransit.Saga.InMemoryRepository.InMemorySagaRepositoryContextFactory`1.<Send>d__4`1.MoveNext()
--- End of stack trace from previous location where exception was thrown ---
Is there another way of achieving this logic? Is this the "wrong way" to apply Sagas?
As per Chris Patterson's comments on the question above, this is achievable with the state machine syntax:
Initially(
    When(DataCollected)
        .Then(f => _logger.LogInformation("Initiating Network Manager for Network: {NetworkId}", f.Data.NetworkId))
        .TransitionTo(Running));

During(Running,
    When(DataCollected)
        .Then(f => { /* activities and state transitions */ }),
    When(SimulationComplete)
        .Then(f => _logger.LogInformation("Network {NetworkId} shutting down.", f.Instance.CorrelationId))
        .TransitionTo(Final));
Note how the DataCollected event is handled both in the Initially block and in the Running state that the Initially block transitions to.
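For context, here is a minimal sketch of the surrounding state machine class. The instance type NetworkState, its CurrentState property, and the NetworkId correlation property are assumptions based on the snippets above, not code from the question:
public class NetworkState : SagaStateMachineInstance
{
    public Guid CorrelationId { get; set; }
    public string CurrentState { get; set; }
}

public class NetworkStateMachine : MassTransitStateMachine<NetworkState>
{
    public State Running { get; private set; }
    public Event<DataCollected> DataCollected { get; private set; }
    public Event<SimulationComplete> SimulationComplete { get; private set; }

    public NetworkStateMachine()
    {
        InstanceState(x => x.CurrentState);

        // Correlate both events on the same identifier so they reach the same instance
        Event(() => DataCollected, x => x.CorrelateById(m => m.Message.NetworkId));
        Event(() => SimulationComplete, x => x.CorrelateById(m => m.Message.NetworkId));

        Initially(
            When(DataCollected)
                .TransitionTo(Running));

        During(Running,
            When(DataCollected)
                .Then(f => { /* aggregate the reading */ }),
            When(SimulationComplete)
                .TransitionTo(Final));
    }
}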
I have a network of nodes, all implemented using custom GraphStageLogic. I can't find any API to determine when a stage throws an exception (e.g. IllegalArgumentException for "Cannot pull port"). The only thing Akka does is fail the downstream connections. What I need to determine, for example in postStop or through a callback, is when a node shuts down due to a runtime exception, and propagate that information to a Promise that monitors the state of the entire system. Using withAttributes(supervisionStrategy) does not have any effect either. It seems bewildering to me that there is no way to monitor exceptions thrown inside a GraphStageLogic; failStage is final, like basically the entire API of GraphStageLogic.
Using a decider when defining the ActorMaterializer used for materializing the graph should work:
implicit val materializer: ActorMaterializer = ActorMaterializer(
ActorMaterializerSettings(actorSystem).withSupervisionStrategy(decider))
where decider is the typical
val decider: Supervision.Decider = {
  case e: IllegalArgumentException => ....
}
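Putting it together, here is a self-contained sketch (the promise name and the deliberately failing stage are assumptions for illustration) in which the decider completes a Promise that monitors the whole system:
import akka.actor.ActorSystem
import akka.stream.{ActorMaterializer, ActorMaterializerSettings, Supervision}
import akka.stream.scaladsl.Source
import scala.concurrent.Promise

implicit val actorSystem: ActorSystem = ActorSystem("monitored")

val systemFailed = Promise[Unit]() // failed with the exception of any failing stage

val decider: Supervision.Decider = {
  case e: IllegalArgumentException =>
    systemFailed.tryFailure(e) // propagate the stage's exception to the monitor
    Supervision.Stop
  case other =>
    systemFailed.tryFailure(other)
    Supervision.Stop
}

implicit val materializer: ActorMaterializer = ActorMaterializer(
  ActorMaterializerSettings(actorSystem).withSupervisionStrategy(decider))

// Any stage failure in graphs materialized with this materializer hits the decider
Source(1 to 3).map(n => if (n == 2) throw new IllegalArgumentException("boom") else n)
  .runForeach(println)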
When an actor fails, I need to send the cause of the failure to another actor.
I know there are supervision strategies, and I use them. The problem is that I cannot find the correct place for such error reporting.
I tried watching the actor, but the Terminated message does not provide the cause of termination.
Currently, I have added error handling in the Decider:
override def supervisorStrategy: SupervisorStrategy =
  OneForOneStrategy(maxNrOfRetries = 10, withinTimeRange = Duration(1, TimeUnit.SECONDS), loggingEnabled = true) {
    case e: Exception =>
      onActorError(sender(), e)
      Stop
  }
But I don't think this is the right time and place to do it: the "decider" should return a strategy, not implicitly do something else as a side effect.
So the question is: is there a proper place to catch actor exceptions and do something about it?
The postRestart method of the supervised actor seems like a good place to do the postmortem logging.
From the documentation:
The new actor’s postRestart method is invoked with the exception which
caused the restart. By default the preStart is called, just as in the
normal start-up case.
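A sketch of what that looks like (Worker and the monitor reference are assumed names); note that this hook only fires when the supervisor's directive is Restart, not Stop:
import akka.actor.{Actor, ActorRef}

class Worker(monitor: ActorRef) extends Actor {
  override def postRestart(reason: Throwable): Unit = {
    monitor ! reason          // report the exception that caused the restart
    super.postRestart(reason) // default behaviour then invokes preStart
  }

  def receive: Receive = {
    case _ => () // normal message processing
  }
}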
I use Pusher in my Rails-4 application.
The problem is that sometimes the connection is slow, so the execution of the code becomes slower.
I also get from time to time the following error:
Pusher::HTTPError: execution expired (HTTPClient::ConnectTimeoutError)
I send signals via Pusher with this code:
Pusher[channel].trigger!(event, msg)
I would like to execute it in the background, so that if an exception is thrown it will not break the flow of my app, nor slow it down.
I tried to wrap the call with begin ... rescue, but it didn't solve the exception problem. And even if it had, it wouldn't solve the slowdown problem I want to avoid.
Information on performing asynchronous triggers can be found here:
https://github.com/pusher/pusher-gem#asynchronous-requests
This also provides you with information on catching/handling errors.
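As a rough sketch of the pattern described in that README (treat the exact method name trigger_async and its deferrable interface as assumptions drawn from the linked docs, not verified against your gem version), the async variant runs inside an EventMachine loop and reports errors through an errback instead of raising:
require 'pusher'
require 'eventmachine'

EM.run do
  deferrable = Pusher[channel].trigger_async(event, msg)
  deferrable.callback do
    # triggered successfully
  end
  deferrable.errback do |error|
    Rails.logger.error "Pusher error: #{error}" # handled instead of breaking the flow
  end
end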
Finally I implemented this solution:
Thread.new do
  begin
    Pusher[channel].trigger!(event, msg)
    ActiveRecord::Base.connection.close
  rescue Pusher::Error => e
    Rails.logger.error "Pusher error: #{e.message}"
  end
end
I have been trying to use C++/CX StorageFile::ReadAsync() to read a file in a Store app, but it always returns an invalid parameter exception no matter what.
// "file" are returned from FileOpenPicker
IRandomAccessStream^ reader = create_task(file->OpenAsync(FileAccessMode::Read)).get();
if (reader->CanRead)
{
BitmapImage^ b = ref new BitmapImage();
const int count = 1000000;
Streams::Buffer^ bb = ref new Streams::Buffer(count);
create_task(reader->ReadAsync(bb, 1, Streams::InputStreamOptions::None)).get();
}
I have turned on all the manifest capabilities and added "file open picker" + "file type association" under Declarations. Any ideas? Thanks!
PS: most of the solutions I found are for C#, but the code structure is similar...
If this code is executing on the UI thread (or in any other Single Threaded Apartment, or STA), then the calls to .get() will throw if the tasks have not yet completed, because the call to .get() would block the thread. You must not block the UI thread or any other STA, and when compiling with C++/CX support enabled, the libraries enforce this.
If you turn on first chance exception handling in the debugger (Debug -> Exceptions..., check the C++ Exceptions check box), you should see that the first exception to be thrown is an invalid_operation exception, from the following line in <ppltasks.h>:
// In order to prevent Windows Runtime STA threads from blocking the UI, calling
// task.wait() task.get() is illegal if task has not been completed.
if (!_IsCompleted() && !_IsCanceled())
{
throw invalid_operation("Illegal to wait on a task in a Windows Runtime STA");
}
The "invalid parameter" you are reporting is the fatal error that is caused when this exception reaches the ABI boundary: the debugger is notified that the application is about to terminate because this exception was unhandled.
You need to restructure your code to use continuations, via task::then, as described in the article Asynchronous Programming in C++ Using PPL.
Just to make sure you understand the async pattern: what is happening in your code is that you call create_task, and immediately after that task has started you try to get the result with .get(). Calls to .get() will throw immediately if the task is still running or the file could not be found. Therefore, the correct way of structuring this is to use a .then on your file task, ensuring that you have the result of this task before starting the next one.
create_task(file->OpenAsync(FileAccessMode::Read)).then([](IRandomAccessStream^ reader)
{
    // do stuff with the reader
});
At that point the reader is available so you can do whatever you want to, even start a new task.
Also, it is possible that the call to OpenAsync is failing because the file is empty; I would add a try/catch block to the previous task, the one that gets the file, just to make sure that's not the problem.
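Here is a sketch of both suggestions combined (the buffer size is illustrative, not a required value): the read is chained onto the open with task::then, and a final task-based continuation traps any exception thrown anywhere in the chain:
create_task(file->OpenAsync(FileAccessMode::Read))
.then([](IRandomAccessStream^ reader)
{
    auto buffer = ref new Streams::Buffer(1000000);
    // returning the inner task keeps the continuations chained
    return create_task(reader->ReadAsync(buffer, buffer->Capacity, Streams::InputStreamOptions::None));
})
.then([](task<Streams::IBuffer^> readTask)
{
    try
    {
        Streams::IBuffer^ buffer = readTask.get(); // safe here: this task has completed
        // use the buffer's contents
    }
    catch (Platform::Exception^ e)
    {
        // OpenAsync or ReadAsync failed (e.g. empty or inaccessible file)
    }
});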