Rails - run pusher in background - ruby-on-rails-4

I use Pusher in my Rails-4 application.
The problem is that sometimes the connection is slow, so the execution of the code becomes slower.
I also get from time to time the following error:
Pusher::HTTPError: execution expired (HTTPClient::ConnectTimeoutError)
I send signals via Pusher with this code:
Pusher[channel].trigger!(event, msg)
I would like to execute it in the background, so that if an exception is thrown it will not break the flow of my app, nor slow it down.
I tried to wrap the call in begin ... rescue, but it didn't solve the exception problem. And even if it had, it wouldn't solve the slow-down problem I want to avoid.

Information on performing asynchronous triggers can be found here:
https://github.com/pusher/pusher-gem#asynchronous-requests
This also provides information on catching and handling errors.

Finally I implemented this solution:
Thread.new do
  begin
    Pusher[channel].trigger!(event, msg)
  rescue Pusher::Error => e
    Rails.logger.error "Pusher error: #{e.message}"
  ensure
    # return this thread's ActiveRecord connection to the pool
    ActiveRecord::Base.connection.close
  end
end
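To keep call sites clean, the same thing can be wrapped in a small helper so the request path never waits on Pusher. A minimal sketch (the helper name push_async and the channel/event values below are made up for illustration):
def push_async(channel, event, msg)
  Thread.new do
    begin
      Pusher[channel].trigger!(event, msg)
    rescue Pusher::Error => e
      Rails.logger.error "Pusher error: #{e.message}"
    ensure
      # return this thread's ActiveRecord connection to the pool
      ActiveRecord::Base.connection.close
    end
  end
end

push_async('orders', 'order_updated', id: 42) # returns immediately; the trigger happens in the thread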

Related

Alamofire error on reopening app

I have been working on an app that allows the downloading of large files. I am using Swift 3 and Alamofire.
The downloads work in the background, and on iOS 10.2.x this all worked perfectly fine.
But after updating to iOS 10.3.x, when the device is put to sleep and the app is opened again, the following errors are thrown:
[] nw_socket_get_input_frames recvmsg(fd 6, 1024 bytes): [57] Socket is not connected
[] nw_endpoint_handler_add_write_request [1.1 192.124.249.2:443 failed socket-flow (satisfied)] cannot accept write requests
[] tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument
The download is continuing in the background, and upon completion will trigger the completion callbacks fine. But because of these errors, it seems the progress callback isn't being called unless I close the app and open it again and reload the table cell view on open.
I can't find much info about these kinds of errors online, only information on hiding them from being printed to the console.
Can anyone help?
Thanks
I had the same problem trying to download a large file (approx. 800 MB) using Alamofire. At first my implementation was calling validate() and then responseData(queue:completionHandler:).
When this error occurred, Alamofire caught it as a failure with Alamofire.AFError.ResponseValidationFailureReason.dataFileNil (with no resumeData), which suggests that the closure passed in the to parameter is nil, which wasn't my case.
So my solution was removing the validate() call and doing the validation manually by checking the status codes in a switch.
Without the validate() call, Alamofire still catches the same error as a failure, but now with a 200 status code and resumeData. When this happens I just start the download again, this time passing the resumeData to the download call.
Since the resumeData was almost at 100% (the error happens when Alamofire is trying to write the file, at the end of the download), the second download was very short.
So basically my code looked like this:
switch response.response?.statusCode {
// other cases
case 200?:
    if let resumeData = response.resumeData {
        // the download actually failed; retry it using resumeData
    } else {
        // the download finished successfully
    }
// more cases
}
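The retry itself can look roughly like this; a sketch assuming Alamofire 4 (Swift 3), where download(resumingWith:to:) accepts the resume data, and destination stands in for whatever DownloadFileDestination closure the original request used:
if let resumeData = response.resumeData {
    // Resume from where the failed attempt left off; since the data is nearly
    // complete, this second request usually finishes almost immediately.
    Alamofire.download(resumingWith: resumeData, to: destination)
        .responseData { response in
            // check response.response?.statusCode manually here as well
        }
}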
If you need more info or details, let me know.

Ember.js - Function finished before store is done

I'm building an ember app, and I keep running into the same problem where I make a call to the store, but the function keeps executing before the store has retrieved the data from the backend. Specifically I'm having the problem with a findRecord. I've implemented it both ways:
var admin = this.store.findRecord('admin', 1);
console.log(admin.get('season'));
console.log('Are we here?');
and
this.store.findRecord('admin', 1).then(function(admin) {
  console.log(admin.get('season'));
});
console.log('Are we here?');
In both cases, the Are we here? is logged BEFORE the season. Obviously the console logs are just for the example, and it creates an actual problem with what I'm trying to get done. Does anybody know a simple fix for this delay?
Thanks.
Of course it is. That's asynchronous behavior. It takes some time to resolve the promise returned from findRecord(), so the order of execution is:
this.store.findRecord(); // synchronous
console.log('we are here'); // synchronous
In the meantime the promise returned from findRecord() gets resolved (asynchronous behavior), and only then does this run:
console.log(admin.get('season'));
An asynchronous call will not stop your code from progressing; that's the whole point of it. Otherwise it would block UI updates and user interaction while loading data.
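The practical fix is to put everything that needs the record inside the .then callback. A minimal sketch (doSomethingWith is a hypothetical follow-up function):
this.store.findRecord('admin', 1).then(function(admin) {
  console.log(admin.get('season'));  // runs only once the record has loaded
  doSomethingWith(admin);            // hypothetical follow-up work
});
console.log('Are we here?');         // still runs first; don't rely on the record here
Where the record backs a route, returning this.store.findRecord('admin', 1) from the route's model() hook lets Ember wait for the promise before rendering.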

Can Amazon Simple Workflow (SWF) be made to work with jRuby?

For uninteresting reasons, I have to use jRuby on a particular project where we also want to use Amazon Simple Workflow (SWF). I don't have a choice in the jRuby department, so please don't say "use MRI".
The first problem I ran into is that jRuby doesn't support forking, and SWF activity workers love to fork. After hacking through the SWF Ruby libraries, I was able to figure out how to attach a logger and how to prevent forking, which was tremendously helpful:
AWS::Flow::ActivityWorker.new(
  swf.client, domain, "my_tasklist", MyActivities
) do |options|
  options.logger = Logger.new("logs/swf_logger.log")
  options.use_forking = false
end
This prevented forking, but now I'm hitting more exceptions deep in the SWF source code having to do with Fibers and the context not existing:
Error in the poller, exception:
AWS::Flow::Core::NoContextException: AWS::Flow::Core::NoContextException stacktrace:
"aws-flow-2.4.0/lib/aws/flow/implementation.rb:38:in 'task'",
"aws-flow-2.4.0/lib/aws/decider/task_poller.rb:292:in 'respond_activity_task_failed'",
"aws-flow-2.4.0/lib/aws/decider/task_poller.rb:204:in 'respond_activity_task_failed_with_retry'",
"aws-flow-2.4.0/lib/aws/decider/task_poller.rb:335:in 'process_single_task'",
"aws-flow-2.4.0/lib/aws/decider/task_poller.rb:388:in 'poll_and_process_single_task'",
"aws-flow-2.4.0/lib/aws/decider/worker.rb:447:in 'run_once'",
"aws-flow-2.4.0/lib/aws/decider/worker.rb:419:in 'start'",
"org/jruby/RubyKernel.java:1501:in `loop'",
"aws-flow-2.4.0/lib/aws/decider/worker.rb:417:in 'start'",
"/Users/trcull/dev/etl/flow/etl_runner.rb:28:in 'start_workers'"
This is the SWF code at that line:
# @param [Future] future
#   Unused; defaults to **nil**.
#
# @param block
#   The block of code to be executed when the task is run.
#
# @raise [NoContextException]
#   If the current fiber does not respond to `Fiber.__context__`.
#
# @return [Future]
#   The tasks result, which is a {Future}.
#
def task(future = nil, &block)
  fiber = ::Fiber.current
  raise NoContextException unless fiber.respond_to? :__context__
  context = fiber.__context__
  t = Task.new(nil, &block)
  task_context = TaskContext.new(:parent => context.get_closest_containing_scope, :task => t)
  context << t
  t.result
end
I fear this is another flavor of the same forking problem and also fear that I'm facing a long road of slogging through SWF source code and working around problems until I finally hit a wall I can't work around.
So, my question is, has anyone actually gotten jRuby and SWF to work together? If so, is there a list of steps and workarounds somewhere I can be pointed to? Googling for "SWF and jRuby" hasn't turned up anything so far and I'm already 1 1/2 days into this task.
I think the issue might be that aws-flow-ruby doesn't support Ruby 2.0. I found this PDF dated Jan 22, 2015, which says:
1.2.1 Tested Ruby Runtimes
The AWS Flow Framework for Ruby has been tested with the official Ruby 1.9 runtime, also known as YARV. Other versions of the Ruby runtime may work, but are unsupported.
I have a partial answer to my own question. The answer to "Can SWF be made to work on jRuby" is "Yes...ish."
I was, indeed, able to get a workflow working end-to-end (and even make calls to a database via JDBC, the original reason I had to do this). So, that's the "yes" part of the answer. Yes, SWF can be made to work on jRuby.
Here's the "ish" part of the answer.
The stack trace I posted above is the result of SWF trying to raise an ActivityTaskFailedException due to a problem in some of my activity code. That part is my fault. What's not my fault is that the superclass of ActivityTaskFailedException has this code in it:
def initialize(reason = "Something went wrong in Flow",
               details = "But this indicates that it got corrupted getting out")
  super(reason)
  @reason = reason
  @details = details
  details = details.message if details.is_a? Exception
  self.set_backtrace(details)
end
When your activity throws an exception, the "details" variable you see above is filled with a String. MRI is perfectly happy to take a String as an argument to set_backtrace(), but jRuby is not, and jRuby throws an exception saying that "details" must be an Array of Strings. This exception blows through all the nice error catching logic of the SWF library and into this code that's trying to do incompatible things with the Fiber library. That code then throws a follow-on exception and kills the activity worker thread entirely.
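For illustration, the incompatibility boils down to what set_backtrace will accept; a minimal sketch, assuming the behavior is exactly as described above:
e = RuntimeError.new("boom")
e.set_backtrace(["app.rb:10:in `run'"])  # an Array of Strings works on both MRI and jRuby
e.set_backtrace("app.rb:10:in `run'")    # a bare String works on MRI but is rejected by jRuby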
So, you can run SWF on jRuby as long as your activity and workflow code never, ever throws exceptions because otherwise those exceptions will kill your worker threads (which is not the intended behavior of SWF workers). What they are designed to do instead is communicate the exception back to SWF in a nice, trackable, recoverable fashion. But, the SWF code that does the communicating back to SWF has, itself, code that's incompatible with jRuby.
To get past this problem, I monkey-patched AWS::Flow::FlowException like so:
# reopen the class to override initialize
class AWS::Flow::FlowException
  def initialize(reason = "Something went wrong in Flow",
                 details = "But this indicates that it got corrupted getting out")
    super(reason)
    @reason = reason
    @details = details
    details = details.message if details.is_a? Exception
    details = [details] if details.is_a? String
    self.set_backtrace(details)
  end
end
Hope that helps someone in the same situation as me.
I'm using JFlow, which lets you start SWF activity workers with JRuby.

StorageFile::ReadAsync exception in C++/CX?

I have been trying to use C++/CX StorageFile::ReadAsync() to read a file in a Store app, but it always returns an invalid-parameter exception no matter what:
// "file" is returned from FileOpenPicker
IRandomAccessStream^ reader = create_task(file->OpenAsync(FileAccessMode::Read)).get();
if (reader->CanRead)
{
    BitmapImage^ b = ref new BitmapImage();
    const int count = 1000000;
    Streams::Buffer^ bb = ref new Streams::Buffer(count);
    create_task(reader->ReadAsync(bb, 1, Streams::InputStreamOptions::None)).get();
}
I have turned on all the manifest capabilities and added "File Open Picker" + "File Type Association" under Declarations. Any ideas? Thanks!
PS: most solutions I found are for C#, but the code structure is similar...
If this code is executing on the UI thread (or in any other Single Threaded Apartment, or STA), then the calls to .get() will throw if the tasks have not yet completed, because the call to .get() would block the thread. You must not block the UI thread or any other STA, and when compiling with C++/CX support enabled, the libraries enforce this.
If you turn on first chance exception handling in the debugger (Debug -> Exceptions..., check the C++ Exceptions check box), you should see that the first exception to be thrown is an invalid_operation exception, from the following line in <ppltasks.h>:
// In order to prevent Windows Runtime STA threads from blocking the UI, calling
// task.wait() task.get() is illegal if task has not been completed.
if (!_IsCompleted() && !_IsCanceled())
{
throw invalid_operation("Illegal to wait on a task in a Windows Runtime STA");
}
The "invalid parameter" you are reporting is the fatal error that is caused when this exception reaches the ABI boundary: the debugger is notified that the application is about to terminate because this exception was unhandled.
You need to restructure your code to use continuations, using task::then, as described in the article Asynchronous Programming in C++ Using PPL.
Just to make sure you understand the async pattern, what is happening in your code is that you call create_task and immediately after that task has started you are trying to get the result with .get(). Calls to .get() will throw immediately if the task is still running or the file could not be found. Therefore, the correct way of structuring this is using a .then on your file task, ensuring that you have the result of this task before starting the next one.
create_task(file->OpenAsync(FileAccessMode::Read)).then([](IRandomAccessStream^ reader)
{
    // do stuff with the reader
});
At that point the reader is available so you can do whatever you want to, even start a new task.
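Putting the two steps together, the original read could be written as one chain of continuations. A rough sketch reusing the file and count names from the question, and assuming the usual Windows::Storage::Streams types are in scope:
create_task(file->OpenAsync(FileAccessMode::Read))
    .then([](IRandomAccessStream^ reader)
{
    // Start the read only once the stream is available.
    const unsigned int count = 1000000;
    Streams::Buffer^ buffer = ref new Streams::Buffer(count);
    return create_task(reader->ReadAsync(buffer, count, Streams::InputStreamOptions::None));
})
    .then([](IBuffer^ result)
{
    // result->Length holds the number of bytes actually read
});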
Also, it is possible that the call to OpenAsync is failing because the file is empty. I would add a try/catch block to the previous task, the one that gets the file, just to make sure that's not the problem.

Qt (QFtp) question

Hello, I am learning Qt and trying to upload a file using QFtp. I wrote the following code:
this->connect(this->ftp, SIGNAL(done(bool)), this, SLOT(ftpDone(bool)));
this->connect(this->ftp, SIGNAL(dataTransferProgress(qint64, qint64)), this, SLOT(dataTransferProgress(qint64, qint64)));
this->connect(this->ftp, SIGNAL(stateChanged(int)), this, SLOT(stateChanged(int)));
.....
if (this->file.open(QIODevice::ReadWrite))
{
    this->ftp->setTransferMode(QFtp::Active);
    this->ftp->connectToHost(this->settings->getHost());
    this->ftp->login(this->settings->getUser(), this->settings->getPassword());
    this->ftp->cd(remoteFilePath);
    this->ftp->get(this->fileName, &this->file);
    this->ftp->close();
}
It kind of stops: it reports in dataTransferProgress that it is at 0/XXX, but the slot is never invoked again (using the same code but with the get function I can download a file and it works without a problem). Also, the error that I get after the timeout is QFtp::UnknownError.
Assuming all the commands up to get are successful, it's likely that you are closing the connection before get finishes. You should save the identifier returned by get and call close() only when the commandFinished signal is emitted with that identifier.
Note: except for setTransferMode, all of the methods you used are asynchronous. They will be executed in the order they are called, but since you aren't performing any error checking, it's possible for one to fail while the rest are still attempted, which might result in some confusion.
The proper way of doing this is to connectToHost first and, if that's successful (you can track this with the commandFinished signal), call login, and so on.
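A rough sketch of that approach (the class name FtpUploader and the transferId member are made up for illustration, and a slot is assumed to be connected to QFtp::commandFinished(int, bool)):
// when starting the transfer, remember the command id instead of closing right away
this->transferId = this->ftp->get(this->fileName, &this->file);

// slot connected to commandFinished(int, bool)
void FtpUploader::ftpCommandFinished(int id, bool error)
{
    if (id == this->transferId)
    {
        if (error)
            qDebug() << "Transfer failed:" << this->ftp->errorString();
        this->ftp->close(); // the command has finished, so it is now safe to close
    }
}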