We have an application developed with Flash (AS2) and a ColdFusion backend (Flash Remoting). I observed that when a database query fails and the response comes back to Flash, the _result handler is called (instead of _status), and the player hangs with the infamous "unresponsive script / abort the script" error.
Doing a trace on the result produces nothing. Trying to enumerate properties in the result also produces nothing.
That's very strange. Does anyone have any idea about what could be causing this / how to solve it?
Use the debug version of Flash Player in your browser if you aren't already; most likely it will show an exception popup.
The second thing is to install http://amfexplorer.riaforge.org/ and see what the back end sends, if anything.
If that doesn't help, try wrapping the result-parsing code in a try-catch and see where it blows up the application:
try {
    // statements
} catch (myErr) {
    // statements
} finally {
    // statements
}
I have been working on an app that allows downloading large files. I am using Swift 3 and Alamofire.
The downloads work in the background, and on iOS 10.2.x this all worked perfectly fine.
But after updating to iOS 10.3.x, when the device is put to sleep and the app is opened again, the following errors are thrown:
[] nw_socket_get_input_frames recvmsg(fd 6, 1024 bytes): [57] Socket is not connected
[] nw_endpoint_handler_add_write_request [1.1 192.124.249.2:443 failed socket-flow (satisfied)] cannot accept write requests
[] tcp_connection_write_eof_block_invoke Write close callback received error: [22] Invalid argument
The download continues in the background and, upon completion, triggers the completion callbacks fine. But because of these errors, it seems the progress callback isn't called unless I close the app, open it again, and reload the table cell view on open.
I can't find much info about these kinds of errors online, only information on hiding them from being printed to the console.
Can anyone help?
Thanks
I had the same problem trying to download a large file (approx. 800 MB) using Alamofire. At first my implementation was calling validate() and then responseData(queue:completionHandler:).
When this error occurred, Alamofire caught it as a failure with alamofire.AFError.ResponseValidationFailureReason.dataFileNil (with no resumeData), which suggests that the closure passed in the to parameter is nil; that wasn't my case.
So my solution was to remove the validate() call and do the validation manually by checking the status codes in a switch.
Without the validate() call, Alamofire still caught the same error as a failure, but with a 200 status code and resumeData present. When this happens I simply start the download again, this time passing the resumeData in the download call.
Since the resumeData was almost at 100% (the error happens while Alamofire is trying to write the file, at the end of the download), the second download was very short.
So basically my code looked like this:
switch response.response?.statusCode {
// other cases
case 200?:
    if let resumeData = response.resumeData {
        // the download "failed" while writing the file:
        // retry the download, this time passing resumeData
    } else {
        // the download finished successfully
    }
// more cases
default:
    break
}
If you need more info/details let me know
Good afternoon,
In my project the ELMAH framework is installed to log exceptions. On localhost it works fine, but when I deploy to production it stops logging null reference exceptions. All other exceptions are logged (or at least I haven't found another one that isn't).
I have set up logging to SQL Server.
I can't figure out what is wrong; can someone give me advice, please? (As I said, it logs every exception I fired, but this one is never caught.)
Thank you
Well, Thomas Ardal's answer was right.
The problem was in the FilterConfig.cs file. Because with the default settings it wouldn't log any 500 errors, dangerous requests, null reference exceptions, etc., I added these lines:
public class ElmahHandleErrorAttribute : HandleErrorAttribute
{
    public override void OnException(ExceptionContext filterContext)
    {
        if (filterContext.Exception is HttpRequestValidationException)
        {
            ErrorLog.GetDefault(HttpContext.Current).Log(new Error(filterContext.Exception));
        }
    }
}
and added this line at the first position in the RegisterGlobalFilters method:
filters.Add(new ElmahHandleErrorAttribute());
After that it started logging some exceptions, but not all. The solution was to remove the if condition and log everything. So if anyone has a similar problem, be sure that the problem is somewhere in the filters...
I'm starting to use Boost, so maybe I'm messing something up.
I'm trying to set up an HTTP server with Boost (Asio). I've taken the code from the docs: http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/examples/cpp03_examples.html (HTTP Server, the first one)
The only difference from the example is that I'm running the server from my own method run and starting io_service in a background thread, as in the docs: http://www.boost.org/doc/libs/1_54_0/doc/html/boost_asio/reference/io_service.html
boost::asio::io_service::work work(io_service_);
(I'm also stopping io_service from my run method.)
When I start this modified server everything seems to be OK and the run method works fine. But when I then try to get a document from the server, the request hangs and control flow never reaches the request_handle method.
Am I missing something?
UPD: Here is the code of my run method:
void NetstreamServer::run()
{
    LOG4CPLUS_DEBUG(logger, "NetstreamServer is running");
    boost::asio::io_service::work work(io_service_);
    try
    {
        while (true)
        {
            if (condition)
            {
                io_service_.stop();
                break;
            }
        }
    }
    catch (std::exception const& e)
    {
        LOG4CPLUS_ERROR(logger, "NetstreamServer" << " caught exception: " << e.what());
    }
}
You should call io_service_.run() - otherwise nothing will dispatch the completion handlers of the Asio objects serviced by io_service_.
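For example, here is a minimal sketch of what the run method could look like, assuming the same member names as in your snippet (io_service_, logger); whatever thread decides to shut the server down would then call io_service_.stop():

void NetstreamServer::run()
{
    LOG4CPLUS_DEBUG(logger, "NetstreamServer is running");

    // Keeps io_service_.run() from returning while there is no pending work.
    boost::asio::io_service::work work(io_service_);

    try
    {
        // Blocks and dispatches completion handlers (accepts, reads, writes)
        // until io_service_.stop() is called from another thread.
        io_service_.run();
    }
    catch (std::exception const& e)
    {
        LOG4CPLUS_ERROR(logger, "NetstreamServer caught exception: " << e.what());
    }
}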
Without the code you changed, everyone here can only guess. Unfortunately you also do not include the compiler and the OS you are using. Even though Boost claims to be platform independent, you should always include this information, because in reality platforms differ, even with Boost.
Let me guess. Are you using Microsoft Windows? How do you prevent the main function from exiting? You moved the blocking run call out of it into another thread, so the main function has no wait point anymore. Let me guess again: you used something like getchar, so that you can exit your server just by hitting the return key. If so, the problem is the getchar, which unfortunately blocks all I/O of the Asio socket implementation, but only on Windows-based systems.
I would not need to guess if you included the information mentioned above in your post, in particular all(!) the changes you made to the code sample.
Has anybody used the unit tests from the Add-on SDK (cfx test)?
I made a test that looks like this:
exports.test_open_tab = function(test) {
    const tabs = require("tabs");
    tabs.open({
        url: "http://valid url with lots of params",
        onReady: function(tab) {
            test.done();
        }
    });
    test.waitUntilDone(600*1000);
};
Basically this should open a tab, wait 600 seconds, and then mark the test as passed.
It actually displays a lot of errors and warnings in the console from the loaded page (jQuery and Google Analytics stuff used by that page) and then marks the test as failed.
Any idea why?
One obvious issue is that you don't actually record any test results. If the fact that onReady() is called is a positive result, you should write:
onReady: function(tab) {
    test.pass("onReady called");
    test.done();
}
Btw, the only case where it would wait 600 seconds is if onReady isn't called for some reason. Otherwise your test.done() call will complete the test execution.
You can somewhat reduce the number of warnings logged by disabling the javascript.options.strict preference. However, these warnings might indicate real issues, and in current Firefox versions it probably makes more sense to switch off the display of JavaScript and CSS warnings in the console.
I am debugging a server application which basically parses scripts (VBScript, Python, JScript and SQL) for the applications that request it.
This is a very critical application which, if it crashes, causes havoc for a lot of users. The problem I am facing is how to handle exceptions so that the application can continue and users know when something is wrong in their scripts.
An example: for the SQL scripts, the application normally returns a set of values (Date, Number, String and Number), so the scripts have to end with a statement such as:
into dtDate, Number, Number, sString. These are values built into the application, and the server application knows how to interpret them. The fields are treated in the server app as part of an array, and the return values must be in a specific order because the indexes of these fields into the array are hardcoded inside the server application.
Now when a user writing a script forgets one of these fields, then the last field (normally string) throws an IndexOutofBoundsException.
The question is how does one recover from exceptions of this nature without taking down the application?
Another example is an error in a script for which no parsing error message can be generated. These errors just disappear into the background of the application and eventually cause the server app to crash. The scripts on which it fails don't necessarily fail to execute entirely; part executes and part doesn't, which looks fairly odd to a user.
This server app is a native C++ application and uses COM technologies.
I was wondering if anyone has any ideas on what the best way is to handle exceptions such as the ones described above without crashing the application??
You can't handle problems like this with exceptions alone. You could have a top-level catch block that catches the exception and hope that not too much program state got irrecoverably munged, in an attempt to keep the program alive. That still doesn't make the user happy; the query she is waiting for still doesn't run.
Ensuring that changes don't destabilize a critical business app requires organization: people who sign off on the changes and verify that they work as intended before they are allowed into production. QA.
Since you talk about parsing different languages, you probably have something like:
class IParser // parser interface
{
public:
    virtual ~IParser() {}
    virtual bool Parse( File& fileToParse, String& errMessage ) = 0;
};

class VBParser : public IParser { /* ... */ };
class SQLParser : public IParser { /* ... */ };
Suppose the Parse() method throws an exception that is not handled; your entire app crashes. Here's a simplified example of how this could be fixed at the application level:
// somewhere in the main server code
void ParseFileForClient( File& fileToParse )
{
    try
    {
        String err;
        if( !currentParser->Parse( fileToParse, err ) )
        {
            ReportErrorToUser( err );
        }
        else
        {
            // process the parser result
        }
    }
    catch( std::exception& e )
    {
        ReportErrorToUser( FormatExceptionMessage( e ) );
    }
    catch( ... )
    {
        ReportErrorToUser( "parser X threw an unknown exception; parsing aborted" );
    }
}
If you know an operation can throw an exception, then you need to add exception handling to that area.
Basically, you need to write the code in an exception-safe manner, which usually means following these guidelines:
Work on temporary values that can throw exceptions.
Commit the changes using the temporary values afterwards (this usually will not throw an exception).
If an exception is thrown while working on the temporary values, nothing gets corrupted, and in the exception handler you can manage the situation and recover (see the sketch after the links below).
http://www.gotw.ca/gotw/056.htm
http://www.gotw.ca/gotw/082.htm
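For illustration, a minimal sketch of that "work on a temporary, then commit" pattern; ScriptResult, ParseScript and results_ are made-up names, not anything from your code:

#include <string>
#include <utility>

struct ScriptResult { /* parsed values */ };            // hypothetical result type

ScriptResult ParseScript( const std::string& text );    // hypothetical, may throw

class Server
{
public:
    void RunScript( const std::string& scriptText )
    {
        // 1. Do all the work that can throw on a temporary value.
        ScriptResult temp = ParseScript( scriptText );  // may throw; results_ untouched

        // 2. Commit using an operation that cannot throw.
        std::swap( results_, temp );
    }

private:
    ScriptResult results_;
};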
It really depends on how long it takes to start up your server application. It may be safer to let the application crash and then reload it. Or, taking a cue from the Chrome browser, run different parts of your application in separate processes that are allowed to crash. If you can safely recover from an exception and trust that your application state is OK, then fine, do it. However, catching std::exception and continuing can be risky.
There are ways, from simple to complex, to babysit processes so that they can be restarted if they crash. A couple of tools I use:
bluepill http://asemanfar.com/Bluepill:-a-new-process-monitoring-tool
pacemaker http://www.clusterlabs.org/
For simple exceptions that can happen inside your program due to user errors, simply save the state that the script can change, and restore it like this:
SaveStateThatCanBeAlteredByScript();
try {
    LoadScript();
} catch (std::exception& e) {
    RestoreSavedState();
    ReportErrorToUser(e);
}
FreeSavedState();
If you want to prevent external code (possibly untrusted code like plugins) from crashing your process, you need to run it in a separate process and use an IPC scheme. On Windows, I think you can memory-map files with OpenFile(). On POSIX systems you can use sem_open() together with mmap().
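As a rough POSIX illustration of that idea (ExecuteScript is a hypothetical entry point into your script engine, and error handling is omitted), the parent shares a small result region with a forked child, so a crash inside the script engine cannot take the server down:

#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

struct ScriptResult { bool ok; char message[256]; };    // assumed result layout

// Hypothetical entry point into the script engine.
bool ExecuteScript(const char* path, char* msg, unsigned msgLen);

// Run one script in a child process; a crash there cannot kill the server.
bool RunScriptIsolated(const char* scriptPath, ScriptResult* out)
{
    // Anonymous shared memory visible to both parent and child.
    void* shared = mmap(nullptr, sizeof(ScriptResult),
                        PROT_READ | PROT_WRITE,
                        MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    ScriptResult* result = static_cast<ScriptResult*>(shared);
    std::memset(result, 0, sizeof(*result));

    pid_t pid = fork();
    if (pid == 0) {                         // child: run the untrusted script
        result->ok = ExecuteScript(scriptPath, result->message,
                                   sizeof(result->message));
        _exit(result->ok ? 0 : 1);
    }

    int status = 0;
    waitpid(pid, &status, 0);               // parent: wait for the child to finish
    *out = *result;
    munmap(shared, sizeof(ScriptResult));

    if (WIFSIGNALED(status)) {              // e.g. the script engine segfaulted
        std::snprintf(out->message, sizeof(out->message),
                      "script process crashed (signal %d)", WTERMSIG(status));
        return false;
    }
    return out->ok;
}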
If you have a server, you basically have a main loop that waits for a signal to start a job. The signal could be nothing, and your server just goes through a list of files on the file system, or it could be more like a web server that waits for a connection and executes the script provided on the connection (or anything like that).
MainLoop()
{
    while(job = jobList.getJob())
    {
        job.execute();
    }
}
To stop the server from crashing because of the scripts, you need to encapsulate the external jobs in a protected region:
MainLoop()
{
    // Don't bother to catch exceptions from here.
    // This probably means you have a programming error in the server.
    while(job = jobList.getJob())
    {
        // Catch exceptions from job.execute(),
        // as these exceptions are generally caused by the script.
        try
        {
            job.execute();
        }
        catch(MyServerException const& e)
        {
            // Something went wrong with the server, not the script.
            // You need to stop. So let the exception propagate.
            throw;
        }
        catch(std::exception const& e)
        {
            log(job, e.what());
        }
        catch(...)
        {
            log(job, "Unknown exception!");
        }
    }
}
If the server is critical to your operation, then just detecting the problem and logging it is not always enough. A badly written server will crash, so you want to automate the recovery. You should write some form of heartbeat process that checks at regular intervals whether the server process has crashed and, if it has, automatically restarts it.
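A minimal POSIX sketch of such a watchdog (the server path is just a placeholder; a real monitor would add logging, back-off and a controlled-shutdown signal):

#include <sys/wait.h>
#include <unistd.h>
#include <cstdlib>

int main()
{
    const char* serverPath = "/usr/local/bin/script-server";   // placeholder path

    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            // Child: replace this process image with the server.
            execl(serverPath, serverPath, static_cast<char*>(nullptr));
            _exit(EXIT_FAILURE);            // only reached if exec failed
        }

        int status = 0;
        waitpid(pid, &status, 0);           // block until the server exits or crashes

        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            break;                          // clean shutdown: stop baby-sitting

        sleep(1);                           // crashed or failed: restart after a pause
    }
    return 0;
}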