ELMAH does not email in a fire-and-forget scenario

I have an MVC app where I am trying to capture all incoming requests in an ActionFilter. Here is the logging code; I am trying to log in a fire-and-forget fashion.
My issue is that if I execute this code synchronously, by taking out the Task.Run, ELMAH does send out an email. But with the code shown below I can see the error getting logged to the in-memory log in elmah.axd, yet no email is sent.
public void Log(HttpContextBase context)
{
    Task.Run(() =>
    {
        try
        {
            throw new NotImplementedException(); // simulating an error condition
            using (var s = _documentStore.OpenSession())
            {
                s.Store(GetDataToLog(context));
                s.SaveChanges();
            }
        }
        catch (Exception ex)
        {
            ErrorSignal.FromCurrentContext().Raise(ex);
        }
    });
}

Got this answer from Atif Aziz (ELMAH lead contributor) on the ELMAH Google group:
When you use Task.Run, the HttpContext is not transferred to the thread pool thread on which your action will execute. When ErrorSignal.FromCurrentContext is called from within your action, my guess is that it's probably failing with another exception because there is no current context. That exception is lying with the Task.
If you're on .NET 4, you're lucky, because you'll see the ASP.NET app crash eventually (though possibly much after the fact) when the GC kicks in and collects the Task, and its exception goes "unobserved". If you're on .NET 4.5, the policy has been changed and the exception will simply get lost. Either way, your observation will be that mailing is not working. In fact, logging won't work either unless you use Elmah.ErrorLog.GetDefault(null).Log(new Error(ex)), where a null context is allowed; but that call only logs the error and does not do any mailing.
ELMAH's modules are connected to the ASP.NET context. If you detach from that context by forking to another thread, then you cannot rely on ELMAH's modules. You can only use Elmah.ErrorLog.GetDefault(null).Log(new Error(ex)) reliably to log an error.

Widevine session update endless loop

I am using libwidevinecdm.so from Chrome to handle DRM-protected data. I am currently successfully setting the Widevine server certificate I get from the license server. I can also create a session with the PSSH box of the media I'm trying to decode. So far everything is successful (all promises resolve fine). The session is created like this:
_cdm->CreateSessionAndGenerateRequest(promise_id, cdm::SessionType::kTemporary, cdm::InitDataType::kCenc, pssh_box.data(), static_cast<uint32_t>(pssh_box.size()));
I then get a session message of type kLicenseRequest, which I forward to the respective license server. The license server responds with a valid response containing the same amount of data as I can see in the browser when using Chrome. I then pass this to my session like this:
_cdm->UpdateSession(promise_id, session_id.data(), static_cast<uint32_t>(session_id.size()),
                    license_response.data(), static_cast<uint32_t>(license_response.size()));
The problem now is that this promise never resolves. It keeps posting the kLicenseRequest message over and over again to my session without ever returning. Does this mean my response is wrong, or is this something else?
The issue is caused by the fact that everything in CreateSessionAndGenerateRequest is done synchronously; that means that by the time CreateSessionAndGenerateRequest returns, your promise will already be resolved.
The CDM emits the kLicenseRequest inside CreateSessionAndGenerateRequest, and it doesn't do so in a "fire and forget" fashion: the function waits there until you have returned from cdm::Host_10::OnSessionMessage. Since my implementation of OnSessionMessage made a synchronous HTTP request to the license server before, also synchronously, calling UpdateSession, the entire chain ended up blocking.
So ultimately I was calling UpdateSession while still being inside CreateSessionAndGenerateRequest, and I assume the CDM cannot handle this and reacts by creating a new session with the given ID and generating a request again, which of course triggered another UpdateSession, and so on.
The simplest way to break the cycle was to make something asynchronous. I decided to launch a separate thread when receiving kLicenseRequest, wait for a few milliseconds to make sure that CreateSessionAndGenerateRequest has time to finish (not sure if that is really required), and then issue the request to the license server.
The only change I had to make was adding the surrounding std::thread:
void WidevineSession::forward_license_request(const std::vector<uint8_t> &data) {
    std::thread{
        [=]() {
            std::this_thread::sleep_for(std::chrono::milliseconds{100});

            net::HttpRequest request{"POST", _license_server_url};
            request.add_header("Authorization", fmt::format("Bearer {}", _access_token))
                   .byte_body(data);
            const auto response = _client.execute(request);
            if (response.status_code() != 200) {
                log->error("Widevine license request not accepted by license server: {} {} ({})",
                           response.status_code(), response.status_text(), utils::bytes_to_utf8(response.body()));
                throw std::runtime_error{"Error requesting widevine license"};
            }

            log->info("Successfully requested widevine license from license server");
            _adapter->update_session(this, _session_id, response.body());
        }
    }.detach();
}

JXBrowser modify cookies

I have come across the network events article (https://jxbrowser.support.teamdev.com/support/solutions/articles/9000013108-network-events) for JXBrowser and wanted to add new cookies so that they could be used in subsequent calls.
Support is available for adding headers; however, since no direct access is available to the cookies, I tried using the following:
public void onBeforeSendHeaders(BeforeSendHeadersParams paramBeforeSendHeadersParams)
{
    List<Cookie> cookieList = browser.getCookieStorage().getAllCookies();
}
Note that the calls in the snippet below produce the same exception:
browser.getURL(); // exception is thrown here
CookieStorage storage = setCookies(paramBeforeSendHeadersParams, browser, list);
storage.save(); // exception is thrown here
When I do this, I get:
java.lang.IllegalStateException: You are trying to execute some code that invokes synchronous message send to IPC channel. This code is executed in the scope of the handler which is bounded to synchronous message received from IPC channel. Such code execution causes a deadlock in native code with high probability and is forbidden.
What is the reasoning behind this? Any help is appreciated.
As I understand it, you want your application to share cookies between several Browser instances.
It is possible to create two Browser instances with the same BrowserContext instance, which uses the same user data directory. As a result, they will share cookies and cache files. For example:
BrowserContext context = new BrowserContext(
        new BrowserContextParams("C:\\my-data1"));
Browser browser1 = new Browser(context);
Browser browser2 = new Browser(context);
In this case, you should not receive the exception.

Loopback: return error from beforeValidation hook

I need to perform custom validation of an instance before saving it to a MySQL DB, so I perform an (async) check inside the beforeValidate model hook.
MyModel.beforeValidate = function(next) {
    // async check that finally calls next() or next(new Error('fail'))
};
But when the check fails and I pass an Error object to the next function, execution continues anyway.
Is there any way to stop execution and respond to the client with the error?
This is a known bug in the framework; see https://github.com/strongloop/loopback/issues/614
I am working on a new hook implementation that will not have issues like the one you have experienced; see loopback-datasource-juggler#367 and the pull request loopback-datasource-juggler#403.

Getting "Error: Assertion Failed: calling set on destroyed object" when trying to rollback a deletion

I am looking into how to show a proper deletion error message in Ember when there is an error coming back from the server. I looked at this topic and followed its suggestion:
Ember Data delete fails, how to rollback
My code is just like it: I return a 400, and my catch fires and logs, but nothing happens. When I pause it in the debugger, though, and try to roll back, I get Error: Assertion Failed: calling set on destroyed object.
So A) I cannot roll back, and B) the error is silently swallowed.
Here is my code:
visitor.destroyRecord().then(function() {
    console.log('SUCCESS');
}).catch(function(response) {
    console.log('failed to remove', response);
    visitor.rollback();
});
In case it's relevant, my model does have multiple relationships. What am I doing wrong? The Ember Data version is 1.0.0.8 beta (the previous one from the release a few days ago).
Thanks in advance.
EDIT
I have now discovered that the record actually is restored inside the cache, according to the Ember Inspector, but the object will not reappear in the rendered list of visitors. I need some way to force it to reload into the template...
After destroyRecord, the record is gone and the deletion cannot be rolled back. The catch clause will just catch a server error. If you want the record back, and think it's still on the server, you'll have to reload it.
See the following comment on deleteRecord from the Ember Data source:
Marks the record as deleted but does not save it. You must call
`save` afterwards if you want to persist it. You might use this
method if you want to allow the user to still `rollback()` a
delete after it was made.
This implies that a rollback after save is not possible. There is also no sign anywhere I can see in the Ember Data code of somehow reverting a record deletion when the DELETE request fails.
In theory you might be able to muck with the isDeleted flag, or override various internal hooks, but I'd recommend against that unless you really know how things work.
Try reloading the model after the rollback. It will reload from the server, but it was the only way around this that I could find.
visitor.destroyRecord().then(function() {
    console.log('SUCCESS');
}).catch(function(response) {
    console.log('failed to remove', response);
    visitor.rollback();
    visitor.reload().then(function(vis) {
        console.log('visitor.reload :: ' + JSON.stringify(vis));
    });
});
Hope that helps.

Handling Exceptions in a critical application that should not crash

I have a server application which I am debugging that basically parses scripts (VBScript, Python, JScript and SQL) for the application that requests it.
This is a very critical application which, if it crashes, causes havoc for a lot of users. The problem I am facing is how to handle exceptions so that the application can continue and the users know when something is wrong in their scripts.
An example: for the SQL scripts the application normally returns a set of values (Date, Number, String and Number), so the scripts have to have a statement at the end such as:
into dtDate, Number, Number, sString
These are values that are built into the application, and the server application knows how to interpret them. These fields are treated in the server app as part of an array, and the return values should normally be in a specific order, as the indexes of these fields into the array are hardcoded inside the server application.
Now when a user writing a script forgets one of these fields, the last field (normally the string) throws an IndexOutOfBoundsException.
The question is: how does one recover from exceptions of this nature without taking down the application?
Another example is an error in a script for which no parsing error message can be generated. These errors just disappear into the background of the application and eventually cause the server app to crash. The scripts on which it fails don't necessarily fail to execute entirely; part of a script doesn't execute while the other parts do, which looks fairly odd to a user.
This server app is a native C++ application and uses COM technologies.
I was wondering if anyone has any ideas on the best way to handle exceptions such as the ones described above without crashing the application?
You can't handle problems like this with exceptions. You could have a top-level catch block that catches the exception and hope that not too much program state got irrecoverably munched, in order to keep the program alive. That still doesn't make the user happy; the query she is waiting for still doesn't run.
Ensuring that changes don't destabilize a critical business app requires organization: people who sign off on the changes and verify that they work as intended before anything is allowed into production. In other words, QA.
Since you talk about parsing different languages, you probably have something like:
class IParser // parser interface
{
public:
    virtual bool Parse( File& fileToParse, String& errMessage ) = 0;
};

class VBParser  : public IParser { /*...*/ };
class SQLParser : public IParser { /*...*/ };
Suppose the Parse() method throws an exception that is not handled; your entire app crashes. Here's a simplified example of how this could be fixed at the application level:
// somewhere in the main server code
void ParseFileForClient( File& fileToParse )
{
    try
    {
        String err;
        if( !currentParser->Parse( fileToParse, err ) )
            ReportErrorToUser( err );
        else
        {
            // process parser result
        }
    }
    catch( std::exception& e )
    {
        ReportErrorToUser( FormatExceptionMessage( e ) );
    }
    catch( ... )
    {
        ReportErrorToUser( "parser X threw unknown exception; parsing aborted" );
    }
}
If you know an operation can throw an exception, then you need to add exception handling around that area.
Basically, you need to write the code in an exception-safe manner, which usually follows these guidelines (see the sketch after the links below):
Work on temporary values in the operations that can throw exceptions.
Commit the changes using the temp values afterwards (usually this will not throw an exception).
If an exception is thrown while working on the temp values, nothing gets corrupted, and in the exception handler you can manage the situation and recover.
http://www.gotw.ca/gotw/056.htm
http://www.gotw.ca/gotw/082.htm
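As a minimal sketch of that work-on-a-temporary-then-commit pattern (my own illustration; the names ScriptHost, ScriptResult, ParseScript and ReloadScript are made up for this example), all throwing work happens on a local temporary and the commit is a non-throwing move:

#include <sstream>
#include <stdexcept>
#include <string>
#include <utility>
#include <vector>

struct ScriptResult {                                  // hypothetical parsed output
    std::vector<std::string> fields;
};

class ScriptHost {
public:
    // Returns false and leaves current_ untouched if parsing throws.
    bool ReloadScript(const std::string& source, std::string& errMessage)
    {
        try {
            ScriptResult temp = ParseScript(source);   // all throwing work happens on a temporary
            current_ = std::move(temp);                // commit: moving the vector does not throw
            return true;
        }
        catch (const std::exception& e) {
            errMessage = e.what();                     // previous good state is still intact
            return false;
        }
    }

private:
    static ScriptResult ParseScript(const std::string& source)
    {
        if (source.empty())
            throw std::runtime_error("empty script");  // simulated parse failure
        ScriptResult result;
        std::istringstream in(source);
        for (std::string token; in >> token; )
            result.fields.push_back(token);
        return result;
    }

    ScriptResult current_;                             // last known-good result
};

The key point is that the only statement touching the live state is the move assignment, which does not throw, so a parse failure leaves the last known-good result untouched.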
It really depends on how long it takes to start up your server application. It may be safer to let the application crash and then reload it. Or, taking a cue from the Chrome browser, run different parts of your application in separate processes that are allowed to crash. If you can safely recover from an exception and trust that your application state is OK, then fine, do it. However, catching std::exception and continuing can be risky.
There are simple to complex ways to babysit processes to make sure that if they crash they can be restarted. A couple of tools I use:
Bluepill: http://asemanfar.com/Bluepill:-a-new-process-monitoring-tool
Pacemaker: http://www.clusterlabs.org/
For simple exceptions that can happen inside your program due to user errors, simply save the state that can be changed and restore it like this:
SaveStateThatCanBeAlteredByScript();
try {
    LoadScript();
} catch (std::exception& e) {
    RestoreSavedState();
    ReportErrorToUser(e);
}
FreeSavedState();
If you want to prevent external code from crashing your process (possibly untrusted code such as plugins), you need to run it out of process and use an IPC scheme. On Windows, I think you can memory-map files with OpenFile(). On POSIX systems you can use sem_open() together with mmap().
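A rough sketch of the POSIX side (an assumption of how the pieces fit together, not code from the answer; the segment and semaphore names are invented, and on Linux you link with -lrt -pthread): a parent and a sandboxed worker process share a memory-mapped buffer guarded by a named semaphore.

#include <cstddef>
#include <cstring>
#include <fcntl.h>      // O_CREAT, O_RDWR
#include <semaphore.h>  // sem_open, sem_wait, sem_post
#include <sys/mman.h>   // shm_open, mmap
#include <unistd.h>     // ftruncate

int main() {
    const char* shm_name = "/script_host_buf";   // hypothetical names
    const char* sem_name = "/script_host_sem";
    const std::size_t size = 4096;

    // Create (or open) a named shared memory segment and size it.
    int fd = shm_open(shm_name, O_CREAT | O_RDWR, 0600);
    if (fd == -1) return 1;
    if (ftruncate(fd, size) == -1) return 1;

    // Map it into this process; the worker process maps the same name.
    void* buf = mmap(nullptr, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (buf == MAP_FAILED) return 1;

    // Named semaphore used to hand the buffer back and forth.
    sem_t* sem = sem_open(sem_name, O_CREAT, 0600, 1);
    if (sem == SEM_FAILED) return 1;

    sem_wait(sem);                               // lock the buffer
    std::strcpy(static_cast<char*>(buf), "script text for the worker process");
    sem_post(sem);                               // let the worker pick it up

    // If the worker crashes, only that process dies; the server keeps running
    // and can respawn it and reuse this segment.
    munmap(buf, size);
    sem_close(sem);
    return 0;
}

On shutdown, the owning process should also call shm_unlink() and sem_unlink() to remove the named objects.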
If you have a server, you basically have a main loop that waits for a signal to start a job. The signal could be nothing, and your server just goes through a list of files on the file system, or it could be more like a web server that waits for a connection and executes the script provided over the connection (or anything like that).
MainLoop()
{
    while(job = jobList.getJob())
    {
        job.execute();
    }
}
To stop the server from crashing because of the scripts, you need to encapsulate the external jobs in a protected region.
MainLoop()
{
    // Don't bother to catch exceptions from here.
    // This probably means you have a programming error in the server.
    while(job = jobList.getJob())
    {
        // Catch exceptions from job.execute(),
        // as these exceptions are generally caused by the script.
        try
        {
            job.execute();
        }
        catch(MyServerException const& e)
        {
            // Something went wrong with the server, not the script.
            // You need to stop. So let the exception propagate.
            throw;
        }
        catch(std::exception const& e)
        {
            log(job, e.what());
        }
        catch(...)
        {
            log(job, "Unknown exception!");
        }
    }
}
If the server is critical to your operation, then just detecting the problem and logging it is not always enough. A badly written server will crash, so you want to automate the recovery. You should therefore write some form of heartbeat process that checks at regular intervals whether the server process has crashed and, if it has, automatically restarts it.
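A minimal sketch of such a watchdog on POSIX (my own illustration, not from the answer; it assumes the server binary path is passed on the command line and that a zero exit code means a clean shutdown):

#include <cstdio>
#include <sys/types.h>
#include <sys/wait.h>   // waitpid, WIFEXITED, WEXITSTATUS
#include <unistd.h>     // fork, execv, sleep

int main(int argc, char* argv[]) {
    if (argc < 2) {
        std::fprintf(stderr, "usage: %s /path/to/server [args...]\n", argv[0]);
        return 1;
    }

    for (;;) {
        pid_t pid = fork();
        if (pid == 0) {
            // Child: become the server process.
            execv(argv[1], &argv[1]);
            _exit(127);                       // execv only returns on failure
        }
        if (pid < 0) return 1;                // fork failed

        int status = 0;
        waitpid(pid, &status, 0);             // block until the server exits
        if (WIFEXITED(status) && WEXITSTATUS(status) == 0)
            break;                            // clean shutdown: stop babysitting

        std::fprintf(stderr, "server exited abnormally (status %d), restarting...\n", status);
        sleep(1);                             // small back-off before restarting
    }
    return 0;
}

In practice you would add logging, back-off limits, and a way to stop the watchdog itself; tools like Bluepill or Pacemaker mentioned above already handle those details.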