How can I intercept Selenium errors? - customization

In developing Selenium extensions I have scripts that verify the correct handling of failure cases. Unfortunately, I have to execute those commands one by one in the IDE and manually examine each error message. What I would like to do is define a custom Selenium command that I can insert before each command that intentionally fails in a given way, e.g. willFail|expected-error-text.
In other words, I want to alter Selenium command completion behavior such that if the next command throws the given error message, then the result is success and the script continues. But if it succeeds or throws a different error, then the script stops with an error.
I imagine this will involve setting observer function(s), and/or intercepting Selenium function(s). I'd expect the issues to be:
How/where to do the initialization. The relevant Selenium objects can be hard to find.
What/when to return in order to alter the result.
Is there something else left out-of-sync by altering a result?
The PowerDebugger extension allows you to pause the IDE upon a failure, and then resume. So I suspect that the how-to is in there somewhere. But I can't quite figure out how it hooks into Selenium command processing. Samit Badle, are you out there?
I am using Selenium IDE 2.2.0.

With some experimentation I have found that the function TestLoop.resume() is responsible for determining the outcome of each command.
It is defined in chrome/content/selenium-core/scripts/selenium-executionloop.js.
This function executes the command, and either halts the script, or allows it to continue.
To alter this behavior, a Selenium extension can temporarily replace this function with a custom version. To accomplish this, save a reference to editor.selDebugger.runner.IDETestLoop.prototype.resume and replace it with the custom function. The custom function should then restore the native function and carry out command execution as appropriate.

Related

Is there a way to stop RGui from crashing when an RCPP program fails to work correctly?

I'm using Rcpp to run C++ code using RGui (version 3.4.1) as a user interface. Quite often I make changes to the C++ code which compile correctly but cause errors (e.g. searching beyond the end of an array) when I run the relevant program in RGui, causing RGui to crash. This is aggravating because I have to re-open RGui, re-open my R script (sometimes with unsaved changes lost), set the working directory again, etc. before I can re-compile the C++ code and run the program in such a way as to find the problem or test amendments. Sometimes it promptly crashes again because I haven't fixed or bypassed the problem.
Is there some way to change the way Rcpp runs such that RGui returns an error message instead of crashing in these sorts of situations?
Briefly:
1. It is spelled Rcpp: capital R, lowercase cpp.
2. Yes, don't have bugs :)
3. In general, 2. is the only viable answer. If you need a managed language, use R.
4. If the code takes your environment down, test outside the environment. Seriously. That is, for example, why I (co-)wrote littler and test "raw code" on the command line: it can only take the command-line app down.
5. We do have a feature in e.g. RcppArmadillo to test for out-of-bounds element access: use the bounds-checked accessor x(i,j) rather than the unchecked x.at(i,j); it throws an error on an out-of-bounds index instead of crashing. See http://arma.sourceforge.net/docs.html#element_access
I don't actually know of a way to prevent this apart from more careful programming and saving before execution. But having done this a few times, I have discovered a way to get back unsaved changes (at least on Windows).
When you get the pop-up that tells you to restart R, don't do it. Instead, open Task Manager, right-click on the R process, and select 'Create dump file'. Find this file in Explorer and open it with a text editor.
Dump files are very big and full of all sorts of data, but if you use the find function to search for a string you know to be in your script, you can locate all the unsaved work. You can then copy and paste it into another file to save it.
If you use RStudio instead of RGui, it usually manages to look after your unsaved work better.

Automatically invoke a function in application code when a GDB breakpoint is hit

I have multiple processes communicating over IPC, and when debugging a single process using gdb I want to send a message to the other processes whenever a breakpoint is hit. Is there a way to automatically invoke a function or piece of code (NotifyAll()) whenever a breakpoint is hit, without manually running commands and invoking the function in the gdb console?
Basically, whenever a gdb debugger is attached to one of these processes, I want gdb to know that it should invoke NotifyAll() whenever a breakpoint (application-wide) is hit.
Yes, this can be done using the Python scripting capabilities in gdb.
In particular you want to add a listener to gdb.events.stop that checks for a breakpoint stop event, then calls your function. It's possible (I don't know offhand) that you'll have to defer the calling of the function by posting an event to the gdb event loop.
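For illustration, here is a minimal sketch of that approach. The gdb.events.stop, gdb.BreakpointEvent, gdb.post_event and gdb.parse_and_eval APIs are standard gdb Python; the assumption that the inferior exposes a NotifyAll() symbol that is safe to call while the process is stopped is yours to verify.

```python
# notify_on_stop.py -- sketch: call NotifyAll() in the inferior on every breakpoint stop.
import gdb

def _call_notify_all():
    try:
        # Evaluate the call in the debugged process; adjust the expression
        # if NotifyAll lives in a namespace or needs arguments.
        gdb.parse_and_eval("NotifyAll()")
    except gdb.error as exc:
        gdb.write("NotifyAll() call failed: {}\n".format(exc))

def _on_stop(event):
    # Only react to breakpoint stops, not to stepping or signals.
    if isinstance(event, gdb.BreakpointEvent):
        # Defer the inferior call until gdb is back in its event loop,
        # since calling into the inferior directly from the handler can be fragile.
        gdb.post_event(_call_notify_all)

gdb.events.stop.connect(_on_stop)
```

You can load this manually with source notify_on_stop.py in the gdb console, or hook it up via the auto-loading mechanism described in the next paragraph.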
To make this work with the minimum of manual intervention, use the gdb script auto-loading feature to associate this Python script with your application. This will require users to trust the script (read about add-auto-load-safe-path), but that's all.
Note that doing things like this is potentially confusing to people trying to debug your application. For example, setting a breakpoint in the RPC code will cause problems unless your script takes extra care.

py.test: dump stuck background threads at the end of the tests

I am using pytest to run my project's Python unit tests.
For some reason, sometimes the test runner does not exit after printing the test stats. I suspect this is because some tests open background threads and some dangling threads are not cleaned up properly in the tear down. As this does not occur every time, it is harder to pin down exactly what is happening.
I am hoping to find a way to make pytest display which threads are still running after it prints the failed and passed tests. Some ideas I have come up with:
Run a custom hook after the tests are finished. Does py.test support any such hooks?
Some other way (a custom py.test wrapper script).
Alternatively, I could just print a thread dump at the end of each tear down.
Python 3.4.
Try using the pytest-timeout plugin... after a timeout occurs, it will dump all threads and exit the process.
If you would like to implement custom code yourself, though, take a look at pytest hooks. You could use the pytest_runtest_teardown hook to write custom tear-down code.
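As a rough sketch of that custom-hook route: pytest_sessionfinish is a standard hook that runs once after the whole test run, and the thread-dump logic below is only illustrative. Something like this in conftest.py would print any threads still alive when the run ends.

```python
# conftest.py -- sketch: report background threads still alive after the test run.
import sys
import threading
import traceback

def pytest_sessionfinish(session, exitstatus):
    """Standard pytest hook, called once after the whole test session finishes."""
    leftover = [t for t in threading.enumerate() if t is not threading.main_thread()]
    if not leftover:
        return
    frames = sys._current_frames()  # maps thread id -> current stack frame
    print("\n=== %d background thread(s) still alive ===" % len(leftover))
    for thread in leftover:
        print("Thread %r (daemon=%s)" % (thread.name, thread.daemon))
        frame = frames.get(thread.ident)
        if frame is not None:
            traceback.print_stack(frame)  # show where the thread is stuck
```

The same dump logic could go into pytest_runtest_teardown instead if you want it after every test rather than once at the end.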

Google Closure Javascript testing, disable autodiscover tests

Currently I am implementing the Google Closure testing facilities.
It works like a charm.
I define the TestCase by hand and add the tests by hand. I also create a separate runner for the tests so I can catch all the results and pass them to another function.
This function sends the results through AJAX to PHP so the results can be logged in the database (this also works as expected).
The problem, however, is that because I do this, the tests get executed twice when I load the page in the browser (once because of the auto-discovery and once because I defined them in the test case).
I would like to disable the auto-discovery, but I don't want to flip the flag inside the Closure library itself, because whenever the library gets updated we would need to set the flag back to false again.
So how can I disable auto-discovery without modifying the code in the Closure library?
Thanks in advance!
If you look into jsunit.js, you'll see that goog.testing.jsunit.AUTO_RUN_ONLOAD = true; is hard-coded there, and you can override this variable only through the Closure Compiler's define mechanism.
If you don't compile your test code (I don't, because of iteration speed), the only option seems to be to change this to false in the library and redo the change on Closure Library updates.

Is there a better way to shell out a command in C++ than system()?

I have a C++ Windows service that sometimes crashes when I call the system() function to shell out a command. I've run the exact text of the command in the Windows command line and it runs just fine, but for some reason it fails when run via system().
To make matters worse, I can't seem to get any information as to why system() is failing. There doesn't seem to be an exception raised, because I'm doing a catch(...) and nothing is getting caught. My service just stops running. I know that it's the call to system() that is failing because I've put logging statements before and after the call, and nothing after the call gets logged.
So, is there a different way that I can shell out my command? At the very least, something that will give me some information if things go wrong, or at least let me handle an exception or something.
I believe system() is technically part of the C standard library, and therefore wouldn't throw exceptions. You should be able to check the return code or errno to get some information about what happened. This MSDN link has some information about the possible return codes on Windows.
I've also seen system() fail for other external reasons, such as virus scanners, so you might investigate that as well.
I don't know of a better way to run shell commands, but I could be wrong.
EDIT: If it still just seems to crash for no reason, you might try using Process Monitor to see what is going on at a lower level. Since the output from Process Monitor can be overwhelming, a trick I like to use is to add a statement to the program right before the call to system() that opens a nonexistent file like "C:\MARKER.TXT"; then you can search the Process Monitor output for that file name and look at the entries right after it, which may have something to do with the problem.
Ordinary catch(...) will not catch fatal exceptions (e.g. a segmentation fault). You have to use structured exception handling. Better yet, enable post-mortem debugging; this article explains how you can enable post-mortem debugging of services.
You could use fork/exec, but I think that is what system() does under the hood.
I think your problem could be the user account associated with your service.
Either there's an environment problem (a missing entry in PATH) or the account the service runs under doesn't have the rights to execute whatever you're trying to run.
Run services.msc and look at the properties for your service.
On the Log On tab, as a test, change the setup so it uses your account to run the service. If it succeeds, you know what the problem is.
Another thing to look at is the PATH inside the service. Use getenv("PATH") and see if a directory you rely on is missing.
Hope this helps...
I ended up using CreateProcess. It's been working out so far.