Flash Builder debugger silently calls getters to update Variables View - is this a flaw? - flexbuilder

I've been using Flex/Flash Builder for a couple of years now, and have only discovered something today. I was looking at some complex code which worked fine, but I needed to know how it behaved, so I placed a breakpoint in it and, lo and behold, it threw an exception. I took the breakpoint out again and it ran through OK (even with trace statements). I thought this was odd.
To cut a long story short, after a lot of head scratching it turns out that if you have a getter in your code, and during the debug process your Variables View (or Expressions View) displays the var that the getter gets, then the Flash Builder debugger will run the getter (silently) in order to calculate the value displayed in the Variables View.
In other words, under certain circumstances, by debugging your code you may make it run a completely different path from the one it would take if you weren't debugging (and which path it follows depends upon whether you have the Variables View open and are displaying the var's value in it).
Is this a pretty serious flaw?
I always thought that if you run an application in debug mode it should run the exact same code path that it takes when not debugging, provided the inputs are identical (although I realise it can get pretty hard to replicate real time use when debugging around event handlers for losing/gaining focus etc).
The other scary thing is that the getter is called silently by the debugger - i.e. if you have a breakpoint in the getter, it won't stop there when the debugger calls it to update the var's value in the Variables View. So you don't even realise it's running the getter.
Shouldn't the var in the Variables View just display a value of null until its getter is eventually run under the normal course of code execution?
Edit (28/7/11): My original post stated that setters were run, which is wrong - it is the getters, as described above. As such, this "flaw" exists only when code in the getters performs additional functionality above and beyond the mere getting of the var value - additional functionality like this in getters is a "code smell" to me, therefore the original flaw exists, but really only in tainted code.
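The hazard isn't specific to ActionScript: any accessor with side effects behaves differently once a debugger starts evaluating it. A minimal C++ analogy (names hypothetical) of the kind of getter that makes a program behave differently under observation:

```cpp
// Hypothetical lazy accessor: the first call mutates state.
// A debugger that silently evaluates get() in a watch window flips
// computed_ to true before the program's own first call, so the
// program behaves differently under observation.
class Lazy {
public:
    int get() {                 // not a pure read
        if (!computed_) {
            value_ = 42;        // stand-in for an expensive (or throwing) computation
            computed_ = true;
        }
        return value_;
    }
    bool computed() const { return computed_; }
private:
    int value_ = 0;
    bool computed_ = false;
};
```

If the watch window evaluated get() silently, the "normal" first call would find the work already done - the same effect the Variables View has on ActionScript getters.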

Related

How to change Promise constructor in Node.js project running locally?

This might sound strange, but I would like to alter my local Node.js version and modify the Promise implementation to add a new source instance property.
global.Promise = class SourcePromise extends Promise {
  constructor(params) {
    super(params)
    this.source = new Error('This is where this promise was created').stack
  }
}
This would be to help me debug an error occurring on my Nuxt app, but only on the server. I'm able to catch the error by listening to the unhandledRejection event, but the error returned is not an Error object, it is simply undefined, so I have no clue where it's coming from. The callback of unhandledRejection also receives the promise, so I tried to add the code snippet above at the very beginning of the nuxt start script to be able to log the source like:
process.on('unhandledRejection', (error, promise) => {
  console.log('Unhandled Rejection:', error?.stack)
  console.log('Promise source:', promise.source)
})
but promise.source is also undefined. If I log console.log(Promise.resolve().source) from any script, it works and I get the source so the only explanation I have in mind would be that the promise is created in a child process where my Promise extension is not defined.
To sum up, since it's happening in a separate process and I can't identify which one, the only way I see of implementing my SourcePromise globally in all Node processes would be to change the Promise definition directly in my local version of Node. Is it even possible?
I'm on macOS Monterey 12.3.1 using nvm v0.38.0
EDIT
I ended up cloning Node.js from Github to be able to build it locally and use it to start my Nuxt server. The problem is: it's written in C++ which I don't understand. I think I found where the promise constructor is defined which calls NewJsPromise that seems to be defined here, but I'll need help from a C++ developer since I still don't know how or where to add the stack...
(V8 developer here.) Bad news up front: I don't know exactly how to do what you're asking, and I don't have the time to figure it out for you.
But since you're not getting other answers so far, I'll try to say a few things that might help you make progress:
I don't think this is a "separate process" issue. JS objects, including Promises, can't travel between processes. If you see it in one process, then it was created in that same process.
There's more than one piece of code in V8 that creates Promises, so to be sure that you'll catch the promise in question, you'll have to update all of them:
NewJSPromise in src/builtins/promise-misc.tq, which you've found, is the Torque (not C++!) implementation, used for both the JavaScript Promise constructor and several other builtins. Note that if you put your modifications only into the PromiseConstructor, that would skip the other uses of the helper, so be sure to update NewJSPromise.
There's the C++ version in Factory::NewJSPromise in src/heap/factory.cc (which is used, probably among other things, for V8's API, i.e. any case where Node itself or a custom Node add-on creates Promises).
There's inlining support in the optimizing compiler in PromiseBuiltinReducerAssembler::ReducePromiseConstructor in src/compiler/js-call-reducer.cc (which could potentially be dealt with by disabling compiler support for inlining Promise creation).
The general outline of what you'd do is:
Update the Promise object definition to include another field.
Look at how Error objects are created to see how to get the stack. Error objects employ some fancy "lazy creation" scheme for stack traces for performance reasons; it may be easier to follow that example (to minimize divergence), or it may be easier to simplify it (because you don't care about performance). Note that AFAICT Error objects are always created in C++; you'd have to figure out how to get stacks to Torque-created Promises (off the top of my head I don't have a good suggestion for that).
Update all places where Promises are created (see above) to initialize the new field accordingly.
I strongly suspect that there are less time-consuming ways to debug your original problem. (I dunno, maybe just audit all places where you're dealing with promises, and check that they all have rejection handlers? That might take hours, but quite possibly fewer hours than the V8 modification project sketched above.)
Good luck with your investigation!

How do C++ developers capture programmatic errors in release builds?

I have a C++ application that crashes with a segfault on some unknown customer data. The customer refuses to share his input data. Is it possible to figure out where the error happened?
When a Java application crashes on the end-user side, it usually produces a stack trace that can help the developer figure out where the error is in the program and which program invariants were broken.
But what should a C++ developer do in this case? Should I recompile the application with some compiler option so it provides some diagnostics when the error happens?
If you don't have the input data required to recreate the problem (for whatever reason...including difficult customers) and you don't have core/minidumps, there is not much you can do. I've been in many situations such as this. My recourse was to recreate what I thought was the execution path based on interviewing the customer and then just do a meticulous code review to find possibilities of error conditions. I would test every candidate condition and eventually find the problem. This is painful, time consuming, and the main prerequisite is that you are able to read code nearly like you're reading your native language.
Begin Story Time
I worked somewhere that had a crash bug randomly manifest in a multi-tenant system. No amount of logging, core dumps, etc. would help us find it. Finally I reviewed the code (line. by. line. for multiple thousands of lines) and noticed that the developer was constructing a std::string instance from a char* sequence passed to the ctor. It was DEEP down in the parts of the code that hardly ever changed, so correlating the issue to recent changes was just a set of false leads. I asked the developer, "Are all your char arrays null terminated?" Answer: "No." Me: "Well we are then randomly reading memory until it finds a null, and apparently sometimes the heap has a lot of contiguous non-zero memory." Handling the char array bounds differently resulted in fixing the problem.
End Story Time
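The bug pattern from the story, reduced to a sketch: constructing a std::string from a char* assumes a terminating null, so an unterminated buffer makes the constructor scan whatever memory follows it. Passing the length explicitly removes that assumption:

```cpp
#include <string>

// Sketch of the bug pattern: buf is deliberately NOT null-terminated.
std::string from_buffer() {
    char buf[4] = {'a', 'b', 'c', 'd'};
    // return std::string(buf);            // undefined behavior: scans past buf for '\0'
    return std::string(buf, sizeof buf);   // safe: the length is explicit
}
```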
While you can't find a single way to find all bugs, there is a defensive design you can apply that is quite simple. Most people put it in the code once they get burned by this type of situation. The approach is to add support for different levels of logging verbosity and essentially instrument your code with log outputs that don't execute unless the code is set to use the correct level of verbosity. Turning the verbosity level up until the bug is recreated gives you at least some idea of where it is happening. Often customers will not have a problem sharing redacted log data (assuming there is sensitive data in the logs). Load the logs in Splunk or something similar (if the customer doesn't already aggregate their logs in an analysis tool) and you'll have an easier time reviewing the data.
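A minimal sketch of the leveled-logging idea (all names hypothetical): the log statements ship in the binary but only execute when the configured verbosity admits them, so a customer can raise the level to recreate a bug without needing a special build.

```cpp
#include <cstdio>

enum class Level { Error = 0, Warn, Info, Debug, Trace };

// Runtime-configurable verbosity; ships at a quiet default.
static Level g_verbosity = Level::Info;

// Emits msg only if level is within the configured verbosity.
// Returning whether it emitted makes the gate easy to test.
bool log_msg(Level level, const char* msg) {
    if (static_cast<int>(level) > static_cast<int>(g_verbosity))
        return false;
    std::fprintf(stderr, "[%d] %s\n", static_cast<int>(level), msg);
    return true;
}
```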
Unfortunately with C++ you don't get nice stack traces and post-mortem data for free (in general). You have to add these post-mortem troubleshooting capabilities into your design up front. Most of the design gets driven from the expected deployment environment and user personas of your code, so add "difficult customer" as a persona and start coding. :)

How to improve branch coverage in C++

I have a fairly large test suite for a C++ library with close to 100% line coverage, but only 55.3% branch coverage. Skimming through the results of lcov, it seems as if most of the missed branches can be explained by C++'s many ways to throw std::bad_alloc, e.g. whenever an std::string is constructed.
I was asking myself how to improve branch coverage in this situation and thought that it would be nice to have a new operator that can be configured to throw std::bad_alloc after exactly the number of allocations needed to hit each branch missed in my test suite.
I (naively) tried defining a global void* operator new (std::size_t) function which counts down a global int allowed_allocs and throws std::bad_alloc whenever 0 is reached.
This has several problems though:
It is hard to get the number of new calls until the "first" desired throw. I may execute a dry run to calculate the required calls to succeed, but this does not help if several calls can fail in the same line, e.g. something like std::to_string(some_int) + std::to_string(another_int) where each std::to_string, the concatenation via operator+ and also the initial allocation may fail.
Even worse, my test suite (I am using Catch) uses a lot of new calls itself, so even if I knew how many calls my code needs, it is hard to guess how many additional calls of the test suite are necessary. (To make things worse, Catch has several "verbose" modes which create lots of outputs which again need memory...)
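For reference, the countdown allocator described above looks roughly like this (a sketch - the coordination problems listed still apply):

```cpp
#include <cstdlib>
#include <new>

// Fault-injection countdown: -1 means unlimited; set it to N to let the
// next N allocations succeed and make every one after that throw.
static long allowed_allocs = -1;

void* operator new(std::size_t size) {
    if (allowed_allocs == 0)
        throw std::bad_alloc();
    if (allowed_allocs > 0)
        --allowed_allocs;
    if (void* p = std::malloc(size ? size : 1))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }
```

Code under test that uses array new would need operator new[] / operator delete[] replaced the same way.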
Do you have any idea how to improve the branch coverage?
Update 2017-10-07
Meanwhile, I found https://stackoverflow.com/a/43726240/266378 with a link to a Python script to filter some of the branches created by exceptions from the lcov output. This brought my branch coverage to 71.5%, but the remaining unhit branches are still very strange. For instance, I have several if statements like this:
with four (?) branches of which one remained unhit (reference_token is a std::string).
Does anyone have an idea what these branches mean and how they can be hit?
Whose code are you wanting to test - yours or the Standard Library? It strikes me that your coverage report is telling you about branches in 'std::string' rather than your code.
Can you configure 'lcov' to ignore the std library and just focus on your code?
I had a successful go at this a while back. I didn't have a test suite, just ran my application, but found the following.
Some form of isolation of the thing under test was important. I had vectors and maps which basically interrupted the test when they were also subject to the injected failures.
I think I succeeded when I had an IPC between the fault injection and the failure point. That allowed the fault injector code to new and delete independently of the thing under test.
I have also succeeded with similar things when in the same binary, but having two independent allocation paths - a custom allocator for the fault injection code - which ensures it does not get interfered with.
My successful system took a call-stack of a malloc and sent it through IPC to another program. It decided if it had seen the stack before, and if not, it failed the allocation. Then the program might crash and fail (the core-dump was captured), then the test system re-launched.
This dramatically improved the quality of the code I was developing.

What features are standard for a testing framework?

So, I've been developing a few programs for AutoCAD 2005, and I've been constantly running into problems--specifically, I've been working on a program that needs to draw lines based on absolute angles ("azimuths") and distances, converting from a special input format to degrees to radians and back, and, like many other programmers, I've ended up with code that's especially bulky and buggy; I've been stuck for almost a week and a half on a program/script that should've taken about three or four days.
I've been thinking about implementing a testing framework for making development much smoother, but unlike other languages, I'm working in a language that supposedly has absolutely no libraries for it and, even better, it's an embedded scripting environment.
I have a few ideas about how the design might work out, but I need to explain a few things:
Executing commands/AutoLISP background
Most of the programs I write are in the form of console-like commands, much like a shell. For example, say I write a function x. In AutoLISP, it's expressed as (the slash is also literally there): (defun x (arguments / local variables) body of function). To make it exposed to the console, I would need to change the name from x to C:x.
So, most of my testing must be done directly from the console; I tend to avoid the inbuilt Visual Lisp editor in AutoCAD because it seems to be disjointed from the actual working environment that the program will most likely be used in, and it doesn't seem to have an actual debugger. So, I frequently have to use (print "string") or other methods to debug my code.
Ideas/Thoughts/Questions
1. What types of functions might I want to expose for a testing framework? I've heard about multiple paradigms myself, such as compile-time testing, asserts, test classes in Java, etc. Should I try to code an assert? Perhaps I should create reflection-using tests?
2. How and where might I want to inject tests? I've thought about writing a different function and turning all local variables into globals to expose it to a possible testing context, but I'm still unsure of how I might want to do this. AutoLISP is lacking in regular Lisp macros, I believe, but I think it still has very nice reflection capabilities, so it is possible for me to actually feed in commands to the console in order to do things. I feel that an external, non-intrusive framework would make the most sense, but I'd like to get a more experienced answer on this.
Base functionality: with the same inputs, the system gives the same outputs.
Useful functionality: asserts. Given some setup in testing code, you then run part of the program to be tested, and make assertions about the output. If all of the assertions are as expected, print something minimal. If an assertion fails, print something more verbose, to help track back what went wrong.
Incremental functionality: if something sneaks by your tests and you have to manually find a bug, write a test that will cover that bug next time.
Continual functionality: have the tests run at least once per submission to your source control system. They can run as a presubmit if failures are common but testing itself is quick, or as a postsubmit if failures are rare but testing is slow.
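The assert idea is language-agnostic; as a sketch (in C++, with all names hypothetical), a helper that stays silent on success and prints the failing expression with its location on failure looks like:

```cpp
#include <cstdio>

// Hypothetical minimal test helper: quiet on success, verbose on failure.
static int g_failures = 0;

#define CHECK_EQ(actual, expected)                               \
    do {                                                         \
        if ((actual) != (expected)) {                            \
            std::printf("FAIL %s:%d: %s != %s\n",                \
                        __FILE__, __LINE__, #actual, #expected); \
            ++g_failures;                                        \
        }                                                        \
    } while (0)

// Stand-in for a function under test (hypothetical).
int azimuth_to_degrees(int az) { return az % 360; }
```

The same shape ports to AutoLISP: a defun that compares, princs a message on mismatch, and bumps a global failure counter you report at the end of the run.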

Need an efficient strategy for removing compiler-imposed SAVE attribute for local variables

As part of a code modernization effort, I'm trying to eliminate saved state within functions and subroutines. The code in question fails unless it is built using an 'all variables SAVE' flag (such as /SAVE or -save under the Intel compiler).
I'm dealing with about 90,000 lines of code (over 500 functions & subroutines) and I'm faced with a few unattractive options:
The Extremely Tedious Conservative Approach
I can disable the compiler flag and globally add the SAVE directive to each routine's declaration header (skipping any routines that I've declared as PURE or ELEMENTAL), then iteratively remove it from routines, rebuilding and testing the code, and restoring it if testing fails. This is guaranteed to work but it seems like a massive and inelegant waste of time.
The Slightly Less Tedious Additive Approach
I can disable the compiler flag then iteratively add SAVE to suspicious routines, rebuilding and testing, and stopping once the code successfully runs. This is less tedious, but still fairly wasteful. Worse, it's not guaranteed to work if code testing hasn't covered every scenario where saved state is required.
The Question
Is there a more efficient approach to detecting saved state within routines? Is this a job for a static analysis tool? Or am I doomed to the endless tedium of manual editing and testing?
(A natural question is "Why bother?" Three reasons: First, it's bad coding practice; state should be saved at the global, module, or object level, not within a specific routine. Second, pragmatically, local saved state interferes with code parallelization. Finally, minimizing compiler flags reduces the code's dependency on a particular compiler and removes hidden state from the build process.)
I agree with you that what you're doing is the right thing to do (TM), but I don't have any solutions that would guarantee it's not going to be tedious. In any case, getting rid of a lot of saved state, rather than merely moving it from a compiler option into the code, might require more or less extensive refactoring of the code anyway.
Anyway, one thing worth trying is running your code under valgrind. Without such a global -save option, valgrind might catch some "use of uninitialized variable" errors.
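For readers more at home in C-family languages: Fortran's SAVE attribute behaves roughly like a function-local static in C++. A sketch of the hidden-state problem being removed here:

```cpp
// The counter persists across calls - the analogue of a Fortran SAVE
// variable. The result depends on call history rather than on the
// arguments alone, and concurrent callers would race on n, which is
// exactly why local saved state interferes with parallelization.
int next_id() {
    static int n = 0;   // hidden per-routine state
    return ++n;
}
```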