WebStorm Conditional Code Folding

I'm wondering whether there is any conditional code folding mechanism in WebStorm. When writing Karma tests, I find that I have many nested describe blocks containing beforeEach and it functions.
Is there a way I can fold all beforeEach / it statements on any level but leave the describe blocks unfolded?
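For illustration, such a spec might look something like this (a simplified TypeScript/Jasmine-style sketch; the names are made up):
describe('calculator', () => {
  let calc: { add(a: number, b: number): number };

  beforeEach(() => {
    // setup I would like to be able to fold away
    calc = { add: (a, b) => a + b };
  });

  describe('add', () => {
    beforeEach(() => {
      // nested setup, also a folding candidate
    });

    it('adds two numbers', () => {
      expect(calc.add(2, 3)).toBe(5);
    });
  });
});
The goal would be to collapse every beforeEach and it body while keeping both describe levels visible.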

Related

What is prolog/epilog in TESSY?

I'm trying to use the TESSY unit test application in my projects, and I'm now trying to make my code compatible with it. I have a lot of questions and doubts, but one of the biggest is: what is Prolog/Epilog in TESSY?
I don't have any experience with the Prolog language. I think it may be a tool for generating test cases programmatically. I want to generate test cases instead of creating them one by one, and I don't know whether this tool can help me with that, but I guess it can.
The only reference I have found is the TESSY user manual. If you know of any other references (books, etc.), let me know.
What is Prolog/Epilog in TESSY?
Where the TESSY user manual refers to prolog and epilog, it is not referring to the programming language Prolog, but to code that is executed before and after a unit test. The problem is that the manual spells the words as prolog and epilog, but they can also be spelled prologue and epilogue. Try searching with those alternatives and it should help.
If you look at the definition of prologue
any introductory proceeding, event, etc.:
or the definition of epilogue
a concluding part added to a literary work
it should make more sense.
AFAIK the words were first used in programming in the days of assembly (see: Function prologue) and carried over to unit testing. The meaning is basically the same.
Also note in the image in the manual that you can stack the prologue and epilogue, e.g.:
prolog 0
  prolog 1
    code or test
  epilog 1
epilog 0
Also while not a requirement, if there is a prologue there is typically a matching epilogue.
Sometimes it helps to look at the same idea under different terminology in a similar tool. In this case, JUnit, a popular unit test framework for Java, uses the words before and after. See: How do I use a test fixture?
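The same stacking can also be illustrated with nested Jasmine-style hooks in TypeScript (only an analogy, not TESSY itself): outer hooks wrap inner hooks, which wrap the code under test.
describe('outer', () => {
  beforeEach(() => console.log('prolog 0'));
  afterEach(() => console.log('epilog 0'));

  describe('inner', () => {
    beforeEach(() => console.log('prolog 1'));
    afterEach(() => console.log('epilog 1'));

    it('runs the code under test', () => {
      console.log('code or test');
    });
  });
});
The hooks run in the order prolog 0, prolog 1, code or test, epilog 1, epilog 0, matching the stacked layout above.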

Is there a way to skip certain tests by default in Codeception?

I'm adding unit tests to a project that did not originally have tests. A lot of the code makes real changes to file systems, databases, web calls, etc., so I'd still like to be able to test it, but safely, until such time as I can refactor it to allow mocks in testing.
I know how to explicitly skip tests in Codeception with --skip or --skip-group, but is there a way to skip "dangerous" tests by default, unless they are specifically included? Ideally, I'd like to define a group called dangerous (or something similar), and add the tests that should only be run with caution to it.
Looking through the source code for Codeception, there doesn't seem to be a configuration option for it.

How can I verify that refactoring preserves code flow, not just behavior?

Sometimes, I see if-statements that could be written in a better way. Usually these are cases where we have several layers of nested if-statements and I've identified a simpler way of rewriting the block of if-statements.
Of course the biggest concern is that the resulting code will have a different code flow in certain cases.
How can I compare the two code-blocks and determine if the code flow is the same or different?
Is there a way to support this analysis with static analysis tools? Are there any other techniques that might help?
Find some way to exercise all possible paths through the code that you want to refactor. You could
write unit tests by hand
use Daikon http://plse.cs.washington.edu/daikon/, which exercises code automatically and systematically to infer invariants (I haven't used it myself, but I have tried a commercial descendant targeted at Java)
Either way, use a code coverage tool to verify that you have complete statement and decision coverage. Use a coverage tool that reports the number of times each statement is executed during the coverage run. You might even be able to get trucov, which actually generates diagrams of code paths, to work.
Do your refactoring.
Run the coverage tool again and compare statement execution counts before and after the refactoring. If any statement execution count changed, the flow must have changed. The opposite isn't guaranteed to be true, but it's probably close enough to true for practical applications. Alternatively, if you got trucov to work, compare execution graphs before and after; that would be definitive.
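As a sketch of the count-comparison step, assuming each coverage run has been exported as a JSON file mapping a statement location to its hit count (the file names and format here are assumptions, not the output of any particular coverage tool), the comparison could look like this in TypeScript:
import { readFileSync } from 'node:fs';

type Counts = Record<string, number>;

// e.g. { "src/foo.ts:42": 17, ... }
const before: Counts = JSON.parse(readFileSync('coverage-before.json', 'utf8'));
const after: Counts = JSON.parse(readFileSync('coverage-after.json', 'utf8'));

for (const statement of new Set([...Object.keys(before), ...Object.keys(after)])) {
  const a = before[statement] ?? 0;
  const b = after[statement] ?? 0;
  if (a !== b) {
    // a differing count is a signal that the flow changed
    console.log(`${statement}: ${a} -> ${b}`);
  }
}
How you match statements between the old and new versions of the refactored block is up to you; for statements outside the refactored region the locations stay the same and the counts can be compared directly.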

Code coverage metrics when using Groovy AST transforms

We use several AST transforms in our Groovy code, such as @ToString and @EqualsAndHashCode. We use these so we don't have to write, maintain, and test those methods ourselves. The problem is that code coverage metrics (using JaCoCo right now, but open to change if it will help) don't know these are autogenerated methods, and they cause a lot of code to appear uncovered even though it's not actually code we're writing.
Is there a way to exclude these from coverage metrics in any tool?
I guess you could argue that, since we're adding the annotations, we should still be testing the generated code: a unit test shouldn't care how these methods are created, only that they work.
I had a similar issue with @Log and the conditionals that it inserts into the code. That gets reported (by Cobertura) as a lack of branch coverage.
But as you said: it just reports it correctly. The code is not covered.
If you don't need the code, you should not have generated it. If you need it and aim for full test coverage, you must test it or at least "exercise" it, i.e. somehow use it from your test cases even without asserts.
From a test methodology standpoint, not covering generated code is equally questionable as using exclusion patterns. From a pragmatic standpoint, you may just want to live with it.

Are assertions redundant when you have unit tests?

I'm not yet used to writing unit tests, and I want to do this on a full framework of little tools (making it more secure to use). That way I'll certainly learn more about unit tests than I have until now.
However, I'm really used to adding assertions systematically wherever there is something I need to be sure about (they are removed in the final release), mostly as preconditions in function implementations and whenever I retrieve information that has to be correct (C/C++ pointer validity being a famous example).
Now I'm asking: are assertions redundant when you have unit tests? It looks redundant, since you're testing the behaviour of the same bit of code, but at the same time it's not the same execution context.
Should I do both?
Assertions that check preconditions can help detect and locate integration bugs. That is, whereas unit tests demonstrate that a method operates correctly when it is used (called) correctly, assertions that check preconditions can detect incorrect uses (calls) to the method. Using assertions causes faulty code to fail fast, which assists debugging.
Assertions not only validate the code but serve as a form of documentation that informs readers about properties of various structures that they can be sure will be satisfied at that point in execution (e.g. node->next != NULL). This helps to create a mental model of the code when you're reading through it.
Assertions also serve to prevent disaster scenarios at runtime, e.g.
assert(fuel);
launch_rocket();
Trying to launch when there is no fuel might be disastrous. Your unit tests might have caught this scenario, but it's always possible that you've missed it, and an abort because a condition is unmet is way better than a crash and burn at runtime.
So, in short, I'd keep them there. It's a good habit to add them and there's no gain in unlearning it.
I would say that you need both. Unit tests test that your in-method asserts are correct ;)
Unit testing and assertions are vastly different things, yet they complement each other.
Assertions deal with the proper input and output of methods.
Unit tests deal with ensuring a unit works according to some governing rules, e.g. a specification. A unit may be a method, a class, or a set of collaborating classes (sometimes such tests go by the name "integration tests").
So the assertions for a square root method are along the lines of the input and output both being non-negative numbers.
Unit tests, on the other hand, may check that the square root of 9 is 3 and that the square root of a negative number yields an exception.
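As a rough TypeScript sketch of that square root example (using Node's assert for the contract checks and Jasmine-style tests for the specification; how your project wires these up will differ):
import { strict as assert } from 'node:assert';

function squareRoot(x: number): number {
  assert(x >= 0, 'precondition: input must be non-negative');
  const result = Math.sqrt(x);
  assert(result >= 0, 'postcondition: output must be non-negative');
  return result;
}

// Unit tests against the specification.
describe('squareRoot', () => {
  it('returns 3 for 9', () => {
    expect(squareRoot(9)).toBe(3);
  });

  it('rejects a negative input', () => {
    expect(() => squareRoot(-1)).toThrow();
  });
});
The assertions guard the method's contract from the inside on every call, while the unit tests pin down its specified behaviour from the outside.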