Unit testing monaco editor renders only first line

I am using Vite + Vitest for my project. I installed vitest-canvas-mock and the tests run successfully.
But the editor renders only the first line of the code. I am not able to add text or see any other features.
Has anyone figured out how to write proper unit tests involving monaco-editor?
We are heavily invested in monaco-editor, and it would be good to be able to write unit tests for it.

Monaco-editor only renders as many lines as it needs to fill the available space. Make sure the container in your mock DOM has enough vertical space to render more lines.
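A minimal sketch of that workaround, assuming a jsdom test environment where elements measure 0x0: pass an explicit dimension to monaco.editor.create (or call editor.layout with a size) so the editor does not have to measure the container. The test body and values here are illustrative, not from the original setup:

```ts
import * as monaco from 'monaco-editor';
import { describe, expect, it } from 'vitest';

describe('monaco-editor in jsdom', () => {
  it('lays out more than one line when given an explicit size', () => {
    const container = document.createElement('div');
    document.body.appendChild(container);

    // jsdom performs no real layout, so the container measures 0x0 and the
    // editor renders only what fits. An explicit dimension sidesteps that.
    const editor = monaco.editor.create(container, {
      value: 'line 1\nline 2\nline 3',
      language: 'plaintext',
      dimension: { width: 800, height: 600 },
    });

    expect(editor.getModel()?.getLineCount()).toBe(3);
    editor.dispose();
  });
});
```

Depending on your setup, monaco may still need additional browser-API mocks (vitest-canvas-mock covers the canvas calls); the point is only that the editor must be told, or able to measure, a non-zero size before it will render further lines.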

Related

OpenCover takes much longer to run than nunit-console

I'm trying to add unit tests to this project: https://github.com/JimBobSquarePants/ImageProcessor
When running the unit tests, they take maybe 1 or 2 minutes to run (it's an image processing library, and I don't expect them to be insanely fast).
The problem is that when I run OpenCover over these tests, they take something like 20 minutes to run.
The gist of the current unit tests is that there are a bunch of test images, and each unit test (more like integration tests, actually) reads each image, and runs a bunch of effects on it.
I'm guessing that I'm doing something wrong, but what? Why does it take so much more time under OpenCover than under the NUnit runner?
OpenCover instruments the IL of your assemblies (those for which it can find a PDB file, because that is where the file-location information is kept); each sequence point (think of the places where you can put a breakpoint) and each conditional branch path then triggers an action that registers the visit and increases the visit count.
For algorithmic code, running coverage over heavy integration tests will be a performance issue, so make sure you only run coverage on tight integration tests or on unit tests; e.g. in your case, perhaps use small images (as previously suggested) that can still test the correctness of your code.
You haven't described how you are running OpenCover (or which version; I'll assume the latest), but make sure you have excluded the test assemblies and are only instrumenting the target assemblies.
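For instance, a hypothetical invocation that registers the profiler, runs the NUnit console, and filters instrumentation down to the product assemblies (the runner path and assembly names are placeholders, not from the question):

```
OpenCover.Console.exe ^
  -register:user ^
  -target:"nunit-console.exe" ^
  -targetargs:"ImageProcessor.Tests.dll" ^
  -filter:"+[ImageProcessor*]* -[*Tests*]*" ^
  -output:coverage.xml
```

The -filter syntax is +/-[module]class; excluding the test assemblies means OpenCover spends no time instrumenting code whose coverage you don't care about.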
Finally, OpenCover uses a few queues and threads, but if you throw a lot of data at it (due to loops etc.) it will need time to process that data, so it works much better on machines with 4 or more cores. While your tests are running, have a look at Task Manager and see what is happening.
This is speculation, because I don't use OpenCover, but a coverage analysis tool is supposed to instrument all lines it passes through. Since you are doing image manipulation, each pixel will certainly trigger OpenCover to do some analysis on the matching code lines, and you have lots of pixels.
Let's say OpenCover takes 0.01 ms to instrument one line of code (again, pure speculation), that you are working with 1280×1024 images, and that each pixel needs 3 lines of code (cap red channel, xor green and blue, whatever): you get 1,310,720 pixels × 3 lines × 0.01 ms ≈ 39 seconds. For one test.
I doubt you only have one test, so multiply this by the number of tests; you may have an idea of why it is slow.
You should perhaps try testing your algorithms on a smaller scale: unless you are doing image-wide operations (I don't see which ones?), your code doesn't need the whole image to work on. Alternatively, use smaller images?
EDIT: I had a look at the test suite here and (once again, not knowing OpenCover itself) can say that the problem comes from all the data you are testing; every single image is loaded and processed for the same tests, which is not how you want to be unit testing something.
Test loading each image type into the lib's Image class, then test one rotation from an Image class, one resize operation, etc. Don't test everything every time!
Since the tests are necessary, maybe you could explore the OpenCover options to exclude some data. Perhaps refining your coverage analysis by instrumenting only the outer shell of your algorithm would help. Have a look at filters to see what you could hide in order to make it run acceptably.
Alternatively, you could run the code coverage only daily, preferably at night?
I know this is a very old issue, but I also ran into it.
Also with an image library (trimming bitmaps), I ran into very long running times for the unit tests.
It can be fixed by setting OpenCover's '-threshold:' option to (for example) 50.
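A hedged sketch of what that looks like on the command line; -threshold limits how many visits OpenCover records per instrumentation point, which cuts the bookkeeping inside tight pixel loops (the runner and assembly names are placeholders):

```
OpenCover.Console.exe -register:user -target:"nunit-console.exe" ^
  -targetargs:"ImageProcessor.Tests.dll" -threshold:50 -output:coverage.xml
```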

Generate new code coverage for a single file without clearing all other coverage reports in PHPUnit

First the question:
In PHPUnit 3.5, is there a way to generate a coverage report for a single test without the report for the entire test suite being overwritten? I.e., can I update only the coverage report for the affected files? I still want the output to go to the same folder.
For those that want a bit of background:
Working with PHPUnit 3.5, I have a project which retroactively needs to be covered with unit tests. Now in order to know which classes still need tests I run the entire test suite and generate a html coverage report on it. Because running the complete suite takes some time, I would like to avoid having to run it every time I want to check which tests still need to be implemented. But at the same time I also want the coverage report for the unit test that I'm currently working on, so that I can make sure I'm executing each line of code in a class (this of course is very fast back and forth, so it makes no sense to run the entire suite just to generate this report). I can generate the report for a single test, and I can generate it for the entire suite. But what I'm looking for is a hybrid, which would allow me to first generate a report for the entire suite, and then just update the report with coverage information for the test I'm currently working on.
I've set up a Ruby script which simply runs the test for the file I'm currently working on and generates a coverage report for that file. But working like that, it always resets the coverage report for all the other files as well, even if the test did not execute anything in those classes.
Any ideas?
This isn't possible natively, but if you can figure out how to regenerate the HTML from the XML coverage data files, you could modify your script to:
1. Copy the coverage XML for a full run to a staging area.
2. When running a single test, copy the new XML files over to that area. This won't merge coverage from two tests that cover the same class, but I'm guessing from your description that you're covering a single class with each test and vice versa.
3. Rebuild the HTML from the XML. You might be able to figure out how to do this by looking at the source, but I doubt it's possible natively either.
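For what it's worth, newer PHPUnit tooling (after 3.5) grew support for roughly this workflow: phpunit can dump raw coverage with --coverage-php, and the separate phpcov tool can merge several such dumps into one HTML report. A sketch, assuming phpcov is installed and with placeholder paths:

```
# Full run, once: dump raw coverage for the whole suite
phpunit --coverage-php coverage/full.cov

# While iterating: overwrite only the dump for the test you're working on
phpunit --coverage-php coverage/current.cov tests/FooTest.php

# Merge every .cov file in coverage/ into a single HTML report
phpcov merge --html build/coverage-html coverage
```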

Do we need to unit test the GUI when using proper abstraction?

With a good design pattern like MVP, MVC, etc., we aim to move all logic out of the GUI. That leaves us with a lightweight GUI which ideally just needs to "bind" its buttons and fields to properties in some business logic layer. This is a great approach, as this layer will be free from GUI stuff, and we can easily write unit tests for it.
My question is: Is this enough? Or should we still unit test the GUI layer?
IMHO, if you remove all logic from the GUI, you don't need to test it automatically. Of course, you still need to run it to see if it looks like it should :)
This is about unit tests. For integration tests it is still good to test everything, e.g. with Selenium, if possible.
Sometimes the GUI is not really that dumb. For instance, there might be drag-and-drop support, custom components which display their content based on where they are placed, and more. In that case these things need to be specifically tested, both in integration tests and individually in unit tests.
Most of the time the integration tests start from the UI layer, so we end up testing a lot of the UI layer in those scenarios as well. I once read a comment about unit testing saying that you don't need to write tests for code that can only be broken deliberately: a getter returns the value it is supposed to, and the only way to break it is to not return that value, so we don't write unit tests for getters and setters unless there is some logic embedded in them (in which case they are not really getters and setters).
So if the GUI is totally dumb and there are only bindings in it, then unit tests are not required.
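To make that concrete, here is a minimal, hypothetical MVP-style sketch in TypeScript (all names invented for illustration): the view is a dumb interface that the real GUI implements with nothing but bindings, while the presenter holds the logic and is unit tested against a hand-rolled fake view, with no GUI instantiated.

```ts
// Hypothetical view abstraction: the real GUI class only forwards events
// and renders whatever it is told; it contains no logic worth unit testing.
interface LoginView {
  showError(message: string): void;
  navigateToHome(): void;
}

class LoginPresenter {
  constructor(
    private view: LoginView,
    private auth: (user: string, pass: string) => boolean,
  ) {}

  onLoginClicked(user: string, pass: string): void {
    if (this.auth(user, pass)) {
      this.view.navigateToHome();
    } else {
      this.view.showError('Invalid credentials');
    }
  }
}

// Unit test with a fake view: no real GUI is created anywhere.
const calls: string[] = [];
const fakeView: LoginView = {
  showError: (m) => calls.push(`error:${m}`),
  navigateToHome: () => calls.push('home'),
};

new LoginPresenter(fakeView, () => false).onLoginClicked('bob', 'wrong');
console.assert(calls[0] === 'error:Invalid credentials');
```

The GUI class itself then contains nothing but forwarding, which is exactly the code that, per the answers above, does not need its own unit tests.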

Determining which tests cover a line of code

Is there a way to determine the set of unit tests that will potentially execute a given line of code? In other words, can you automatically determine not just whether a given line is covered, but the actual set of tests that cover it?
Consider a big code base with, say, 50K unit tests. Clearly, it could take a LONG time to run them all: hours, if not days. Working in such a code base, you'd like to be able to execute some subset of all the unit tests, including only those that cover the line (or lines) you just touched. Sure, you could find some manually and run those, but I'm looking for a way to do it faster and more comprehensively.
If I'm thinking about this correctly, it should be possible. A tool could statically traverse all the code paths leading out of each unit test, and come up with a slice of the program reachable from that test. And you should then (theoretically) be able to compute the set of unit tests that include a given line in their slice, meaning that the line could be executed by that test ("could" rather than "will" because the actual code path will only be determined at run time based on the inputs or other conditions). A given line of code could have a massive number of tests that execute it (say, code in a shared library), whereas other lines might have few (or no) tests covering them.
So:
Is my reasoning sound on this idea? Could it theoretically be done, or is there something I'm leaving out?
Is there already a tool out there that can do this? Or, is this a common thing with a name I haven't run into? Pointers to tools in the java world, or to general research on the subject, would be appreciated.
JetBrains' dotCover also now has this feature for .NET code. It can be accessed from the dotCover menu via "Show Covering Tests", or by pressing Ctrl + Alt + K.
I'm pretty sure Clover will show you which tests validate each line of code. So you could manually execute the tests by looking at the coverage reports. They also have a new API which you might be able to use to write an IDE plugin that would let you execute the tests that cover a line of code.
The following presentation discusses how to compute the program slice executed by a unit test. It answers the question of, "can you determine the test coverage without executing the program?" and basically sketches the idea you described... with the additional bit of work to actually implement it.
You might note that computing a program slice isn't a computationally cheap task. I'd guess that computing a slice (a symbolic computation) is generally slower than executing a unit test, so I'm not sure that you'll save any time doing this. And a slice is a conservative approximation of the affected part of the program, so the answer you get back will include program parts that actually don't get executed.
You might be better off instead running all those 50,000 unit tests once and collecting the coverage data for each one. In that case, when some code fragment is updated, you can determine statically whether the code a particular test executes includes the changed code, and thus identify the tests that have to be executed again. You can skip executing the rest of the tests.
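A rough illustration of that bookkeeping, as a hypothetical TypeScript sketch (the data shapes and names are invented): per-test coverage from one full run is inverted into a line-to-tests index, so the lines touched by a change map directly onto the tests worth re-running.

```ts
// Per-test coverage as recorded by one full run of the suite:
// test name -> set of covered lines, keyed as "file:line".
type CoverageByTest = Map<string, Set<string>>;

// Invert it: "file:line" -> the tests that executed that line.
function buildReverseIndex(coverage: CoverageByTest): Map<string, Set<string>> {
  const index = new Map<string, Set<string>>();
  for (const [test, lines] of coverage) {
    for (const line of lines) {
      if (!index.has(line)) index.set(line, new Set());
      index.get(line)!.add(test);
    }
  }
  return index;
}

// Given the lines touched by a change, pick the tests to re-run.
function testsToRerun(
  index: Map<string, Set<string>>,
  changedLines: string[],
): Set<string> {
  const tests = new Set<string>();
  for (const line of changedLines) {
    for (const test of index.get(line) ?? []) tests.add(test);
  }
  return tests;
}

// Example: only testB covered the changed line, so only it is re-run.
const coverage: CoverageByTest = new Map([
  ['testA', new Set(['util.ts:10'])],
  ['testB', new Set(['util.ts:10', 'grid.ts:42'])],
]);
console.log(testsToRerun(buildReverseIndex(coverage), ['grid.ts:42'])); // Set { 'testB' }
```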
My company builds a family of test coverage tools. Our next release of these tools will have this kind of incremental regression testing capability.
This is a feature that the JMockit Coverage tool (for Java) provides, although it shows the tests that did cover a given line of production code in the last run, not the tests "that will potentially execute a given line of code".
Typically, however, you would have a Jenkins (or whatever) build of the project, where all tests are executed and an HTML coverage report is generated. Then it would just be a matter of examining the report to see which tests are currently covering a given line of code.
A sample coverage report showing the list of tests for each line of production code is available online.

Should I unit-test my grid rendering logic?

I have a simple project, mostly consisting of back-end service code. I have this fully unit-tested, including my DAL layer...
Now I have to write the front-end. I re-use what business objects I can in my front-end, and at one point I have a grid that renders some output. I have my DAL object with some function called DisplayRecords(id) which displays the records for a given ID...
All of these DAL objects are unit tested. But is it worth it to write a unit test for the DisplayRecords() function? This function calls a stored proc which does some joins. This means that my unit test would have to set up multiple tables, one with 15 columns, and its return value is a DataSet (this is the only function in my DAL that returns a DataSet, because it wasn't worth creating an object just for this one grid)...
Is stuff like this even worth testing? What about front-end logic in general: do people tend to skip unit tests for the ASP.NET front-end, similar to how people 'skip' the logic for private functions? I know the latter is a bit different - testing behavior vs. implementation and all... but I'm just curious what the general rule of thumb is?
Thanks very much
There are a few things that weigh into whether you should write tests:
It's all about confidence. You build tests so that you have confidence to make changes. Can you confidently make changes without tests?
How important is this code to the consumers of the application? If this is critical and central to everything, test it.
How embarrassing is it if you have regressions? On my last project, my goal was no regressions: I didn't want the client to have to report the same bug twice. So every important bug got a test to reproduce it before it was fixed.
How hard is it to write the test? There are many tools that can help ease the pain:
Selenium is well understood and straightforward to set up, though it can be a little expensive to maintain a large Selenium test suite. You'll need fixture data for this to work.
Use a mock to stub out your DAL call, assuming it's tested elsewhere. That way you save the time of creating all the fixture data. This is a common pattern in testing Java/Spring controllers.
Break the code down in other ways simply so that it can be tested. For example, extract the code that formats a specific grid cell and write unit tests around that, independent of the view code or real data (see the sketch below).
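As a hypothetical sketch of that last suggestion, in TypeScript for brevity (the function and its formatting rules are invented): the cell formatting becomes a pure function, so the tests need no view, no DataSet, and no stored procedure.

```ts
// Pure formatting logic extracted from the grid: trivial to unit test.
function formatAmountCell(amount: number, currency: string): string {
  const sign = amount < 0 ? '-' : '';
  return `${sign}${currency}${Math.abs(amount).toFixed(2)}`;
}

// Tests exercise the logic directly, with no UI or database involved.
console.assert(formatAmountCell(1234.5, '$') === '$1234.50');
console.assert(formatAmountCell(-3, '$') === '-$3.00');
```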
I tend to make quick Selenium tests and just sit and watch the app do its thing; that's a fast validation method which avoids all the manual clicking.
Fully automated UI testing is tedious and should IMO only be done in more mature apps where the UI won't change much. Regarding the 'in-between' code, I would test it if it is reused and/or complicated or introduces new logic, but if it's just a more or less new sequence of DAL method calls specific to a single view, I would skip it.