Unit testing in multiple environments during a build

Currently I have some unit tests that I would like to have executed during a build. The problem is that our software is going to be installed on a variety of systems, and the code that I'm testing is system dependent.
It seems like if I really want my unit tests to run in other environments, I would need some VMs running onto which I could deploy my unit tests and their dependencies, then run the tests and report the results somehow.
If that's the case... is there anything currently available to facilitate that?
If there's a better solution, what do you recommend?
These are VS2008/2010 solutions.
Thanks,
Mark

Can't say what your best option is from here, but SCVMM, VMware Lab Manager, and VS2010 Lab Manager are options to look at. You can definitely do what you want; the most feature-rich environments tend to be on the expensive side, though.

DejaGnu facilitates multi-platform testing and is pretty good at it, even though it is very sparsely documented. Getting it to work with a Microsoft toolchain might be a lot of integration work, though.

Related

Go cross-platform tests

Problem description
I've developed a CLI in Go. My dev environment is Linux. I wrote unit tests and only produce releases of the executables when the tests pass. When I test or use my tool in a Linux environment, everything works fine.
My CI/CD pipeline is built around goreleaser to produce multi-platform executables. Since my app doesn't use exotic cross-platform functionality, I was quite confident that the Windows executable would work as expected. But it didn't.
Long story short, always normalize paths with filepath.ToSlash(). But this is not my question.
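A minimal sketch of what that normalization looks like (the configPath helper is a made-up example for illustration, not code from the actual CLI):

```go
// Minimal sketch of normalizing paths with filepath.ToSlash; configPath
// is a hypothetical helper, not part of the original CLI.
package main

import (
	"fmt"
	"path/filepath"
)

func configPath(dir, name string) string {
	// filepath.Join uses the OS-specific separator ("\" on Windows),
	// so the raw result differs between platforms.
	p := filepath.Join(dir, name)
	// filepath.ToSlash rewrites the OS separator to "/", giving the
	// same string on every platform.
	return filepath.ToSlash(p)
}

func main() {
	fmt.Println(configPath("cfg", "app.yaml")) // "cfg/app.yaml" on Linux and Windows
}
```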
Question
Hence my question: "Since behavior might change on different platforms because of such small mistakes, is it possible to run go test for a list of OS/architecture targets?" I can't imagine rebooting into Windows to test every commit manually, and I don't think discipline is the answer. If it were, we wouldn't need to test things at all.
Search attempts
A quick search on Google and Stack Overflow for "golang cross-platform tests" didn't return any useful results. Am I missing something, or is my approach to this problem wrong?
First edit
Most comments pointed out that the only way to test the behavior of an executable on a given platform is... to test it on that platform (in a multi-stage CI/CD pipeline, for example). This is so obvious that there may well be no other way to achieve it, I know.
But triggering a parallel CI/CD job on every platform for every commit (of partially untested code) doesn't sound satisfying to me. It IS the only way to know for sure that the code behaves as expected on every targeted platform, but I'm wondering if anyone has stumbled on this issue and found a pre-CI/CD solution to it.
Though it might be the only way to get conclusive test results, it implies triggering CI/CD with parallel tests on each platform. I was looking for a solution on the developer machine, before committing untested code.
You can install a local CI/CD tool which would, on (local) commit, trigger those tests.
A local GitLab, for instance, can run tests on multiple platforms simultaneously (since GitLab 11.5).
But that implies at least a Docker image in order to test on Windows from your Linux dev environment.
With Go alone, however, as mentioned in "Design and unit-test cross-platform application":
It's not possible to run go test for a target system that's different from the current system.
If you are trying to test a different CPU architecture, you should be able to do it with QEMU. This blog post explains how.
If you are trying to test a different OS, you can probably do everything you need to do in Docker containers. You could use a tool like VSCode’s remote development toolkit to easily dev/build/test a project in a specific container, or you could write a custom Makefile that calls the appropriate Docker commands when you run make test (allowing you to run tests in multiple OSes with one command).
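One cheap, developer-machine check worth adding to the above: cross-compile for every target platform before committing. It will not catch behavioral differences such as the path-separator bug, but it does catch platform-specific compile errors without any CI round trip. A hedged sketch, where the target list is an assumption:

```go
// Cross-compilation smoke test: shells out to "go build" with GOOS/GOARCH
// set for each target platform. It only proves the code compiles for those
// targets; behavior still has to be verified on a real runner, container,
// or VM for each OS. Assumes this test file lives in the module root so
// that "./..." covers the whole module.
package build_test

import (
	"os"
	"os/exec"
	"testing"
)

func TestCrossCompiles(t *testing.T) {
	targets := []struct{ goos, goarch string }{
		{"linux", "amd64"},
		{"windows", "amd64"},
		{"darwin", "arm64"},
	}
	for _, tgt := range targets {
		t.Run(tgt.goos+"_"+tgt.goarch, func(t *testing.T) {
			cmd := exec.Command("go", "build", "./...")
			cmd.Env = append(os.Environ(),
				"GOOS="+tgt.goos, "GOARCH="+tgt.goarch, "CGO_ENABLED=0")
			if out, err := cmd.CombinedOutput(); err != nil {
				t.Errorf("build for %s/%s failed: %v\n%s",
					tgt.goos, tgt.goarch, err, out)
			}
		})
	}
}
```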

Any suggestions on software development along the lines of quality assurance?

SUMMARY
The software I am working on is an engineering analysis tool for renewable energy systems. It evaluates power outputs and things of that nature.
My current job task has been to test different configurations of these systems (e.g. using an evaporative cooling condenser instead of the default radiative cooling condenser, via a drop-down menu in the user interface) to ensure that the release version of the software functions as it should across the board.
So far the process has been very simple: change one factor of the base model and check, via the generated outputs, whether the code functions as it should.
How I need to pivot
But now we want to start testing combinations of configuration changes. This new workflow will be very tedious and time consuming: we are considering a factorial design approach to map out a plan of attack for which configurations to test, to ensure that the code functions properly when multiple things are changed at once. This could create many different configurations that I would need to test manually.
All in all, does anyone have any suggestions for an alternative approach? I've been reading up on alternative software testing methods, but am not entirely sure whether things like regression testing, smoke testing, or sanity checks are better suited to this scenario.
Thanks!
EDIT
The software I am testing is developed in Visual Studio, where I am using the Google Test framework to test my system configurations manually.
Currently, each test that I create for a particular concentrated solar power system configuration requires me to manually find the difference in code (via WinMerge) between the default configuration (no changes made) and the alternative configuration. I then use those code differences in the Google Test framework to simulate what the alternative configuration should output, testing it against the accepted output values.
It's only going to get more complicated, with an element of manual user-interface work needed ... or so it seems to me.
How can I automate such a test suite when I'm required to do so much back-end work?
As I understand it, avoiding the manual effort of testing so many combinations calls for a test automation tool. If the software you are testing is browser based, then Selenium is a good candidate; but if the tool runs as a desktop application on Windows or Mac, then some other automation tool that supports Win/Mac apps would be needed. The idea is to create test suites covering the different combinations and set the expected results once. Once the suite is ready, it can be run after any change to the software to verify that all the combinations still work as expected, without any manual work. There is, however, an effort involved in creating the test suite in the first place, and in maintaining it when new scenarios occur or the expected results need to be modified.
It would be a pain to test all those combinations manually each time; test automation can certainly ease that.
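To make the combinatorial idea concrete, here is a sketch of generating the cross product of configuration options and driving one set of checks over every combination. It is written in Go only because that is the language used elsewhere on this page; the option lists and the simulate stub are invented placeholders. Google Test expresses the same pattern natively with value-parameterized tests (TEST_P together with testing::Combine), which would let the existing C++ suite cover every combination from a single test body.

```go
// Illustration only: enumerate the cross product of configuration options
// and run the same checks over each combination as a subtest. The option
// lists and the simulate stub are hypothetical placeholders.
package config_test

import (
	"fmt"
	"testing"
)

var condensers = []string{"radiative", "evaporative"}
var storageOptions = []string{"none", "two-tank"}

// simulate stands in for running the analysis tool with a given
// configuration and returning its power output.
func simulate(condenser, storage string) float64 { return 1.0 }

func TestAllConfigurations(t *testing.T) {
	for _, c := range condensers {
		for _, s := range storageOptions {
			t.Run(fmt.Sprintf("%s_%s", c, s), func(t *testing.T) {
				out := simulate(c, s)
				// Compare against the accepted value for this combination
				// (here just a sanity bound, since the real expected values
				// are project-specific).
				if out <= 0 {
					t.Errorf("%s/%s: unexpected output %v", c, s, out)
				}
			})
		}
	}
}
```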

How Do I Know That I'm Not Breaking Anything During Refactoring?

I've started my first experience refactoring a huge system and writing unit tests for it, but I am scared that I'm breaking the code without knowing it.
I studied "The Art of Unit Testing" and "Working Effectively with Legacy Code" to find a solution, and my next plan is to stop refactoring for a while and write some integration tests (I have selected the FitNesse tool for integration testing) to run every time I change something.
I am just wondering whether anyone else has had the same experience. Do you think integration testing can be a good solution for this issue? Do you have a better idea?
I also checked this question (How can I check that I didn't break anything when refactoring?), but my situation is different from that one, because there are no unit tests available and I am here to write them.
Integration testing is part of a good solution for refactoring. However, some problems introduced by the refactoring will only show up once you have deployed the project.
A better idea would be to incorporate the integration testing into a continuous delivery strategy. This means you should have a clean and practical approach to building and deploying the project as often as possible to a near-identical environment while refactoring it. The book Continuous Delivery: Reliable Software Releases through Build, Test, and Deployment Automation is a good resource. Here is one of the antipatterns it describes (pages 7-9):
Antipattern: Deploying to a Production-like Environment Only after Development Is Complete
In this pattern, the first time the software is deployed to a production-like environment (for example, staging) is once most of the development work is done...
Once the application is deployed into staging, it is common for new bugs to be found...
The remedy is to integrate the testing, deployment, and release activities into the development process. Make them a normal and ongoing part of development so that by the time you are ready to release your system into production there is little to no risk, because you have rehearsed it on many different occasions in a progressively more production-like sequence of test environments. Make sure everybody involved in the software delivery process, from the build and release team to testers to developers, work together from the start of the project.
At the end of the day, this is the problem of working with Legacy Code.
Integration tests are your best bet, but to write them so that they correctly meet your needs, you would need to know the original intent of the code, which often isn't clear, because there are often hidden requirements.
There are no ideal solutions.
Although the previous answers are very good, I'd like to add that unit tests are exactly for this. In our project, when we refactor each other's components, it's mandatory to run the already existing unit tests prepared by the original developer, plus new ones, before committing to version control. Besides that, it's a good approach to have smoke tests run on every check-in, and of course integration, regression, etc. afterwards.
UPDATE
I'm in the exact same situation - chained to maintenance. Tools can vary greatly depending on the needs, from web and unit testing up to SOA and server testing. If you provide more detailed info about your SUT, I'll gladly try to help.

Unit testing installation of services

Our installer program is going to be installing a number of system services, under both Windows and UNIX, using JavaServiceWrapper. There will be a class responsible for creating JavaServiceWrapper config files, installing the services, etc.
Can I have some suggestions on how to unit-test this class?
I would not struggle too much with unit testing such a class; rather, I would go for integration / smoke tests. You need these anyway to verify that your installation works properly - preferably not only on your own machine, but also in the target environment, in real life, before you demonstrate it to your boss and most important client :-)
Update: I assume that the class in question would not contain much complicated logic, rather just gluing together different pieces supplied by other APIs. However, if this is not the case, and you feel you can't easily test a significant part of its functionality via integration tests, you can still try unit testing with good ol' mocks and/or dependency injection.
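To make the mocks/dependency-injection suggestion concrete, one common shape is to hide the parts that touch the operating system behind small interfaces, so the config-generation and orchestration logic can be unit tested with fakes while the real installation is still covered by the integration/smoke tests described above. The sketch below is in Go only because that is the language used elsewhere on this page; the interface names, the wrapper property, and the install command are illustrative assumptions, and the same shape translates directly to the Java class in the question.

```go
// Hypothetical sketch of dependency injection for an installer class: the
// file system and command runner are interfaces, so unit tests can
// substitute in-memory fakes and assert on the generated wrapper config
// and the issued install commands without touching the real OS.
package installer

import "fmt"

type FileWriter interface {
	WriteFile(path string, data []byte) error
}

type CommandRunner interface {
	Run(name string, args ...string) error
}

type ServiceInstaller struct {
	Files FileWriter
	Cmds  CommandRunner
}

// Install writes a wrapper config file and invokes a (hypothetical)
// wrapper install command for the named service.
func (s *ServiceInstaller) Install(service, javaClass string) error {
	conf := fmt.Sprintf("wrapper.java.mainclass=%s\n", javaClass)
	if err := s.Files.WriteFile(service+".conf", []byte(conf)); err != nil {
		return err
	}
	return s.Cmds.Run("wrapper", "--install", service+".conf")
}
```

A unit test would then inject fake implementations that record the written config and the commands that were run, and assert on those, while the integration tests still exercise the real wrapper binaries on each target OS.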
Lol! Found this last night: Environmentally Friendly Deployment. I really think the more complex your deployment, the more you need to validate your environment.

Is anyone actually successfully using MSTest across the team?

I've been using MSTest so far for my unit tests, and found that it would sometimes randomly break my builds for no reason. The builds would fail in VS but compile fine in MSBuild, with an error like 'option strict does not allow IFoo to cast to type IFoo'. I believe I have finally fixed it, but between the bug coming back, struggling to make it go away again, and getting little help from MS, it left a bad taste in my mouth. I also noticed, looking at this forum and other blogs and such, that most people are using NUnit, xUnit, or MbUnit. We are on VS2008 at work, BTW. So now I am looking to explore other options.
I'm working on moving our team to start doing TDD and real unit testing, and have some training planned, but first I would like to come up with a set of standard tools and best practices. To this end I've been looking online to come up with the right infrastructure for both a build server and dev machines. I was looking at the Typemock website, as I've heard great things about their mocking framework, and noticed that they seem to promote MSTest, and even have links to people moving TO MSTest from NUnit.
This is making me re-think my decision, so I guess I'm asking: is anyone using MSTest as part of their TDD infrastructure? Are there any known limitations if I want to integrate with a build/CI server, code coverage, or any other kind of TDD tool I may need? I did search these forums and mostly found people comparing the third-party frameworks to each other and not even giving MSTest much of a chance... Is there a good reason why?
Thanks for the advice
EDIT: Thanks to the replies in this thread, I've confirmed MSTest works for my purposes and integrates gracefully with CI tools and build servers.
But does anyone have any experience with FinalBuilder? This is the tool I'd like us to use for the build scripts, to avoid having to write a ton of XML compared to other build tools. Are there any limitations here that I should be aware of before committing to MSTest?
I should also note - we are using VSS =(. I'm hoping we can axe this soon - hopefully as part of, maybe even as the first step in, setting up all of this infrastructure.
At Safewhere we currently use MSTest for TDD, and it works out okay.
Personally, I love the IDE integration, but dislike the API. If it ever becomes possible to integrate xUnit.NET with the VS test runner, we will migrate very soon thereafter.
At least with TFS, MSTest works pretty well as part of our CI.
All in all I find that MSTest works adequately for me, but I don't cling to it.
If you are evaluating mock libraries, take a look at this comparison.
I've been using MS Test since VS 2008 came out, but I haven't managed to strong-arm anything like TDD or CI here at work, although I've messed with Cruise Control a little in an attempt to build a CI server on my local box.
In general I've found MS Test to be pretty decent for testing locally, but there are some pain points for institutional use.
First, MS Test adds quite a few things that probably don't belong in source control. The .VSMDI files are particularly annoying; just running MS Test creates anywhere from 1 to 5 of them and adds them to the solution file. Which means churn on your .SLN in source control, and churn of that sort is bad.
I understand the supposed point behind these extra files -- tracking test run history and such -- but I don't find them particularly useful for anything but a single developer. You should use your build service and CI for that sort of thing!
Second, you must either have Team Foundation Server to run your unit tests as part of CI, or have a copy of Visual Studio installed on your build server if you use, for example, CruiseControl.NET. See this Stack Overflow question for details.
In general, there's nothing wrong with MS Test. But going CI will not be as smooth as it could be.
I have been using MSTest very successfully in our company. We are currently setting up standardised build processes, and so far we have had good success with TeamCity. For continuous integration, we use out-of-the-box TeamCity configurations. For the actual release builds, we set up large MSBuild scripts that automate the entire process.
I really like MSTest because of the IDE integration, and also because all our devs can use it automatically without installing any third-party dependencies. I would not recommend switching just because of the problem you are experiencing. I have come full circle: we went over to NUnit and then came back again. These frameworks are all much the same at the end of the day, so pick the one that is easiest for most of your devs to get access to and start using.
As for what I suspect your problem might be: it sounds like an obscure problem I have had before, where incorrect DLL references (e.g. adding explicit references (via Browse) to projects in your solution instead of using project references) lead to out-of-date problems that only come up after clean checkouts or builds.
The other really suspect issue I have found before is when you have some visual component or control with a public property of some custom type that is being serialized in the form's .resx file. I typically need to flag such properties with DesignerSerializationVisibility.Hidden. This means the designer will not try to generate code to set the property value (which is typically some object graph). Just a thought. Could be way out.
I trust the tools; they don't really lie about there being a genuine problem. They only misreport it or present it as something completely obscure. It sounds to me like that's what you have here. I suspect this because the error message doesn't make sense if everything is in order, but it does make sense if some piece of code has loaded an out-of-date or modified version of the DLL at that point.
I have successfully deployed several FinalBuilder installations and the customers have been very happy with the outcome. I can highly recommend it.