I have started using Google Test and Google Mock in my embedded project.
Everything works fine, but currently I run the tests on my host machine while I develop code. That is the best option for TDD (it's fast), but I occasionally have to run all the code on the target board (an STM32), because on my PC I build with GCC while on the target I use armclang. So I need to validate that everything works fine on the target as well.
Here comes my question regarding Google Test:
Is it possible to easily redirect all debug messages (test results) to a peripheral other than std::ostream, e.g. UART or USB, so I could send the test results back to my host computer?
I guess so, but there is quite a lot of code and it would take me weeks to dig into it and make it work. Maybe someone has done this before and could give me hints about which files need to be modified and replaced with my hardware wrappers? And does it make sense at all, or is it too complex (maybe there are too many OS dependencies) and would it be better to use another framework like CppUTest?
Thanks in advance.
Problem description
I've developed a CLI in Go. My dev environment is Linux. I wrote unit tests and only produce releases of the executables when the tests pass. When I test or use my tool in a Linux environment, everything works fine.
My CI/CD pipeline is built around goreleaser to produce multi-platform executables. Since my app doesn't use exotic cross-platform functionality, I was quite confident that the Windows executable would work as expected. But it didn't.
Long story short, always normalize paths with filepath.ToSlash(). But this is not my question.
Question
Hence my question: "since behavior might change on different platforms because of such little mistakes, is it possible to run go test for a list of OS/architecture combinations?" I can't imagine rebooting into Windows to test every commit manually, and I don't think discipline is the answer. If it were, we wouldn't test things at all.
Search attempts
A quick search on Google and Stack Overflow for "golang cross-platform tests" didn't return any results. Am I missing something, or is my approach to this problem wrong?
First edit
Most comments pointed out that the only way to test the behavior of an executable on a given platform is... to test it on this platform (in a multi-stage CI/CD for example). This is so obvious that there might not be a way to achieve it otherwise, I know.
But triggering a parallel CI/CD job on every platform for every commit (of partially untested code) doesn't sound satisfying to me. It IS the only way to know for sure that the code behaves as expected on every targeted platform, but I'm wondering if anyone has stumbled on this issue and found a pre-CI/CD solution to it.
Though it might be the only way to get conclusive test results, it implies triggering CI/CD with parallel tests on each platform. I was looking for a solution on the developer machine, before committing untested code.
You can install a local CI/CD tool which would, on (local) commit, trigger those tests.
A local GitLab, for instance, can run tests on multiple platforms simultaneously (since GitLab 11.5).
But that implies at least a Docker image in order to test on Windows from your Linux dev environment.
With Go alone, however, as mentioned in "Design and unit-test cross-platform application":
It's not possible to run go test for a target system that's different from the current system.
If you are trying to test a different CPU architecture, you should be able to do it with QEMU. This blog post explains how.
If you are trying to test a different OS, you can probably do everything you need to do in Docker containers. You could use a tool like VSCode’s remote development toolkit to easily dev/build/test a project in a specific container, or you could write a custom Makefile that calls the appropriate Docker commands when you run make test (allowing you to run tests in multiple OSes with one command).
I'm developing a network client library in C# that can run either in SSL or plain-text mode. Both need to be supported. I've found a few quirks when working in one mode or another that can appear unexpectedly when testing something else. I've found it very useful to run the tests in either plain-text or SSL mode to diagnose problems.
I would like to run the library in both plain-text AND SSL modes on my CI server. I can do so quite easily by starting dotnet test with an environment variable that describes which mode to run, but this then yields duplicate test outputs in my CI software, which is currently AppVeyor but will soon be TeamCity. If a unit test failed in SSL mode but not in plain-text mode, it would not be easy to tell them apart.
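Roughly what that looks like at the moment (a sketch only; the variable name and test shape here are just examples, not the real code):

using System;
using Xunit;

public class ConnectionTests
{
    // The CI configuration sets this variable before launching "dotnet test",
    // and every test picks the mode up from it.
    private static readonly bool UseSsl =
        string.Equals(Environment.GetEnvironmentVariable("CLIENT_TEST_MODE"),
                      "ssl", StringComparison.OrdinalIgnoreCase);

    [Fact]
    public void Can_connect()
    {
        // ... build the client with UseSsl and assert on the handshake
    }
}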
At the moment, I'm thinking the best way to do this would be to prefix the name of each test case reported to the CI software, with the run type, but I couldn't find a way to do that. I'm open to other suggestions however.
Other ideas I've had:
Rewriting all tests to use [Theory] - this seems like a great deal of work, and I wouldn't be able to do set-up using constructors. I also have a number of existing [Theory] tests which I'd have to figure out somehow.
Setting up a different CI configuration for each distinct test run - this would be fine now but if I have to run the same tests against different versions of the same software, I could end up with 10 or more configurations.
In this case, neither feels like the correct way to solve the problem and I hope there's a more elegant solution as I'm sure I'm not the only one with this issue.
With xUnit you can use traits or categories to achieve this:
http://www.brendanconnolly.net/organizing-tests-with-xunit-traits/
I know it can definitely be done in TeamCity
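A rough sketch of what that could look like (the trait name "Mode" and its values are just placeholders here):

using Xunit;

public class HandshakeTests
{
    [Fact]
    [Trait("Mode", "PlainText")]
    public void Connects_over_plain_text()
    {
        // ... arrange/act/assert against the plain-text endpoint
    }

    [Fact]
    [Trait("Mode", "SSL")]
    public void Connects_over_ssl()
    {
        // ... arrange/act/assert against the SSL endpoint
    }
}

You can then select one group per run, for example with the xUnit console runner's -trait "Mode=SSL" switch (or the equivalent filter of whatever runner your CI uses), so each CI step reports a clearly separated set of results.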
After a very inspiring training course on TDD and BDD, I am trying to implement the methodology using MSTest and SpecFlow. But I have a question I'm stuck on:
I've written Acceptance Tests to validate a subsystem that we are working on.
Our system is a little distributed:
there is a 3rd party computer
with its own application running freely
with a third-party database that we are accessing over TCP/IP
However, my SpecFlow scenario seems too specialized for my own development set-up: it contains inputs that are valid only for me. In the example below, the IP address is essentially only reachable from my machine, and the target directory is a directory on my machine.
The accredited Tester/Validator, or the Product Owner, will likely not be able to run the same test scenario, since they won't have access to this machine. My developer colleagues may not be able to either.
#lastOne
Scenario: Get latest 3rdParty OCR Data into specified directory
Given I indicate 'database' as the databaseName of third party computer
And I indicate '12.126.42.21' as the ipAddress of the third party computer
And I indicate 'user' as the databaseUser in third party computer
And I indicate 'c:\Temp\test_ocr\' as the destination path where to put the ocr data
And I indicate '2013020800009E' as the truck identifier to be associated with ocr data
When I call the OCR Application program
Then the destination path should contain a correctly named xml file, with validated xml data, and jpg files for the ocr data.
I am afraid I have some misconceptions about BDD. Am I too specific in my scenario?
If yes, where should I stop ?
I'm not sure your question is BDD specific, but it's still a good one.
I would normally recommend that all development is done with a continuous integration server running your tests every time you check in, even for a private project that you work on alone. Even my own personal projects get this, because TeamCity is free and the kids' desktop at home is idle when I check in. The importance of this is more obvious if you work in a team, because it stops there ever being any doubt that the latest source code you get will build.
But it also stops the problem you have. You can tell very quickly when something is too specific because it doesn't work on both your own personal machine and the build machine. These problems exist whether you work in BDD, TDD, ATDD or any kind of testing.
Looking at your example above, I'd say it is very specific and also very brittle. If the third-party PC is switched off one day, all your tests fail. If you were using SpecFlow to run unit tests then I'd recommend mocking most of your code so you can test without calling the test PC, but your example reads more like you are trying to do system/integration testing.
Instead of specifying all your parameters individually, why not give a name to the whole bundle?
Given I'm using the test pc
You can then set many of them up in the binding and, if you need to, tailor them so the tests still pass:
[Given(@"I'm using the test pc")]
public void GivenImUsingTheTestPc()
{
    if (Environment.MachineName == "d1234")
    {
        ipAddress = "1.2.3.4";
        // ... set up the rest of the bundle here
    }
}
This obviously only moves the brittleness, but at least it keeps you going for now.
I've been using MSTest so far for my unit tests, and found that it would sometimes randomly break my builds for no reason. The builds would fail in VS but compile fine in MSBuild, with an error like 'option strict does not allow IFoo to cast to type IFoo'. I believe I have finally fixed it, but after the bug came back, the struggle to make it go away again, and little help from MS, it left a bad taste in my mouth. I also noticed, looking at this forum and other blogs, that most people are using NUnit, xUnit, or MbUnit. We are on VS2008 at work, BTW. So now I am looking to explore other options.
I'm working on moving our team to start doing TDD and real unit testing and have some training planned, but first I would like to come up with a set of standard tools and best practices. To this end I've been looking online to work out the right infrastructure for both a build server and dev machines... I was looking at the Typemock website, as I've heard great things about their mocking framework, and noticed that they seem to promote MSTest and even have some links about people moving TO MSTest from NUnit.
This is making me re-think my decision. So I guess I'm asking: is anyone using MSTest as part of their TDD infrastructure? Any known limitations if I want to integrate it with a build/CI server, code coverage, or any other kind of TDD tool I may need? I did search these forums and mostly found people comparing the 3rd-party frameworks to each other and not even giving MSTest much of a chance... Is there a good reason why?
Thanks for the advice
EDIT: Thanks to the replies in this thread, I've confirmed MSTest works for my purposes and integrates gracefully with CI tools and build servers.
But does anyone have any experience with FinalBuilder? This is the tool that I'd like us to use for the build scripts, to avoid having to write a ton of XML compared to other build tools. Any limitations here that I should be aware of before committing to MSTest?
I should also note: we are using VSS =(. I'm hoping we can ax this soon - hopefully as part of, maybe even the first step of, setting up all of this infrastructure.
At Safewhere we currently use MSTest for TDD, and it works out okay.
Personally, I love the IDE integration, but dislike the API. If it ever becomes possible to integrate xUnit.NET with the VS test runner, we will migrate very soon thereafter.
At least with TFS, MSTest works pretty well as part of our CI.
All in all I find that MSTest works adequately for me, but I don't cling to it.
If you are evaluating mock libraries, take a look at this comparison.
I've been using MS Test since VS 2008 came out, but I haven't managed to strong-arm anything like TDD or CI here at work, although I've messed with Cruise Control a little in an attempt to build a CI server on my local box.
In general I've found MS Test to be pretty decent for testing locally, but there are some pain points for institutional use.
First, MS Test adds quite a few things that probably don't belong in source control. The .vsmdi files are particularly annoying; just running MS Test creates anywhere from 1 to 5 of them and adds them to the solution file, which means churn on your .sln in source control, and churn of that sort is bad.
I understand the supposed point behind these extra files -- tracking test run history and such -- but I don't find them particularly useful for anything but a single developer. You should use your build service and CI for that sort of thing!
Second, you either must have Team Foundation Server to run your unit tests as part of CI, or you have to have a copy of Visual Studio installed on your build server if you use, for example, Cruise Control.NET. See this Stack Overflow question for details.
In general, there's nothing wrong with MS Test. But going CI will not be as smooth as it could be.
I have been using MSTest very successfully in our company. We are currently setting up standardised build processes within the company and, so far, we have had good success with TeamCity. For continuous integration we use out-of-the-box TeamCity configurations. For the actual release builds, we set up large MSBuild scripts that automate the entire process.
I really like MSTest because of the IDE integration and also because all our devs can automatically use it without installing any 3rd-party dependencies. I would not recommend switching just because of the problem you are experiencing. I have come full circle: we went over to NUnit and then came back again. These frameworks are all much the same at the end of the day, so pick the one that is easiest for most of your devs to get access to and start using.
What I suspect your problem might be... it sounds like an obscure problem I have had before, where incorrect references to DLLs (e.g. adding explicit references (via Browse) to projects in your solution instead of using project references) lead to out-of-date problems that only show up after clean checkouts or builds.
The other really suspect issue I have found before is when you have some visual component or control that has a public property of some custom type that is being serialised into the form's .resx file. I typically need to flag such a property with an attribute that says SerializationVisibility.Hidden. This means the IDE will not try to generate setters for the property value (which is typically some object graph). Just a thought; it could be way out.
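If I remember the name right, the attribute in question is DesignerSerializationVisibility from System.ComponentModel; roughly like this (the control and property names are made up for illustration):

using System.ComponentModel;
using System.Windows.Forms;

public class StatusPanel : UserControl
{
    // Stops the Windows Forms designer from trying to serialise this
    // property (and the object graph behind it) into the designer/.resx files.
    [Browsable(false)]
    [DesignerSerializationVisibility(DesignerSerializationVisibility.Hidden)]
    public ConnectionSettings Settings { get; set; }
}

// Illustrative custom type held by the control.
public class ConnectionSettings
{
    public string Server { get; set; }
}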
I trust the tools, and they don't really lie about there being a genuine problem. They only misrepresent it or report it as something completely obscure. It sounds to me like you have one of these cases. I suspect this because the error message doesn't make sense if everything is in order, but it does make sense if some piece of code has loaded an out-of-date or modified version of the DLL at that point.
I have successfully deployed several FinalBuilder installations and the customers have been very happy with the outcome. I can highly recommend it.
I've recently started work on the Compact Framework and I was wondering if anyone had some recommendations for unit testing beyond what's in VS 2008. MSTest is ok, but debugging the tests is a nightmare and the test runner is so slow.
I see that NUnitLite on codeplex is an option, but it doesn't look very active; it's also in the roadmap for NUnit 3.0, but who knows when that will come out. Has anyone had any success with it?
What we've done that really improves our efficiency and quality is to multi-target our mobile application. That is to say, with a little bit of creativity, a few conditional compilation tags, and custom project configurations, it is possible to build a version of your mobile application that also runs on the desktop.
If you put all the business logic you need tested in a separate project/assembly, that layer can be tested very effectively using any of the desktop tools you are already familiar with.
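A rough illustration of the idea (the MOBILE symbol is just an example; in practice you define whatever symbols your two project configurations use):

// Business logic lives in its own assembly and stays platform neutral,
// so it can be exercised on the desktop with whatever test runner you like.
public class OrderCalculator
{
    public decimal Total(decimal subtotal, decimal taxRate)
    {
        return subtotal + subtotal * taxRate;
    }
}

// The thin platform-specific layer is the only place that needs
// conditional compilation between the desktop and CF builds.
public static class BuildInfo
{
    public static string Describe()
    {
#if MOBILE
        return "Compact Framework build";   // symbol defined in the device configuration
#else
        return "Desktop build";
#endif
    }
}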
We use NUnitLite, although I think we did have to add some code to it in order for it to work.
One of the problems we found is that if you are using parts of the platform that only exist in CF, then you can only run those tests in NUnitLite on an emulator or a Windows Mobile device, which makes it hard to run the tests as part of an integrated build process. We got around this by adding a new test attribute that lets you disable the tests that would only run on the CF (typically these would be p/invoking out to some Windows Mobile-only DLL).
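Purely as an illustration of the idea (the attribute name is made up here, and wiring it into NUnitLite's runner so that marked tests are skipped on the desktop is the part we had to add code for):

using System;

// Marker for tests that hit CF-only APIs; the patched runner can skip any
// test method carrying it when running outside the Compact Framework.
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false)]
public sealed class CompactFrameworkOnlyAttribute : Attribute
{
    // True when the current runtime is Windows CE / Windows Mobile.
    public static bool IsCompactFrameworkHost
    {
        get { return Environment.OSVersion.Platform == PlatformID.WinCE; }
    }
}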