My unit test needs a remote server address to start up. The server address is not fixed.
If I put the address in my .go test source, I have to change it every time I run the test.
If I put it in a system environment variable, it is very inconvenient to change from the VSCode GUI. (I mean I start the test from the VSCode menu.)
I know I can put environment variables in launch.json to set them up before running or debugging my program, but here I just want to run unit tests.
Is there a good way to change the parameters without restarting VSCode?
You can add the following snippets to the VSCode settings.json to specify environment variables just for go test runs:
Defining variables directly:
"go.testEnvVars": {
"MY_VAR": "my value"
},
Or using a dedicated file (in my example called test.env, in the root of the project workspace) containing the environment variables in MY_VAR="my value" format, one variable per line:
"go.testEnvFile": "${workspaceFolder}/test.env",
Also note that unit tests (as the name suggests, they test one unit of code) should generally not depend on any external services or resources. Everything except the logic under test should be provided in the form of mocks.
Related
We have GitHub Actions set up for PRs to automatically run unit tests and reject the merge if any test fails.
I added an additional environment variable in a new PR, and the GitHub Action does not pick up this new environment variable, as it doesn't exist in the main branch.
It got me thinking: since unit tests should run as independently as possible, should we explicitly set the $_ENV values somewhere, possibly in setUp?
I have a test project I'm using to practice deploying to Azure.
Located:
https://github.com/EdLichtman/HelloAzureCI
When I use ReSharper to run the NUnit tests, all of them pass except for the environment-specific test case, as should be expected.
However, when I run deploy.cmd on my local computer, all 4 tests fail with "Object reference not set to an instance of an object."
One of my unit tests is Assert.AreEqual(1, 1), and even that throws a NullReferenceException, which leads me to think that Assert itself is not an instance of an object.
Why is this such a problem? Can anyone else recreate it?
There are a few odd things here, but the main one is that you are trying to run NUnit 3.7.1 tests using an NUnit V2 console runner (2.6.2). This is never going to work. My suggestion is that whenever you have trouble with running NUnit in a remote environment or using a third-party runner, you fall back to using the console runner locally. Even if that's not your preferred mode of working, you will usually be able to figure out what's wrong more easily if you eliminate as many middlemen as possible.
If you actually want to run under vsconsole, then you need to install the nunit3-vs-adapter NuGet package and point to its location on your command line. Note that the adapter, even though it is ours, constitutes another middleman, so debugging with nunit3-console is still a good choice.
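For reference, running the NUnit 3 console runner locally might look like this (the assembly path and test name are illustrative; --test selects fully qualified test names):

nunit3-console.exe bin\Debug\HelloAzureCI.Tests.dll --test=HelloAzureCI.Tests.SomeTest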
I've got some unit tests (C++) running in the Visual Studio 2012 test framework.
From what I can tell, the tests are running in parallel. In this case the tests are stepping on each other, and I do not want to run them in parallel!
For example, I have two tests in which I have added breakpoints and they are hit in the following order:
Test1 TEST_CLASS_INITIALIZE
Test2 TEST_CLASS_INITIALIZE
Test2 TEST_METHOD
Test1 TEST_METHOD
If the init for Test1 runs first, then all of its test methods should run to completion before anything related to Test2 is launched!
After doing some internet searches, I am sufficiently confused. Everything I am reading says Visual Studio 2012 does not run tests concurrently by default, and that you have to jump through hoops to enable it. We certainly have not enabled it in our project.
Any ideas on what could be happening? Am I missing something fundamental here?
Am I missing something fundamental here?
Yes.
You should never assume that another test case will work as expected. This means it should never be a concern whether the tests execute synchronously or asynchronously.
Of course, there are test cases that expect some fundamental part of the code to work; this might be your own code or part of the framework/library you work with. When it comes to this, the programmer should know what data or object to expect as a result.
This is where mock objects come into play. Mock objects allow you to mimic a part of the code and ensure that the object provides exactly what you expect, so you don't rely on other (time-consuming) services, such as HTTP requests, file streams, etc.
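As a minimal sketch, a hand-rolled mock in C++ could look like this (the interface and names are made up for illustration):

#include <string>

// Interface that the code under test depends on.
struct IConfigSource {
    virtual ~IConfigSource() = default;
    virtual std::string read() = 0;
};

// Mock that returns a canned value instead of touching the network or disk.
struct MockConfigSource : IConfigSource {
    std::string read() override { return "host=127.0.0.1"; }
};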
When a project becomes complex, the setup takes a fair number of lines and code starts duplicating. The solution to this is Setup and TearDown methods. The naming convention differs from framework to framework: Setup might be called beforeEach or TestInitialize, and TearDown can also appear as afterEach or TestCleanup. Names for NUnit, MSTest and xUnit.net can be found on the xUnit.net CodePlex page.
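In Google Test, for instance, the pair is spelled SetUp and TearDown on a test fixture; a toy fixture might look like this:

#include <gtest/gtest.h>

class ConfigTest : public ::testing::Test {
protected:
    void SetUp() override {
        // Runs before each TEST_F in this fixture; build shared state here.
        value = 42;
    }
    void TearDown() override {
        // Runs after each TEST_F; release resources here.
    }
    int value = 0;
};

TEST_F(ConfigTest, UsesFixtureState) {
    EXPECT_EQ(42, value);
}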
A simple example application:
it should read a config file
it should verify that the config file is valid
it should update the user's config
The way I would go about building and testing this:
have a method to read the config and a second one to verify it
have a getter/setter for the user's settings
test whether the read method returns the desired result (an object, a string, or however you've designed it)
create a mock config matching what you expect from the read method and test whether the verify method accepts it
at this point, create multiple mock configs that exercise all possible scenarios, check that verification behaves correctly for each of them, and fix the code accordingly. This is also how you increase code coverage.
create a mock object of an accepted config, use the setter to update the user's config, then use the getter to check that it was set correctly (see the sketch after this list)
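A minimal sketch of the first few steps in Google Test; Config, readConfig, and isValid are placeholder names, not a real API:

#include <gtest/gtest.h>
#include <string>

struct Config { std::string host; };

// Hypothetical units under test.
Config readConfig(const std::string& text) { return Config{text}; }
bool isValid(const Config& c) { return !c.host.empty(); }

TEST(ConfigTest, ReadReturnsDesiredResult) {
    EXPECT_EQ("example.org", readConfig("example.org").host);
}

TEST(ConfigTest, VerifyAcceptsExpectedMockConfig) {
    Config mock{"example.org"};  // the config we expect read to produce
    EXPECT_TRUE(isValid(mock));
}

TEST(ConfigTest, VerifyRejectsInvalidMockConfig) {
    Config mock{""};             // scenario: an empty host must be rejected
    EXPECT_FALSE(isValid(mock));
}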
This is a basic principle of Test-Driven Development (TDD).
If the test suite is set up as described and all tests pass, all these parts, connected together, should work correctly. Additional tests, for example End-to-End (E2E) tests, aren't strictly necessary; I use them only to ensure that the whole application flow works and to catch errors (e.g. an HTTP connection error) easily.
I am using Code::Blocks to write my programs in C++ and I noticed the following. Both my main class and one of my unit test classes are in the same folder (say FolderName). From both of them, I call a method that reads a file in the same folder (FileName.txt). From main I call it like this, and it works fine.
obj.("FileName.txt");
From the test file, I need to give the full path of the file for it to work.
obj.("/home/myName/FolderName/FileName.txt");
I know there must be a way of setting up the unit test file so that it works like main, but I could not figure it out. I am not sure if this is important, but I am working on Linux.
My apologies if you've already figured this out, but I'll answer for anyone else who may be wondering.
Code::Blocks creates an executable for your unit test and stores it in /home/myName/FolderName/bin/unitTest/. Code::Blocks runs this executable when you execute your unit test. Therefore, your working directory is not /home/myName/FolderName/ but /home/myName/FolderName/bin/unitTest/.
You're using gtest, but regardless of which framework you use, there are a few ways to do what you're asking:
The best option is to use the relative path obj.("../../FileName.txt") (see the sketch after these options)
The other option is to copy FileName.txt into /home/myName/FolderName/bin/unitTest/ (or whatever you named your unit test build target). You can then simply use "FileName.txt" in your unit test.
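For instance, a gtest case that opens the file through the relative path might look like this (the test name is arbitrary):

#include <fstream>
#include <gtest/gtest.h>

TEST(FileTest, OpensInputRelativeToTestBinary) {
    // Two levels up from bin/unitTest/ back to the project folder.
    std::ifstream in("../../FileName.txt");
    EXPECT_TRUE(in.is_open());
}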
Cheers.
I'm just starting to use QTestLib. I have gone through the manual and tutorial. Although I understand how to create tests, I'm just not getting how to make those tests convenient to run. My unit test background is NUnit and MSTest. In those environments, it was trivial (using a GUI, at least) to alternate between running a single test, all tests in a single test class, or all tests in the entire project, just by clicking the right button.
All I'm seeing in QTestLib is that either you use the QTEST_MAIN macro to run the tests in a single class, then compile and run each file separately; or you use QTest::qExec() in main() to define which objects to test, and then manually change that and recompile whenever you want to add or remove test classes.
I'm sure I'm missing something. I'd like to be able to easily:
Run a single test method
Run the tests in an entire class
Run all tests
Any of those would call the appropriate setup / teardown functions.
EDIT: Bounty now available. There's got to be a better way, or a GUI test runner that handles it for you or something. If you are using QtTest in a test-driven environment, let me know what is working for you. (Scripts, test runners, etc.)
You can run only selected test cases (test methods) by passing the test names as command-line arguments:
myTests.exe myCaseOne myCaseTwo
It will run all inits/cleanups too. Unfortunately, there is no support for wildcards/pattern matching, so to run all cases beginning with a given string (I assume that's what you mean by "running the tests in an entire class"), you'd have to create a script (Windows batch/bash/Perl/whatever) that calls:
myTests.exe -functions
parses the results, and runs the selected tests using the first syntax.
To run all cases, just don't pass any parameters:
myTests.exe
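If you do go the QTest::qExec() route to bundle several test classes into one executable, a sketch of main() might look like this (TestParser and TestWriter are placeholder class names; forwarding argc/argv keeps the command-line selection above working, though a named test function must exist in each class it is applied to):

#include <QtTest>
#include "TestParser.h"  // hypothetical test classes
#include "TestWriter.h"

int main(int argc, char *argv[])
{
    int status = 0;
    // Each qExec call runs one test object and forwards the command line.
    TestParser parser;
    status |= QTest::qExec(&parser, argc, argv);
    TestWriter writer;
    status |= QTest::qExec(&writer, argc, argv);
    return status;
}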
The three features requested by the OP are nowadays integrated into Qt Creator.
The project is automatically scanned for tests, and they appear in the Tests pane (bottom left of the Qt Creator window).
Each test and its corresponding data can be enabled by clicking the checkbox.
The context menu allows you to run all tests, all tests of a class, only the selected tests, or only one test.
As requested.
The test results are available from Qt Creator too. A color indicator shows pass/fail for each test, along with additional information like debug messages.
In combination with Qt Creator, using the QTEST_MAIN macro for each test case works well, as each compiled executable is invoked by Qt Creator automatically.
For a more detailed overview, refer to the Running Autotests section of the Qt Creator Manual.