How to unregister a test case and register a particular test case in GTest - C++

In GTest, as we know, the moment control reaches a TEST or TEST_F definition, the test case is registered with GTest. But according to my requirement, after GTest registers all the test cases,
I need to search whether a given test case name is in the list or not.
If the test case name is there, then I need to unregister all the test cases and register only
the found test case.
How can I do that?
Suppose
TEST_F(testcasename, testname){}
TEST_F(testcasename1, testname1){}
TEST_F(testcasename3, testname3){}
..
..
TEST_F(testcasenameN, testnameN){}
Suppose I am searching for "testcasename3" among the registered test case names, and it is available.
Now I want GTest to execute only the found test case, not all of them.
How can I do that?
Any answer is appreciated.

This can be done from the command line, as described in the advanced guide, so
./foo_test --gtest_filter=testcasename3.*
would run only testcasename3 and all its tests. The command-line syntax is extensive; tests can be included and excluded using wildcards. See the advanced documentation for more information.
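The same thing can also be done programmatically, which matches the original requirement of searching the registered test cases first. Below is a minimal sketch, assuming a reasonably recent GTest: it walks the registry via ::testing::UnitTest::GetInstance() and, if the wanted name is found, narrows the run with the filter flag (the accessors are spelled total_test_case_count/GetTestCase in older releases and total_test_suite_count/GetTestSuite in newer ones). Note that GTest never truly unregisters tests; filtered-out tests are simply not run.
#include <gtest/gtest.h>
#include <string>

int main(int argc, char** argv) {
    ::testing::InitGoogleTest(&argc, argv);

    const std::string wanted = "testcasename3";  // the name being searched for
    const ::testing::UnitTest& registry = *::testing::UnitTest::GetInstance();

    // Search the registered test case names.
    bool found = false;
    for (int i = 0; i < registry.total_test_case_count(); ++i) {
        if (wanted == registry.GetTestCase(i)->name()) {
            found = true;
            break;
        }
    }

    // Equivalent to passing --gtest_filter=testcasename3.* on the command line:
    // every other registered test is skipped rather than unregistered.
    if (found) {
        ::testing::GTEST_FLAG(filter) = (wanted + ".*").c_str();
    }

    return RUN_ALL_TESTS();
}
This only changes which tests run; the full list stays registered with the framework.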

Related

Why are test assemblies not being filtered in a VSTS Azure build pipeline despite the test assembly patterns?

Here are my test assembly patterns (configuration):
**\$(BuildConfiguration)\*test*.dll
!**\obj\**
!**\$(BuildConfiguration)\*Integration*
After triggering the build, here is the log, where the integration test assembly is also present (this file should have been filtered out and should not be here):
2019-04-23T13:10:33.6689787Z C:\VSTSAgent\A1\_work\1\s\myapp\myapp.Services.Test\bin\Release\myapp.Services.Test.dll
2019-04-23T13:10:33.6690018Z C:\VSTSAgent\A1\_work\1\s\myapp\myapp.Services.Integration.Test\bin\Release\myapp.Services.Integration.Test.dll
Because of this, integration test cases are also running, and I want to run only the unit test cases.
Any idea?
I've found the solution. Here is my latest configuration, which is now working exactly as expected:
**\$(BuildConfiguration)\*test*.dll
!**\obj\**
!**\myapp\*Integration*\**
!**\*Microsoft.Owin.Testing.dll*
!**\$(BuildConfiguration)\*Integration.Test*.dll
!**\$(BuildConfiguration)\*Microsoft.VisualStudio.TestPlatform*
!**\$(BuildConfiguration)\*MSTest*
!**\$(BuildConfiguration)\*Microsoft.Owin.Testing.dll*
!**\$(BuildConfiguration)\*Microsoft.VisualStudio.QualityTools.UnitTestFramework.dll*
Note the line that excludes any path containing this pattern:
!**\myapp\*Integration*\**
The assembly below matches it and is therefore not included in the result:
2019-04-23T13:10:33.6690018Z C:\VSTSAgent\A1\_work\1\s\myapp\myapp.Services.Integration.Test\bin\Release\myapp.Services.Integration.Test.dll

How to execute a single unit test case using Activator?

We are using the Play Framework with Activator as the build tool, and we have many unit test cases written in our project. Can you please tell me how to run a single unit test case using Activator?
I tried running this command on the command line:
activator test-only SampleNodeServiceImplTest
but this command runs all the test cases in the project, and I want to run only one specific unit test case.
It looks like you have to enclose the argument to activator in quotes, that is, activator "test-only package.SampleNodeServiceImplTest" should work (see https://groups.google.com/forum/#!topic/play-framework/xGDhQSz5cTs).
Instead of test-only you can also use testOnly, as follows:
activator "testOnly package.SampleNodeServiceImplTest"
Giving the full package path is also very important.

Excluding tests from a TFS build

I want to exclude some tests from my continuous integration build, but I haven't found a way to do so.
One of the things I tried was to set the priority of those tests to -2 and then specify Minimum Test Priority = -1 on the build, but it still ran those tests.
Any help would be greatly appreciated.
Instead of using "Test Lists" that have been described, you should use the "Test Category" method. The test lists & VSMDI functionality have actually been deprecated in Visual Studio 2010 and Microsoft may remove the feature completely in a future version of Visual Studio.
If you'd like some more information about how to use test categories especially with your automated build process, check out this blog post: http://www.edsquared.com/2009/09/25/Test+Categories+And+Running+A+Subset+Of+Tests+In+Team+Foundation+Server+2010.aspx
You can also exclude test categories from running by specifying the ! (exclamation point) character in front of the category name to further define your filter.
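For example (an illustrative pattern only; the exact filter field and syntax vary by TFS version), a category filter such as Unit&!Integration would run the tests categorized as Unit while excluding anything also categorized as Integration.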
If you are using MSTest, you can create a Test List for the tests that you need in your continuous integration build.
With MSTest, you can simply create two test projects (assemblies) and specify only one of them in the build config to use for testing. In MSBuild, this was the way to go. For the new WF-based build definitions, I currently don't have a sample at hand:
<ItemGroup>
  <!-- TEST ARGUMENTS
    If the RunTest property is set to true, then the following test arguments will be used to run
    tests. Tests can be run by specifying one or more test lists and/or one or more test containers.

    To run tests using test lists, add MetaDataFile items and associated TestLists here. Paths can
    be server paths or local paths, but server paths relative to the location of this file are highly
    recommended:

    <MetaDataFile Include="$(BuildProjectFolderPath)/HelloWorld/HelloWorld.vsmdi">
      <TestList>BVT1;BVT2</TestList>
    </MetaDataFile>

    To run tests using test containers, add TestContainer items here:

    <TestContainer Include="$(OutDir)\AutomatedBuildTests.dll" />
    <TestContainer Include="$(SolutionRoot)\TestProject\WebTest1.webtest" />
    <TestContainer Include="$(SolutionRoot)\TestProject\LoadTest1.loadtest" />

    Use %2a instead of * and %3f instead of ? to prevent expansion before test assemblies are built.
  -->
</ItemGroup>
<PropertyGroup>
  <RunConfigFile>$(SolutionRoot)\LocalTestRun.testrunconfig</RunConfigFile>
</PropertyGroup>
Tip: To use a generic build definition, we name all our test projects "AutomatedBuildTests", i.e. there is no difference between solutions. This way the build definition can be included in any existing build definition (or even be a common one) and always executes the right set of tests. It would be easy to prepend an "if exists" check so that a build definition only runs tests when a test assembly is present. We do not do this, because we want a build error when no test assembly is found: we absolutely want tests to run with all builds that use this definition.
My preference would be as above, using a Test List, but some people have had issues merging/editing the .vsmdi files... We end up with separate solutions and use a pattern match to execute all tests in the appropriate DLL.
In Visual Studio 2012 and later, you can configure your build definition using the Test case filter setting.
This setting is part of your build definition.
Open the build definition and navigate to the Process tab. In section 3. Test you can define multiple test sources, and for each test source you can specify a Test case filter.
You can find the details in this MSDN article: Running selective unit tests in VS 2012 RC using TestCaseFilter
I have copied the supported operators and some examples from this article.
Operators supported in the RC are:
1. = (equals)
2. != (not equals)
3. ~ (contains or substring, only for string values)
4. & (and)
5. | (or)
6. ( ) (parentheses for grouping)
Expressions can be created using these operators to form any valid logical condition. & (and) has higher
precedence than | (or) when the expression is evaluated.
E.g.
"TestCategory=NAR|Priority=1"
"Owner=vikram&TestCategory!=UI"
"FullyQualifiedName~NameSpace.Class"
"(TestCategory!=UI&(Priority=1|Priority=2))|(TestCategory=UI&Priority=1)"
Another possibility would be to include some test sources in one build definition and other (i.e. more or fewer) test sources in other build definitions.

VS2010 and Create Unit Tests... no tests generated

I'm trying to add some unit tests to an existing code base using Visual Studio 2010's unit test generator. However, in some cases, when I open a class, right-click --> Create Unit Tests..., and select the methods to generate tests for, it creates what is essentially a blank test. Are there situations where this can happen? In every case I select at least one public method to generate tests for, and all it generates is this:
using TxRP.Controllers; //The location of the code to be tested
using Microsoft.VisualStudio.TestTools.UnitTesting;
That's it. Nothing else. Strange, right?
I should note that this is all MVC 2 controller code, and I have been able to generate tests for other controllers with no problem; all my controllers follow pretty much the same format. No error seems to be thrown: it happily generates the empty file and adds it to the project as if everything were just fine.
Has anyone had experience with the same type of thing happening, and was there any answer found as to why?
UPDATE:
There is in fact an error during generation:
While trying to generate your tests, the following errors occurred:
Value cannot be null.
Parameter name: key
After some research, the only possible solution I found is that this error occurs if you're trying to generate tests into a test file that already exists. However, this solution is not working for me...
If you try to generate tests for a class which already has existing tests in another file in the project, it will just generate an empty file, as described above. Changing the filename is not sufficient, nor is using a different location within the project. Basically, it seems to enforce a one-test-file-per-class convention across the entire project.
This problem is caused by the previously generated test file having been moved to a folder other than the root folder of the test project.
Resolution:
1. Move the test file into the test project root folder.
2. Generate the new tests.
3. Move the test file back to the folder location you want in the test project.
I have no clue why they don't call it a BUG! In typical enterprise-level software development, it is more than a coincidence that multiple people generate unit tests for different methods of the same class at different points in time.
We always end up with this error, and it is not helping us in any way! It feels as if the "Create Unit Tests" context menu is of little use.
Error description:
"While trying to generate your tests, the following errors occurred:
Value cannot be null.
Parameter name: key"

CppUnit setup for C++

In CppUnit, we run unit tests as part of the build, in a post-build step. We will be running multiple tests as part of this. If any test case fails, the post-build step should not stop; it should go ahead, run all the test cases, and report a summary of how many test cases passed and failed. How can we achieve this?
Thanks!
The question is specific enough. You need a test runner. Encapsulate each test in its own behavior and class, and keep the test project separate from the tested code. Afterwards, just configure your XmlOutputter. You can find an excellent example of how to do this on the YoLinux website: http://www.yolinux.com/TUTORIALS/CppUnit.html
We use this approach to compile the test projects for our main projects and check whether everything is OK. After that, it all becomes a matter of maintaining your test code.
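For concreteness, here is a minimal sketch of such a runner, assuming the usual CppUnit test factory registry (the output file name cppunit_results.xml is arbitrary). The runner executes every registered test even when some fail, and the XML report carries the pass/fail summary:
#include <cppunit/extensions/TestFactoryRegistry.h>
#include <cppunit/ui/text/TestRunner.h>
#include <cppunit/XmlOutputter.h>
#include <fstream>

int main() {
    // Pick up every suite registered with CPPUNIT_TEST_SUITE_REGISTRATION.
    CppUnit::TextUi::TestRunner runner;
    runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());

    // Write the results (tests run, failures, errors) as XML.
    std::ofstream xml("cppunit_results.xml");
    runner.setOutputter(new CppUnit::XmlOutputter(&runner.result(), xml));

    // run() keeps going after individual test failures and returns true
    // only when all tests passed.
    bool allPassed = runner.run();

    // Return 0 unconditionally if the post-build step must never abort;
    // return 1 on failure if the build should flag failing tests instead.
    return allPassed ? 0 : 1;
}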
Your question is too vague for a precise answer. Usually, a unit test engine returns a code to say that it has failed (like a non-zero return code in the shell on Linux) or generates an output file with the results, and the calling system handles this. If you have written the calling system yourself (some home-made scripts), you have to add an option to continue test execution even if an error occurred. If you are using a tool like a continuous integration server, then you have to go through the docs and find the option that lets the run continue when tests fail.
A workaround is to write a script that returns an "OK" result even if the unit tests fail, but then you lose some automatic verification...
Be more specific if you want more clues.
My 2c.
I would just write your tests this way: instead of using the CPPUNIT_ASSERT macros (or similar), write them in regular C++ with some way of logging errors.
You could use a macro for this too, of course. Something like:
LOGASSERT( some_expression )
could be defined to execute some_expression and log the expression together with __FILE__ and __LINE__ if it fails. You can also log exceptions, of course, as well as expected ones that are not thrown, simply by checking for them in your tests (with macros if you want to log the expression that caused them along with __FILE__ and __LINE__).
If you are writing macros, I would advise you to limit the content of each macro to calling an inline function with extra parameters, as in the sketch below.
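A minimal sketch of what that could look like (LOGASSERT and logFailure are hypothetical names, not part of CppUnit): the macro captures the expression text and source location, and the inline function does the actual work.
#include <iostream>

// Log the failing expression with its source location instead of
// aborting the test; return the result so callers can react to it.
inline bool logFailure(bool ok, const char* expr, const char* file, int line) {
    if (!ok)
        std::cerr << "FAILED: " << expr << " at " << file << ":" << line << '\n';
    return ok;
}

// The macro stays thin: it only stringizes the expression and forwards
// the file/line of the call site to the inline function.
#define LOGASSERT(expr) logFailure((expr), #expr, __FILE__, __LINE__)
A test body then becomes a sequence of LOGASSERT calls; a failing check is logged and the test simply continues with the next one.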