How to run specific test cases in GoogleTest - c++

I am trying to write a function/method for my project that will ask the user which test cases they want to run.
The test cases are organized like this:
Test_Cases_1
|_TestNo1
|_TestNo2....so on
Test_Cases_2
|_TestNo1
|_TestNo2....so on
....
....so on
Test_Cases_N
|_TestNo1
|_TestNo2....so on
So, the challenge is that while running the project it should prompt me for the test cases I would like to execute.
If I select Test_Cases_1 and Test_Cases_N, then it should execute only those two and exclude everything from Test_Cases_2 onward. In the result window I would also like to see only the results of Test_Cases_1 and Test_Cases_N.
Looking at GoogleTest, there is a method called test_case_to_run_count(), but all the test cases are registered with the TEST_F() macro.
I did a lot of analysis but still did not find any solution.
Please help me.

You could use Google Test's advanced options to control which tests run.
To run only some of the tests, use the --gtest_filter command-line option, e.g. --gtest_filter=Test_Cases_1*. The filter value accepts the * and ? wildcards for matching multiple tests. I think it will solve your problem.
UPD:
Well, the question was how to run specific test cases. Integrating gtest with your GUI is another matter, which I can't really comment on because you didn't provide details of your approach. However, I believe the following approach might be a good start (a rough sketch follows the list):
Get all test cases by running the test executable with --gtest_list_tests
Parse this data into your GUI
Select the test cases you want to run
Run the test executable with the --gtest_filter option
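For example, a minimal sketch of that flow, assuming a POSIX system where popen is available and a hypothetical test binary named ./my_tests (none of this is part of the gtest API; it is just one way a GUI could drive the test executable):
#include <cstdio>
#include <cstdlib>
#include <string>
#include <vector>

static const char* kTestExe = "./my_tests"; // hypothetical test binary name

// Step 1: capture the output of --gtest_list_tests so the GUI can display it.
std::vector<std::string> ListTests() {
    std::vector<std::string> lines;
    std::string cmd = std::string(kTestExe) + " --gtest_list_tests";
    FILE* pipe = popen(cmd.c_str(), "r");
    if (!pipe) return lines;
    char buf[512];
    while (fgets(buf, sizeof(buf), pipe))
        lines.emplace_back(buf);
    pclose(pipe);
    return lines;
}

// Step 4: rerun the binary with a filter built from the user's selection,
// e.g. "Test_Cases_1.*:Test_Cases_N.*".
int RunSelected(const std::string& filter) {
    std::string cmd = std::string(kTestExe) + " --gtest_filter=" + filter;
    return std::system(cmd.c_str());
}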

Summarising Rasmi Ranjan Nayak's and nogard's answers and adding another option:
On the console
You should use the flag --gtest_filter, like
--gtest_filter=Test_Cases1*
(In Visual Studio you can also set this under Properties|Configuration Properties|Debugging|Command Arguments.)
On the environment
You should set the variable GTEST_FILTER, like
export GTEST_FILTER="Test_Cases1*"
On the code
You should set a flag filter, like
::testing::GTEST_FLAG(filter) = "Test_Cases1*";
such that your main function becomes something like
int main(int argc, char **argv) {
    ::testing::InitGoogleTest(&argc, argv);
    ::testing::GTEST_FLAG(filter) = "Test_Cases1*";
    return RUN_ALL_TESTS();
}
See the section Running a Subset of the Tests for more information on the syntax of the filter string.

Finally I got an answer.
::testing::GTEST_FLAG(list_tests) = true; // set from your program, not from the console
The --gtest_filter options (=*, =xyz*, etc.) are meant to be used on the console.
My requirement, however, is to set them from the program, not from the console.
Updated:
Finally I found how to set the filter from the program as well:
::testing::GTEST_FLAG(filter) = "*Counter*:*IsPrime*:*ListenersTest.DoesNotLeak*"; // alternatively e.g. ":-:*Counter*"
::testing::InitGoogleTest(&argc, argv);
RUN_ALL_TESTS();
Thanks for all the answers.
You people are great.

Related

How to test a command line options parser using GTest

I am developing a command line options processor for my app. I have decided to use GTest to test it. Its implementation is shown below in brief:
int main(int argc, char **argv)
{
    if (!ProcessOptions(argc, argv))
    {
        return 1;
    }
    // Some more code here
    return 0;
}
int ProcessOptions(int argc, char **argv)
{
    for (int i = 1; i < argc; ++i)
    {
        CheckOption(argv[i]);
        CheckArgument();
        if (Success)
        {
            EnableOption();
        }
    }
    return 1; // success
}
The code runs as expected, but the problem is: I want to test this using GTest by supplying different options (valid and invalid) to it. The GTest manual reads:
The ::testing::InitGoogleTest() function parses the command line for
googletest flags, and removes all recognized flags. This allows the
user to control a test program's behavior via various flags, which
we'll cover in the AdvancedGuide. You must call this function before
calling RUN_ALL_TESTS(), or the flags won't be properly initialized.
But this way, I will be able to test just one sequence. I want to do this multiple times for different options. How do I do that?
Is there any better strategy to achieve this? Can I do this using test fixtures?
Have you considered a value-parameterized test? They sound perfect for your situation:
Value-parameterized tests allow you to test your code with different parameters without writing multiple copies of the same test. This is useful in a number of situations, for example:
You have a piece of code whose behavior is affected by one or more command-line flags.
You want to test different implementations of an OO interface.
You want to make sure your code performs correctly for various values of those flags.
You could write one or more tests that define the expected behaviour of the command-line argument parser and then pass the command-line flags down to it this way.
A full code example is shown in the linked Google Test GitHub docs, but here's a quick outline (a short sketch follows the steps):
Create a test class inheriting testing::TestWithParam<T>.
Use TEST_P and within it, the GetParam() method to access the parameter value.
You can instantiate your tests with INSTANTIATE_TEST_SUITE_P. Use the testing::Values method to supply values.
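Putting those steps together, a minimal sketch might look like this (ProcessOptions is only a stand-in for the question's parser, and the option names are invented for illustration):
#include <gtest/gtest.h>
#include <string>

// Stand-in for the real parser from the question; here it accepts only --verbose.
bool ProcessOptions(int argc, const char* argv[]) {
    return argc >= 2 && std::string(argv[1]) == "--verbose";
}

struct OptionCase {
    const char* option;  // flag passed on the command line
    bool expected_ok;    // whether the parser should accept it
};

class OptionsParserTest : public testing::TestWithParam<OptionCase> {};

TEST_P(OptionsParserTest, HandlesSingleOption) {
    const OptionCase c = GetParam();
    const char* argv[] = {"app", c.option};
    EXPECT_EQ(c.expected_ok, ProcessOptions(2, argv));
}

INSTANTIATE_TEST_SUITE_P(
    ValidAndInvalid, OptionsParserTest,
    testing::Values(OptionCase{"--verbose", true},
                    OptionCase{"--bogus", false}));
Each value supplied via testing::Values becomes its own test case, so valid and invalid option sequences can be covered without duplicating the test body.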

Is there a way to specify an NUnit test as "extra credit"?

I have a few tests for an API, and I would like to be able to express certain tests that reflect "aspirational" or "extra credit" requirements - in other words, it's great if they pass, but fine if they don't. For instance:
[Test]
public void RequiredTest()
{
    // our client is using positive numbers in DoThing();
    int result = DoThing(1);
    Assert.That( /* result is correct */ );
}

[Test]
public void OptionalTest()
{
    // we do want to handle negative numbers, but our client is not yet using them
    int result = DoThing(-1);
    Assert.That( /* result is correct */ );
}
I know about the Ignore attribute, but I would like to be able to mark OptionalTest in such a way that it still runs on the CI server, but is fine if it does not pass - as soon as it does, I would like to take notice and perhaps make it a requirement. Is there any major unit test framework that supports this?
I would use Warnings to achieve this. That way your test will print a 'warning' output but will not count as a failure, and it will not fail your CI build.
See: https://github.com/nunit/docs/wiki/Warnings
as soon as it does, I would like to take notice and perhaps make it a requirement.
This part is a slightly separate requirement, and it depends a lot on how you want to 'take notice'! Consider looking at Custom Attributes - it may be possible to write an IWrapSetUpTearDown attribute which sends an email when the relevant test passes. See the docs here: https://github.com/nunit/docs/wiki/ICommandWrapper-Interface
The latter is a more unusual requirement - I would expect to have to do something custom to fit your needs there!

Spigot/Bukkit plugin not showing up

So this is my code:
http://pastebin.com/26vTQqrZ
And this is my plugin.yml:
name: Testing Plugin
version: 1.0.0
main: me.TechnicPR.Main
Commands:
Cameron
description: Says the best developer in the world!
But for some reason when I do /cameron it says unknown command, and when I do /pl it shows nothing.
You must indent your plugin.yml correctly.
e.g.
name: Name
version: 1.0
main: my.main.class
commands:
  example:
    description: Example command
If the indentation was only lost when pasting into Stack Overflow, say so in the comments.
And please post your log as well.
Ok, first off you have some major issues. I don't know if you're teaching yourself or using tutorials, but I would recommend using these tutorials.
So I'll start with your plugin.yml file, as the setup for that is very specific. It should look exactly like this:
name: TestingPlugin
version: 1.0.0
main: me.TechnicPR.Main
commands:
  cameron:
    description: Says the best developer in the world!
    usage: /<command>
These sections can be in any order; name does not have to go first, for instance. But every section key must be lowercase. If it is capitalized, as your "Commands:" was, it will throw an error. Also, you should always provide the "usage" section in each command. Remember not to use tabs; you must use spaces, or it will throw an error for that too.
As for your code, I would first like to very strongly urge you NOT to put your onCommand method in your main class. Your main class should only ever deal with file loading and saving, and your onEnable/onDisable methods. Having the command logic in your main class gets messy very quickly and complicates simple things.
But, whether or not you only use a Main class, you still need to register your command. Make a new method, above your onCommand method, called "onEnable()". Inside that, use
getCommand("cameron").setExecutor(this);
It should look something like this:
public void onEnable() {
    getCommand("cameron").setExecutor(this);
}

public boolean onCommand(CommandSender sender, Command cmd, String label, String[] args) {
    if (cmd.getName().equalsIgnoreCase("cameron")) {
        sender.sendMessage("Hello!");
    }
    return true;
}
Notice that onCommand returns true. Returning true tells Bukkit the command was handled; returning false would make it print the usage message instead.
That's not everything that should be corrected, but you seem somewhat new to this and I don't want to seem heavy-handed. ;) Hope this helped!
If you want or need more help with making commands, the above-mentioned link will give you any information you need. Request any tutorial from that guy, and he'll make it as soon as he can.
It may be that you are not setting the executor class for the command; I'm not sure if that applies to the JavaPlugin class. Try adding this to your onEnable:
getCommand("Cameron").setExecutor(this);

How do I run some code after every build in scons?

I'm looking for a way to register something like an end-of-build callback in scons. For example, I'm doing something like this right now:
import atexit
import os
import SCons.Script

def print_build_summary():
    failures = SCons.Script.GetBuildFailures()
    notifyExe = 'notify-send '
    if len(failures) > 0:
        notifyExe = notifyExe + ' --urgency=critical Build Failed'
    else:
        notifyExe = notifyExe + ' --urgency=normal Build Succeeded'
    os.system(notifyExe)

atexit.register(print_build_summary)
This only works in non-interactive mode. I'd like to be able to pop up something like this at the end of every build, specifically, when running multiple 'build' commands in an interactive scons session.
The only suggestions I've found, looking around, seem to be to use the dependency system or the AddPostAction call to glom this on. It doesn't seem quite right to me to do it that way, since it's not really a dependency (it's not even really a part of the build, strictly speaking) - it's just a static bit of code that needs to be run at the end of every build.
Thanks!
I don't think there's anything wrong with using the dependency system to resolve this. This is how I normally do it:
def finish(target, source, env):
    raise Exception('DO IT')

finish_command = Command('finish', [], finish)
Depends(finish_command, DEFAULT_TARGETS)
Default(finish_command)
This creates a command that depends on the default targets for its execution (so you know it'll always run last - see DEFAULT_TARGETS in the SCons manual). Hope this helps.
I've been looking into this and haven't found that SCons offers anything that would help. This seems like quite a useful feature; maybe the SCons developers are watching these threads and will take the suggestion...
I looked at the source code and figured out how to do it. I'll try to suggest this change to the SCons developers on scons.org.
If you're interested, the file is engine/SCons/Script/Main.py, and the function is _build_targets(). At the end of this function, you would simply need to add a call to a user-supplied callback. Of course, this solution would not be very useful if you build on several different machines in your network, since you would have to port the change everywhere it's needed, but if you're only building on one machine, then maybe you could make the change until/if SCons officially provides a solution.
Let me know if you need help implementing the change, and I'll see what I can do.
Another option would be to wrap the call to SCons and have the wrapper script perform the desired actions, but that wouldn't help in SCons interactive mode.
Hope this helps,
Brady
EDIT:
I created a feature request for this: http://scons.tigris.org/issues/show_bug.cgi?id=2834

How to check if Google Test is running in my code

I have a section of code that I would not like to run if it is being unit tested. I was hoping to find some #defined flag that is set by the gtest library that I can check. I couldn't find one that is used for that purpose, but after looking through the gtest header, I found one I thought I could use like this:
void SomeClass::SomeFunctionImUnitTesting() {
    // some code here
#ifndef GTEST_NAME
    // some code I don't want to be tested here
#endif
    // more code here
}
This doesn't seem to work as all the code runs regardless. Is there another flag I can check that might work?
Google Test doesn't need or provide its own build wrapper. Often you don't even have to recompile your source files; you can just link them along with your test code. Your test code calls your already-compiled library code, and your library code probably doesn't even include any gtest headers.
If you want your library code to run differently under test, then you first need to make sure that your library code is compiled differently under test. You'll need another build target. When compiling for that build target, you can define a symbol that indicates to your code that it's in test mode. I'd avoid the GTEST prefix for that symbol; leave that for Google's own code.
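A minimal sketch of that approach, assuming you add a define such as -DUNIT_TESTING (the macro name is arbitrary and not something gtest provides) only to the test build target:
// Normal build:      g++ -c someclass.cpp
// Test build target: g++ -c someclass.cpp -DUNIT_TESTING
void SomeClass::SomeFunctionImUnitTesting() {
    // some code here
#ifndef UNIT_TESTING
    // code that should only run outside the test build
#endif
    // more code here
}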
Another way to achieve what you're looking for is to use dependency injection. Move your special code into another routine, possibly in its own class. Pass a pointer to that function or class into your SomeFunctionImUnitTesting function and call it. When you're testing that code, you can have your test harness pass a different function or class to it, thereby avoiding the problematic code without having to compile your code multiple times.
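A minimal sketch of that idea, with the names invented for illustration:
#include <functional>
#include <utility>

class SomeClass {
public:
    // Production code passes the real step; tests pass a no-op or a fake.
    explicit SomeClass(std::function<void()> specialStep)
        : specialStep_(std::move(specialStep)) {}

    void SomeFunctionImUnitTesting() {
        // some code here
        specialStep_();  // the code you don't want exercised in unit tests
        // more code here
    }

private:
    std::function<void()> specialStep_;
};

// In a test: SomeClass obj([] { /* skip the problematic work */ });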
In main():
int main(int argc, char** argv)
{
    testing::InitGoogleTest(&argc, argv);
    setenv("GTEST_RUNNING", "1", true);
    ros::init(argc, argv, "tester");
    return RUN_ALL_TESTS();
}
Somewhere else:
const char* env = getenv("GTEST_RUNNING");
bool gtestRunning = (env != nullptr) && strcmp(env, "1") == 0;
if (gtestRunning)
{
    // code that should only run under the test harness
}
else
{
    // the normal production path
}