I have a robot-framework test suite that runs all OK.
I have it running with pabot and selenium Grid, so parallel testing is all good.
My question is: can I run my test suite against multiple browsers without having to manually run the same scripts for each browser or duplicate my test suite for each browser?
Essentially, using a "Resource.txt" file to tell the test to instantiate the browser the grid node is set up for.
For example, in a TestNG project (using the POM approach) I use "if" and "else" statements to tell the test to use the browser that the Selenium Grid node is set up for.
Python 2.7
RF 3.0.2
Grid 3.5
The common way to do this is to use a variable to hold the name of the browser, and then set the variable from the command line.
In your test case:
Open Browser    ${ROOT_URL}    ${BROWSER}
From the command line:
robot --variable BROWSER:firefox ...
-or-
robot --variable BROWSER:chrome ...
An alternative to setting the variable on the command line is to have your tests use a variable file which dynamically sets the value of the variable based on runtime conditions.
My unit test needs a remote server address to start up. The server address is not fixed.
If I put the address in my .go test source, I have to change it every time I run the test.
If I put it in a system environment variable, it is very inconvenient to change from the VSCode GUI. (I mean I start the tests from the VSCode menu.)
I know I can put environment variables in launch.json to set them up before running or debugging my program. But here I just want to run unit tests.
Is there a good way to change the parameters without restarting VSCode?
You can add the following snippets to your VSCode settings.json to specify environment variables just for go test runs:
Defining variables directly:
"go.testEnvVars": {
"MY_VAR": "my value"
},
Or use a dedicated file (in my example called test.env in the root of the project workspace) containing the environment variables in MY_VAR="my value" format, one variable per line:
"go.testEnvFile": "${workspaceFolder}/test.env",
Also note that unit tests (as the name suggests, they test one unit of code) should generally not depend on any external services or resources. Everything except the logic under test should be provided in the form of mocks, as sketched below.
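For illustration, a minimal sketch of that approach with made-up names: the logic under test depends on a small interface, and the test substitutes a hand-written fake instead of contacting a real server:

package mypkg

import "testing"

// Client is the behaviour the logic under test needs from the remote server.
type Client interface {
	Fetch(key string) (string, error)
}

// fakeClient is a hand-written mock used only in tests.
type fakeClient struct{ data map[string]string }

func (f fakeClient) Fetch(key string) (string, error) { return f.data[key], nil }

func TestLogicWithFake(t *testing.T) {
	var c Client = fakeClient{data: map[string]string{"greeting": "hello"}}
	got, err := c.Fetch("greeting")
	if err != nil || got != "hello" {
		t.Fatalf("unexpected result: %q, %v", got, err)
	}
}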
I am trying to use a local Firestore client (via the emulator) to run my unit/integration tests in my Go project, but I have packaged the project so that Firestore is initialized in a separate package and used throughout the other packages.
So I am kind of confused about where I should define the local Firestore client and the TestMain(m *testing.M) function. Below is the basic idea of my file structure.
main.go
main_test.go (this might also need the local Firestore connection)
pkg
|____datastore (where the Firestore clients are defined)
|    |___datastore.go
|    |___testing.go (I intended to put my TestMain(m *testing.M) function here to run the emulator)
|    |___ .....
|____pkg2
|    |___myfile.go (Go file where I use the Firestore client to work with the Firestore DB)
|    |___myfile_test.go (tests I am going to write that need the local emulator)
|    |___ .....
So I am wondering how this kind of testing can be achieved. Waiting for help. Also, I got the idea of the Firestore emulator from this link
I've encountered the same issue. Because Go doesn't have a hook that runs once before all tests across every package, there's no good way to set up the emulator before the whole run.
There are three options from what I've seen (I would love to hear more suggestions):
Use the emulators:exec scriptpath command from here: https://firebase.google.com/docs/emulator-suite/install_and_configure. This will run the emulator and then run your script, in this case your Go tests (example invocation after this list).
Write a utility function that initializes the Firebase emulator. Then, in every package, use the TestMain function to set up the Firebase environment. This will enable you to run every test individually, but it will fail when running your entire test suite, because the packages' tests run in parallel and collide on the single emulator. The solution for that last problem is to disable parallel package execution for your Go suite, something like this: go test ./... -v -race -p 1
Run the emulator externally, outside of the tests, and then run the tests. The downside to this approach is that you have to make sure the emulator is manually running before starting the test suite. You would also have to clean up after every test which might become annoying.
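For the first option, the invocation would look roughly like this (the test path is just an example; adjust it to your project):

firebase emulators:exec --only firestore "go test ./..."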
I went with the second solution, but you will have to ensure the emulator is not already running, and terminate it after each package's tests.
So, something like this to kill anything already listening on the Firestore port:
shutdownCmd := exec.Command("bash", "-c", fmt.Sprintf("lsof -i tcp:%d | grep LISTEN | awk '{print $2}' | xargs kill -9", firestorePort))
Then you will need to run the emulator:
cmd := exec.Command("firebase", "emulators:start", "--only", "firestore")
You'll have to listen to the emulator's standard output and parse it to detect when:
the emulator is ready
the emulator could not start
the emulator stopped
Then, once the emulator has started, you can run
result = m.Run()
Which will run the test suite of the current package.
Finally, after the tests for the package are done, you have to send the emulator a shutdown signal (SIGINT):
syscall.Kill(cmd.Process.Pid, syscall.SIGINT)
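Putting those pieces together, a TestMain sketch along these lines (the package name, emulator port, readiness message, and timeout are assumptions for illustration, not the exact code I used):

package pkg2

import (
	"bufio"
	"fmt"
	"os"
	"os/exec"
	"strings"
	"syscall"
	"testing"
	"time"
)

const firestorePort = 8080 // assumed emulator port

func TestMain(m *testing.M) {
	// Kill anything already listening on the emulator port.
	shutdownCmd := exec.Command("bash", "-c",
		fmt.Sprintf("lsof -i tcp:%d | grep LISTEN | awk '{print $2}' | xargs kill -9", firestorePort))
	_ = shutdownCmd.Run()

	// Start the emulator and watch its output until it reports readiness.
	cmd := exec.Command("firebase", "emulators:start", "--only", "firestore")
	stdout, err := cmd.StdoutPipe()
	if err != nil {
		fmt.Println("could not attach to emulator output:", err)
		os.Exit(1)
	}
	if err := cmd.Start(); err != nil {
		fmt.Println("could not start emulator:", err)
		os.Exit(1)
	}

	ready := make(chan bool, 1)
	go func() {
		scanner := bufio.NewScanner(stdout)
		sent := false
		for scanner.Scan() {
			// The readiness message may differ between CLI versions.
			if !sent && strings.Contains(scanner.Text(), "All emulators ready") {
				ready <- true
				sent = true
			}
		}
	}()

	result := 1
	select {
	case <-ready:
		// Point the Firestore client at the emulator and run this package's tests.
		os.Setenv("FIRESTORE_EMULATOR_HOST", fmt.Sprintf("localhost:%d", firestorePort))
		result = m.Run()
	case <-time.After(60 * time.Second):
		fmt.Println("emulator did not become ready in time")
	}

	// Shut the emulator down with SIGINT, as described above.
	_ = syscall.Kill(cmd.Process.Pid, syscall.SIGINT)
	_ = cmd.Wait()
	os.Exit(result)
}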
I considered releasing the solution open source but didn't have enough time to make it nice. Feel free to reach out if you want me to share the full solution with you
I am using Karma through Grunt. We have around 1000 tests and it is a bit painful to have them all run whenever we change a file (autoWatch = true).
This is what we are doing now:
Start Karma with singleRun=false, autoWatch=false.
Open the debug page and grep for a specific suite (using mocha html reporter).
Change a test or file related to that suite.
Refresh the debug page to run the set of tests again.
My changes in (3) haven't been picked up by Karma so the tests still behave as if nothing had changed.
This is what I need:
Start Karma with singleRun=false, magicOption=true.
Open the debug page and grep for a specific suite (using mocha html reporter).
Change a test or file related to that suite.
Refresh the debug page to run the set of tests again.
My changes are properly picked up and only the grepped tests are run.
If I set autoWatch=true I get what I need but the whole suite of 1000 tests is run in the background whenever I change a file, which soon collapses my environment.
I don't think there is anything equivalent to magicOption according to the Karma docs, but is there any way to achieve the same behaviour?
Thanks a lot.
I'm new to Xcode (and Macs in general) and am trying to port some of my code base over to run on both OS X and iOS. I have a large set of unit tests written against the Google C++ Testing Framework (Google Test). I successfully compiled the framework and I can run some tests, but I'm unsure how to view the colorized output from within Xcode.
I'm used to hitting "Run" in Visual Studio and immediately seeing a console window (with colors) letting me know at a glance if the tests passed or failed.
I've managed to set up a "Run Script" build phase, but that seems to output only to the Log Navigator, which obliterates the colors and even the fixed-width formatting, making it very difficult to see at a glance whether the tests pass. I also can't find a way to display the log after running the tests; when I do this, nothing appears in the "All Output" window.
I've played around with XcodeColors but that doesn't seem to work with scripts that use the ANSI color codes.
At this point I wouldn't be surprised if this simply can't be done within Xcode. Doing it there would be ideal, but if it isn't possible, is it possible to create a "Run Script" that will run the tests in an independent Terminal window? Colors work fine there.
Thanks for any help!
Here are links to a tool that colorizes the text in the Log window. It's free and the source is on GitHub, so you can figure out how it works. The first link says that it just uses simple ANSI codes to do the job.
http://deepitpro.com/en/articles/XcodeColors/info
https://github.com/robbiehanson/XcodeColors#readme
To kick off the execution from within Xcode, you will probably need to add a new target to your project. To add a target, click on your project and use the Add Target button at the bottom of the screen. I don't know exactly what you're executing, but here are my best guesses based on your question:
MacOSX/Application/Cocoa-AppleScript or Command Line Tool - Create a simple script or program that will execute your unit tests.
MacOSX/Other/External Build System - Allows for execution of an external "make" program with args.
Once you have a way to execute your unit tests, you just need to figure out how to route the output from the unit tests to the Log window. If you can edit the Google Test project and make it use NSLog(), that would seem to be the easiest solution. You could create your own logging method, perform the ANSI colorization, and then send the final text to NSLog().
ADDED: OK. Interesting findings... Read all before starting. Here's what to do:
Start AppleScript Editor (it's in Launchpad) and paste the following script into it:
tell application "Terminal"
    activate
    do script "<your commands>" in window 1
end tell
You can repeat the "do script" line as needed. Use this to execute your unit tests. In Script Editor, do Save As.../File Format=Script and save it to a safe location for now like your Documents directory. This will create a file like "UnitTests.scpt".
Now go to your iOS project. Select the project at the top-left. Select the Build Phases tab top-middle. Click the Add Build Phase button on the bottom right. Here's the interesting part.
Leave Shell as is ("/bin/sh"). Add one line:
osascript ~/Documents/UnitTests.scpt
That will execute the script after every build.
But here's the interesting part I found. Click on Build Settings (top-middle). Make sure All is selected (not Basic). Scroll down the list to find Unit Testing. Open Test Host. Hit the + next to Debug. You can also put the above osascript command here. You might be able to execute your unit tests here, and if you can, the output will likely show up in the Log! Let me know what happens.
I am familiar with Java (JUnit + JCodecoverage). For mobile applications (Android and iPhone) I was too lazy to develop with TDD, but if I wanted to start, then:
I would create a Hello World app with the unit testing option turned on:
Include Unit Test checked
That will create a test app/target, and you will be able to run that.
It is the same on Android: you have to create a "test project".
I did it once and have forgotten how it works, but there are other things too:
- Long-press the Play button in Xcode (4.4) and you will get a dropdown menu with: Run, Test, Profile, Analyze.
I can't show those here, because pressing Shift+Cmd+4 to take a screenshot changes the menu, but this is what the changed menu looks like:
IMHO: for banking, forex, other financial, or military (high-security) software I would use test-driven development with over 99% code coverage, but for those simple mobile apps that make 3-4 web-service calls and display public data already available in browsers, it is just a waste of time to develop and maintain tests.
Many times I need to test with an internet connection and without.
The worst case is a Wi-Fi connection where the router doesn't hand out an IP or let the phone reach the internet, yet if I ask the phone's connectivity state it says it is connected...
The GUI flow is hard to cover from unit tests. Where they are / would be useful for me: the data received from the web service and its synchronization with the internal cache. As I see it, it is still cheaper to do that with manual testing.
I'm just starting to use QTestLib. I have gone through the manual and tutorial. Although I understand how to create tests, I'm just not getting how to make those tests convenient to run. My unit test background is NUnit and MSTest. In those environments, it was trivial (using a GUI, at least) to alternate between running a single test, or all tests in a single test class, or all tests in the entire project, just by clicking the right button.
All I'm seeing in QTestLib is either you use the QTEST_MAIN macro to run the tests in a single class, then compile and test each file separately; or use QTest::qExec() in main() to define which objects to test, and then manually change that and recompile when you want to add/remove test classes.
I'm sure I'm missing something. I'd like to be able to easily:
Run a single test method
Run the tests in an entire class
Run all tests
Any of those would call the appropriate setup / teardown functions.
EDIT: Bounty now available. There's got to be a better way, or a GUI test runner that handles it for you or something. If you are using QtTest in a test-driven environment, let me know what is working for you. (Scripts, test runners, etc.)
You can run only selected test cases (test methods) by passing test names as command line arguments:
myTests.exe myCaseOne myCaseTwo
It will run all inits/cleanups too. Unfortunately there is no support for wildcards/pattern matching, so to run all cases beginning with a given string (I assume that's what you mean by "running the tests in an entire class"), you'd have to create a script (Windows batch/bash/Perl/whatever; a sketch follows below) that calls:
myTests.exe -functions
parses the results, and runs the selected tests using the first syntax.
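A small helper for that could look like this; it is sketched here in Go, though any scripting language works, and the prefix-matching rule is just one interpretation of "an entire class":

// runmatching.go - run only the QTestLib test functions whose names start
// with a given prefix, by parsing the output of "-functions".
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	if len(os.Args) < 3 {
		fmt.Println("usage: runmatching <test-executable> <prefix>")
		os.Exit(1)
	}
	exe, prefix := os.Args[1], os.Args[2]

	// "-functions" prints one test slot per line, e.g. "myCaseOne()".
	out, err := exec.Command(exe, "-functions").Output()
	if err != nil {
		fmt.Println("could not list test functions:", err)
		os.Exit(1)
	}

	var selected []string
	for _, line := range strings.Split(string(out), "\n") {
		name := strings.TrimSuffix(strings.TrimSpace(line), "()")
		if name != "" && strings.HasPrefix(name, prefix) {
			selected = append(selected, name)
		}
	}
	if len(selected) == 0 {
		fmt.Println("no test functions match prefix", prefix)
		os.Exit(1)
	}

	// Re-run the executable with the selected function names as arguments,
	// which makes QTestLib execute only those tests (plus inits/cleanups).
	run := exec.Command(exe, selected...)
	run.Stdout, run.Stderr = os.Stdout, os.Stderr
	if err := run.Run(); err != nil {
		os.Exit(1)
	}
}

For example: runmatching ./myTests myCase would run every test function whose name starts with myCase.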
To run all cases, just don't pass any parameter:
myTests.exe
The three features requested by the OP are nowadays integrated into Qt Creator.
The project is automatically scanned for tests and they appear in the Tests pane (bottom left in the screenshot):
Each test and corresponding data can be enabled by clicking the checkbox.
The context menu allows you to run all tests, all tests of a class, only the selected tests, or only one test, as requested.
The test results are available in Qt Creator too. A color indicator shows pass/fail for each test, along with additional information like debug messages.
In combination with Qt Creator, using the QTEST_MAIN macro for each test case works well, as each compiled executable is invoked by Qt Creator automatically.
For a more detailed overview, refer to the Running Autotests section of the Qt Creator Manual.