I am running the latest version of Codeception on a WAMP platform. My acceptance test is very basic but works fine (see below):
$I = new WebGuy($scenario);
$I->wantTo('Log in to the website');
$I->amOnPage('/auth/login');
$I->fillField('identity','admin#admin.com');
$I->fillField('password','password');
$I->click('Login');
In a nutshell, it goes to the '/auth/login' page, fills out two form fields, and clicks the login button. This works without any problems.
Here is my identical functional test:
$I = new TestGuy($scenario);
$I->wantTo('perform actions and see result');
$I->amOnPage('/auth/login');
$I->fillField('identity','admin#admin.com');
$I->fillField('password','password');
$I->click('Login');
When I run this from the command line I get the following error (not the full error but enough to understand the problem):
1) Couldn't <-[35;1mperform actions and see result<-
[0m in <-[37;1LoginCept.php<-[0m <-41;37mRuntimeException:
Call to undefined method TestGuy::amOnPage<-[0m.......
My acceptance suite has the 'PhpBrowser' & 'WebHelper' modules enabled, and the functional suite has 'Filesystem' & 'TestHelper' enabled (within the acceptance.suite.yml & functional.suite.yml files).
Obviously the amOnPage() function is the problem. However, I am led to believe amOnPage() should work in both acceptance and functional tests, or am I wrong? Also, can someone explain what the codes like '<-[35;1m' that appear in the output mean?
UPDATE: I tried adding the 'WebHelper' module to functional.suite.yml, but amOnPage() is not being auto-generated in the TestGuy.php file - any ideas?
My config files are below:
WebGuy (acceptance.suite.yml):
class_name: WebGuy
modules:
    enabled:
        - PhpBrowser
        - WebHelper
    config:
        PhpBrowser:
            url: 'http://v3.localhost/'
TestGuy (functional.suite.yml):
class_name: TestGuy
modules:
    enabled: [Filesystem, TestHelper, WebHelper]
Well, this happens because TestGuy doesn't have those methods. All of those methods live in the PhpBrowser and Selenium2 modules, or in other modules that inherit from Codeception's Mink implementation. So you need to add PhpBrowser to the modules section of your functional suite and then run the codecept build command.
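For example, a functional.suite.yml along these lines should make those actions available to TestGuy (the PhpBrowser URL here is an assumption, copied from the acceptance config above); run codecept build again afterwards so amOnPage() and friends get regenerated into TestGuy.php:
class_name: TestGuy
modules:
    enabled: [Filesystem, TestHelper, PhpBrowser]
    config:
        PhpBrowser:
            url: 'http://v3.localhost/'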
Also note that it is better to use the Selenium2 module for acceptance tests and PhpBrowser for functional tests. The main idea is that acceptance (Selenium2) tests should cover the parts of your application that cannot be covered by functional (PhpBrowser) tests, for example some JS interactions.
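A rough sketch of what the acceptance suite could look like with Selenium2 instead (the browser and URL values are assumptions):
class_name: WebGuy
modules:
    enabled:
        - Selenium2
        - WebHelper
    config:
        Selenium2:
            url: 'http://v3.localhost/'
            browser: 'firefox'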
As for '<-[35;1m': those are ANSI color escape codes in the console output; run codecept run --no-colors to get rid of them.
I am creating another test module for an application service which uses a different DbContext.
Is there any way to bypass permission checking for the application service in a unit test?
I've tried
Configuration.ReplaceService<IPermissionChecker, NullPermissionChecker>(DependencyLifeStyle.Transient);
in the test module's PreInitialize(), but it is still checking the permissions.
Please help! Thank you!
services.AddAbpIdentity replaces IPermissionChecker again. You can use AddPermissionChecker<NullPermissionChecker> extension method after services.AddAbpIdentity.
For AspNet Zero, it's inside the IdentityRegistrar class. Example (add the last line):
services.AddAbpIdentity<Tenant, User, Role>(options =>
    {
        ...
    })
    ...
    .AddPermissionChecker<NullPermissionChecker>();
Notice that you probably want to make this conditional and apply it only for unit tests.
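A minimal sketch of such a condition; TestMode.IsUnitTest is a hypothetical flag set only by your test bootstrapper, and the builder returned by AddAbpIdentity is assumed to be the same object the chained call above ends on:
var identity = services.AddAbpIdentity<Tenant, User, Role>(options =>
    {
        // identity options as before
    });
if (TestMode.IsUnitTest) // hypothetical flag, true only in unit test runs
{
    identity.AddPermissionChecker<NullPermissionChecker>();
}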
When you build on a TFS build server, failed unit tests cause the build to show an orange alert state but they still "succeed". Is there any way to tag a unit test as critical such that if it fails, the whole build will fail?
I've Googled for it and didn't find anything, and I don't see any attribute in the framework, so I'm guessing the answer is no. But maybe I'm just looking in the wrong place.
There is a way to do this, but you need to create multiple test runs and then filter your tests. On your tests, set a TestCategory attribute:
[TestCategory("Critical")]
[TestMethod]
public void MyCriticalTest() {}
For NUnit you should be able to use [Category("Critical")]. There are multiple attributes of a test you can filter on, including the name.
Name = TestMethodDisplayName
FullyQualifiedName = FullyQualifiedTestMethodName
Priority = PriorityAttributeValue
TestCategory = TestCategoryAttributeValue
ClassName = ClassName
And these operators:
= (equals)
!= (not equals)
~ (contains or substring only for string values)
& (and)
| (or)
( ) (parentheses for grouping)
XUnit .NET currently does not support TestCaseFilters.
Then in your build definition you can create two test runs: one that runs the Critical tests and one that runs everything else. You can use the Filter option of the test run.
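For example, using the operators listed above, the two runs could use filters along these lines (the exact field name in the test run settings may vary between TFS versions):
TestCategory=Critical
TestCategory!=Critical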
Open the Test Runs window using this hard to find button:
Create 2 test runs:
On your first run set the options as follows:
On your second run set the options as follows:
This way Team Build will run any test with the "Critical" category in the first run and fail the build if one of them fails. If the first run succeeds, it will kick off the non-critical tests and will only Partially Succeed, even when a test fails.
Update
The same process explained for Azure DevOps Pipelines.
Yes.
Using the TFS2013 Default Template:
Under the "Process" tab, go to section 2, "Basic".
Expand the Automated Tests section.
For "Test Source", click the ellipsis ("...").
This will open a new window that has a "Fail build when tests fail" check box.
We are currently using CodeCeption to do some acceptance testing, but I would like to also add unit testing to the current setup.
Has anyone done Unit testing in Zend Framework 2 project using CodeCeption? And if so, how did you set up the environment?
Found this in codeception docs: http://codeception.com/docs/modules/ZF2
There is an example in this github project: https://github.com/Danielss89/zf2-codeception
I solved this problem by using PHPUnit. I followed the Zend PHPUnit guide, http://framework.zend.com/manual/current/en/tutorials/unittesting.html, which only got me so far. But, I was able to bootstrap Zend using a phpunit.xml file and pointing that to _bootstrap.php. Codeception is also able to bootstrap my tests using the _bootstrap.php file. So, now I can run my tests using both:
phpunit --bootstrap tests/unit/_bootstrap.php
and
php vendor/bin/codecept run unit
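For reference, the phpunit.xml mentioned above might look roughly like this (the bootstrap path is taken from the command above; the suite name and directory are assumptions):
<phpunit bootstrap="tests/unit/_bootstrap.php" colors="true">
    <testsuites>
        <testsuite name="unit">
            <directory>tests/unit</directory>
        </testsuite>
    </testsuites>
</phpunit>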
I was surprised to see how few examples there are for this. I was writing tests for a job queue client. Here is a bit of my test:
protected function setUp()
{
    $this->serviceManager = new ServiceManager(new ServiceManagerConfig());
    $config = Bootstrap::getConfig();
    $this->serviceManager->setService('ApplicationConfig', $config);
    $this->serviceManager->get('ModuleManager')->loadModules();
    $this->client = $this->serviceManager->get('jobclient');
    $this->client->addServers(array(array('ip' => '127.0.0.1', 'port' => '666')));
}

public function testServiceCreateSuccess() {
    $this->client->setQueue($this->testData['queue']);
    $this->assertEquals($this->client->getQueue(), $this->testData['queue'], 'We get the expected queue');
}
I mocked the job client and had a local config. My _bootstrap.php file looks basically the same as the Zend example, except that I am using factories defined in the service config to manage some objects. So, I called the "setService" method in the test setup rather than in the bootstrap init.
Is there an established best practice for separating unit tests and integration tests in GoLang (testify)? I have a mix of unit tests (which do not rely on any external resources and thus run really fast) and integration tests (which do rely on external resources and thus run more slowly). So, I want to be able to control whether or not to include the integration tests when I run go test.
The most straight-forward technique would seem to be to define a -integrate flag in main:
var runIntegrationTests = flag.Bool("integration", false, "Run the integration tests (in addition to the unit tests)")
And then to add an if-statement to the top of every integration test:
if !*runIntegrationTests {
	this.T().Skip("To run this test, use: go test -integration")
}
Is this the best I can do? I searched the testify documentation to see if there is perhaps a naming convention or something that accomplishes this for me, but didn't find anything. Am I missing something?
@Ainar-G suggests several great patterns to separate tests.
This set of Go practices from SoundCloud recommends using build tags (described in the "Build Constraints" section of the build package) to select which tests to run:
Write an integration_test.go, and give it a build tag of integration. Define (global) flags for things like service addresses and connect strings, and use them in your tests.
// +build integration

var fooAddr = flag.String(...)

func TestToo(t *testing.T) {
	f, err := foo.Connect(*fooAddr)
	// ...
}
go test takes build tags just like go build, so you can call go test -tags=integration. It also synthesizes a package main which calls flag.Parse, so any flags declared and visible will be processed and available to your tests.
As a similar option, you could also have integration tests run by default by using a build condition // +build !unit, and then disable them on demand by running go test -tags=unit.
@adamc comments:
For anyone else attempting to use build tags, it's important that the // +build test comment is the first line in your file, and that you include a blank line after the comment, otherwise the -tags command will ignore the directive.
Also, the tag used in the build comment cannot have a dash, although underscores are allowed. For example, // +build unit-tests will not work, whereas // +build unit_tests will.
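Putting those two rules together, an accepted header looks like this (unit_tests is just an example tag name; note the blank line before the package clause):
// +build unit_tests

package mypackage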
To elaborate on my comment on @Ainar-G's excellent answer: over the past year I have been using the combination of -short with an Integration naming convention to achieve the best of both worlds.
Unit and Integration tests harmony, in the same file
Build tags previously forced me to have multiple files (services_test.go, services_integration_test.go, etc.).
Instead, take this example below where the first two are unit tests and I have an integration test at the end:
package services

import "testing"

func TestServiceFunc(t *testing.T) {
	t.Parallel()
	...
}

func TestInvalidServiceFunc3(t *testing.T) {
	t.Parallel()
	...
}

func TestPostgresVersionIntegration(t *testing.T) {
	if testing.Short() {
		t.Skip("skipping integration test")
	}
	...
}
Notice the last test has the convention of:
using Integration in the test name.
checking if running under -short flag directive.
Basically, the spec goes: "write all tests normally. If it is a long-running test or an integration test, follow this naming convention and check for -short to be nice to your peers."
Run only Unit tests:
go test -v -short
this provides you with a nice set of messages like:
=== RUN TestPostgresVersionIntegration
--- SKIP: TestPostgresVersionIntegration (0.00s)
service_test.go:138: skipping integration test
Run Integration Tests only:
go test -run Integration
This runs only the integration tests. Useful for smoke testing canaries in production.
Obviously the downside to this approach is that if anyone runs go test without the -short flag, it will default to running all tests - unit and integration alike.
In reality, if your project is large enough to have unit and integration tests, then you are most likely using a Makefile, where you can add simple targets that use go test -short. Or just put the command in your README.md file and call it a day.
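For instance, a minimal Makefile sketch (target names are arbitrary; recipe lines must be indented with a tab):
test:
	go test -v -short ./...

test-all:
	go test -v ./...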
I see three possible solutions. The first is to use the short mode for unit tests. So you would use go test -short with unit tests and the same but without the -short flag to run your integration tests as well. The standard library uses the short mode to either skip long-running tests, or make them run faster by providing simpler data.
The second is to use a convention and call your tests either TestUnitFoo or TestIntegrationFoo and then use the -run testing flag to denote which tests to run. So you would use go test -run 'Unit' for unit tests and go test -run 'Integration' for integration tests.
The third option is to use an environment variable, and get it in your tests setup with os.Getenv. Then you would use simple go test for unit tests and FOO_TEST_INTEGRATION=true go test for integration tests.
I personally would prefer the -short solution since it's simpler and is used in the standard library, so it seems like it's a de facto way of separating/simplifying long-running tests. But the -run and os.Getenv solutions offer more flexibility (more caution is required as well, since regexps are involved with -run).
I was recently trying to find a solution for the same problem.
These were my criteria:
The solution must be universal
No separate package for integration tests
The separation should be complete (I should be able to run integration tests only)
No special naming convention for integration tests
It should work well without additional tooling
The aforementioned solutions (custom flag, custom build tag, environment variables) did not really satisfy all the above criteria, so after a little digging and playing I came up with this solution:
package main

import (
	"flag"
	"regexp"
	"testing"
)

func TestIntegration(t *testing.T) {
	if m := flag.Lookup("test.run").Value.String(); m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
		t.Skip("skipping as execution was not requested explicitly using go test -run")
	}

	t.Parallel()

	t.Run("HelloWorld", testHelloWorld)
	t.Run("SayHello", testSayHello)
}
The implementation is straightforward and minimal. Although it requires a simple convention for tests, it is less error prone. A further improvement could be extracting the check into a helper function, as sketched below.
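A possible shape for that helper (the name skipUnlessExplicitlyRequested is hypothetical; it reuses the flag, regexp, and testing imports and the exact check from the example above):
func skipUnlessExplicitlyRequested(t *testing.T) {
	t.Helper()
	if m := flag.Lookup("test.run").Value.String(); m == "" || !regexp.MustCompile(m).MatchString(t.Name()) {
		t.Skip("skipping as execution was not requested explicitly using go test -run")
	}
}
Each integration test would then simply start with skipUnlessExplicitlyRequested(t).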
Usage
Run integration tests only across all packages in a project:
go test -v ./... -run ^TestIntegration$
Run all tests (regular and integration):
go test -v ./... -run .\*
Run only regular tests:
go test -v ./...
This solution works well without tooling, but a Makefile or some aliases can make it easier to use. It can also be easily integrated into any IDE that supports running Go tests.
The full example can be found here: https://github.com/sagikazarmark/modern-go-application
I encourage you to look at Peter Bourgon's approach; it is simple and avoids some problems with the advice in the other answers: https://peter.bourgon.org/blog/2021/04/02/dont-use-build-tags-for-integration-tests.html
There are many downsides to using build tags, short mode or flags, see here.
I would recommend using environment variables with a test helper that can be imported into individual packages:
func IntegrationTest(t *testing.T) {
	t.Helper()
	if os.Getenv("INTEGRATION") == "" {
		t.Skip("skipping integration tests, set environment variable INTEGRATION")
	}
}
In your tests you can now easily call this at the start of your test function:
func TestPostgresQuery(t *testing.T) {
	IntegrationTest(t)
	// ...
}
Why I would not recommend using either -short or flags:
Someone who checks out your repository for the first time should be able to run go test ./... and have all tests pass, which is often not the case if that relies on external dependencies.
The problem with the flag package is that it works until you have integration tests across different packages, where some run flag.Parse() and some do not, which leads to an error like this:
go test ./... -integration
flag provided but not defined: -integration
Usage of /tmp/go-build3903398677/b001/foo.test:
Environment variables appear to be the most flexible and robust option, and they require the least amount of code with no visible downsides.
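With the helper above, the integration tests run only when the variable is set to any non-empty value, for example:
INTEGRATION=true go test ./...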
Is there a way to inject providers when writing unit tests using Karma (Testacular) and Jasmine in Angular?
Our team recently decided to use AngularJS's $log to write debugging details to the console. This way we can leverage the ability to disable the logging via the $logProvider.debugEnabled() method.
angular.module("App", ["prismLogin", "ui.bootstrap"])
.config(["$routeProvider", "$logProvider",
function ($routeProvider, $logProvider) {
$routeProvider
//routes here edited for brevity
//This is the offending line, it breaks several pre-existing tests
$logProvider.debugEnabled(true);
}]);
However after adding the $logProvider.debugEnabled(true); line several of our tests no longer execute successfully, failing with the following message:
TypeError: Object doesn't support property or method 'debugEnabled' from App
So my question again, is it possible to mock the $logProvider? Or should I provide my own configuration block for the test harness?
I attempted searching for a way to mock the app module with no luck. It seems to me that using the concrete app module instead of a mock is very brittle. I would like to avoid reworking tests associated with the app module every time a change is made in the app or run configuration blocks.
The tests that are failing are units of code with no relation to $logProvider. I feel as if I am missing something here and making things much harder than they should be. How should one go about writing tests that are flexible and are not affected by other side effects introduced in your application?
It appears that this is a known issue with angular-mocks.
Until the issue is addressed, I was able to resolve it by adding the following method to the angular.mock.$LogProvider definition in angular-mocks.js at line 295.
this.debugEnabled = function(flag) {
    return this;
};