In Go, how do you unit test a private (unexported) method that has a receiver? For example, how would you unit test the code below?
If I create an instance of srv, then isShare is hidden behind the instance handle, so the test code can't call isShare.
I have searched and read some posts, but they are all about private functions without a receiver.
package service

func (s *srv) isShare(id string) (bool, error) {
    record := s.db.Get(id)
    if record != nil {
        return true, nil
    }
    return false, errors.New("record not found.")
}
One extra question: if srv has a field of a DB type, how do I bind a mock DB to that field in the unit test instead of the real DB?
You can call unexported methods from the same package, so just add your tests to the package. It's very common (basically the norm) for test files to be in the same package as the code they are testing.
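For example, here is a minimal sketch of such a test. It also addresses the extra question by assuming srv's db field is a small interface rather than a concrete DB type, so a fake can be injected; the dbGetter and fakeDB names are purely illustrative:

package service

import (
    "errors"
    "testing"
)

// Assumed for illustration: srv depends on a small interface instead of a
// concrete DB type, so tests can swap in a fake.
type dbGetter interface {
    Get(id string) interface{}
}

type srv struct {
    db dbGetter
}

func (s *srv) isShare(id string) (bool, error) {
    if record := s.db.Get(id); record != nil {
        return true, nil
    }
    return false, errors.New("record not found")
}

// fakeDB satisfies dbGetter without touching a real database.
type fakeDB struct {
    records map[string]interface{}
}

func (f *fakeDB) Get(id string) interface{} {
    return f.records[id]
}

// The test lives in package service, so it can call the unexported method directly.
func TestIsShare(t *testing.T) {
    s := &srv{db: &fakeDB{records: map[string]interface{}{"42": "some record"}}}

    if ok, err := s.isShare("42"); !ok || err != nil {
        t.Fatalf("expected record to be found, got ok=%v err=%v", ok, err)
    }
    if ok, err := s.isShare("missing"); ok || err == nil {
        t.Fatalf("expected missing record to be reported, got ok=%v err=%v", ok, err)
    }
}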
I am new to Golang and have been exploring, but I'm still not clear about mocking in unit tests. Can anyone explain the following specific questions?
Question 1: For writing unit tests in Golang, why do we need interfaces to mock methods? Why not just structs?
Question 2: Why do we inject the interface into the struct (where we call the external method)?
With a struct:

type GlobalData struct{}

var (
    GlobalObj = GlobalData{}
)

func (g GlobalData) GetGlobalData(a string) string {
    return a
}

With an interface definition:

type GlobalInterface interface {
    GetGlobalData(a string) string
}

type GlobalData struct{}

var (
    GlobalObj = GlobalData{}
)

func (g GlobalData) GetGlobalData(a string) string {
    return a
}
Thanks
Question 1: For writing unit tests in Golang, why do we need interfaces to mock methods? Why not just structs?
Answer: It's not mandatory.
Question 2: Why do we inject the interface into the struct (where we call the external method)?
Answer: Because it lets you replace the actual call (which might trigger out-of-scope actions during a unit test, such as a database call or an API call) by injecting a mock struct that implements the same interface used by the real code. Polymorphism, in simple words.
So you create a MockStruct and define your own mock methods on it. Thanks to polymorphism, your unit test picks up the MockStruct without complaint. Calling actual DB or HTTP endpoints does not belong in unit testing.
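For instance, here is a minimal sketch of that idea using the GlobalInterface from the question; the Consumer struct, its Describe method, and the mock type are illustrative names, not anything from a real codebase:

package globaldata

import "testing"

type GlobalInterface interface {
    GetGlobalData(a string) string
}

// Consumer depends on the interface, not on the concrete GlobalData type,
// so a test can inject any implementation.
type Consumer struct {
    global GlobalInterface
}

func (c Consumer) Describe(a string) string {
    return "value: " + c.global.GetGlobalData(a)
}

// mockGlobalData stands in for the real implementation in unit tests.
type mockGlobalData struct{}

func (m mockGlobalData) GetGlobalData(a string) string { return "mocked" }

func TestDescribeUsesInjectedImplementation(t *testing.T) {
    c := Consumer{global: mockGlobalData{}}
    if got := c.Describe("anything"); got != "value: mocked" {
        t.Fatalf("unexpected result: %q", got)
    }
}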
Just for reference, I can point you to one of my GitHub codebases where I wrote a small test case for a file. As you can see, I mocked:
the GuestCartHandler interface, which allowed me not to call the actual implementation
the SQL connection, using the "github.com/DATA-DOG/go-sqlmock" package (see the sketch right after this list). This helped me avoid establishing an actual DB client, so there is no database dependency while unit testing.
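As a rough idea of what go-sqlmock usage looks like (the query, table, and column here are made up for illustration, not taken from the codebase above):

package repo

import (
    "testing"

    "github.com/DATA-DOG/go-sqlmock"
)

func TestGetUserName(t *testing.T) {
    // sqlmock.New returns a *sql.DB backed by expectations instead of a real database.
    db, mock, err := sqlmock.New()
    if err != nil {
        t.Fatalf("failed to open sqlmock database: %v", err)
    }
    defer db.Close()

    rows := sqlmock.NewRows([]string{"name"}).AddRow("alice")
    mock.ExpectQuery("SELECT name FROM users").WillReturnRows(rows)

    var name string
    if err := db.QueryRow("SELECT name FROM users WHERE id = ?", 1).Scan(&name); err != nil {
        t.Fatalf("query failed: %v", err)
    }
    if name != "alice" {
        t.Fatalf("expected alice, got %s", name)
    }
    if err := mock.ExpectationsWereMet(); err != nil {
        t.Fatalf("unmet expectations: %v", err)
    }
}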
Let me know if you get the idea conceptually or if you need some more clarification.
If you have methods on types in a package, say a user package, for example:
package user

type User struct {
    name string
}

func (u *User) GetUserProfile() UserProfile {}
And it is then imported in the catalog package:

package catalog

import "user"

func getUserCatalog(user user.User) []catalog {
    user.GetUserProfile()
}
Now, to test the getUserCatalog method, there are two ways:

1. Use a package-level function variable:

var getUserProfileFunc = user.GetUserProfile

Using this approach, the mock can easily be swapped in at test run time, like:

getUserProfileFunc = func() UserProfile {
    return fakeUserProfile
}

This is the easiest way to test it; a fuller sketch of this approach follows below.
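A minimal, self-contained sketch of that first approach, assuming the catalog package reaches the user package only through a package-level function variable (UserProfile, getUserProfileFunc and getUserCatalog are written inline here so the example compiles on its own):

package catalog

import "testing"

type UserProfile struct{ Name string }

// Assumed for illustration: production code calls the user package only
// through this variable, so tests can swap it out.
var getUserProfileFunc = func() UserProfile {
    // in production this would delegate to user.GetUserProfile()
    return UserProfile{Name: "real"}
}

func getUserCatalog() []string {
    p := getUserProfileFunc()
    return []string{"catalog-for-" + p.Name}
}

func TestGetUserCatalogSwapsFunc(t *testing.T) {
    // swap in a fake for the duration of the test, then restore the original
    original := getUserProfileFunc
    defer func() { getUserProfileFunc = original }()

    getUserProfileFunc = func() UserProfile {
        return UserProfile{Name: "fake"}
    }

    got := getUserCatalog()
    if len(got) != 1 || got[0] != "catalog-for-fake" {
        t.Fatalf("unexpected catalog: %v", got)
    }
}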
Now there is another way, using an interface: in the user package, add an interface like

type UserInterface interface {
    GetUserProfile() UserProfile
}
If the user package is a library you don't control, then create your own interface and type, and use that.
In this case, testing in the catalog package changes: methods are now invoked via the UserInterface type rather than the concrete User type, so while testing you substitute a fake implementation:
UserInterface = fakeUserStruct
and follow the steps below:
// 1. define the type of func to return
type typeGetUserProfile func() UserProfile

// 2. create a var to hold the mocked implementation
var mockedGetUserProfile typeGetUserProfile

// 3. create a fake type
type FakeUser struct{}

// 4. implement the interface method on the fake
func (user *FakeUser) GetUserProfile() UserProfile {
    return mockedGetUserProfile()
}
Now, when running the test:

mockedGetUserProfile = func() UserProfile {
    return fakeUserProfile
}
There are mock libraries that help generate the boilerplate code for mocking; check out testify: https://github.com/stretchr/testify
There are many other mock libraries, but I have used this one and it was really good.
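As a rough, hedged sketch of what the fake above could look like with testify's mock package (UserProfile, UserInterface and getUserCatalog are re-declared inline here so the snippet stands alone; the real names come from the example above):

package catalog

import (
    "testing"

    "github.com/stretchr/testify/mock"
)

type UserProfile struct{ Name string }

type UserInterface interface {
    GetUserProfile() UserProfile
}

// FakeUser is written by hand here; tools such as mockery can generate it
// from the UserInterface definition.
type FakeUser struct {
    mock.Mock
}

func (f *FakeUser) GetUserProfile() UserProfile {
    args := f.Called()
    return args.Get(0).(UserProfile)
}

// getUserCatalog stands in for the real function under test.
func getUserCatalog(u UserInterface) []string {
    p := u.GetUserProfile()
    return []string{"catalog-for-" + p.Name}
}

func TestGetUserCatalogWithTestify(t *testing.T) {
    fake := new(FakeUser)
    fake.On("GetUserProfile").Return(UserProfile{Name: "alice"})

    got := getUserCatalog(fake)
    if len(got) != 1 || got[0] != "catalog-for-alice" {
        t.Fatalf("unexpected catalog: %v", got)
    }

    fake.AssertExpectations(t)
}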
I hope this helps.
If not, please let me know and I'll put some example code on GitHub.
Also please check https://levelup.gitconnected.com/utilizing-the-power-of-interfaces-when-mocking-and-testing-external-apis-in-golang-1178b0db5a32
I have a service that only makes queries (read/write) to InfluxDB.
I want to unit test this, but I'm not sure how to do it. I've read a bunch of tutorials about mocking; a lot of them deal with tools like go-sqlmock, but since I am using InfluxDB I could not use it.
I also found the other tools I tried, like GoMock or testify, to be overcomplicated.
What I'm thinking of doing is creating a repository layer: an interface that declares all the methods I need to run/test, and passing concrete implementations in with dependency injection.
I think it could work, but is it the easiest way to do it?
Having repositories everywhere, even for small services, just so they are testable, seems over-engineered.
I can give you code if needed, but I think my question is more theoretical than practical: what is the easiest way to mock a custom DB for unit testing?
To expand on @Markus W Mahlberg's answer:
If the goal is to verify that the queries are valid and actually execute against influx, there is no shortcut: you have to run them against influx. These are usually considered "integration" tests. I have found that with docker-compose such tests can be just as reliable as unit tests, and fast enough to be integrated into CI. Having the tests run in CI also makes it easy for engineers to run them locally to verify their query changes.
I guess having Repositories everywhere, even for small services, just for them to be testable, seems to be over-engineered.
I have found this to be a pretty polarizing discussion. A test implementation IS a concrete implementation, and it paves the way for reliable, repeatable tests that make it easy to isolate and exercise specific components of your code.
I want to unit test this, but I'm not sure how to do it,
I think this is pretty nuanced; IMO unit testing the queries themselves provides negative value. The value comes from using a repository interface that lets your unit tests explicitly configure the responses you would receive from influx, in order to fully exercise your application code. This provides no feedback on influx itself, which is why the integration tests are essential: they verify that your application can validly configure, connect, and query against influx. That validation otherwise happens implicitly when you deploy your application, at which point the feedback is far more expensive than verifying it locally and in CI with integration tests.
I created a diagram to try and illustrate these differences:
Unit tests with a repository are focused on your application code and provide little feedback/value on anything to do with influx. Integration tests are useful for verifying your client (perhaps extended to your application, depending on where the tests exercise the code, but I prefer to bound them to the client since you already have static feedback from Go on the interfaces and calls). Finally, as @Markus points out, the step from integration tests to e2e tests is pretty small and lets you test your full service.
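For concreteness, a minimal sketch of the repository shape being described, with an in-memory fake used by a unit test; the Point type, the method names, and Service are illustrative and not the influx client API:

package metrics

import "testing"

// Point is an illustrative domain type; the real one would mirror what the
// service writes to and reads from InfluxDB.
type Point struct {
    Name  string
    Value float64
}

// Repository is the seam between the service and InfluxDB. The production
// implementation would wrap the influx client; unit tests use an in-memory fake.
type Repository interface {
    Write(p Point) error
    ReadAll(name string) ([]Point, error)
}

type fakeRepo struct {
    points []Point
}

func (f *fakeRepo) Write(p Point) error {
    f.points = append(f.points, p)
    return nil
}

func (f *fakeRepo) ReadAll(name string) ([]Point, error) {
    var out []Point
    for _, p := range f.points {
        if p.Name == name {
            out = append(out, p)
        }
    }
    return out, nil
}

// Service is the application code under test; it never sees influx directly.
type Service struct {
    repo Repository
}

func (s *Service) Record(name string, v float64) error {
    return s.repo.Write(Point{Name: name, Value: v})
}

func TestServiceRecord(t *testing.T) {
    repo := &fakeRepo{}
    svc := &Service{repo: repo}

    if err := svc.Record("cpu", 0.42); err != nil {
        t.Fatalf("unexpected error: %v", err)
    }
    got, _ := repo.ReadAll("cpu")
    if len(got) != 1 || got[0].Value != 0.42 {
        t.Fatalf("unexpected points: %v", got)
    }
}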
By its very definition, if you test your integration with an external resource, we are talking about integration tests, not unit tests. So we have two problems to solve here.
Unit tests
What you typically do is have a data access layer which accepts interfaces, which in turn are easy to mock, so you can unit test your application logic.
package main

import (
    "errors"
    "fmt"
)

var (
    values   = map[string]string{"foo": "bar", "bar": "baz"}
    Expected = errors.New("Expected error")
)

type Getter interface {
    Get(name string) (string, error)
}

// ErrorGetter implements Getter and always returns an error to test the error handling code of the caller.
// ofc, you could (and prolly should) use some mocking here in order to be able to test various other cases
type ErrorGetter struct{}

func (e ErrorGetter) Get(name string) (string, error) {
    return "", Expected
}

// MapGetter implements Getter and uses a map as its datasource.
// Here you can see that you actually get an advantage: you decouple your logic from the data source,
// making refactoring (and debugging) much easier WTSHTF.
type MapGetter struct {
    data map[string]string
}

func (m MapGetter) Get(name string) (string, error) {
    if v, ok := m.data[name]; ok {
        return v, nil
    }
    return "", fmt.Errorf("No value found for %s", name)
}

type retriever struct {
    g Getter
}

func (r retriever) retrieve(name string) (string, error) {
    return r.g.Get(name)
}

func main() {
    // Assume this is test code. No tests possible on playground ;)
    bad := retriever{g: ErrorGetter{}}
    s, err := bad.retrieve("baz")
    if s != "" || err == nil {
        panic("Something went seriously wrong")
    }

    // Needs to fail as well, as "baz" is not in values
    good := retriever{g: MapGetter{values}}
    s, err = good.retrieve("baz")
    if s != "" || err == nil {
        panic("Something went seriously wrong")
    }

    s, err = good.retrieve("foo")
    if s != "bar" || err != nil {
        panic("Something went seriously wrong")
    }
}
In the example above, I actually had to implement two Getters to cover all test cases, since I could not use a mocking library, but you get the picture.
As for the over-engineering: plain and simple, no, that is not over-engineering. It is what I personally call proper craftsmanship. It will pay off in the long run to get used to it. Maybe not in this project, but in one to come.
Integration tests
Dodgy. What I tend to do is make sure my queries are correct before I commit them ;)
In the rare case I really want to verify my queries in CI, for example, I usually create a Makefile which in turn spins up docker(-compose) to provide the services I want to integrate against, and then runs the tests.
I am having issues while testing my Grails controllers, since they depend on a service that does not seem to be injected. I have tried several approaches (for example, extending classes like GrailsUnitTestCase or Specification), but I keep getting errors. The problem is that the service variable is null, and I can't test my controller's index method (which renders a view) because of the exception...
I really need to know how to do this, but I don't have a clue where to start...
Unit tests are just that: there is no Grails 'environment' surrounding your controller. If the controller makes use of a service which is normally injected, you have to mock that service yourself.
@TestFor(SomeController)
@Mock([SomeService])
class SomeControllerSpec extends Specification {

    def "test some method"() {
        given:
        def mockService = mockFor(SomeService)
        mockService.demand.someServiceMethod() { ->
            return something
        }
        controller.someService = mockService.createMock()

        when:
        controller.someControllerMethod()

        then:
        // whatever checks are appropriate
    }
}
In my Kotlin JUnit tests, I want to start/stop embedded servers and use them within my tests.
I tried using the JUnit @Before annotation on a method in my test class and it works, but it isn't the right behaviour since it runs before every test case instead of just once.
Therefore I want to use the @BeforeClass annotation on a method, but adding it to a method results in an error saying it must be on a static method. Kotlin doesn't appear to have static methods, and the same applies to static variables, since I need to keep a reference to the embedded server around for use in the test cases.
So how do I create this embedded database just once for all of my test cases?
class MyTest {
    @Before fun setup() {
        // works in that it opens the database connection, but is wrong
        // since this is per test case instead of being shared for all
    }

    @BeforeClass fun setupClass() {
        // what I want to do instead, but results in error because
        // this isn't a static method, and the static keyword doesn't exist
    }

    var referenceToServer: ServerType // wrong because it is not static either

    // ...
}
Note: this question is intentionally written and answered by the author (Self-Answered Questions), so that the answers to commonly asked Kotlin topics are present in SO.
Your unit test class usually needs a few things to manage a shared resource for a group of test methods, and in Kotlin you can use @BeforeClass and @AfterClass not in the test class itself, but rather within its companion object, along with the @JvmStatic annotation.
The structure of a test class would look like:
class MyTestClass {
    companion object {
        init {
            // things that may need to be set up before companion class member variables are instantiated
        }

        // variables you initialize for the class just once:
        val someClassVar = initializer()

        // variables you initialize for the class later in the @BeforeClass method:
        lateinit var someClassLateVar: SomeResource

        @BeforeClass @JvmStatic fun setup() {
            // things to execute once and keep around for the class
        }

        @AfterClass @JvmStatic fun teardown() {
            // clean up after this class, leave nothing dirty behind
        }
    }

    // variables you initialize per instance of the test class:
    val someInstanceVar = initializer()

    // variables you initialize per test case later in your @Before methods:
    lateinit var someInstanceLateVar: MyType

    @Before fun prepareTest() {
        // things to do before each test
    }

    @After fun cleanupTest() {
        // things to do after each test
    }

    @Test fun testSomething() {
        // an actual test case
    }

    @Test fun testSomethingElse() {
        // another test case
    }

    // ...more test cases
}
Given the above, you should read about:
companion objects - similar to the Class object in Java, but a singleton per class that is not static
@JvmStatic - an annotation that turns a companion object method into a static method on the outer class for Java interop
lateinit - allows a var property to be initialized later when you have a well defined lifecycle
Delegates.notNull() - can be used instead of lateinit for a property that should be set at least once before being read.
Here are fuller examples of test classes for Kotlin that manage embedded resources.
The first is copied and modified from the Solr-Undertow tests. Before the test cases run, it configures and starts a Solr-Undertow server; after the tests run, it cleans up any temporary files created by the tests. It also ensures environment variables and system properties are correct before the tests are run, and between test cases it unloads any temporarily loaded Solr cores. The test:
class TestServerWithPlugin {
    companion object {
        val workingDir = Paths.get("test-data/solr-standalone").toAbsolutePath()
        val coreWithPluginDir = workingDir.resolve("plugin-test/collection1")

        lateinit var server: Server

        @BeforeClass @JvmStatic fun setup() {
            assertTrue(coreWithPluginDir.exists(), "test core w/plugin does not exist $coreWithPluginDir")

            // make sure no system properties are set that could interfere with test
            resetEnvProxy()
            cleanSysProps()
            routeJbossLoggingToSlf4j()
            cleanFiles()

            val config = mapOf(...)
            val configLoader = ServerConfigFromOverridesAndReference(workingDir, config) verifiedBy { loader ->
                ...
            }

            assertNotNull(System.getProperty("solr.solr.home"))

            server = Server(configLoader)
            val (serverStarted, message) = server.run()
            if (!serverStarted) {
                fail("Server not started: '$message'")
            }
        }

        @AfterClass @JvmStatic fun teardown() {
            server.shutdown()
            cleanFiles()
            resetEnvProxy()
            cleanSysProps()
        }

        private fun cleanSysProps() { ... }

        private fun cleanFiles() {
            // don't leave any test files behind
            coreWithPluginDir.resolve("data").deleteRecursively()
            Files.deleteIfExists(coreWithPluginDir.resolve("core.properties"))
            Files.deleteIfExists(coreWithPluginDir.resolve("core.properties.unloaded"))
        }
    }

    val adminClient: SolrClient = HttpSolrClient("http://localhost:8983/solr/")

    @Before fun prepareTest() {
        // anything before each test?
    }

    @After fun cleanupTest() {
        // make sure test cores do not bleed over between test cases
        unloadCoreIfExists("tempCollection1")
        unloadCoreIfExists("tempCollection2")
        unloadCoreIfExists("tempCollection3")
    }

    private fun unloadCoreIfExists(name: String) { ... }

    @Test
    fun testServerLoadsPlugin() {
        println("Loading core 'withplugin' from dir ${coreWithPluginDir.toString()}")
        val response = CoreAdminRequest.createCore("tempCollection1", coreWithPluginDir.toString(), adminClient)
        assertEquals(0, response.status)
    }

    // ... other test cases
}
And another, starting AWS DynamoDB Local as an embedded database (copied and modified slightly from Running AWS DynamoDB-local embedded). This test must hack the java.library.path before anything else happens, or local DynamoDB (which uses SQLite with binary libraries) won't run. It then starts a server shared by all tests in the class, and cleans up temporary data between tests. The test:
class TestAccountManager {
    companion object {
        init {
            // we need to control the "java.library.path" or sqlite cannot find its libraries
            val dynLibPath = File("./src/test/dynlib/").absoluteFile
            System.setProperty("java.library.path", dynLibPath.toString());

            // TEST HACK: if we kill this value in the System classloader, it will be
            // recreated on next access allowing java.library.path to be reset
            val fieldSysPath = ClassLoader::class.java.getDeclaredField("sys_paths")
            fieldSysPath.setAccessible(true)
            fieldSysPath.set(null, null)

            // ensure logging always goes through Slf4j
            System.setProperty("org.eclipse.jetty.util.log.class", "org.eclipse.jetty.util.log.Slf4jLog")
        }

        private val localDbPort = 19444

        private lateinit var localDb: DynamoDBProxyServer
        private lateinit var dbClient: AmazonDynamoDBClient
        private lateinit var dynamo: DynamoDB

        @BeforeClass @JvmStatic fun setup() {
            // do not use ServerRunner, it is evil and doesn't set the port correctly, also
            // it resets logging to be off.
            localDb = DynamoDBProxyServer(localDbPort, LocalDynamoDBServerHandler(
                    LocalDynamoDBRequestHandler(0, true, null, true, true), null)
            )
            localDb.start()

            // fake credentials are required even though ignored
            val auth = BasicAWSCredentials("fakeKey", "fakeSecret")
            dbClient = AmazonDynamoDBClient(auth) initializedWith {
                signerRegionOverride = "us-east-1"
                setEndpoint("http://localhost:$localDbPort")
            }

            dynamo = DynamoDB(dbClient)

            // create the tables once
            AccountManagerSchema.createTables(dbClient)

            // for debugging reference
            dynamo.listTables().forEach { table ->
                println(table.tableName)
            }
        }

        @AfterClass @JvmStatic fun teardown() {
            dbClient.shutdown()
            localDb.stop()
        }
    }

    val jsonMapper = jacksonObjectMapper()
    val dynamoMapper: DynamoDBMapper = DynamoDBMapper(dbClient)

    @Before fun prepareTest() {
        // insert commonly used test data
        setupStaticBillingData(dbClient)
    }

    @After fun cleanupTest() {
        // delete anything that shouldn't survive any test case
        deleteAllInTable<Account>()
        deleteAllInTable<Organization>()
        deleteAllInTable<Billing>()
    }

    private inline fun <reified T: Any> deleteAllInTable() { ... }

    @Test fun testAccountJsonRoundTrip() {
        val acct = Account("123", ...)
        dynamoMapper.save(acct)

        val item = dynamo.getTable("Accounts").getItem("id", "123")
        val acctReadJson = jsonMapper.readValue<Account>(item.toJSON())
        assertEquals(acct, acctReadJson)
    }

    // ...more test cases
}
NOTE: some parts of the examples are abbreviated with ...
Managing resources with before/after callbacks in tests obviously has its pros:
Tests are "atomic". A test executes as a whole, with all its callbacks; nobody will forget to fire up a dependency service before the tests and shut it down afterwards. If done properly, callback-based setup will work in any environment.
Tests are self-contained. There is no external data or setup phase; everything is contained within a few test classes.
It has some cons too. An important one is that it pollutes the code and makes it violate the single responsibility principle: tests no longer only test something, they also perform heavyweight initialization and resource management. That can be OK in some cases (like configuring an ObjectMapper), but modifying java.library.path or spawning other processes (or in-process embedded databases) is not so innocent.
Why not treat those services as dependencies of your test, eligible for "injection", as described by 12factor.net?
This way you start and initialize dependency services somewhere outside of the test code.
Nowadays virtualization and containers are almost everywhere, and most developers' machines can run Docker. And most applications have a dockerized version: Elasticsearch, DynamoDB, PostgreSQL and so on. Docker is a perfect solution for the external services that your tests need.
It can be a script that is run manually by a developer every time she wants to execute the tests.
It can be a task run by the build tool (e.g. Gradle has the awesome dependsOn and finalizedBy DSL for defining dependencies). A task, of course, can execute the same script that the developer runs manually, using shell-outs / process execs.
It can be a task run by the IDE before test execution. Again, it can use the same script.
Most CI / CD providers have a notion of a "service": an external dependency (process) that runs in parallel with your build and can be accessed via its usual SDK / connector / API: GitLab, Travis, Bitbucket, AppVeyor, Semaphore, ...
This approach:
Frees your test code from initialization logic. Your tests will only test and do nothing more.
Decouples code and data. Adding a new test case can now be done by adding new data to the dependency services with their native toolsets, i.e. for SQL databases you use SQL, for Amazon DynamoDB you use the CLI to create tables and put items.
Is closer to production code, where you obviously do not start those services when your "main" application starts.
Of course, it has its flaws (basically, the statements I started from):
Tests are no longer "atomic". The dependency service must be started somehow prior to test execution, and the way it is started may differ between environments: a developer's machine or CI, the IDE or the build tool's CLI.
Tests are not self-contained. Your seed data may now even be packed inside an image, so changing it may require rebuilding a different project.
This is a tough one, because not too many people use Pex & Moles, or so I think (even though Pex is a really great product - much better than any other unit testing tool).
I have a Data project that has a very simple model with just one entity (DBItem). I've also written a DBRepository within this project that manipulates this EF model. The repository has a method called GetItems() that returns a list of business layer items (BLItem) and looks similar to this (simplified example):
public IList<BLItem> GetItems()
{
    using (var ctx = new EFContext("name=MyWebConfigConnectionName"))
    {
        DateTime limit = DateTime.Today.AddDays(-10);
        List<DBItem> result = ctx.Items.Where(i => i.Changed > limit).ToList();
        return result.ConvertAll(i => i.ToBusinessObject());
    }
}
So now I'd like to create some unit tests for this particular method. I'm using Pex & Moles. I created my moles and stubs for my EF object context.
I would like to write a parameterised unit test (I know I've written my production code first, but I had to, since I'm testing Pex & Moles) that verifies that this method returns a valid list of items.
This is my test class:
[PexClass]
public class RepoTest
{
    [PexMethod]
    public void GetItemsTest(ObjectSet<DBItem> items)
    {
        MEFContext.ConstructorString = (@this, name) =>
        {
            var mole = new SEFContext();
        };

        DBRepository repo = new DBRepository();
        IList<BLItem> result = repo.GetItems();
        IList<DBItem> manual = items.Where(i => i.Changed > DateTime.Today.AddDays(-10)).ToList();

        if (result.Count != manual.Count)
        {
            throw new Exception();
        }
    }
}
Then I run Pex explorations for this particular parameterised unit test, but I get a "path bounds exceeded" error. Pex starts this test by providing null to the test method (so items = null). This is the code that Pex is running:
[Test]
[PexGeneratedBy(typeof(RepoTest))]
[Ignore("the test state was: path bounds exceeded")]
public void DBRepository_GetTasks22301()
{
    this.GetItemsTest((ObjectSet<DBItem>)null);
}
This is the additional comment provided by Pex:
The test case ran too long for these inputs, and Pex stopped the analysis. Please notice: The method Oblivious.Data.Test.Repositories.TaskRepositoryTest.b__0 was called 50 times; please check that the code is not stuck in an infinite loop or recursion. Otherwise, click on 'Set MaxStack=200', and run Pex again.
Update attribute [PexMethod(MaxStack = 200)]
Question
Am I doing this the correct way or not? Should I use an EFContext stub instead? Do I have to add additional attributes to the test method so the Moles host will be running (I'm not sure it is now)? I'm running just Pex & Moles, no VS tests or NUnit or anything else.
I guess I should probably set some limit on how many items Pex should provide for this particular test method.
Moles is not designed to test the parts of your application that have external dependencies (e.g. file access, network access, database access, etc.). Instead, Moles allows you to mock these parts of your app so that you can do true unit testing on the parts that don't have external dependencies.
So I think you should just mock your EF objects and queries, e.g. by creating in-memory lists and having query methods return fake data from those lists based on whatever criteria is relevant.
I am just getting to grips with Pex also... my issues surrounded me wanting to use it with Moq ;)
Anyway...
I have some methods similar to yours that have the same problem. When I increased the max, they went away; presumably Pex was satisfied that it had sufficiently explored the branches. I have methods where I have had to increase the timeout on the code contract validation as well.
One thing that you should probably be doing, though, is passing in all the dependent objects as parameters, i.e. don't instantiate the repo in the method but pass it in.
A general problem you have is that you are instantiating big objects in your method. I do the same in my DAL classes, but then I am not trying to unit test them in isolation. I build up datasets and use those to test my data access code against.
I use Pex on my business logic and objects.
If I were to try and test my DAL code, I'd have to use IoC to pass the data context into the methods, which would then make testing possible, as you can mock the data context.
You should use the Entity Framework Repository pattern: http://www.codeproject.com/KB/database/ImplRepositoryPatternEF.aspx