I'm trying to mock a method written with sqlboiler, but I'm having massive trouble building the mock query.
The model I'm trying to mock looks like this:
type Course struct {
    ID          int
    Name        string
    Description null.String
    EnrollKey   string
    ForumID     int
    CreatedAt   null.Time
    UpdatedAt   null.Time
    DeletedAt   null.Time
    R           *courseR
    L           courseL
}
For simplicity, I want to test the GetCourse method
func (p *PublicController) GetCourse(id int) (*models.Course, error) {
c, err := models.FindCourse(context.Background(), p.Database, id)
if err != nil {
return nil, err
}
return c, nil
}
with this test
func TestGetCourse(t *testing.T) {
db, mock, err := sqlmock.New()
if err != nil {
t.Fatalf("an error '%s' was not expected", err)
}
oldDB := boil.GetDB()
defer func() {
db.Close()
boil.SetDB(oldDB)
}()
boil.SetDB(db)
ctrl := &PublicController{db}
rows := sqlmock.NewRows([]string{"ID", "Name", "Description", "EnrollKey", "ForumID"}).AddRow(42, "Testkurs", "12345", 33)
query := regexp.QuoteMeta("SELECT ID, Name, Description, EnrollKey, ForumID FROM courses WHERE ID = ?")
//mockQuery := regexp.QuoteMeta("SELECT * FROM `courses` WHERE (`course AND (`courses`.deleted_at is null) LIMIT 1;")
mock.ExpectQuery(query).WithArgs(42).WillReturnRows(rows)
course, err := ctrl.GetCourse(42)
assert.NotNil(t, course)
assert.NoError(t, err)
}
But running this test only returns
Query: could not match actual sql: "select * from course where id=? and deleted_at is null" with expected regexp "SELECT ID, Name, Description, EnrollKey, ForumID FROM courses WHERE ID = ?"
bind failed to execute query
And I can't really find out how to construct it correctly.
How do I correctly mock the sqlboiler-query for running unit tests?
UPDATE
I managed to solve this by using different parameters in AddRow()
.AddRow(c.ID, c.Name, null.String{}, c.EnrollKey, c.ForumID)
and building the query differently
query := regexp.QuoteMeta("select * from `course` where `id`=? and `deleted_at` is null")
Now my issue is that, in contrast to this method, the others are far more complex and involve a large number of complex queries (mainly insert operations). From my understanding, a sqlboiler test needs to mimic every single interaction made with the database.
How do I extract the necessary queries for this large number of database interactions? I solved my problem by simply using the "actual SQL query" from the error message instead of the one I had written, but I'm afraid this procedure is the opposite of efficient testing.
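One way to capture the queries sqlboiler actually generates, rather than reconstructing them one failed expectation at a time, is to enable sqlboiler's debug mode while exercising the code path once. A minimal sketch (the import path is for sqlboiler v4; boil.DebugMode is a package-level switch, so don't leave it enabled in parallel tests):

import (
    "os"

    "github.com/volatiletech/sqlboiler/v4/boil"
)

func init() {
    boil.DebugMode = true        // print every query sqlboiler generates
    boil.DebugWriter = os.Stderr // defaults to os.Stdout
}

Each printed query can then be wrapped in regexp.QuoteMeta and handed to mock.ExpectQuery or mock.ExpectExec, instead of guessing at sqlboiler's exact formatting.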
I'm having a hard time getting the current Cognito user attributes from within my lambda function, which is written in Go. I'm currently doing:
userAttributes = request.RequestContext.Authorizer["claims"]
And if I want to get the email:
userEmail = request.RequestContext.Authorizer["claims"].(map[string]interface{})["email"].(string)
I don't think this is a good way, or even an acceptable way - there must be a better way to do it.
You can use a third-party library to convert a map[string]interface{} to a concrete type. Check out the mitchellh/mapstructure library; it will help you implement this in a better way.
So you could improve your code like this:
import "github.com/mitchellh/mapstructure"
type Claims struct {
Email string
// other fields
ID int
}
func claims(r request.Request) (Claims, error) {
input := r.RequestContext.Authorizer["claims"]
output := Claims{}
err := mapstructure.Decode(input, &output)
if err != nil {
return Claims{}, err // note: a struct return value can't be nil
}
return output, nil
}
And somewhere in your handlers, you could get your claims by calling this method
func someWhere(){
userClaims, err := claims(request)
if err != nil {
// handle
}
// you can now use : userClaims.Email, userClaims.ID
}
Don't forget to change the type of the claims function's request parameter (r) to match your own request type.
There's a method in the code under test that simply tries to get a database connection, or returns an error if it's unable to.
It, and the structs involved, are defined as follows:
type DatabaseContext struct {
Context
Database DatabaseSt
}
// //GetInfo Returns the context.
// func (c *DatabaseContext) GetInfo() *Context {
// return &c.Context
// }
//GetDB Gets the database connection from the connection string.
func (c *DatabaseContext) GetDB() (*sql.DB, *errors.ErrorSt) {
var errSt *errors.ErrorSt
if c.Database.dbConnection == nil {
c.Database.dbConnection, errSt = c.openDB()
if errSt != nil {
return nil, errSt
}
c.Database.dbConnection.SetMaxOpenConns(50)
}
return c.Database.dbConnection, nil
}
The other methods in the same file that it may hit are as follows:
//openDB opens the database with the connection string.
func (c *DatabaseContext) openDB() (*sql.DB, *errors.ErrorSt) {
if c.Database.DBConnectionStr == "" {
c.GetDatabase()
}
return db.OpenConnection(c.Database.DBConnectionStr, c.Database.InterpolateParams)
}
//CloseDB Closes the database.
func (c *DatabaseContext) CloseDB() {
if c.Database.dbConnection != nil {
c.Database.dbConnection.Close()
}
}
//SetDatabaseString Sets the database string into the session.
func (c *DatabaseContext) SetDatabaseString(str string) {
c.Database.DBConnectionStr = str
i := strings.Index(str, ")/") + 2
c.Database.DBName = str[i:]
c.SetDatabase()
}
//GetDatabase Gets the database from the session.
func (c *DatabaseContext) GetDatabase() {
if dbIntf := c.GetFromSession("Database"); dbIntf != nil {
c.Database = dbIntf.(DatabaseSt)
}
}
//SetDatabase Sets the database into the session.
func (c *DatabaseContext) SetDatabase() {
c.SetToSession("Database", c.Database)
}
Fortunately, DatabaseContext implements DatabaseContextIntf, which I want to use for testing. My instinct is to mock DatabaseContext outright, but that won't work because it's not an interface (in Go, you can only mock interfaces).
How would I go about testing this, without hitting a real database, which can fail beyond my control (thus creating false fails in the test)?
UPDATE: My question differs from the suspected duplicate in that their question is about database entries, not connections. The flagged duplicate points to this library as the answer; however, it has no method that returns a nil "connection" for the sake of the test. The best it has is New, which creates a test-double connection, and there is no way to control the state of the returned value (I need it to be nil in one test ("No Connection") and non-nil in another ("Sanity Test")).
I ended up making the package of the test the same as that of the code under test (this lets the test generator in Visual Studio Code place the generated test right in the test file without getting confused, and gives me access to unexported fields, which I used), and just straight up made a fake DatabaseContext.
My test case looks like this:
t.Run("SanityTest", func(t *testing.T) {
c := new(DatabaseContext)
assert.Nil(t, c.Database.dbConnection)
database, err := c.GetDB()
assert.NotNil(t, database)
if database != nil {
defer database.Close() // guard the deferred Close so a nil result can't panic
}
if !assert.Nil(t, err) {
t.Error(err.ToString(false))
}
})
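Since DatabaseContext already satisfies DatabaseContextIntf, an alternative for testing code that consumes the interface is a hand-rolled fake whose behavior each test dictates directly. A minimal sketch (the interface's exact method set is an assumption here):

// FakeDatabaseContext is a test double standing in for DatabaseContext.
// Each test sets DB and Err to control what GetDB returns.
type FakeDatabaseContext struct {
    DB  *sql.DB
    Err *errors.ErrorSt
}

func (f *FakeDatabaseContext) GetDB() (*sql.DB, *errors.ErrorSt) {
    return f.DB, f.Err
}

A "No Connection" test would set Err and leave DB nil; a "Sanity Test" would do the opposite.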
I am trying to learn how to write tests for my code in order to write better code, but I just seem to have the hardest time figuring out how to actually test some code I have written. I have read so many tutorials, most of which seem to only cover functions that add two numbers or mock some database or server.
I have a simple function I wrote below that takes a text template and a CSV file as input and executes the template using the values of the CSV. I have "tested" the code by trial and error, passing files, and printing values, but I would like to learn how to write proper tests for it. I feel that learning to test my own code will help me understand and learn faster and better. Any help is appreciated.
// generateCmds generates configuration commands from a text template using
// the values from a CSV file. Multiple commands in the text template must
// be delimited by a semicolon. The first row of the CSV file is assumed to
// be the header row and the header values are used for key access in the
// text template.
func generateCmds(cmdTmpl string, filename string) ([]string, error) {
t, err := template.New("cmds").Parse(cmdTmpl)
if err != nil {
return nil, fmt.Errorf("parsing template: %v", err)
}
f, err := os.Open(filename)
if err != nil {
return nil, fmt.Errorf("reading file: %v", err)
}
defer f.Close()
records, err := csv.NewReader(f).ReadAll()
if err != nil {
return nil, fmt.Errorf("reading records: %v", err)
}
if len(records) == 0 {
return nil, errors.New("no records to process")
}
var (
b bytes.Buffer
cmds []string
keys = records[0]
vals = make(map[string]string, len(keys))
)
for _, rec := range records[1:] {
for k, v := range rec {
vals[keys[k]] = v
}
if err := t.Execute(&b, vals); err != nil {
return nil, fmt.Errorf("executing template: %v", err)
}
for _, s := range strings.Split(b.String(), ";") {
if cmd := strings.TrimSpace(s); cmd != "" {
cmds = append(cmds, cmd)
}
}
b.Reset()
}
return cmds, nil
}
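To make the expected output concrete before writing tests for it, here is a minimal worked example (the file name, CSV contents, and template are all hypothetical):

// Given testdata/switches.csv with the contents:
//
//    hostname,vlan
//    sw1,10
//    sw2,20
//
// each data row renders the template once and the output is split on ";",
// so this call returns:
//
//    []string{"hostname sw1", "vlan 10", "hostname sw2", "vlan 20"}
cmds, err := generateCmds("hostname {{.hostname}}; vlan {{.vlan}}", "testdata/switches.csv")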
Edit: Thanks for all the suggestions so far! My question was flagged as being too broad, so I have some specific questions regarding my example.
Would a test table be useful in a function like this? And, if so, would the test struct need to include the returned cmds string slice and the value of err? For example:
type tmplTest struct {
name string // test name
tmpl string // the text template
filename string // CSV file with template values
expected []string // expected configuration commands
err error // expected error
}
How do you handle errors that are supposed to be returned for specific test cases? For example, os.Open() returns an error of type *PathError if an error is encountered. How do I initialize a *PathError that is equivalent to the one returned by os.Open()? Same idea for template.Parse(), template.Execute(), etc.
Edit 2: Below is a test function I came up with. My two questions from the first edit still stand.
package cmd
import (
"path/filepath"
"strings"
"testing"
)
type tmplTest struct {
name string // test name
tmpl string // text template to execute
filename string // CSV containing template text values
cmds []string // expected configuration commands
}
var tests = []tmplTest{
{"empty_error", ``, "", nil},
{"file_error", ``, "fake_file.csv", nil},
{"file_empty_error", ``, "empty.csv", nil},
{"file_fmt_error", ``, "fmt_err.csv", nil},
{"template_fmt_error", `{{ }{{`, "test_values.csv", nil},
{"template_key_error", `{{.InvalidKey}}`, "test_values.csv", nil},
}
func TestGenerateCmds(t *testing.T) {
for _, tc := range tests {
t.Run(tc.name, func(t *testing.T) {
cmds, err := generateCmds(tc.tmpl, filepath.Join("testdata", tc.filename))
if err != nil {
// Unexpected error. Fail the test.
if !strings.Contains(tc.name, "error") {
t.Fatal(err)
}
// TODO: Otherwise, check that the function failed at the expected point.
}
if tc.cmds == nil && cmds != nil {
t.Errorf("expected no commands; got %d", len(cmds))
}
if len(cmds) != len(tc.cmds) {
// Stop here: the comparison loop below would index out of range.
t.Fatalf("expected %d commands; got %d", len(tc.cmds), len(cmds))
}
for i := range cmds {
if cmds[i] != tc.cmds[i] {
t.Errorf("expected %q; got %q", tc.cmds[i], cmds[i])
}
}
})
}
}
You basically need some sample files with the contents you want to test; then, in your test code, you can call the generateCmds function, passing in the template string and the files, and verify that the results are what you expect.
It is not so different from the examples you probably saw for simpler cases.
You can place the files under a testdata folder inside the same package (testdata is a special name that the Go tools ignore during build).
Then you can do something like:
func TestCSVProcessing(t *testing.T) {
templateStr := `<your template here>`
testFile := "testdata/yourtestfile.csv"
result, err := generateCmds(templateStr, testFile)
if err != nil {
// fail the test here, unless you expected an error with this file
}
// compare the "result" contents with what you expected
// failing the test if it does not match
}
EDIT
About the specific questions you added later:
Would a test table be useful in a function like this? And, if so, would the test struct need to include the returned cmds string slice and the value of err?
Yes, it'd make sense to include both the expected strings to be returned as well as the expected error (if any).
How do you handle errors that are supposed to be returned for specific test cases? For example, os.Open() returns an error of type *PathError if an error is encountered. How do I initialize a *PathError that is equivalent to the one returned by os.Open()?
I don't think you'll be able to "initialize" an equivalent error for each case. Sometimes libraries use internal types for their errors, which makes this impossible. The easiest approach is to "initialize" a regular error carrying the same value that Error() returns, and then compare the returned error's Error() value with the expected one.
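One caveat to add: because generateCmds wraps failures with fmt.Errorf("...: %v", err), the concrete error type is flattened into a string, so comparing Error() values really is the only option there. If the wrapping used %w instead (Go 1.13+), a test could recover the original type. A sketch:

// In generateCmds, wrap with %w to preserve the error chain:
//    return nil, fmt.Errorf("reading file: %w", err)

// The test can then assert on the concrete type instead of the message:
var pathErr *os.PathError
if !errors.As(err, &pathErr) {
    t.Errorf("expected *os.PathError, got %v", err)
}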
I'm currently looking into creating some unit tests for my service in Go, as well as for other functions that build on top of that functionality, and I'm wondering what the best way is to unit test that in Go. My code looks like this:
type BBPeripheral struct {
client *http.Client
endpoint string
}
type BBQuery struct {
Name string `json:"name"`
}
type BBResponse struct {
Brand string `json:"brand"`
Model string `json:"model"`
...
}
type Peripheral struct {
Brand string
Model string
...
}
type Service interface {
Get(name string) (*Peripheral, error)
}
func NewBBPeripheral(config *peripheralConfig) (*BBPeripheral, error) {
transport, err := setTransport(config)
if err != nil {
return nil, err
}
BB := &BBPeripheral{
client: &http.Client{Transport: transport},
endpoint: config.Endpoint[0],
}
return BB, nil
}
func (this *BBPeripheral) Get(name string) (*Peripheral, error) {
data, err := json.Marshal(BBQuery{Name: name})
if err != nil {
return nil, fmt.Errorf("BBPeripheral.Get Marshal: %s", err)
}
resp, err := this.client.Post(this.endpoint, "application/json", bytes.NewBuffer(data))
if resp != nil {
defer resp.Body.Close()
}
if err != nil {
return nil, err
}
if resp.StatusCode != http.StatusOK {
return nil, fmt.Errorf("unexpected status code: %d", resp.StatusCode)
}
var BBResponse BBResponse
body, err := ioutil.ReadAll(resp.Body)
if err != nil {
return nil, err
}
err = json.Unmarshal(body, &BBResponse)
if err != nil {
return nil, err
}
peripheral := &Peripheral{}
peripheral.Model = BBResponse.Model
if peripheral.Model == "" {
peripheral.Model = NA
}
peripheral.Brand = BBResponse.Brand
if peripheral.Brand == "" {
peripheral.Brand = NA
}
return peripheral, nil
}
Is the most efficient way of testing this code, and the code that uses these functions, to spin up a separate goroutine to act as the server, to use the httptest package, or something else? This is the first time I've tried to write a test, so I don't really know how.
It really depends. Go provides pretty much all the tools you need to test your application at every single level.
Unit Tests
Design is important, because there aren't many tricks for dynamically providing mock/stub objects. You can override variables for tests, but that opens up all sorts of cleanup problems. I would focus on IO-free unit tests to check that your specific logic works.
For example, you could test the BBPeripheral.Get method by making client an interface, requiring it during instantiation, and providing a stub implementation for the test.
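The stub could look like this (a sketch; the interface name and canned JSON are assumptions, and BBPeripheral's client field would change from *http.Client to this interface):

// httpClient covers the single method Get uses, so tests can
// substitute a stub for the real *http.Client.
type httpClient interface {
    Post(url, contentType string, body io.Reader) (*http.Response, error)
}

// StubSuccessClient returns a canned 200 response without any network IO.
type StubSuccessClient struct{}

func (s *StubSuccessClient) Post(url, contentType string, body io.Reader) (*http.Response, error) {
    return &http.Response{
        StatusCode: http.StatusOK,
        Body:       ioutil.NopCloser(strings.NewReader(`{"brand":"b","model":"m"}`)),
    }, nil
}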
func Test_BBPeripheral_Get_Success(t *testing.T) {
bb := BBPeripheral{client: &StubSuccessClient{}, ...}
p, err := bb.Get(...)
if err != nil {
t.Fail()
}
}
Then you could create a stub error client that exercises error handling in the Get method:
func Test_BBPeripheral_Get_Error(t *testing.T) {
bb := BBPeripheral{client: &StubErrClient{}, ...}
_, err := bb.Get(...)
if err == nil {
t.Fail()
}
}
Component/Integration Tests
These tests can help verify that the individual units in your package work together in unison. Since your code talks over HTTP, Go provides the httptest package, which can be used here.
To do this, the test could create an httptest server with a handler registered to provide the response that this.endpoint expects. You could then exercise your code through its public interface by calling NewBBPeripheral, passing in an endpoint that corresponds to the test server's URL property.
This allows you to simulate your code talking to a real server.
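A sketch of that setup (the JSON body is an assumption, as is the idea that setTransport accepts a zero-value config):

func Test_BBPeripheral_Get_Integration(t *testing.T) {
    // A fake server standing in for the real backend.
    srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        fmt.Fprint(w, `{"brand":"Acme","model":"X1"}`)
    }))
    defer srv.Close()

    bb, err := NewBBPeripheral(&peripheralConfig{Endpoint: []string{srv.URL}})
    if err != nil {
        t.Fatal(err)
    }
    p, err := bb.Get("mouse")
    if err != nil || p.Brand != "Acme" {
        t.Fatalf("unexpected result: %+v, %v", p, err)
    }
}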
Goroutine Tests
Go makes it easy to write concurrent code, and just as easy to test it. Testing top-level code that spawns a goroutine and exercises NewBBPeripheral could look very much like the test above. In addition to starting up a test server, your test will have to wait for the asynchronous code to complete; if you don't have a service-wide way to cancel/shut down/signal completion, you may need one in order to test code that uses goroutines, as sketched below.
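For instance, a done channel plus a timeout keeps such a test from hanging forever (a minimal sketch; the goroutine body is a placeholder):

func Test_Async_Get(t *testing.T) {
    done := make(chan struct{})
    go func() {
        defer close(done)
        // ... exercise the code that calls Get asynchronously ...
    }()
    select {
    case <-done:
        // finished in time
    case <-time.After(5 * time.Second):
        t.Fatal("timed out waiting for the goroutine to finish")
    }
}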
Race Condition/Load Testing
Using Go's built-in benchmark tests combined with the -race flag (for example, go test -race -bench .), you can easily exercise your code and profile it for race conditions, leveraging the tests you wrote above.
One thing to keep in mind: if the implementation of your application is still in flux, writing unit tests may cost a large amount of time. Creating a couple of tests that exercise the public interface of your code should let you easily verify that your application works, while allowing the implementation to change.
I recently reimplemented my project in Go. The project was originally implemented in C++. When I finished the code and ran a performance test, I was shocked by the result. Querying the database from C++, I can get all 130 million rows in 5 minutes, but with Go it takes almost 45 minutes. Yet when I separate the code from the project and build the snippet on its own, it finishes in 2 minutes. Why is there such a huge difference in performance?
My code snippet:
https://gist.github.com/pyanfield/2651d23311901b33c5723b7de2364148
package main
import (
"database/sql"
"fmt"
"runtime"
"strconv"
"time"
_ "github.com/go-sql-driver/mysql"
)
func main() {
runtime.GOMAXPROCS(runtime.NumCPU())
// defer profile.Start(profile.CPUProfile, profile.ProfilePath(".")).Stop()
dbRead, err := connectDB("test:test#tcp(127.0.0.1:3306)/test_oltp?charset=utf8&readTimeout=600s&writeTimeout=600s")
if err != nil {
fmt.Printf("Error happend when connecting to DB. %s\n", err.Error())
return
}
defer dbRead.Close()
dbRead.SetMaxIdleConns(0)
dbRead.SetMaxOpenConns(100)
query := fmt.Sprintf("WHERE company_id in (11,22,33,44,55,66,77,88,99,00,111,222,333,4444,555,666,777,888,999)")
relations := getRelations(dbRead, query)
}
func connectDB(addr string) (*sql.DB, error) {
db, err := sql.Open("mysql", addr)
if err != nil {
return nil, err
}
if err = db.Ping(); err != nil {
return nil, err
}
return db, nil
}
type Relation struct {
childId int64
parentId int64
}
func getRelations(db *sql.DB, where string) []Relation {
begin := time.Now()
var err error
var rows *sql.Rows
query := fmt.Sprintf("SELECT `child_id`, `parent_id` FROM `test_relations` %s", where)
rows, err = db.Query(query)
if err != nil {
fmt.Println("query error:", err.Error())
return nil
}
defer rows.Close()
columns, err := rows.Columns()
if err != nil {
fmt.Println("columns error:", err.Error())
return nil
}
buffer := make([]sql.RawBytes, len(columns))
scanArgs := make([]interface{}, len(buffer))
for i := range scanArgs {
scanArgs[i] = &buffer[i]
}
relations := []Relation{}
relation := Relation{}
for rows.Next() {
if err = rows.Scan(scanArgs...); err != nil {
fmt.Println("scan:", err.Error())
return nil
}
relation.parentId, _ = strconv.ParseInt(string(buffer[1]), 10, 64)
relation.childId, _ = strconv.ParseInt(string(buffer[0]), 10, 64)
relations = append(relations, relation)
}
if err = rows.Err(); err != nil {
fmt.Println("next error:", err.Error())
return nil
}
fmt.Printf(">>> getRelations cost: %s\n", time.Since(begin).String())
// output :>>> getRelations cost:1m45.791047s
return relations
// len(relations): 131123541
}
Update:
My Go version is 1.6. The CPU profiles I got are below:
The code snippet's profile (top 20):
75.67s of 96.82s total (78.16%)
Dropped 109 nodes (cum <= 0.48s)
Showing top 20 nodes out of 82 (cum >= 12.04s)
flat flat% sum% cum cum%
11.85s 12.24% 12.24% 11.85s 12.24% runtime.memmove
10.28s 10.62% 22.86% 20.01s 20.67% runtime.mallocgc
5.82s 6.01% 28.87% 5.82s 6.01% strconv.ParseUint
5.79s 5.98% 34.85% 5.79s 5.98% runtime.futex
3.42s 3.53% 38.38% 10.28s 10.62% github.com/go-sql-driver/mysql.(*buffer).readNext
3.42s 3.53% 41.91% 6.38s 6.59% runtime.scang
3.37s 3.48% 45.39% 36.97s 38.18% github.com/go-sql-driver/mysql.(*textRows).readRow
3.37s 3.48% 48.87% 3.37s 3.48% runtime.memclr
3.20s 3.31% 52.18% 3.20s 3.31% runtime.heapBitsSetType
3.02s 3.12% 55.30% 7.36s 7.60% database/sql.convertAssign
2.96s 3.06% 58.36% 3.02s 3.12% runtime.(*mspan).sweep.func1
2.53s 2.61% 60.97% 2.53s 2.61% runtime._ExternalCode
2.39s 2.47% 63.44% 2.96s 3.06% runtime.readgstatus
2.24s 2.31% 65.75% 8.06s 8.32% strconv.ParseInt
2.21s 2.28% 68.03% 5.24s 5.41% runtime.heapBitsSweepSpan
2.15s 2.22% 70.25% 7.68s 7.93% runtime.rawstring
2.06s 2.13% 72.38% 3.18s 3.28% github.com/go-sql-driver/mysql.readLengthEncodedString
1.95s 2.01% 74.40% 12.23s 12.63% github.com/go-sql-driver/mysql.(*mysqlConn).readPacket
1.83s 1.89% 76.29% 79.42s 82.03% main.Relations
1.81s 1.87% 78.16% 12.04s 12.44% runtime.slicebytetostring
The project's CPU profile (top 20):
(pprof) top20
38.71mins of 42.82mins total (90.40%)
Dropped 334 nodes (cum <= 0.21mins)
Showing top 20 nodes out of 76 (cum >= 1.35mins)
flat flat% sum% cum cum%
12.02mins 28.07% 28.07% 12.48mins 29.15% runtime.addspecial
5.95mins 13.89% 41.96% 15.08mins 35.21% runtime.pcvalue
5.26mins 12.29% 54.25% 5.26mins 12.29% runtime.readvarint
2.60mins 6.08% 60.32% 7.87mins 18.37% runtime.step
1.98mins 4.62% 64.94% 19.45mins 45.43% runtime.gentraceback
1.65mins 3.86% 68.80% 1.65mins 3.86% runtime/internal/atomic.Xchg
1.57mins 3.66% 72.46% 2.93mins 6.84% runtime.(*mspan).sweep
1.52mins 3.54% 76.01% 1.78mins 4.15% runtime.findfunc
1.41mins 3.30% 79.31% 1.42mins 3.31% runtime.markrootSpans
1.13mins 2.64% 81.95% 1.13mins 2.64% runtime.(*fixalloc).alloc
0.64mins 1.50% 83.45% 0.64mins 1.50% runtime.duffcopy
0.46mins 1.08% 84.53% 0.46mins 1.08% runtime.findmoduledatap
0.44mins 1.02% 85.55% 0.44mins 1.02% runtime.fastrand1
0.42mins 0.97% 86.52% 15.49mins 36.18% runtime.funcspdelta
0.38mins 0.89% 87.41% 36.02mins 84.13% runtime.mallocgc
0.30mins 0.7% 88.12% 0.78mins 1.83% runtime.scanobject
0.26mins 0.6% 88.72% 0.32mins 0.74% runtime.stkbucket
0.26mins 0.6% 89.32% 0.26mins 0.6% runtime.memmove
0.23mins 0.55% 89.86% 0.23mins 0.55% runtime.heapBitsForObject
0.23mins 0.53% 90.40% 1.35mins 3.15% runtime.lock
I got my answer and want to share it. This was caused by my own mistake. Some time ago, I tried to add a memory profile and set runtime.MemProfileRate = 1 in my init method, but I forgot to reset it to a reasonable value, and I overlooked this method every time I checked my code. After removing this setting from my project, performance returned to normal: it now takes about 5~6 minutes to query these 130M rows, which is pretty close to the C++ version. My advice: be careful when you set runtime.MemProfileRate = 1, make sure you really want to do that, and remember to reset it afterwards.
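For reference, the culprit looked something like this (reconstructed from the description above):

func init() {
    // MemProfileRate = 1 records every single allocation; the default
    // samples about once per 512 KB allocated, so this multiplies the
    // runtime's bookkeeping work enormously.
    runtime.MemProfileRate = 1
}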
Go is likely running the DB query processing with more parallelism for the snippet alone. Your complete application is almost certainly using some of those cores for other things.
The loop where you process all 130M rows seems the likely culprit.
Try setting the max procs to 1 in the snippet if you want to test this theory.
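For example (in code, or via the environment without touching the code):

runtime.GOMAXPROCS(1) // pin the snippet to a single core

// equivalently, from the shell:
//    GOMAXPROCS=1 go run main.go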