How to mock a method to make a test pass in Go (unit testing)

I'm writing unit tests for an API in Go.
It seems I need to use mocking, but I don't understand how to write the code to make the test pass.
article
├ client
├ api
│ ├ main.go
│ ├ contoroller
│ │ ├ contoroller.go
│ │ └ contoroller_test.go
│ ├ service
│ │ ├ service.go
│ │ └ service_test.go
│ ├ dao
│ │ ├ dao.go
│ │ └ dao_test.go
│ ├ s3
│ │ ├ s3.go
│ │ └ s3_test.go
│ ├ go.mod
│ ├ go.sum
│ └ Dockerfile
├ nginx
└ docker-compose.yml
Now I'm trying to write dao_test.go, but the test fails because dao.go calls a method from the s3 package.
dao_test.go
package dao

// import

type DaoSuite struct {
    suite.Suite
    db   *sql.DB
    mock sqlmock.Sqlmock
    dao  *Dao
    s3   *s3.S3
}

func (s *DaoSuite) SetupTest() {
    var err error
    s.db, s.mock, err = sqlmock.New()
    s.Require().NoError(err)
    s.dao = NewDao(s.db, s.s3)
}

func (s *DaoSuite) TestDeleteArticleDao() {
    // some method
    // here the test fails because DeleteArticleDao calls a method from another package
    s.dao.DeleteArticleDao("1")
}

func (s *DaoSuite) TearDownTest() {
    s.db.Close()
    s.Assert().NoError(s.mock.ExpectationsWereMet())
}
dao.go
package dao

// import

type Dao struct {
    database *sql.DB
    s3       *s3.S3
}

func NewDao(database *sql.DB, s3 *s3.S3) *Dao {
    objs := &Dao{database: database, s3: s3}
    return objs
}

func (d *Dao) DeleteArticleDao(id string) {
    // generate imageName
    // here calls method in package s3
    // here test fails
    d.s3.DeleteS3Image(imageName)
}
s3.go
package s3

// import

type S3 struct {
    APPID  string
    SECRET string
}

type DaoInterface interface {
    DeleteS3Image(imageName util.ImageName) error
}

func NewS3(appid, secret string) *S3 {
    objs := &S3{APPID: appid, SECRET: secret}
    return objs
}

func (objs *S3) DeleteS3Image(imageName util.ImageName) error {
    // method
}
The full source code is here (branch fix-test-dao):
https://github.com/jpskgc/article/tree/fix-test-dao
I expect the test in dao_test.go to pass, but it actually fails because dao.go calls a method from the s3 package.
I want to know how to mock DeleteS3Image in package s3 so the test passes.
Here is the error when running go test -v on dao_test.go:
$ go test -v
--- FAIL: TestDaoSuite (0.00s)
    --- FAIL: TestDaoSuite/TestDeleteArticleDao (0.00s)
        dao_test.go:221:
            Error Trace:    dao_test.go:221
                            suite.go:122
                            panic.go:522
                            panic.go:82
                            signal_unix.go:390
                            s3.go:66
                            dao.go:74
                            dao_test.go:156
            Error:          Received unexpected error:
                            there is a remaining expectation which was not matched: ExpectedBegin => expecting database transaction Begin
            Test:           TestDaoSuite/TestDeleteArticleDao
    suite.go:61: test panicked: runtime error: invalid memory address or nil pointer dereference

In your setup you do call s.dao = NewDao(s.db, s.s3); however, you never initialized s.s3 to anything, so s.dao.s3 remains nil, and that's why d.s3.DeleteS3Image(imageName) panics.
In Go to be able to mock a method, the value on which the method is called must be an interface, not a concrete type. Put another way, it is not possible to mock a concrete method in Go.
So with a type like this:
type Dao struct {
    database *sql.DB
    s3       *s3.S3
}
you simply cannot mock s3.
What you can do, is change the type of the s3 field to an interface type, you already have one ready (s3.DaoInterface).
type Dao struct {
    database *sql.DB
    s3       s3.DaoInterface
}
Now you can mock the s3 field.
What's left is for you to implement your mock and make sure the s3 field is set to an instance of the mock implementation during test setup.
type MockS3 struct{}

func (MockS3) DeleteS3Image(imageName util.ImageName) error {
    // do whatever
    return nil
}
func (s *DaoSuite) SetupTest() {
    var err error
    s.db, s.mock, err = sqlmock.New()
    s.Require().NoError(err)
    s.dao = NewDao(s.db, s.s3)
    s.dao.s3 = MockS3{} // <- don't forget about me
}
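With MockS3 wired in during setup, the test method itself then only needs database-side expectations. A minimal sketch, assuming DeleteArticleDao runs its SQL inside a transaction (the ExpectedBegin in your error output suggests it does); the exact DELETE statement and arguments below are assumptions, so adjust them to whatever your code actually executes:
func (s *DaoSuite) TestDeleteArticleDao() {
    // sqlmock expectations for the database side; the SQL and the
    // arguments here are assumptions, not your real queries.
    s.mock.ExpectBegin()
    s.mock.ExpectExec("DELETE FROM article").
        WithArgs("1").
        WillReturnResult(sqlmock.NewResult(0, 1))
    s.mock.ExpectCommit()

    // The S3 call inside DeleteArticleDao now hits MockS3, so there is
    // no nil pointer panic and no real network call.
    s.dao.DeleteArticleDao("1")
}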
It is up to you how you implement the mock, but if you're new to mocks I would recommend taking a look at https://github.com/golang/mock to help you generate them.
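For illustration, here is a hedged sketch of what a gomock-based test might look like, assuming you have generated a mock for s3.DaoInterface with something like mockgen -source=s3/s3.go -destination=mocks/s3_mock.go -package=mocks (the mocks package name and path are hypothetical):
func (s *DaoSuite) TestDeleteArticleDaoWithGomock() {
    ctrl := gomock.NewController(s.T())
    defer ctrl.Finish()

    // NewMockDaoInterface is the constructor mockgen generates for
    // s3.DaoInterface; the mocks package name is an assumption.
    mockS3 := mocks.NewMockDaoInterface(ctrl)
    mockS3.EXPECT().
        DeleteS3Image(gomock.Any()).
        Return(nil)

    s.dao.s3 = mockS3

    // The sqlmock expectations shown in the previous sketch are still
    // needed for the database side of DeleteArticleDao.
    s.dao.DeleteArticleDao("1")
}
The generated mock satisfies s3.DaoInterface, so it can be assigned to the s3 field exactly like the hand-written MockS3.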

Related

Pass GCP service account JSON file

We are creating a Confluent Kafka sink connector (https://registry.terraform.io/providers/confluentinc/confluent/latest/docs/resources/confluent_connector) using Terraform.
resource "confluent_connector" "gcs-sink" {
for_each = { for topic in var.topics : "${topic.name} ${topic.tasks}" => topic }
environment {
id = var.env_id
}
kafka_cluster {
id = var.cluster_id
}
config_nonsensitive = {
"name" = "${each.value.name}-gcs-connector"
"connector.class" = "GcsSink"
"topics" = "${each.value.name}"
"kafka.auth.mode" = "SERVICE_ACCOUNT"
"kafka.service.account.id" = "${var.connector_sa}"
"gcs.bucket.name" = "${var.gcs_bucket_name}"
"input.data.format" = "AVRO"
"output.data.format" = "AVRO"
"time.interval" = "HOURLY"
"flush.size" = "1000"
"tasks.max" = "${each.value.tasks}"
"topics.dir" = "avro-hourly"
"path.format" = "'process_date'=YYYY-MM-dd/'hour'=HH"
"rotate.schedule.interval.ms" = "60000"
"gcs.credentials.config" = var.gcs_sa_json
}
}
We need to pass the service account JSON key file to gcs.credentials.config, so I'm placing the JSON file in GCP Secret Manager, reading it from Secret Manager at runtime, and storing it in the variable gcs_sa_json. But I'm running into the issue below:
Error: error waiting for Connector "g-gg-prod-gcs-connector" to provision: connector "display_name"="g-gg-prod-gcs-connector" provisioning status is "FAILED": Unable to validate configuration. If an update was made to the configuration, this means that the configuration was invalid, and the connector continues to operate on a previous configuration that passed validation. Errors:
│ gcs.credentials.config: Unable to retrieve credentials
│ gcs.bucket.name: Unable to retrieve credentials
│ . You might need to remove Connector manually before retrying.
│
│ with module.sink_connector.confluent_connector.gcs-sink["g-gg-prod-topic 2"],
│ on ../../terraform-modules/confluent-kafka/sink-connector/main.tf line 1, in resource "confluent_connector" "gcs-sink":
│ 1: resource "confluent_connector" "gcs-sink" {
I used the jsondecode function as well, but in vain.

Unable to assign LF-tags to lake formation database using Terraform

I prepared the following Terraform script to assign an LF-tag to a database in Lake Formation.
resource "aws_lakeformation_resource_lf_tags" "gm_access" {
count = length(var.db_config)
database {
name = "gm_${var.db_config[count.index].name}_${terraform.workspace}"
}
lf_tag {
key = "access"
value = var.db_config[count.index].access
}
}
The LF-tag access was already created manually in AWS (historically), with its values defined.
I received these errors:
│ Error: creating AWS Lake Formation Resource LF Tags (): attempted to add 1 tags, 1 failures
│
│ with aws_lakeformation_resource_lf_tags.gm_access[0],
│ on self_serve.tf line 72, in resource "aws_lakeformation_resource_lf_tags" "gm_access":
│ 72: resource "aws_lakeformation_resource_lf_tags" "gm_access" {
│
Any advice, please?

Why can't I deploy to Cloud Functions?

I've been able to deploy for months, and now suddenly this morning I am getting this error:
│ Error: Error while updating cloudfunction configuration: Error waiting for Updating CloudFunctions Function: Error code 3, message: Build failed: curl: (22) The requested URL returned error: 404
│
│ gzip: stdin: unexpected end of file
│ tar: Child returned status 1
│ tar: Error is not recoverable: exiting now; Error ID: 637fe2a4
│
│ with google_cloudfunctions_function.syncFiles,
│ on functions.tf line 396, in resource "google_cloudfunctions_function" "syncFiles":
│ 396: resource "google_cloudfunctions_function" "syncFiles" {
│
This is the Terraform configuration. We zip the directory and give the archive to Cloud Functions to deploy:
data "archive_file" "source-zip" {
type = "zip"
source_dir = "${path.root}/../dist/"
output_path = "${path.root}/../dist/files/${var.app_name}.zip"
excludes = ["files/**"]
}
resource "google_storage_bucket_object" "deploy-zip" {
name = "${var.app_name}/${var.app_name}-${data.archive_file.source-zip.output_md5}.zip"
bucket = "${var.env_name}-deploy"
source = "${path.root}/../dist/files/${var.app_name}.zip"
depends_on = [data.archive_file.source-zip]
}
output "deploy_zip" {
value = google_storage_bucket_object.deploy-zip.name
}
What could cause this error?
Is this an internal problem?
I have a ticket open with Google support but nothing useful yet.
Go to Cloud Build, select your region, and look at the history/logs; that should tell you what is failing.
Possibly a package issue.

GCP + Terraform : Service account access is granted to a user at project level

I have an issue when trying to execute this Terraform file on GCP. The tfsec scan reports:
Results #1-2 MEDIUM Service account access is granted to a user at project level. (2 similar results)
────────────────────────────────────────────────────────────────────────
 iam.tf Line 18
───────┬────────────────────────────────────────────────────────────────
    9  │ resource "google_project_iam_member" "permissions" {
   10  │   for_each = toset([
   11  │     "logging.logWriter",
   12  │     "errorreporting.writer",
   13  │     "iam.serviceAccountUser",
   14  │     "iam.serviceAccountTokenCreator",
   15  │     "workflows.invoker"
   16  │   ])
   17  │   provider = google-beta
   18  │   role     = "roles/${each.key}"
   19  │   member   = "serviceAccount:${google_service_account.default.email}"
   20  │ }
───────┴────────────────────────────────────────────────────────────────
Individual Causes
- /Users/oussamafathallah/git/btdp/modules/00-oauth2-relay/iac/iam.tf:9-20 (google_project_iam_member.permissions)
- /Users/oussamafathallah/git/btdp/modules/00-oauth2-relay/iac/iam.tf:9-20 (google_project_iam_member.permissions)
────────────────────────────────────────────────────────────────────────
ID          google-iam-no-project-level-service-account-impersonation
Impact      Privilege escalation, impersonation of any/all services
Resolution  Provide access at the service-level instead of project-level, if required
More Information
- https://aquasecurity.github.io/tfsec/v1.0.2/checks/google/iam/no-project-level-service-account-impersonation/
- https://registry.terraform.io/providers/hashicorp/google/latest/docs/resources/google_project_iam
Can you help me with this issue, please? I'm new to GCP and Terraform. Thank you!
Granting the role iam.serviceAccountTokenCreator at the project level is what causes the warning.
That role allows users to impersonate a service account, which can be used for privilege escalation.
See the GCP documentation: Protecting against privilege-escalation threats.

TypeORM Cli: migrationsDir seems to be ignored by scripts

I am trying to accomplish a simple migration: renaming a column in the users table.
I cannot get the CLI to use the migrationsDir to create migrations in OR run migrations from.
MIGRATION CREATION
When I run
npm run typeorm:cli -- migration:create -n UserFullName -d server/migration, there is no problem creating the file in the migrations folder.
Creating migrations without the -d argument just creates the files in the root folder; it ignores the migrationsDir in the connection options (see ormconfig.ts down below).
RUNNING MIGRATIONS
Running npm run typeorm:cli -- migration:run yields exit status 1. My guess is that it can't find the migrations, but I really don't know.
Error during migration run:
Error: No connection options were found in any of configurations file.
    at ConnectionOptionsReader.<anonymous> (/Users/matthewshields/Documents/Code/Projects/Sumo/dohyo-dreams/src/connection/ConnectionOptionsReader.ts:41:19)
    at step (/Users/matthewshields/Documents/Code/Projects/Sumo/dohyo-dreams/node_modules/tslib/tslib.js:133:27)
    at Object.next (/Users/matthewshields/Documents/Code/Projects/Sumo/dohyo-dreams/node_modules/tslib/tslib.js:114:57)
    at fulfilled (/Users/matthewshields/Documents/Code/Projects/Sumo/dohyo-dreams/node_modules/tslib/tslib.js:104:62)
    at process._tickCallback (internal/process/next_tick.js:68:7)
    at Function.Module.runMain (internal/modules/cjs/loader.js:745:11)
    at Object.<anonymous> (/Users/matthewshields/Documents/Code/Projects/Sumo/dohyo-dreams/node_modules/ts-node/src/bin.ts:157:12)
    at Module._compile (internal/modules/cjs/loader.js:689:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:700:10)
    at Module.load (internal/modules/cjs/loader.js:599:32)
package.json
{
  "name": "xxxxxxxxx",
  "version": "0.1.0",
  "private": true,
  "main": "./server/server.ts",
  "dependencies": {
    "axios": "^0.19.0",
    "bcrypt": "^3.0.6",
    "body-parser": "^1.18.3",
    "breakpoint-sass": "^2.7.1",
    "chroma-js": "^2.0.3",
    "class-transformer": "^0.2.0",
    "class-validator": "^0.9.1",
    "dotenv": "^6.2.0",
    "envalid": "^4.1.4",
    "express": "^4.16.4",
    "express-session": "^1.16.1",
    "http-server": "^0.11.1",
    "lodash": "^4.17.15",
    "lodash.isequal": "^4.5.0",
    "massive": "^5.7.7",
    "node-sass": "^4.11.0",
    "pg": "^7.11.0",
    "react": "^16.8.6",
    "react-dom": "^16.8.6",
    "react-router-dom": "^5.0.0",
    "react-scripts": "2.1.8",
    "reflect-metadata": "^0.1.13",
    "sumo-rank": "^1.0.2",
    "tsconfig-paths": "^3.9.0",
    "typeorm": "^0.2.18"
  },
  "devDependencies": {
    "@types/express": "^4.16.1",
    "@types/node": "^10.12.11",
    "husky": "^1.2.0",
    "nodemon": "^1.18.7",
    "ts-node": "^7.0.1",
    "tslint": "^5.11.0",
    "tslint-config-airbnb": "^5.11.1",
    "typescript": "^3.2.1"
  },
  "scripts": {
    "dev": "ts-node ./server/server.ts",
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject",
    "start-sw": "express ./build",
    "lint": "tslint -p tsconfig.json -c tslint.json",
    "typeorm:cli": "ts-node ./node_modules/typeorm/cli.js"
  },
  "eslintConfig": {
    "extends": "react-app"
  },
  "husky": {
    "hooks": {
      "pre-commit": "npm run lint"
    }
  },
  "browserslist": [
    ">0.2%",
    "not dead",
    "not ie <= 11",
    "not op_mini all"
  ]
}
server.ts
require('dotenv').config();
import 'reflect-metadata';
import { createConnection } from 'typeorm';
import App from './app';
import * as config from './ormconfig';
import RankingsController from './rankings/rankings.controller';
import RankChartsController from './rankCharts/rankCharts.controller';
import TournamentsController from './tournaments/tournaments.controller';
import UsersController from './users/users.controller';
import validateEnv from './utils/validateEnv';
import WrestlersController from './wrestlers/wrestlers.controller';

validateEnv();

(async () => {
  try {
    await createConnection(config);
  } catch (error) {
    console.log('Error while connecting to the database', error);
    return error;
  }
  const app = new App([
    new TournamentsController(),
    new WrestlersController(),
    new RankingsController(),
    new RankChartsController(),
    new UsersController(),
  ]);
  app.listen();
})();
app.ts
import * as bodyParser from 'body-parser';
import * as express from 'express';
import Controller from './interfaces/interface.controller';
import errorMiddleware from './middleware/error.middleware';

class App {
  public app: express.Application;

  constructor(controllers: Controller[]) {
    this.app = express();
    this.initializeMiddlewares();
    this.initializeErrorHandling();
    this.initializeControllers(controllers);
  }

  public listen() {
    this.app.listen(process.env.PORT, () => {
      console.log(`App listening on the port ${process.env.PORT}`);
    });
  }

  private initializeMiddlewares() {
    this.app.use(bodyParser.json());
  }

  private initializeErrorHandling() {
    this.app.use(errorMiddleware);
  }

  private initializeControllers(controllers: Controller[]) {
    controllers.forEach((controller) => {
      this.app.use('/', controller.router);
    });
  }
}

export default App;
ormconfig.ts
import { ConnectionOptions } from 'typeorm';

const config: ConnectionOptions = {
  type: 'postgres',
  host: process.env.POSTGRES_HOST,
  port: Number(process.env.POSTGRES_PORT),
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
  entities: [
    __dirname + '/../**/*.entity{.ts,.js}',
  ],
  cli: {
    migrationsDir: 'server',
  },
};

export = config;
(timestamp)-UserFullName.ts
import { MigrationInterface, QueryRunner } from "typeorm";

export class UserFullName1574403715918 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<any> {
    await queryRunner.query(`ALTER TABLE "user" RENAME "fullName" to "name"`);
  }

  public async down(queryRunner: QueryRunner): Promise<any> {
    await queryRunner.query(`ALTER TABLE "user" RENAME "name" to "fullName"`);
  }
}
I suspect my file structure may be related to the issue, so I have listed it briefly. I just listed the basics; there are more controllers and entities for Tournaments, Wrestlers, Rankings, and RankCharts.
├── docker-compose.yaml
├── package.json
├── src
├── server
│ ├── ormconfig.ts
│ ├── server.ts
│ ├── app.ts
│ ├── users
│ │ ├── users.controller.ts
│ │ ├── users.dto.ts
│ │ ├── users.entity.ts
│ ├── migration
First-time poster; any constructive criticism on my format or explanation is appreciated.
For anyone coming across similar issues, these were the nuggets I valued:
The typeorm CLI reads cli.migrationsDir (from ormconfig.ts) only when creating migrations, not when reading them. You can see that subtle distinction in the docs, which read:
"..."cli": { "migrationsDir": "migration" } - indicates that the
CLI must create new migrations in the "migration" directory."
This was confusing: why would it need a separate config just for writing? Wouldn't reading and writing migrations use the same config? I don't know, still don't know, but I confirmed this by reading the source as well (unless I whiffed something).
Final conclusion: those two configs (migrations: [...] and cli.migrationsDir) should likely point to the same location on the filesystem unless you have a good reason not to.
Cheers.
I think the problem here is that you use an async connection. I had the same issue and managed to solve it after I added an ormconfig.ts file (it should work with .js as well) with a synchronous connection.
In the config file you should add the cli property: cli: { migrationsDir: "server/migration" }.
To be able to run migrations using the CLI, another property is needed: migrations: [join(__dirname, 'server/migration/*{.ts,.js}')].
Also, when running the CLI you should indicate where this config file is located with the --config path/to/ormconfig.ts flag.
The full command example with ts-node: ts-node ./node_modules/typeorm/cli.js migration:generate --config server/ormconfig.ts
For more info you can check this example, which I found very useful: https://github.com/ambroiseRabier/typeorm-nestjs-migration-example
Judging by your file structure, the config should look like this:
ormconfig.ts
export const config: TypeOrmModuleOptions = {
  ...
  migrations: ['server/migration/*.js', 'server/migration/*.ts'],
  cli: {
    migrationsDir: 'server/migration',
  },
};
You might need to move the ormconfig.ts file out of the server folder so that it sits at the same level as package.json.
Did you point the CLI at the correct migration file path?
{
  cli: {
    migrationsDir: "src/migration"
  }
}
https://github.com/typeorm/typeorm/blob/master/docs/using-cli.md