k8s controller: watch another controller's CR

I have a k8s operator which works as expected. I need to add a "watch" on another operator's CRD (not mine); to keep it simple, let's call it extCR, while our operator's CR is called inCR.
I tried the following, but I'm not sure what the right way is to trigger the reconcile.
func (r *Insiconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&Inv1alpha1.Iget{}).
        Watches(&source.Kind{Type: &ext.Se{}}, handler.EnqueueRequestsFromMapFunc(r.FWatch)).
        Complete(r)
}

func (r *Insiconciler) FWatch(c client.Object) []reconcile.Request {
    val := c.(*ivi.Srv)
    req := reconcile.Request{NamespacedName: types.NamespacedName{Name: val.Name, Namespace: val.Namespace}}
    return []reconcile.Request{req}
}
The problem is that the reconcile is triggered with the extCR's name/namespace. Inside FWatch I want to map the event to the corresponding inCR, so the reconcile starts with inCR and not with extCR. How can I do that?
I want to avoid something like the following code, where the reconcile sometimes runs for inCR and sometimes for extCR and I end up with some ugly if's:
func (r *Insiconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    var inCR FOO
    var extCR BAR
    if err := r.Get(ctx, req.NamespacedName, &inCR); err != nil {
        return ctrl.Result{}, err
    }
    if err := r.Get(ctx, req.NamespacedName, &extCR); err != nil {
        return ctrl.Result{}, err
    }
I want to know the right/clean way to handle such a case: when you need to listen both to an external CR (not part of your controller) and to an internal CR (from your controller).
One more thing: the CRs have different GVKs, and the external CR contains a lot of fields that aren't required, only some of them. The required fields have the same names in both CRs.
Update
type inCR struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`
    Spec   inSpec              `json:"spec,omitempty"` // <-- here is the difference
    Status InsightTargetStatus `json:"status,omitempty"`
}
// This is defined in another project which is not owned by us, therefore we cannot "reuse" it.
type Bar struct {
    metav1.TypeMeta   `json:",inline"`
    metav1.ObjectMeta `json:"metadata,omitempty"`
    Spec   extSpec       `json:"spec,omitempty"` // <-- here is the difference
    Status ServiceStatus `json:"status,omitempty"`
}
And inSpec has the following fields (a subset of extSpec):
type inSpec struct {
    name string
    age  int
}
and extSpec has those fields plus many more that are not relevant:
type extSpec struct {
    name string
    age  int
    foo  string // not relevant
    bar  string // not relevant
    bazz string // not relevant
}
In the end, inside the reconcile I need to pass the relevant fields to some functions: exactly the same functions, which sometimes take the fields from extCR and sometimes from inCR, according to the event that happened (like the extCR being updated, or the inCR being updated by users).
Update 2
func sharedLogic(r reconciler, ctx context.Context, c client.Object) (ctrl.Result, error) {
    in := c.(*inCR)
    vPass, e := vps.Get(ctx, r.Client, in.Spec.foo, in.Spec.bar)
    if e != nil {
        return ctrl.Result{}, e
    }
    // ... use vPass ...
    return ctrl.Result{}, nil
}
But for extCR I would have to do the following:
func sharedLogic(r reconciler, ctx context.Context, c client.Object) (ctrl.Result, error) {
    ext := c.(*extCR)
    vPass, e := vps.Get(ctx, r.Client, ext.Spec.val.foo, ext.Spec.val.bar)
    if e != nil {
        return ctrl.Result{}, e
    }
    // ... use vPass ...
    return ctrl.Result{}, nil
}

A few things to keep in mind:
Each controller is responsible for exactly one resource.
Reconcile request contains the information necessary to reconcile a Kubernetes object. This includes the information to uniquely identify the object - its Name and Namespace. It does NOT contain information about any specific Event or the object contents itself.
You can create a second controller without the resource definition. In your main file, both controllers will be registered.
This could be useful if the CRDs are not related at all or if the external resource references the internal one, so you can make changes to the internal resource in the external reconciler.
kubebuilder create api --group other --version v2 --kind External \
--resource=false --controller=true
This gives you a controller with a SetupWithManager method that looks like the one below.
func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        // Uncomment the following line adding a pointer to an instance of the controlled resource as an argument
        // For().
        Complete(r)
}
Note how the For method is commented out because you need to import the resource to watch from somewhere else and reference it.
import (
    ...
    otherv2 "other.io/external/api/v2"
)

...

func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&otherv2.External{}).
        Complete(r)
}
If you cannot import the external resource, you could fall back to defining it yourself, but this is probably not a very clean way. You should really try to import it from the other controller's project.
kubebuilder edit --multigroup=true
kubebuilder create api --group=other --version v2 --kind External \
--resource --controller
Another way applies when the resources are related to each other, such that the internal resource holds a reference to the external resource in its spec and knows how to fetch it when it reconciles. An example of this can be found here: https://book.kubebuilder.io/reference/watching-resources/externally-managed.html
type InternalSpec struct {
    // Name of an external resource
    ExternalResource string `json:"externalResource,omitempty"`
}
This means that in each reconciliation loop, the controller will look up the external resource and use it to manage the internal resource.
func (r *InternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    _ = log.FromContext(ctx)

    internal := examplev1.Internal{}
    if err := r.Get(context.TODO(), types.NamespacedName{
        Name:      req.Name,
        Namespace: req.Namespace,
    }, &internal); err != nil {
        return ctrl.Result{}, err
    }

    external := otherv2.External{}
    if err := r.Get(context.TODO(), types.NamespacedName{
        // note how the name is taken from the internal spec
        Name:      internal.Spec.ExternalResource,
        Namespace: req.Namespace,
    }, &external); err != nil {
        return ctrl.Result{}, err
    }

    // do something with internal and external here

    return ctrl.Result{}, nil
}
The problem with this is, that when the internal resource does not change, no reconciliation event will be triggered, even when the external resource has changed. To work around that, we can trigger the reconciliation by watching the external resource. Note the Watches method:
func (r *InternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        For(&examplev1.Internal{}).
        Watches(
            &source.Kind{Type: &otherv2.External{}},
            handler.EnqueueRequestsFromMapFunc(r.triggerReconcileBecauseExternalHasChanged),
            builder.WithPredicates(predicate.ResourceVersionChangedPredicate{}),
        ).
        Complete(r)
}
In order to know which internal objects to enqueue, we use a mapping function that looks up all internal resources referencing the external resource.
func (r *InternalReconciler) triggerReconcileBecauseExternalHasChanged(o client.Object) []reconcile.Request {
    usedByInternals := &examplev1.InternalList{}
    listOps := &client.ListOptions{
        FieldSelector: fields.OneTermEqualSelector(".spec.ExternalResource", o.GetName()),
        Namespace:     o.GetNamespace(),
    }
    err := r.List(context.TODO(), usedByInternals, listOps)
    if err != nil {
        return []reconcile.Request{}
    }

    requests := make([]reconcile.Request, len(usedByInternals.Items))
    for i, item := range usedByInternals.Items {
        requests[i] = reconcile.Request{
            NamespacedName: types.NamespacedName{
                Name:      item.GetName(),
                Namespace: item.GetNamespace(),
            },
        }
    }
    return requests
}
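One detail worth calling out: for the ".spec.ExternalResource" field selector above to work against the cache-backed client, the field normally has to be registered with the manager's field indexer (the kubebuilder page linked above does this as well). A sketch using the same example types:

// Register the index before the controller starts, e.g. inside SetupWithManager.
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &examplev1.Internal{}, ".spec.ExternalResource",
    func(obj client.Object) []string {
        internal := obj.(*examplev1.Internal)
        if internal.Spec.ExternalResource == "" {
            return nil
        }
        return []string{internal.Spec.ExternalResource}
    }); err != nil {
    return err
}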
Since you updated your question, I suggest doing something like the below.
I am creating a new project and two controllers. Note that on the second controller command, no resource is created along with the controller; this is because the controller will watch an external resource.
mkdir demo && cd demo
go mod init example.io/demo
kubebuilder init --domain example.io --repo example.io/demo --plugins=go/v4-alpha
kubebuilder create api --group=demo --version v1 --kind Internal --controller --resource
kubebuilder create api --group=other --version v2 --kind External --controller --resource=false
$ tree controllers
controllers
├── external_controller.go
├── internal_controller.go
└── suite_test.go
Now we need some shared logic, for example by adding this to the controllers package. We will call this from both reconcilers.
// the interface may need tweaking
// depending on what you want to do with
// the reconciler
type reconciler interface {
    client.Reader
    client.Writer
    client.StatusClient
}

func sharedLogic(r reconciler, kobj *demov1.Internal) (ctrl.Result, error) {
    // do your shared logic here operating on the internal object struct
    // this works out because the external controller will call this passing the
    // internal object
    return ctrl.Result{}, nil
}
Here is an example for the internal reconciler.
func (r *InternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    _ = log.FromContext(ctx)

    obj := demov1.Internal{}
    if err := r.Get(ctx, req.NamespacedName, &obj); err != nil {
        return ctrl.Result{}, err
    }

    return sharedLogic(r, &obj)
}
And in the external reconciler we do the same.
func (r *ExternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
    _ = log.FromContext(ctx)

    // note, we can use the internal object here as long as the external object
    // does contain the same fields we want. That means when unmarshalling the extra
    // fields are dropped. If this cannot be done, you could first unmarshal into the external
    // resource and then assign the fields you need to the internal one, before passing it down
    obj := demov1.Internal{}
    if err := r.Get(ctx, req.NamespacedName, &obj); err != nil {
        return ctrl.Result{}, err
    }

    return sharedLogic(r, &obj)
}
func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
    return ctrl.NewControllerManagedBy(mgr).
        // note the external resource is imported from another project
        // you may be able to watch this without import by creating a minimal
        // type with the right GVK
        For(&otherv2.External{}).
        Complete(r)
}
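As a final note on the sharedLogic variants from the question's Update 2: another small consolidation (a sketch only, reusing the hypothetical names vps, foo and bar from the question) is to map each CR into a struct holding just the shared fields, so one sharedLogic signature serves both reconcilers:

// commonParams carries only the fields both CRs share (names are the
// placeholders used in the question's snippets).
type commonParams struct {
    foo string
    bar string
}

func paramsFromInternal(in *inCR) commonParams {
    return commonParams{foo: in.Spec.foo, bar: in.Spec.bar}
}

func paramsFromExternal(ext *extCR) commonParams {
    return commonParams{foo: ext.Spec.val.foo, bar: ext.Spec.val.bar}
}

// sharedLogic no longer needs to know which CR triggered the reconcile.
func sharedLogic(ctx context.Context, c client.Client, p commonParams) (ctrl.Result, error) {
    vPass, err := vps.Get(ctx, c, p.foo, p.bar)
    if err != nil {
        return ctrl.Result{}, err
    }
    _ = vPass // continue with the shared work here
    return ctrl.Result{}, nil
}

That keeps the type assertions and field differences in two tiny adapters and leaves a single shared code path.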

Related

AWS Custom Resource ROLLBACK_FAILED because of a catch22-like situation

I have a Custom Resource which gets stuck in a catch22-like situation whenever a ROLLBACK occurs.
The code below is a simplified example of what my code is doing. In case of a create request it creates a table, for a delete request it drops it, and in case of an update it compares the old properties with the new ones and returns an error when one of the columns has a new value (column updates are not supported yet).
A problematic ROLLBACK_FAILED occurs:
[SOLVED] whenever a create request fails (due to an SQL syntax error, for example). In this case it triggers a delete request for the ROLLBACK phase, but that request fails because the table does not exist yet.
whenever an update request fails due to an updated column value. In this case it triggers a new update request for the ROLLBACK phase, in which event.ResourceProperties and event.OldResourceProperties are switched, which will still cause an error.
package main

import (
    "context"
    "encoding/json"
    "fmt"
    "strings"

    "github.com/aws/aws-lambda-go/cfn"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/rdsdataservice"
    "github.com/google/uuid"
)

func main() {
    lambda.Start(cfn.LambdaWrap(handler))
}

type properties struct {
    TableName string
    Columns   []struct {
        Name  string
        Value string
    }
}

func handler(ctx context.Context, event cfn.Event) (physicalResourceID string, data map[string]interface{}, err error) {
    prid := event.PhysicalResourceID
    if event.RequestType == cfn.RequestCreate {
        prid = strings.ReplaceAll(uuid.New().String(), "-", "")
    }

    var props properties
    b, _ := json.Marshal(event.ResourceProperties)
    json.Unmarshal(b, &props)

    rds := rdsdataservice.New(nil)

    if event.RequestType == cfn.RequestCreate {
        rds.ExecuteStatement(&rdsdataservice.ExecuteStatementInput{
            Sql: aws.String(fmt.Sprintf("CREATE TABLE %s", props.TableName)),
        })
    }
    if event.RequestType == cfn.RequestDelete {
        rds.ExecuteStatement(&rdsdataservice.ExecuteStatementInput{
            Sql: aws.String(fmt.Sprintf("DROP TABLE %s", props.TableName)),
        })
    }
    if event.RequestType == cfn.RequestUpdate {
        var oldProps properties
        b, _ := json.Marshal(event.OldResourceProperties)
        json.Unmarshal(b, &oldProps)

        columns := map[string]string{}
        for _, column := range props.Columns {
            columns[column.Name] = column.Value
        }
        for _, column := range oldProps.Columns {
            if val, ok := columns[column.Name]; ok {
                if val != column.Value {
                    return "", nil, fmt.Errorf("cannot change column type")
                }
            }
        }
        // Do some extra stuff here for adding/removing columns
    }
    return prid, nil, nil
}
I have thought of two possible solutions. One of them I could implement, with potential issues. But it seems to me there should be a better way, as I can't be the only one with this problem, or I am doing something very wrong.
disable rollback for this specific resource only, in some cases (sometimes I still want a rollback)
have access to the last status, so that I can check what to do; in case of a delete with last status CREATE_FAILED, don't do anything, and in case of an update with last status UPDATE_FAILED, don't do anything.
The second option I could implement by using the code below, but as the number of events grows, this could become very problematic.
events, err := cloud.DescribeStackEvents(&cloudformation.DescribeStackEventsInput{
    StackName: &event.StackID,
})
For anyone who runs into the same problem: I solved this by looking at the stack events and finding the last resource status that is being updated. As the current status is always in the list, I ignore IN_PROGRESS values. If the first value after an IN_PROGRESS status is FAILED, it means this resource could not be updated, and thus a different ROLLBACK strategy can be applied.
The corresponding function, in Go:
func isFailedEvent(sess *session.Session, event cfn.Event) (bool, error) {
    cloud := cloudformation.New(sess)

    var isFailedResource bool
    if err := cloud.DescribeStackEventsPages(&cloudformation.DescribeStackEventsInput{
        StackName: aws.String(event.StackID),
    }, func(out *cloudformation.DescribeStackEventsOutput, lastPage bool) bool {
        for _, e := range out.StackEvents {
            if *e.LogicalResourceId == event.LogicalResourceID {
                if strings.HasSuffix(*e.ResourceStatus, "IN_PROGRESS") {
                    continue
                }
                if strings.HasSuffix(*e.ResourceStatus, "FAILED") && !strings.Contains(*e.ResourceStatusReason, "cancelled") {
                    isFailedResource = true
                }
                return false
            }
        }
        return true
    }); err != nil {
        return false, fmt.Errorf("describe stack events: %s", err)
    }
    return isFailedResource, nil
}
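A sketch of how isFailedEvent could then be wired into the handler from the question, following the strategy described above (sess is assumed to be a *session.Session created earlier in the handler):

// Before running the DROP TABLE / column checks:
if event.RequestType == cfn.RequestDelete || event.RequestType == cfn.RequestUpdate {
    failed, err := isFailedEvent(sess, event)
    if err != nil {
        return "", nil, err
    }
    if failed {
        // The operation being rolled back never succeeded, so there is
        // nothing to undo; acknowledge the request and return.
        return event.PhysicalResourceID, nil, nil
    }
}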

Golang unittest with *s3.S3 object

How should I unit test the following piece of code? I was trying to use counterfeiter to fake the input *s3.S3 object, but it's not working for me. I am new to counterfeiter and Go; can someone please help me with that?
func (l *listContentImp) ListS3Content(client *s3.S3) (bool, error) {
    listObj := &s3.ListObjectsV2Input{
        Bucket: aws.String(l.bucket),
    }

    var err error
    l.lObj, err = client.ListObjectsV2(listObj)
    if err != nil {
        return false, err
    }
    return true, nil
}
You shouldn't pass a pointer to the concrete s3.S3 struct. When using the AWS SDK for Go v1, you typically pass the service's corresponding interface. For S3 this is s3iface.
The signature of your function would look like this:
func (l *listContentImp) ListS3Content(client s3iface.S3API) (bool, error)
Now any struct that satisfies s3iface.S3API will work; by embedding the interface in your mock, you only need to implement the methods you actually call.
At runtime you'll pass the proper service client, but in the unit tests you can just pass a mock:
type mock struct {
    s3iface.S3API
    output *s3.ListObjectsV2Output
    err    error
}

func (m mock) ListObjectsV2(*s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error) {
    return m.output, m.err
}
In your test you create the mock and pass it to your function:
func Test_ListObject(t *testing.T) {
    l := &listContentImp{...}
    m := mock{
        output: &s3.ListObjectsV2Output{...},
        err:    nil,
    }

    result, err := l.ListS3Content(m)
    [... add checks here...]
}
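At runtime the concrete client still works unchanged, since *s3.S3 satisfies s3iface.S3API. A sketch (the bucket name and session setup here are placeholders, not from the question):

sess := session.Must(session.NewSession())

l := &listContentImp{bucket: "my-bucket"} // field name taken from the question's struct
ok, err := l.ListS3Content(s3.New(sess))  // s3.New returns *s3.S3, which implements s3iface.S3API
if err != nil {
    log.Fatal(err)
}
fmt.Println("listed objects:", ok)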

How to write a unit test in Golang using echo for an endpoint URL with gomock (mockgen) mocking?

I wrote this Go code for login. Now I want to unit test it. The code flows from the controller to the service layer, and from the service to the repository layer. I want to use the gomock tool for mocking; if there is a better alternative, please suggest it. I'm using the echo framework.
Here serializers.LoginReq is:
type LoginReq struct {
    Email string
    Phone string
    Admin bool
}
type auth struct {
    authSvc svc.IAuth
    userSvc svc.IUsers
}

func NewAuthController(grp interface{}, authSvc svc.IAuth, userSvc svc.IUsers) {
    ac := &auth{
        authSvc: authSvc,
        userSvc: userSvc,
    }
    g := grp.(*echo.Group)
    g.POST("/v1/login", ac.Login)
}

func (ctr *auth) Login(c echo.Context) error {
    var cred *serializers.LoginReq
    var resp *serializers.LoginResp
    var err error

    if err = c.Bind(&cred); err != nil {
        return c.JSON(err.Status, err)
    }
    if resp, err = ctr.authSvc.Login(cred); err != nil {
        return c.JSON(err.Status, err)
    }
    return c.JSON(http.StatusOK, resp)
}
Use dependency injection. Dependency injection is a design pattern that decouples dependencies between two or more layers of software.
How it works
Pass a dependency to the Login function. In Go, the dependency is often an interface type. Interfaces express generalizations or abstractions about the behaviors of other types. A type satisfies an interface if it has all the methods in the interface. With an interface, you can replace a real object with a fake one (a mock) in your tests. This works without Go's type system complaining as long as a concrete type satisfies the interface.
type Auther interface {
    Login(cred *serializers.LoginReq) (*serializers.LoginResp, error)
}
Go Interfaces are satisfied implicitly.
// auth service must implement the Auther interface
type auth struct {
    authSvc Auther
}

// route handler
func (ctr *auth) Login(c echo.Context) error {
    var cred *serializers.LoginReq
    var resp *serializers.LoginResp
    var err error

    if err = c.Bind(&cred); err != nil {
        return c.JSON(err.Status, err)
    }
    // the function signature of the service-level Login method must match the interface
    if resp, err = ctr.authSvc.Login(cred); err != nil {
        return c.JSON(err.Status, err)
    }
    return c.JSON(http.StatusOK, resp)
}
I like using the testify/mock library. Create a mock:
type MockAuth struct {
    mock.Mock
}

func (m *MockAuth) Login(cred *serializers.LoginReq) (*serializers.LoginResp, error) {
    args := m.Called(cred)
    return args.Get(0).(*serializers.LoginResp), args.Error(1)
}
That's it. Just create a test.
func TestLogin(t *testing.T) {
    // setup mocks
    cred := &serializers.LoginReq{}
    mockReturn := &serializers.LoginResp{}
    mockAuth := &MockAuth{}

    // setup expectation
    mockAuth.On("Login", cred).Return(mockReturn, nil)

    // setup server
    mux := http.NewServeMux()
    mux.HandleFunc("/v1/login", func(w http.ResponseWriter, r *http.Request) {
        ec := echo.New().NewContext(r, w)
        ctr := &auth{
            authSvc: mockAuth,
        }
        ctr.Login(ec)
    })

    // make request
    writer := httptest.NewRecorder()
    request, _ := http.NewRequest(http.MethodPost, "/v1/login", strings.NewReader("password"))
    mux.ServeHTTP(writer, request)

    // make assertions
    mockAuth.AssertExpectations(t)
}
The code above is not 100% correct. I don't use echo myself, however it should get you close. Hope this helps.
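If you specifically want gomock/mockgen, as mentioned in the question, the same Auther interface can be mocked with generated code instead of the hand-written testify mock, and echo's own test helpers can build a real echo.Context. A rough sketch, assuming mockgen has generated a MockAuther into a local mocks package:

//go:generate mockgen -source=auth.go -destination=mocks/auther.go -package=mocks

func TestLoginWithGomock(t *testing.T) {
    ctrl := gomock.NewController(t)
    defer ctrl.Finish()

    resp := &serializers.LoginResp{}
    mockAuth := mocks.NewMockAuther(ctrl)
    // Bind creates a fresh LoginReq, so match any argument rather than a specific pointer.
    mockAuth.EXPECT().Login(gomock.Any()).Return(resp, nil)

    e := echo.New()
    req := httptest.NewRequest(http.MethodPost, "/v1/login", strings.NewReader(`{"email":"a@b.c"}`))
    req.Header.Set(echo.HeaderContentType, echo.MIMEApplicationJSON)
    rec := httptest.NewRecorder()
    c := e.NewContext(req, rec)

    ctr := &auth{authSvc: mockAuth}
    if err := ctr.Login(c); err != nil {
        t.Fatal(err)
    }
    if rec.Code != http.StatusOK {
        t.Fatalf("expected 200, got %d", rec.Code)
    }
}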

Mock context.Context to test lambdacontext.FromContext

I'm building an AWS Lambda using aws-sdk-go and aws-lambda-go, and I'm stuck on a little problem.
I want to test my lambda handler. To do so, I need to mock a valid context.Context containing valid attributes for lambdacontext.LambdaContext that satisfies lambdacontext.FromContext.
I cannot seem to find a way to build such a mock, since lambdacontext.FromContext always returns _, false.
Here's my main, with a simple handler for an events.SNSEvent event:
package main

import (
    "context"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-lambda-go/lambdacontext"
)

func main() {
    lambda.Start(handleRequest)
}

func handleRequest(ctx context.Context, snsEvent events.SNSEvent) error {
    lc, ok := lambdacontext.FromContext(ctx)
    if !ok {
        // Always false
        ...
        return someErr
    }
    . . .
    return nil
}
And here's my test for handleRequest:
package main
import (
"context"
"github.com/aws/aws-lambda-go/events"
"github.com/aws/aws-lambda-go/lambdacontext"
"github.com/stretchr/testify/assert"
"gitlab.easy-network.it/meg/aml-rekognition/testdata"
"testing"
)
const imgMock = `
{
\"some_parameter\": \"some_value\"
}`
func TestHandleRequest(t *testing.T) {
c := context.Background()
ctxV := context.WithValue(c, "", map[string]interface{}{
"AwsRequestID" : "some_aws_id",
"InvokedFunctionArn" : "some_arn",
"Identity" : lambdacontext.CognitoIdentity{},
"ClientContext" : lambdacontext.ClientContext{},
})
snsEventMock := events.SNSEvent{
Records: []events.SNSEventRecord{
{
SNS: events.SNSEntity{
Message: imgMock,
},
},
},
}
err := handleRequest(ctxV, snsEventMock)
assert.NoError(t, err)
}
I also tried other mocks, like passing a struct with these parameters, etc., but I always get false. For instance, I also tried:
type TestMock struct {
    AwsRequestID       string
    InvokedFunctionArn string
    Identity           lambdacontext.CognitoIdentity
    ClientContext      lambdacontext.ClientContext
}

func TestHandleRequest(t *testing.T) {
    c := context.Background()
    testMock := TestMock{
        AwsRequestID:       "some_aws_id",
        InvokedFunctionArn: "some_arn",
        Identity:           lambdacontext.CognitoIdentity{},
        ClientContext:      lambdacontext.ClientContext{},
    }
    ctxV := context.WithValue(c, "", testMock)
    . . .
}
I checked out the source of FromContext and I've been scratching my head for a while.
// LambdaContext is the set of metadata that is passed for every Invoke.
type LambdaContext struct {
    AwsRequestID       string
    InvokedFunctionArn string
    Identity           CognitoIdentity
    ClientContext      ClientContext
}

// An unexported type to be used as the key for types in this package.
// This prevents collisions with keys defined in other packages.
type key struct{}

// The key for a LambdaContext in Contexts.
// Users of this package must use lambdacontext.NewContext and lambdacontext.FromContext
// instead of using this key directly.
var contextKey = &key{}

// FromContext returns the LambdaContext value stored in ctx, if any.
func FromContext(ctx context.Context) (*LambdaContext, bool) {
    lc, ok := ctx.Value(contextKey).(*LambdaContext)
    return lc, ok
}
Of course, it returns false even if I just pass a context.Background() to it.
Any idea on how should I build a valid context.Context to let lambdacontext.FromContext return true?
lambdacontext.FromContext() checks whether the passed context.Context contains a value stored under a "private" key held inside the lambdacontext package:
// An unexported type to be used as the key for types in this package.
// This prevents collisions with keys defined in other packages.
type key struct{}

// The key for a LambdaContext in Contexts.
// Users of this package must use lambdacontext.NewContext and lambdacontext.FromContext
// instead of using this key directly.
var contextKey = &key{}

// FromContext returns the LambdaContext value stored in ctx, if any.
func FromContext(ctx context.Context) (*LambdaContext, bool) {
    lc, ok := ctx.Value(contextKey).(*LambdaContext)
    return lc, ok
}
You cannot access this key, and you can't "reproduce" it (you can't create a value that will be equal to this "private" key).
But there's an easy way: simply use lambdacontext.NewContext() to derive a context which will have this key:
// NewContext returns a new Context that carries value lc.
func NewContext(parent context.Context, lc *LambdaContext) context.Context {
    return context.WithValue(parent, contextKey, lc)
}
So the solution:
ctx := context.Background()
// Add keys to your liking, then:
lc := new(lambdacontext.LambdaContext)
ctx = lambdacontext.NewContext(ctx, lc)
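Applied to the test from the question, the mocked context would then look something like this (same placeholder values as before):

func TestHandleRequest(t *testing.T) {
    lc := &lambdacontext.LambdaContext{
        AwsRequestID:       "some_aws_id",
        InvokedFunctionArn: "some_arn",
        Identity:           lambdacontext.CognitoIdentity{},
        ClientContext:      lambdacontext.ClientContext{},
    }
    ctx := lambdacontext.NewContext(context.Background(), lc)

    snsEventMock := events.SNSEvent{
        Records: []events.SNSEventRecord{
            {SNS: events.SNSEntity{Message: imgMock}},
        },
    }

    err := handleRequest(ctx, snsEventMock)
    assert.NoError(t, err)
}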

Golang Mocking with Elastic

I've built a quick and easy API in Go that queries ElasticSearch. Now that I know it can be done, I want to do it correctly by adding tests. I've abstracted some of my code so that it can be unit-testable, but I've been having some issues mocking the elastic library, and as such I figured it would be best if I tried a simple case to mock just that.
import (
    "encoding/json"
    "net/http"

    "github.com/olivere/elastic"
)

...

func CheckBucketExists(name string, client *elastic.Client) bool {
    exists, err := client.IndexExists(name).Do()
    if err != nil {
        panic(err)
    }
    return exists
}
And now the test...
import (
    "fmt"
    "testing"

    "github.com/stretchr/testify/assert"
    "github.com/stretchr/testify/mock"
)

type MockClient struct {
    mock.Mock
}

func (m *MockClient) IndexExists(name string) (bool, error) {
    args := m.Mock.Called()
    fmt.Println("This is a thing")
    return args.Bool(0), args.Error(1)
}

func TestMockBucketExists(t *testing.T) {
    m := MockClient{}
    m.On("IndexExists", "thisuri").Return(true)

    r := CheckBucketExists("thisuri", m) // <-- compiler error here
    assert := assert.New(t)
    assert.True(r, true)
}
This yields the following error: cannot use m (type MockClient) as type *elastic.Client in argument to CheckBucketExists.
I'm assuming this is something fundamental with my use of the elastic.Client type, but I'm still too much of a noob.
This is an old question, but I couldn't find the solution either.
Unfortunately, this library is implemented using a struct, which makes mocking it not trivial at all, so the options I found are:
(1) Wrap the elastic client methods you use in an interface of your own and "proxy" the calls, so you end up with something like:
type ObjectsearchESClient interface {
    // ... all methods...
    Do(context.Context) (*elastic.SearchResult, error)
}

// NewObjectsearchESClient returns a new implementation of ObjectsearchESClient
func NewObjectsearchESClient(cluster *config.ESCluster) (ObjectsearchESClient, error) {
    esClient, err := newESClient(cluster)
    if err != nil {
        return nil, err
    }
    newClient := objectsearchESClient{
        Client: esClient,
    }
    return &newClient, nil
}

// ... all methods...

func (oc *objectsearchESClient) Do(ctx context.Context) (*elastic.SearchResult, error) {
    return oc.searchService.Do(ctx)
}
And then mock this interface and responses as you would with other modules of your app.
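For example, a hand-rolled mock of that interface could look like this (a sketch; only the Do method from the snippet above is shown):

type mockObjectsearchESClient struct {
    result *elastic.SearchResult
    err    error
}

// Do returns a canned result instead of calling a real cluster.
func (m *mockObjectsearchESClient) Do(ctx context.Context) (*elastic.SearchResult, error) {
    return m.result, m.err
}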
(2) Another option, as pointed out in this blog post, is to mock the response of the REST calls using httptest.Server.
For this, I mocked the handler, which consists of mocking the response of the "HTTP call":
func mockHandler() http.HandlerFunc {
    return func(w http.ResponseWriter, r *http.Request) {
        resp := `{
            "took": 73,
            "timed_out": false,
            ... json ...
            "hits": [ ... ]
            ...json ... ,
            "aggregations": { ... }
        }`
        w.Write([]byte(resp))
    }
}
Then you create a dummy elastic.Client struct
func mockClient(url string) (*elastic.Client, error) {
    client, err := elastic.NewSimpleClient(elastic.SetURL(url))
    if err != nil {
        return nil, err
    }
    return client, nil
}
In this case, I have a library that builds my elastic.SearchService and returns it, so I use the HTTP test server like this:
...
ts := httptest.NewServer(mockHandler())
defer ts.Close()

esClient, err := mockClient(ts.URL)
ss := elastic.NewSearchService(esClient)

mockLibESClient := es_mock.NewMockSearcherClient(mockCtrl)
mockLibESClient.EXPECT().GetEmployeeSearchServices(ctx).Return(ss, nil)
where mockLibESClient is the library I mentioned, and we stub its GetEmployeeSearchServices method so that it returns the SearchService that will return the expected payload.
Note: for creating the mockLibESClient mock I used https://github.com/golang/mock
I found this to be convoluted, but "wrapping" the elastic.Client was, from my point of view, more work.
Question: I tried to mock it by using https://github.com/vburenin/ifacemaker to create an interface and then mocking that interface with https://github.com/golang/mock, but I kept getting compatibility errors when trying to return an interface instead of a struct. I'm not a Go expert at all, so I probably need to understand the type casting a little better to solve it that way. If any of you know how to do it like that, please let me know.
The elasticsearch go client Github repo contains an official example of how to mock the elasticsearch client. It basically involves calling NewClient with a configuration which stubs the HTTP transport:
client, err := elasticsearch.NewClient(elasticsearch.Config{
    Transport: &mocktrans,
})
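The mocktrans value referenced above is essentially an http.RoundTripper that returns canned responses. A sketch of the idea (the official example in the repo is more complete; note that newer client versions also verify the X-Elastic-Product response header):

type mockTransport struct {
    resp *http.Response
    err  error
}

// RoundTrip implements http.RoundTripper and returns the canned response.
func (t *mockTransport) RoundTrip(*http.Request) (*http.Response, error) {
    return t.resp, t.err
}

func newMockedClient(body string) (*elasticsearch.Client, error) {
    mocktrans := &mockTransport{
        resp: &http.Response{
            StatusCode: http.StatusOK,
            Header:     http.Header{"X-Elastic-Product": []string{"Elasticsearch"}},
            Body:       io.NopCloser(strings.NewReader(body)),
        },
    }
    return elasticsearch.NewClient(elasticsearch.Config{Transport: mocktrans})
}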
There are primarily three ways I discovered to create a mock/dummy ES client. My response does not include integration tests against a real Elasticsearch cluster.
You can follow this article to mock the response of the REST calls using httptest.Server and eventually create a dummy elastic.Client struct.
As mentioned by the package author in this link, you can work on "specifying an interface that has two implementations: One that uses a real ES cluster, and one that uses callbacks used in testing. Here's an example to get you started:"
type Searcher interface {
    Search(context.Context, SearchRequest) (*SearchResponse, error)
}

// ESSearcher will be used with a real ES cluster.
type ESSearcher struct {
    client *elastic.Client
}

func (s *ESSearcher) Search(ctx context.Context, req SearchRequest) (*SearchResponse, error) {
    // Use s.client to run against real ES cluster and perform a search
}

// MockedSearcher can be used in testing.
type MockedSearcher struct {
    OnSearch func(context.Context, SearchRequest) (*SearchResponse, error)
}

func (s *MockedSearcher) Search(ctx context.Context, req SearchRequest) (*SearchResponse, error) {
    return s.OnSearch(ctx, req)
}
Finally, as mentioned by the author in the same link you can "run a real Elasticsearch cluster while testing. One particular nice way might be to start the ES cluster during testing with something like github.com/ory/dockertest. Here's an example to get you started:"
package search

import (
    "context"
    "fmt"
    "log"
    "os"
    "testing"

    "github.com/olivere/elastic/v7"
    "github.com/ory/dockertest/v3"
    "github.com/ory/dockertest/v3/docker"
)

// client will be initialized in TestMain
var client *elastic.Client

func TestMain(m *testing.M) {
    pool, err := dockertest.NewPool("")
    if err != nil {
        log.Fatalf("unable to create new pool: %v", err)
    }

    options := &dockertest.RunOptions{
        Repository: "docker.elastic.co/elasticsearch/elasticsearch-oss",
        Tag:        "7.8.0",
        PortBindings: map[docker.Port][]docker.PortBinding{
            "9200": {{HostPort: "9200"}},
        },
        Env: []string{
            "cluster.name=elasticsearch",
            "bootstrap.memory_lock=true",
            "discovery.type=single-node",
            "network.publish_host=127.0.0.1",
            "logger.org.elasticsearch=warn",
            "ES_JAVA_OPTS=-Xms1g -Xmx1g",
        },
    }

    resource, err := pool.RunWithOptions(options)
    if err != nil {
        log.Fatalf("unable to start ES: %v", err)
    }

    endpoint := fmt.Sprintf("http://127.0.0.1:%s", resource.GetPort("9200/tcp"))
    if err := pool.Retry(func() error {
        var err error
        client, err = elastic.NewClient(
            elastic.SetURL(endpoint),
            elastic.SetSniff(false),
            elastic.SetHealthcheck(false),
        )
        if err != nil {
            return err
        }
        _, _, err = client.Ping(endpoint).Do(context.Background())
        if err != nil {
            return err
        }
        return nil
    }); err != nil {
        log.Fatalf("unable to connect to ES: %v", err)
    }

    code := m.Run()

    if err := pool.Purge(resource); err != nil {
        log.Fatalf("unable to stop ES: %v", err)
    }

    os.Exit(code)
}

func TestAgainstRealCluster(t *testing.T) {
    // You can use the "client" variable here
    // Example code:
    exists, err := client.IndexExists("cities-test").Do(context.Background())
    if err != nil {
        t.Fatal(err)
    }
    if !exists {
        t.Fatal("expected to find ES index")
    }
}
The line
func CheckBucketExists(name string, client *elastic.Client) bool {
states that CheckBucketExists expects a *elastic.Client.
The lines:
m := MockClient{}
m.On("IndexExists", "thisuri").Return(true)
r := CheckBucketExists("thisuri", m)
pass a MockClient to the CheckBucketExists function.
This is causing a type conflict.
Perhaps you need to import github.com/olivere/elastic into your test file and do:
m := &elastic.Client{}
instead of
m := MockClient{}
But I'm not 100% sure what you're trying to do.