I'm using Amazon Ion to marshal and unmarshal data received from various AWS services.
I need to write a custom unmarshal function, and I found an example of how that's achieved in Amazon Ion's official docs, see here.
Using that example, I have written the code below:
package main

import (
	"bytes"
	"fmt"

	"github.com/amzn/ion-go/ion"
)

func main() {
	UnmarshalCustomMarshaler()
}

type unmarshalMe struct {
	Name   string
	custom bool
}

func (u *unmarshalMe) UnmarshalIon(r ion.Reader) error {
	fmt.Print("UnmarshalIon called")
	u.custom = true
	return nil
}

func UnmarshalCustomMarshaler() {
	ionBinary, err := ion.MarshalBinary(unmarshalMe{
		Name: "John Doe",
	})
	if err != nil {
		fmt.Println("Error marshalling ion binary: ", err)
		panic(err)
	}

	dec := ion.NewReader(bytes.NewReader(ionBinary))

	var decodedResult unmarshalMe
	ion.UnmarshalFrom(dec, &decodedResult)
	fmt.Println("Decoded result: ", decodedResult)
}
Problem: The above code does not work as expected. The UnmarshalIon function is not called, but according to the docs it should be. What am I doing wrong?
You are probably using v1.1.3, the default version, which does not contain this feature.
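If that's the case, a quick way to check which release you are on and pull a newer one (this assumes the custom-unmarshaler support ships in a release newer than v1.1.3; confirm against the module's release notes):

go list -m github.com/amzn/ion-go            # shows the version currently in go.mod
go list -m -versions github.com/amzn/ion-go  # lists the available tagged versions
go get github.com/amzn/ion-go@latest         # upgrade, then run: go mod tidy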
Trying to create a Lambda to interact with my DynamoDB.
This specific Lambda is to put/write an item to the DB:
package main

import (
	"fmt"

	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
	"github.com/aws/aws-sdk-go/service/dynamodb/dynamodbattribute"
)

type Item struct {
	Email    string `json:"email"`
	Password string `json:"password"`
	Rname    string `json:"rname"`
}

func Put() error {
	// Create a session - London Region
	session, err := session.NewSession(&aws.Config{
		Region: aws.String("eu-west-2"),
	})
	if err != nil {
		fmt.Println(err)
	}

	svc := dynamodb.New(session)

	// Create instance of Item Struct
	item := Item{
		Email:    "123#mail.com",
		Password: "12345678",
		Rname:    "abcde",
	}

	// Marshal Item
	av, err := dynamodbattribute.MarshalMap(item)
	if err != nil {
		fmt.Println("Got error marshalling map:")
		fmt.Println(err)
	}

	// Create Item
	input := &dynamodb.PutItemInput{
		Item:      av,
		TableName: aws.String("accountsTable"),
	}

	_, err = svc.PutItem(input)
	if err != nil {
		fmt.Println("Got error calling PutItem:")
		fmt.Println(err)
	}

	return err
}

func main() {
	lambda.Start(Put())
}
However, I am getting the error:
{
"errorMessage": "handler is nil",
"errorType": "errorString"
}
I have changed the handler in the runtime settings to main too, so I don't think that would be the issue.
Building with:
GOOS=linux GOARCH=amd64 CGO_ENABLED=0 go build -a main.go
and putting the zip of the executable into AWS via the console (no IaC).
Any help would be greatly appreciated to resolve this error! Thanks.
You need to pass the function handler, not the function's result, to lambda.Start.
Please update your main function as follows:
func main() {
	lambda.Start(Put)
}
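lambda.Start expects the handler function itself and calls it once per invocation. If you also want the invocation context, a handler signature like the one sketched below is accepted by aws-lambda-go as well (keeping your Put name; the context parameter is optional):

func Put(ctx context.Context) error {
	// ... same body as in the question ...
	return nil
}

func main() {
	lambda.Start(Put) // pass the function value, not the result of calling it
}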
How should I unit test the following piece of code? I was trying to use counterfeiter to fake the input *s3.S3 object, but it's not working for me. I am new to counterfeiter and Go. Can someone please help me with that?
func (l *listContentImp) ListS3Content(client *s3.S3) (bool, error) {
	listObj := &s3.ListObjectsV2Input{
		Bucket: aws.String(l.bucket),
	}

	var err error
	l.lObj, err = client.ListObjectsV2(listObj)
	if err != nil {
		return false, err
	}
	return true, nil
}
You shouldn't pass a reference to the s3.S3 struct. When using the AWS SDK for Go v1, you typically pass the service's corresponding interface. For S3 this is s3iface.
The signature of your function would look like this:
func (l *listContentImp) ListS3Content(client s3iface.S3API) (bool, error)
Now any struct that satisfies s3iface.S3API will work; by embedding the interface you only have to implement the methods you actually call.
At runtime you'll pass the proper service client, but in the unit tests you can just pass a mock:
type mock struct {
	s3iface.S3API
	output *s3.ListObjectsV2Output
	err    error
}

func (m mock) ListObjectsV2(*s3.ListObjectsV2Input) (*s3.ListObjectsV2Output, error) {
	return m.output, m.err
}
In your test you create the mock and pass it to your function:
func Test_ListObject(t *testing.T) {
	l := &listContentImp{...}
	m := mock{
		output: &s3.ListObjectsV2Output{...},
		err:    nil,
	}

	result, err := l.ListS3Content(m)
	// ... add checks here ...
}
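At runtime nothing else changes, because *s3.S3 (returned by s3.New) already satisfies s3iface.S3API. A minimal sketch of the production call (the bucket name is a placeholder):

sess := session.Must(session.NewSession())
svc := s3.New(sess) // *s3.S3 satisfies s3iface.S3API

l := &listContentImp{bucket: "my-bucket"} // placeholder bucket name
ok, err := l.ListS3Content(svc)
if err != nil {
	// handle error
}
fmt.Println(ok)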
I am working with the AWS S3 SDK in Go, playing with uploads and downloads to various buckets. I am wondering if there is a simpler way to upload structs or objects directly to the bucket?
I have a struct representing an event:
type Event struct {
	ID        string
	ProcessID string
	TxnID     string
	Inputs    map[string]interface{}
}
I would like to upload this into the S3 bucket, but the code I found in the documentation only works for uploading strings.
func Save(client S3Client, T interface{}, key string) bool {
	svc := client.S3clientObject
	input := &s3.PutObjectInput{
		Body:   aws.ReadSeekCloser(strings.NewReader("testing this one")),
		Bucket: aws.String(GetS3Bucket()),
		Key:    aws.String(GetObjectKey(T, key)),
		Metadata: map[string]*string{
			"metadata1": aws.String("value1"),
			"metadata2": aws.String("value2"),
		},
	}

	_, err := svc.PutObject(input)
	if err != nil {
		fmt.Println(err)
		return false
	}
	return true
}
This successfully uploads a basic file to the S3 bucket that, when opened, simply reads "testing this one". Is there a way to upload an object to the bucket rather than just a string value?
Any help is appreciated, as I am new to Go and S3.
Edit:
This is the code I'm using for the Get function:
func GetIt(client S3Client, T interface{}, key string) interface{} {
	svc := client.S3clientObject
	s3Key := GetObjectKey(T, key)
	resp, err := svc.GetObject(&s3.GetObjectInput{
		Bucket: aws.String(GetS3Bucket()),
		Key:    aws.String(s3Key),
	})
	if err != nil {
		fmt.Println(err)
		return err
	}

	result := json.NewDecoder(resp.Body).Decode(&T)
	fmt.Println(result)
	return json.NewDecoder(resp.Body).Decode(&T)
}
func main() {
	client := b.CreateS3Client()
	event := b.CreateEvent()
	GetIt(client, event, key)
}
Encode the value as bytes and upload the bytes. Here's how to encode the value as JSON bytes:
func Save(client S3Client, value interface{}, key string) error {
	p, err := json.Marshal(value)
	if err != nil {
		return err
	}
	input := &s3.PutObjectInput{
		Body: aws.ReadSeekCloser(bytes.NewReader(p)),
		…
	}
	…
}
Call Save with the value you want to upload:
value := &Event{ID: "an id", …}
err := Save(…, value, …)
if err != nil {
	// handle error
}
There are many possible encodings, including gob, XML, JSON, msgpack, and so on. The best encoding format will depend on your application requirements.
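For instance, if you preferred gob over JSON, only the encoding step inside Save changes (a sketch; it assumes the "bytes" and "encoding/gob" imports, and gob works best with concrete, exported types like your Event):

var buf bytes.Buffer
if err := gob.NewEncoder(&buf).Encode(value); err != nil {
	return err
}
input := &s3.PutObjectInput{
	Body: aws.ReadSeekCloser(bytes.NewReader(buf.Bytes())),
	…
}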
Reverse the process when getting an object:
func GetIt(client S3Client, T interface{}, key string) error {
	svc := client.S3clientObject
	resp, err := svc.GetObject(&s3.GetObjectInput{
		Bucket: aws.String(GetS3Bucket()),
		Key:    aws.String(key),
	})
	if err != nil {
		return err
	}
	return json.NewDecoder(resp.Body).Decode(T)
}
Call GetIt with a pointer to the destination value:
var value model.Event
err := GetIt(client, &value, key)
if err != nil {
	// handle error
}
fmt.Println(value) // prints the decoded value.
The example cited here shows that S3 allows you to upload anything that implements the io.Reader interface. The example uses strings.NewReader to create an io.Reader that knows how to provide the specified string to the caller. Your job (according to AWS here) is to figure out how to adapt whatever you need to store into an io.Reader.
You can store the bytes directly, JSON-encoded, like this:
package main

import (
	"bytes"
	"encoding/json"
)

type Event struct {
	ID        string
	ProcessID string
	TxnID     string
	Inputs    map[string]interface{}
}

func main() {
	event := Event{ID: "some id"} // placeholder value; populate the fields you need

	// To prepare the object for writing
	b, err := json.Marshal(event)
	if err != nil {
		return
	}

	// pass this reader into aws.ReadSeekCloser(...)
	reader := bytes.NewReader(b)
	_ = reader // see the PutObject sketch below
}
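From there, the reader goes into the same PutObjectInput shape used in the question (a sketch; svc is assumed to be an *s3.S3 client, and the bucket/key values are placeholders):

input := &s3.PutObjectInput{
	Body:   aws.ReadSeekCloser(reader),
	Bucket: aws.String("my-bucket"),   // placeholder
	Key:    aws.String("events/1234"), // placeholder
}
if _, err := svc.PutObject(input); err != nil {
	// handle error
}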
I have the following struct.
package logger

import "fmt"

type IPrinter interface {
	Print(value string)
}

type ConsolePrinter struct{}

func (cp *ConsolePrinter) Print(value string) {
	fmt.Printf("this is value: %s", value)
}
Test coverage shows that I need to test the ConsolePrinter's Print method.
How can I cover this method?
Thanks.
Following the comment that @icza wrote, I've written the test below.
func TestPrint(t *testing.T) {
	rescueStdout := os.Stdout
	r, w, _ := os.Pipe()
	os.Stdout = w

	cp := &ConsolePrinter{}
	cp.Print("test")

	w.Close()
	out, _ := ioutil.ReadAll(r)
	os.Stdout = rescueStdout

	if string(out) != "this is value: test" {
		t.Errorf("Expected %s, got %s", "this is value: test", out)
	}
}
I found the example in the question "What is the best way to convert byte array to string?".
Use Examples to convey the usage of a function.
Don't fret over 100% test coverage, especially for simple straightforward functions.
func ExampleHello() {
	fmt.Println("hello")
	// Output: hello
}
An additional benefit is that examples are included in the documentation generated by the go doc tooling.
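Applied to your ConsolePrinter, a testable example could look like this (it goes in a _test.go file; go test runs it and compares stdout with the Output comment):

func ExampleConsolePrinter_Print() {
	cp := &ConsolePrinter{}
	cp.Print("test")
	// Output: this is value: test
}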
I would recommend creating a new logger instance that behaves exactly like the fmt methods for printing data to the console. You can also configure it with additional features such as showing the filename, date, etc. Such a custom logger can be passed as a parameter to your service/instance factory method. That makes it really easy to mock and test.
Your code:
type Logs interface {
	Println(v ...interface{})
}

type InstanceToTest struct {
	log Logs
}

func InstanceToTestFactory(logger Logs) *InstanceToTest {
	return &InstanceToTest{logger}
}

func (i *InstanceToTest) SomeMethod(a string) {
	i.log.Println(a)
}
Create a mock for the logger:
type LoggerMock struct {
	CalledPrintln []interface{}
}

func (l *LoggerMock) Println(v ...interface{}) {
	l.CalledPrintln = append(l.CalledPrintln, v...)
}
And in your test:
func TestInstanceToTestSomeMethod(t *testing.T) {
	l := &LoggerMock{}
	i := InstanceToTestFactory(l)
	a := "Test"
	i.SomeMethod(a)
	if len(l.CalledPrintln) == 0 || l.CalledPrintln[0] != a {
		t.Error("Not called")
	}
}
I've built a quick and easy API in Go that queries Elasticsearch. Now that I know it can be done, I want to do it correctly by adding tests. I've abstracted some of my code so that it is unit-testable, but I've been having some issues mocking the elastic library, so I figured it would be best to try mocking just that in a simple case.
import (
	"encoding/json"
	"github.com/olivere/elastic"
	"net/http"
)

...

func CheckBucketExists(name string, client *elastic.Client) bool {
	exists, err := client.IndexExists(name).Do()
	if err != nil {
		panic(err)
	}
	return exists
}
And now the test...
import (
	"fmt"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
	"testing"
)

type MockClient struct {
	mock.Mock
}

func (m *MockClient) IndexExists(name string) (bool, error) {
	args := m.Mock.Called()
	fmt.Println("This is a thing")
	return args.Bool(0), args.Error(1)
}

func TestMockBucketExists(t *testing.T) {
	m := MockClient{}
	m.On("IndexExists", "thisuri").Return(true)

	r := CheckBucketExists("thisuri", m) // <-- compiler error points here

	assert := assert.New(t)
	assert.True(r, true)
}
This yields the following error: cannot use m (type MockClient) as type *elastic.Client in argument to CheckBucketExists.
I'm assuming this is something fundamental about my use of the elastic.Client type, but I'm still too much of a noob to see it.
This is an old question, but I couldn't find a solution elsewhere either.
Unfortunately, this library is implemented using a struct, which makes mocking it not trivial at all, so the options I found are:
(1) Wrap the elastic.SearchService methods you need in an interface of your own and "proxy" the calls, so you end up with something like:
type ObjectsearchESClient interface {
	// ... all methods...
	Do(context.Context) (*elastic.SearchResult, error)
}

// NewObjectsearchESClient returns a new implementation of ObjectsearchESClient
func NewObjectsearchESClient(cluster *config.ESCluster) (ObjectsearchESClient, error) {
	esClient, err := newESClient(cluster)
	if err != nil {
		return nil, err
	}
	newClient := objectsearchESClient{
		Client: esClient,
	}
	return &newClient, nil
}

// ... all methods...

func (oc *objectsearchESClient) Do(ctx context.Context) (*elastic.SearchResult, error) {
	return oc.searchService.Do(ctx)
}
And then mock this interface and responses as you would with other modules of your app.
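For instance, a hand-rolled mock of that interface could look like this (a sketch; the type names follow the snippet above, so rename them to match your real interface):

type mockObjectsearchESClient struct {
	result *elastic.SearchResult
	err    error
}

func (m *mockObjectsearchESClient) Do(ctx context.Context) (*elastic.SearchResult, error) {
	return m.result, m.err
}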
(2) Another option, as pointed out in this blog post, is to mock the response to the REST calls using httptest.Server.
For this, I mocked the handler, which consists of mocking the response from the "HTTP call":
func mockHandler() http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		resp := `{
			"took": 73,
			"timed_out": false,
			... json ...
			"hits": [ ... ]
			...json ... ,
			"aggregations": { ... }
		}`
		w.Write([]byte(resp))
	}
}
Then you create a dummy elastic.Client struct:
func mockClient(url string) (*elastic.Client, error) {
	client, err := elastic.NewSimpleClient(elastic.SetURL(url))
	if err != nil {
		return nil, err
	}
	return client, nil
}
In this case, I have a library that builds my elastic.SearchService and returns it, so I use the HTTP test server like:
...
ts := httptest.NewServer(mockHandler())
defer ts.Close()
esClient, err := mockClient(ts.URL)
ss := elastic.NewSearchService(esClient)
mockLibESClient := es_mock.NewMockSearcherClient(mockCtrl)
mockLibESClient.EXPECT().GetEmployeeSearchServices(ctx).Return(ss, nil)
where mockLibESClient is the library I mentioned, and we stub the mockLibESClient.GetEmployeeSearchServices method, making it return the SearchService that will return the expected payload.
Note: for creating the mock mockLibESClient I used https://github.com/golang/mock
I found this to be convoluted, but "wrapping" the elastic.Client was, from my point of view, more work.
Question: I tried to mock it by using https://github.com/vburenin/ifacemaker to create an interface, and then mocking that interface with https://github.com/golang/mock, but I kept getting compatibility errors when trying to return an interface instead of a struct. I'm not a Go expert at all, so I probably need to understand the type conversions a little better to solve it that way. If any of you know how to do it with that approach, please let me know.
The elasticsearch Go client's GitHub repo contains an official example of how to mock the elasticsearch client. It basically involves calling NewClient with a configuration that stubs the HTTP transport:
client, err := elasticsearch.NewClient(elasticsearch.Config{
	Transport: &mocktrans,
})
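Here mocktrans is anything that implements http.RoundTripper. A minimal hand-written version might look like the sketch below (not the exact code from the repo; the canned JSON body and header are assumptions about what your code under test expects):

type mockTransport struct {
	resp *http.Response
	err  error
}

func (t *mockTransport) RoundTrip(*http.Request) (*http.Response, error) {
	return t.resp, t.err
}

// in the test:
mocktrans := mockTransport{
	resp: &http.Response{
		StatusCode: http.StatusOK,
		Body:       ioutil.NopCloser(strings.NewReader(`{"acknowledged": true}`)),
		Header:     http.Header{"Content-Type": []string{"application/json"}},
	},
}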
There are primarily three ways I discovered to create a mock/dummy ES client. My response does not include integration tests against a real Elasticsearch cluster.
You can follow this article to mock the responses to the REST calls using httptest.Server, and eventually create a dummy elastic.Client struct.
As mentioned by the package author in this link, you can work on "specifying an interface that has two implementations: One that uses a real ES cluster, and one that uses callbacks used in testing. Here's an example to get you started:"
type Searcher interface {
	Search(context.Context, SearchRequest) (*SearchResponse, error)
}

// ESSearcher will be used with a real ES cluster.
type ESSearcher struct {
	client *elastic.Client
}

func (s *ESSearcher) Search(ctx context.Context, req SearchRequest) (*SearchResponse, error) {
	// Use s.client to run against real ES cluster and perform a search
}

// MockedSearcher can be used in testing.
type MockedSearcher struct {
	OnSearch func(context.Context, SearchRequest) (*SearchResponse, error)
}

func (s *MockedSearcher) Search(ctx context.Context, req SearchRequest) (*SearchResponse, error) {
	return s.OnSearch(ctx, req)
}
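In a test, using the mocked implementation is then just a matter of supplying the callback (a sketch; SearchRequest and SearchResponse are the placeholder types from the snippet above):

func TestSomethingThatSearches(t *testing.T) {
	searcher := &MockedSearcher{
		OnSearch: func(ctx context.Context, req SearchRequest) (*SearchResponse, error) {
			return &SearchResponse{}, nil // canned response
		},
	}
	// pass "searcher" to the code under test wherever a Searcher is expected
	_ = searcher
}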
Finally, as mentioned by the author in the same link, you can "run a real Elasticsearch cluster while testing. One particularly nice way might be to start the ES cluster during testing with something like github.com/ory/dockertest. Here's an example to get you started:"
package search

import (
	"context"
	"fmt"
	"log"
	"os"
	"testing"

	"github.com/olivere/elastic/v7"
	"github.com/ory/dockertest/v3"
	"github.com/ory/dockertest/v3/docker"
)

// client will be initialized in TestMain
var client *elastic.Client

func TestMain(m *testing.M) {
	pool, err := dockertest.NewPool("")
	if err != nil {
		log.Fatalf("unable to create new pool: %v", err)
	}

	options := &dockertest.RunOptions{
		Repository: "docker.elastic.co/elasticsearch/elasticsearch-oss",
		Tag:        "7.8.0",
		PortBindings: map[docker.Port][]docker.PortBinding{
			"9200": {{HostPort: "9200"}},
		},
		Env: []string{
			"cluster.name=elasticsearch",
			"bootstrap.memory_lock=true",
			"discovery.type=single-node",
			"network.publish_host=127.0.0.1",
			"logger.org.elasticsearch=warn",
			"ES_JAVA_OPTS=-Xms1g -Xmx1g",
		},
	}

	resource, err := pool.RunWithOptions(options)
	if err != nil {
		log.Fatalf("unable to start ES: %v", err)
	}

	endpoint := fmt.Sprintf("http://127.0.0.1:%s", resource.GetPort("9200/tcp"))
	if err := pool.Retry(func() error {
		var err error
		client, err = elastic.NewClient(
			elastic.SetURL(endpoint),
			elastic.SetSniff(false),
			elastic.SetHealthcheck(false),
		)
		if err != nil {
			return err
		}
		_, _, err = client.Ping(endpoint).Do(context.Background())
		if err != nil {
			return err
		}
		return nil
	}); err != nil {
		log.Fatalf("unable to connect to ES: %v", err)
	}

	code := m.Run()

	if err := pool.Purge(resource); err != nil {
		log.Fatalf("unable to stop ES: %v", err)
	}

	os.Exit(code)
}

func TestAgainstRealCluster(t *testing.T) {
	// You can use the "client" variable here.
	// Example code:
	exists, err := client.IndexExists("cities-test").Do(context.Background())
	if err != nil {
		t.Fatal(err)
	}
	if !exists {
		t.Fatal("expected to find ES index")
	}
}
The line:
func CheckBucketExists(name string, client *elastic.Client) bool {
states that CheckBucketExists expects a *elastic.Client.
The lines:
m := MockClient{}
m.On("IndexExists", "thisuri").Return(true)
r := CheckBucketExists("thisuri", m)
pass a MockClient to the CheckBucketExists function.
This is causing a type conflict.
Perhaps you need to import github.com/olivere/elastic into your test file and do:
m := &elastic.Client{}
instead of
m := MockClient{}
But I'm not 100% sure what you're trying to do.