AWS Custom Resource ROLLBACK_FAILED because of a catch-22-like situation

I have a Custom Resource which gets stuck in a catch-22-like situation whenever a ROLLBACK occurs.
The code below is a simplified example of what my code does. For a create request it creates a table, for a delete request it drops it, and for an update it compares the old properties with the new ones and returns an error when one of the columns has a new value (column updates are not supported yet).
A problematic ROLLBACK_FAILED occurs:
[SOLVED] whenever a create request fails (due to an SQL syntax error, for example). In this case it triggers a delete request for the ROLLBACK phase, but that request fails because the table does not exist yet.
whenever an update request fails due to an updated column value. In this case it triggers a new update request for the ROLLBACK phase in which event.ResourceProperties and event.OldResourceProperties are swapped, which still causes an error.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"strings"

	"github.com/aws/aws-lambda-go/cfn"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/rdsdataservice"
	"github.com/google/uuid"
)

func main() {
	lambda.Start(cfn.LambdaWrap(handler))
}

type properties struct {
	TableName string
	Columns   []struct {
		Name  string
		Value string
	}
}

func handler(ctx context.Context, event cfn.Event) (physicalResourceID string, data map[string]interface{}, err error) {
	prid := event.PhysicalResourceID
	if event.RequestType == cfn.RequestCreate {
		prid = strings.ReplaceAll(uuid.New().String(), "-", "")
	}

	var props properties
	b, _ := json.Marshal(event.ResourceProperties)
	json.Unmarshal(b, &props)

	rds := rdsdataservice.New(session.Must(session.NewSession()))

	if event.RequestType == cfn.RequestCreate {
		if _, err := rds.ExecuteStatement(&rdsdataservice.ExecuteStatementInput{
			Sql: aws.String(fmt.Sprintf("CREATE TABLE %s", props.TableName)),
		}); err != nil {
			return prid, nil, err
		}
	}
	if event.RequestType == cfn.RequestDelete {
		if _, err := rds.ExecuteStatement(&rdsdataservice.ExecuteStatementInput{
			Sql: aws.String(fmt.Sprintf("DROP TABLE %s", props.TableName)),
		}); err != nil {
			return prid, nil, err
		}
	}
	if event.RequestType == cfn.RequestUpdate {
		var oldProps properties
		b, _ := json.Marshal(event.OldResourceProperties)
		json.Unmarshal(b, &oldProps)

		columns := map[string]string{}
		for _, column := range props.Columns {
			columns[column.Name] = column.Value
		}
		for _, column := range oldProps.Columns {
			if val, ok := columns[column.Name]; ok {
				if val != column.Value {
					return "", nil, fmt.Errorf("cannot change column type")
				}
			}
		}
		// Do some extra stuff here for adding/removing columns
	}
	return prid, nil, nil
}
I have thought of two possible solutions. One of them I could implement, with potential issues, but it seems to me there should be a better way, as I can't be the only one with this problem. Or I'm doing something very wrong:
disable rollback for this specific resource only, in some cases (sometimes I still want a rollback)
have access to the last status, so that I can check what to do: in case of a delete with last status CREATE_FAILED, do nothing; in case of an update with last status UPDATE_FAILED, do nothing.
The second option I could implement by using the code below, but as the number of events grows, this could become very problematic.
events, err := cloud.DescribeStackEvents(&cloudformation.DescribeStackEventsInput{
	StackName: &event.StackID,
})

For anyone who runs into the same problem: I have solved this issue by looking at the stack events and finding the last resource status update. As the current status is always in the list, I ignore IN_PROGRESS values. If the first value after the IN_PROGRESS statuses is FAILED, it means this resource could not be updated, and thus a different ROLLBACK strategy can be applied.
The corresponding function, in Go:
func isFailedEvent(sess *session.Session, event cfn.Event) (bool, error) {
	cloud := cloudformation.New(sess)
	var isFailedResource bool
	if err := cloud.DescribeStackEventsPages(&cloudformation.DescribeStackEventsInput{
		StackName: aws.String(event.StackID),
	}, func(out *cloudformation.DescribeStackEventsOutput, lastPage bool) bool {
		for _, e := range out.StackEvents {
			if *e.LogicalResourceId == event.LogicalResourceID {
				if strings.HasSuffix(*e.ResourceStatus, "IN_PROGRESS") {
					continue
				}
				// ResourceStatusReason can be nil; StringValue guards against that.
				reason := aws.StringValue(e.ResourceStatusReason)
				if strings.HasSuffix(*e.ResourceStatus, "FAILED") && !strings.Contains(reason, "cancelled") {
					isFailedResource = true
				}
				return false
			}
		}
		return true
	}); err != nil {
		return false, fmt.Errorf("describe stack events: %s", err)
	}
	return isFailedResource, nil
}
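For context, here is a hedged sketch of how isFailedEvent can short-circuit the rollback in the handler above, so a rollback delete after CREATE_FAILED (or a rollback update after UPDATE_FAILED) becomes a no-op. Session setup and the remaining branches are assumed, not part of the original code:

func handler(ctx context.Context, event cfn.Event) (string, map[string]interface{}, error) {
	sess := session.Must(session.NewSession())
	if event.RequestType == cfn.RequestDelete || event.RequestType == cfn.RequestUpdate {
		failed, err := isFailedEvent(sess, event)
		if err != nil {
			return event.PhysicalResourceID, nil, err
		}
		if failed {
			// The previous create/update failed, so there is nothing to
			// clean up or revert; report success to unblock the rollback.
			return event.PhysicalResourceID, nil, nil
		}
	}
	// ... normal create/update/delete logic from the handler above ...
	return event.PhysicalResourceID, nil, nil
}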

Related

k8s controller watch other controller CR

I have a k8s operator which works as expected. I need to add a "watch" on another operator's CRD (not mine); to keep it simple, let's call it extCR, and our operator's CR inCR.
I tried the following, but there is an issue with how to correctly trigger the reconcile.
func (r *Insiconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&Inv1alpha1.Iget{}).
		Watches(&source.Kind{Type: &ext.Se{}}, handler.EnqueueRequestsFromMapFunc(r.FWatch)).
		Complete(r)
}
func (r *Insiconciler) FWatch(c client.Object) []reconcile.Request {
	val := c.(*ivi.Srv)
	req := reconcile.Request{NamespacedName: types.NamespacedName{Name: val.Name, Namespace: val.Namespace}}
	return []reconcile.Request{req}
}
The problem here is that I trigger the reconcile with the extCR. Inside FWatch I want to update the inCR and start the reconcile with inCR, not with extCR. How can I do that?
I mean, I want to avoid something like the following code, where the reconcile is sometimes done for the inCR and sometimes for the extCR, and then I can end up with some ugly if's:
func (r *Insiconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var inCR FOO
	var extCR BAR
	if err := r.Get(ctx, req.NamespacedName, &inCR); err != nil {
		return ctrl.Result{}, err
	}
	if err := r.Get(ctx, req.NamespacedName, &extCR); err != nil {
		return ctrl.Result{}, err
	}
I want to know the right/clean way to handle such a case: when you need to listen to an externalCR (not part of your controller) and also an internalCR (from your controller).
One more thing: the CRs have different GVKs, but the externalCR contains a lot of fields that are not required, only some of them are. The required fields have the same names on both CRs.
Update
type inCR struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec   inSpec              `json:"spec,omitempty"` // here is the difference
	Status InsightTargetStatus `json:"status,omitempty"`
}

// This is defined in another program which is not owned by us, therefore we cannot "reuse" it.
type Bar struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`
	Spec   extSpec       `json:"spec,omitempty"` // here is the difference
	Status ServiceStatus `json:"status,omitempty"`
}
And inSpec has the following fields (a subset of extSpec):
type inSpec struct {
	name string
	age  int
}
and extSpec has those fields plus many more that are not relevant:
type extSpec struct {
	name string
	age  int
	foo  string // not relevant
	bar  string // not relevant
	bazz string // not relevant
}
In the end, inside the reconcile I need to pass the relevant fields to some functions. They are exactly the same functions; they just sometimes take the fields from extCR and sometimes from inCR, according to the event that happened (like the extCR being updated, or the inCR being updated by users).
Update 2
func sharedLogic(r reconciler, ctx context.Context, c client.Object) (ctrl.Result, error) {
	in := c.(*inCR)
	vPass, e := vps.Get(ctx, r.Client, in.Spec.foo, in.Spec.bar)
	return ctrl.Result{}, nil
}
But for extCR I would have to do the following:
func sharedLogic(r reconciler, ctx context.Context, c client.Object) (ctrl.Result, error) {
	ext := c.(*extCR)
	vPass, e := vps.Get(ctx, r.Client, ext.Spec.val.foo, ext.Spec.val.bar)
	return ctrl.Result{}, nil
}
A few things to keep in mind:
Each controller is responsible for exactly one resource.
Reconcile request contains the information necessary to reconcile a Kubernetes object. This includes the information to uniquely identify the object - its Name and Namespace. It does NOT contain information about any specific Event or the object contents itself.
You can create a second controller without the resource definition. In your main file, both controllers will be registered.
This could be useful if the CRDs are not related at all or if the external resource references the internal one, so you can make changes to the internal resource in the external reconciler.
kubebuilder create api --group other --version v2 --kind External \
--resource=false --controller=true
This gives you a controller with a SetupWithManager method that looks like the one below.
func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		// Uncomment the following line adding a pointer to an instance of the controlled resource as an argument
		// For().
		Complete(r)
}
Note how the For method is commented out: you need to import the resource to watch from somewhere else and reference it.
import (
	...
	otherv2 "other.io/external/api/v2"
)

...

func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&otherv2.External{}).
		Complete(r)
}
If you cannot import the external resource, you could fall back to mocking it yourself, but this is probably not a very clean way. You should really try to import it from the other controller project.
kubebuilder edit --multigroup=true
kubebuilder create api --group=other --version v2 --kind External \
--resource --controller
Another way applies when the resources are related to each other, such that the internal resource has a reference in its spec to the external resource and thus knows how to fetch it when it reconciles. An example of this can be found at https://book.kubebuilder.io/reference/watching-resources/externally-managed.html
type InternalSpec struct {
	// Name of an external resource
	ExternalResource string `json:"externalResource,omitempty"`
}
This means that in each reconciliation loop, the controller will look up the external resource and use it to manage the internal resource.
func (r *InternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = log.FromContext(ctx)

	internal := examplev1.Internal{}
	if err := r.Get(ctx, types.NamespacedName{
		Name:      req.Name,
		Namespace: req.Namespace,
	}, &internal); err != nil {
		return ctrl.Result{}, err
	}

	external := otherv2.External{}
	if err := r.Get(ctx, types.NamespacedName{
		// note how the name is taken from the internal spec
		Name:      internal.Spec.ExternalResource,
		Namespace: req.Namespace,
	}, &external); err != nil {
		return ctrl.Result{}, err
	}

	// do something with internal and external here
	return ctrl.Result{}, nil
}
The problem with this is, that when the internal resource does not change, no reconciliation event will be triggered, even when the external resource has changed. To work around that, we can trigger the reconciliation by watching the external resource. Note the Watches method:
func (r *InternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		For(&examplev1.Internal{}).
		Watches(
			&source.Kind{Type: &otherv2.External{}},
			handler.EnqueueRequestsFromMapFunc(r.triggerReconcileBecauseExternalHasChanged),
			builder.WithPredicates(predicate.ResourceVersionChangedPredicate{}),
		).
		Complete(r)
}
In order to know for which internal object we should trigger an event, we use a mapping function to look up all the internal objects that have a reference to the external resource.
func (r *InternalReconciler) triggerReconcileBecauseExternalHasChanged(o client.Object) []reconcile.Request {
	usedByInternals := &examplev1.InternalList{}
	listOps := &client.ListOptions{
		FieldSelector: fields.OneTermEqualSelector(".spec.ExternalResource", o.GetName()),
		Namespace:     o.GetNamespace(),
	}
	if err := r.List(context.TODO(), usedByInternals, listOps); err != nil {
		return []reconcile.Request{}
	}
	requests := make([]reconcile.Request, len(usedByInternals.Items))
	for i, item := range usedByInternals.Items {
		requests[i] = reconcile.Request{
			NamespacedName: types.NamespacedName{
				Name:      item.GetName(),
				Namespace: item.GetNamespace(),
			},
		}
	}
	return requests
}
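Note that the field selector above only works if a matching field index has been registered with the manager; a minimal sketch following the kubebuilder book's pattern, where the index key string must equal the one used in the selector:

// Register the index before starting the manager, e.g. at the top of
// SetupWithManager. The extraction function returns the value(s) this
// Internal object should be indexed under.
if err := mgr.GetFieldIndexer().IndexField(context.Background(), &examplev1.Internal{},
	".spec.ExternalResource", func(rawObj client.Object) []string {
		internal := rawObj.(*examplev1.Internal)
		if internal.Spec.ExternalResource == "" {
			return nil
		}
		return []string{internal.Spec.ExternalResource}
	}); err != nil {
	return err
}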
Since you updated your question, I suggest doing something like the below.
I am creating a new project and two controllers. Note that on the second controller command, no resource is created along with the controller; this is because the controller will watch an external resource.
mkdir demo && cd demo
go mod init example.io/demo
kubebuilder init --domain example.io --repo example.io/demo --plugins=go/v4-alpha
kubebuilder create api --group=demo --version v1 --kind Internal --controller --resource
kubebuilder create api --group=other --version v2 --kind External --controller --resource=false
$ tree controllers
controllers
├── external_controller.go
├── internal_controller.go
└── suite_test.go
Now we need some shared logic, for example by adding this to the controllers package. We will call this from both reconcilers.
// the interface may need tweaking
// depending on what you want to do with
// the reconciler
type reconciler interface {
	client.Reader
	client.Writer
	client.StatusClient
}

func sharedLogic(r reconciler, kobj *demov1.Internal) (ctrl.Result, error) {
	// do your shared logic here, operating on the internal object struct
	// this works out because the external controller will call this passing the
	// internal object
	return ctrl.Result{}, nil
}
Here is an example for the internal reconciler.
func (r *InternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = log.FromContext(ctx)
	obj := demov1.Internal{}
	if err := r.Get(ctx, req.NamespacedName, &obj); err != nil {
		return ctrl.Result{}, err
	}
	return sharedLogic(r, &obj)
}
And in the external reconciler we do the same.
func (r *ExternalReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	_ = log.FromContext(ctx)
	// note, we can use the internal object here as long as the external object
	// contains the same fields we want. That means when unmarshalling, the extra
	// fields are dropped. If this cannot be done, you could first unmarshal into the
	// external resource and then assign the fields you need to the internal one,
	// before passing it down
	obj := demov1.Internal{}
	if err := r.Get(ctx, req.NamespacedName, &obj); err != nil {
		return ctrl.Result{}, err
	}
	return sharedLogic(r, &obj)
}
func (r *ExternalReconciler) SetupWithManager(mgr ctrl.Manager) error {
	return ctrl.NewControllerManagedBy(mgr).
		// note the external resource is imported from another project
		// you may be able to watch this without importing it by creating a minimal
		// type with the right GVK
		For(&otherv2.External{}).
		Complete(r)
}
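For completeness, registering both reconcilers in main.go looks roughly like the standard kubebuilder scaffolding below; the names (controllers package, setupLog) are the scaffold's defaults and illustrative here:

if err := (&controllers.InternalReconciler{
	Client: mgr.GetClient(),
	Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
	setupLog.Error(err, "unable to create controller", "controller", "Internal")
	os.Exit(1)
}
if err := (&controllers.ExternalReconciler{
	Client: mgr.GetClient(),
	Scheme: mgr.GetScheme(),
}).SetupWithManager(mgr); err != nil {
	setupLog.Error(err, "unable to create controller", "controller", "External")
	os.Exit(1)
}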

How do I get pagination working with exclusiveStartKey for dynamodb aws-sdk-go-v2?

I'm trying to create a pagination endpoint for a DynamoDB table I have, but nothing I've tried gets the ExclusiveStartKey into the correct type for it to work.
Example code:
func GetPaginator(tableName string, limit int32, lastEvaluatedKey string) (*dynamodb.ScanPaginator, error) {
	svc, err := GetClient()
	if err != nil {
		logrus.Error(err)
		return nil, err
	}
	input := &dynamodb.ScanInput{
		TableName: aws.String(tableName),
		Limit:     aws.Int32(limit),
	}
	if lastEvaluatedKey != "" {
		input.ExclusiveStartKey = map[string]types.AttributeValue{
			"id": &types.AttributeValueMemberS{
				Value: lastEvaluatedKey,
			},
		}
	}
	paginator := dynamodb.NewScanPaginator(svc, input)
	return paginator, nil
}
Edit:
Okay, so I'm creating an API that requires pagination. The API needs to have a query parameter where the lastEvaluatedId can be defined. I can then pass the lastEvaluatedId as the ExclusiveStartKey on the ScanInput. However, when I do this I still receive the same item from the database. I've created a test.go file and will post the code below:
package main

import (
	"context"
	"fmt"
	"os"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
)

type PaginateID struct {
	ID string `dynamodbav:"id" json:"id"`
}

func main() {
	lastKey := PaginateID{ID: "ae82a99d-486e-11ec-a7a7-0242ac110002"}
	key, err := attributevalue.MarshalMap(lastKey)
	if err != nil {
		fmt.Println(err)
		return
	}
	cfg, err := config.LoadDefaultConfig(context.TODO(), func(o *config.LoadOptions) error {
		o.Region = os.Getenv("REGION")
		return nil
	})
	if err != nil {
		fmt.Println(err)
		return
	}
	svc := dynamodb.NewFromConfig(cfg, func(o *dynamodb.Options) {
		o.EndpointResolver = dynamodb.EndpointResolverFromURL("http://localhost:8000")
	})
	input := &dynamodb.ScanInput{
		TableName:         aws.String("TABLE_NAME"),
		Limit:             aws.Int32(1),
		ExclusiveStartKey: key,
	}
	paginator := dynamodb.NewScanPaginator(svc, input)
	if paginator.HasMorePages() {
		data, err := paginator.NextPage(context.TODO())
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println(data.Items[0]["id"])
		fmt.Println(data.LastEvaluatedKey["id"])
	}
}
When I run this test code, I get this output:
&{ae82a99d-486e-11ec-a7a7-0242ac110002 {}}
&{ae82a99d-486e-11ec-a7a7-0242ac110002 {}}
So the item that is returned has the same id that I am passing to ScanInput.ExclusiveStartKey, which means it is not starting from the ExclusiveStartKey: the scan is starting from the beginning every time.
The aws-sdk-go-v2 DynamoDB query and scan paginator constructors have a bug (see my GitHub issue, which includes the fix): they do not respect the ExclusiveStartKey param.
As an interim fix, I copied the paginator type locally and added one line to the constructor: nextToken: params.ExclusiveStartKey.
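For reference, a minimal sketch of such a local copy; the type and field names are illustrative rather than the SDK's internals, and only the Scan variant is shown:

// scanPaginator is a local stand-in for dynamodb.ScanPaginator that seeds
// the page token from the caller's ExclusiveStartKey.
type scanPaginator struct {
	client    *dynamodb.Client
	params    *dynamodb.ScanInput
	nextToken map[string]types.AttributeValue
	firstPage bool
}

func newScanPaginator(client *dynamodb.Client, params *dynamodb.ScanInput) *scanPaginator {
	return &scanPaginator{
		client:    client,
		params:    params,
		firstPage: true,
		// The one-line fix: respect the caller's ExclusiveStartKey.
		nextToken: params.ExclusiveStartKey,
	}
}

func (p *scanPaginator) HasMorePages() bool {
	return p.firstPage || len(p.nextToken) != 0
}

func (p *scanPaginator) NextPage(ctx context.Context) (*dynamodb.ScanOutput, error) {
	params := *p.params
	params.ExclusiveStartKey = p.nextToken
	out, err := p.client.Scan(ctx, &params)
	if err != nil {
		return nil, err
	}
	p.firstPage = false
	p.nextToken = out.LastEvaluatedKey
	return out, nil
}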
So basically, what you need to do is get the LastEvaluatedKey and pass it to ExclusiveStartKey.
You cannot read the scan paginator's internal attributes, because they are not exported; instead, I suggest you use the page returned by calling NextPage.
The following snippet shows an example:
func GetPaginator(ctx context.Context, tableName string, limit int32, lastEvaluatedKey map[string]types.AttributeValue) (*dynamodb.ScanOutput, error) {
	svc, err := GetClient()
	if err != nil {
		logrus.Error(err)
		return nil, err
	}
	input := &dynamodb.ScanInput{
		TableName: aws.String(tableName),
		Limit:     aws.Int32(limit),
	}
	if len(lastEvaluatedKey) > 0 {
		input.ExclusiveStartKey = lastEvaluatedKey
	}
	paginator := dynamodb.NewScanPaginator(svc, input)
	return paginator.NextPage(ctx)
}
Keep in mind that paginator.NextPage(ctx) could return nil in case there are no more pages; you can also check with HasMorePages().
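A hypothetical caller would then feed each page's LastEvaluatedKey back in as the next call's ExclusiveStartKey; GetPaginator, the table name, and the page size here are as assumed above:

var lastKey map[string]types.AttributeValue
for {
	page, err := GetPaginator(ctx, "TABLE_NAME", 25, lastKey)
	if err != nil {
		return err
	}
	for _, item := range page.Items {
		fmt.Println(item["id"])
	}
	if len(page.LastEvaluatedKey) == 0 {
		break // no more pages
	}
	lastKey = page.LastEvaluatedKey
}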

DynamoDB list all backups using AWS GoLang SDK

Based on the example given in the link below on API Operation Pagination without Callbacks:
https://aws.amazon.com/blogs/developer/context-pattern-added-to-the-aws-sdk-for-go/
I am trying to list all the backups in DynamoDB, but it seems like pagination is not working: it just retrieves the first page and does not go to the next page.
package main

import (
	"context"
	"fmt"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/request"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/dynamodb"
)

func main() {
	sess, sessErr := session.NewSession()
	if sessErr != nil {
		fmt.Println(sessErr)
		fmt.Println("Could not initialize session, returning..")
		return
	}

	// Create DynamoDB client
	dynamodbSvc := dynamodb.New(sess)

	params := dynamodb.ListBackupsInput{}
	ctx := context.Background()

	p := request.Pagination{
		NewRequest: func() (*request.Request, error) {
			req, _ := dynamodbSvc.ListBackupsRequest(&params)
			req.SetContext(ctx)
			return req, nil
		},
	}
	for p.Next() {
		page := p.Page().(*dynamodb.ListBackupsOutput)
		fmt.Println("Received", len(page.BackupSummaries), "objects in page")
		for _, obj := range page.BackupSummaries {
			fmt.Println(aws.StringValue(obj.BackupName))
		}
	}
	//return p.Err()
} // end of main
It's a bit late, but I'll just put it here in case it helps somebody.
Example:
var exclusiveStartARN *string
var backups []*dynamodb.BackupSummary
for {
	backup, err := svc.ListBackups(&dynamodb.ListBackupsInput{
		ExclusiveStartBackupArn: exclusiveStartARN,
	})
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	backups = append(backups, backup.BackupSummaries...)
	if backup.LastEvaluatedBackupArn != nil {
		exclusiveStartARN = backup.LastEvaluatedBackupArn
		// max 5 times a second so we don't hit the limit
		time.Sleep(200 * time.Millisecond)
		continue
	}
	break
}
fmt.Println(len(backups))
Explanation:
Pagination is done via ExclusiveStartBackupArn in the ListBackupsRequest. The ListBackupsResponse returns LastEvaluatedBackupArn if there are more pages, or nil if it's the last/only page.
It could also be that you're hitting the API rate limit with your usage:
you can call ListBackups a maximum of 5 times per second.
What is the value of p.HasNextPage() in your p.Next() loop?

How can I add multiple result sets in go-sqlmock?

I have a db query that returns 2 result sets, and I would like to unit test the Go function that performs this query.
While I can add and test rows like this:
myMockRows := sqlmock.NewRows([]string{"col1", "col2"}).AddRow("col1val1", "col2val2")
mock.ExpectQuery("my_stored_procedure").WithArgs(arg1, arg2).WillReturnRows(myMockRows)
I am not sure how to proceed with creating multiple result sets in my rows object. How do I do that?
I tried out @Nikhil Vandanapu's answer and wasn't getting the desired output: for some reason it took only myMockRows, and myMockRows2 was ignored. I did some reading, and we can do the following to get it to return multiple rows:
myMockRows := sqlmock.NewRows([]string{"col1", "col2"}).
	AddRow("col1val1", "col2val2").
	AddRow("col1val1", "col2val2")
mock.ExpectQuery("my_stored_procedure").WillReturnRows(myMockRows)
According to the godoc, AddRow returns the same Rows instance, so subsequent calls can be chained.
Adding this blog post if you want an easier read about the topic.
Do something like this:
myMockRows := sqlmock.NewRows([]string{"col1", "col2"}).AddRow("col1val1", "col2val2")
myMockRows2 := sqlmock.NewRows([]string{"col3", "col4"}).AddRow("col3val1", "col4val2")
mock.ExpectQuery("my_stored_procedure").WithArgs(arg1, arg2).WillReturnRows(myMockRows, myMockRows2)
Since WillReturnRows accepts multiple row objects and forms a slice from them, use it to construct the next result set.
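On the consuming side, the second result set is then reached with Rows.NextResultSet from database/sql. A short sketch, assuming db is the mocked *sql.DB, t is the *testing.T, and the query and args match the expectation above:

rows, err := db.Query("my_stored_procedure", arg1, arg2)
if err != nil {
	t.Fatal(err)
}
defer rows.Close()

// First result set: col1, col2.
for rows.Next() {
	var c1, c2 string
	if err := rows.Scan(&c1, &c2); err != nil {
		t.Fatal(err)
	}
}
// Advance to the second result set: col3, col4.
if rows.NextResultSet() {
	for rows.Next() {
		var c3, c4 string
		if err := rows.Scan(&c3, &c4); err != nil {
			t.Fatal(err)
		}
	}
}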
You have to put your test rows inside a struct and then loop over the struct, as in the code below.
type args struct {
	val string
}
tests := []struct {
	name    string
	s       *<YourDBstruct>
	args    args
	wantErr bool
}{
	{
		name:    "Test with correct value1",
		s:       &YourDBstruct{db},
		args:    args{"Val1"},
		wantErr: true,
	},
	{
		name:    "Test with correct value2",
		s:       &YourDBstruct{db},
		args:    args{"Val2"},
		wantErr: true,
	},
	{
		name:    "Test with correct valueN",
		s:       &YourDBstruct{db},
		args:    args{"ValN"},
		wantErr: true,
	},
}
for _, tt := range tests {
	t.Run(tt.name, func(t *testing.T) {
		mock.ExpectExec("my_stored_procedure")
		if err := tt.s.YourStoreFuncName(); (err != nil) != tt.wantErr {
			t.Errorf("YourStoreFuncName() error = %v, wantErr %v", err, tt.wantErr)
		}
	})
}

Create item in dynamodb using go

I'm using the following code to create an item in my DynamoDB table:
package main

import (
	"fmt"

	"github.com/aws/aws-sdk-go-v2/aws"
	"github.com/aws/aws-sdk-go-v2/aws/endpoints"
	"github.com/aws/aws-sdk-go-v2/aws/external"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/dynamodbattribute"
)

type Record struct {
	ID   string
	URLs []string
}

// Using the SDK's default configuration, loading additional config
// and credentials values from the environment variables, shared
// credentials, and shared configuration files
var cfg, err = external.LoadDefaultAWSConfig()

func createNewExtraction() error {
	svc := dynamodb.New(cfg)
	r := Record{
		ID: "ABC123",
		URLs: []string{
			"https://example.com/first/link",
			"https://example.com/second/url",
		},
	}
	item, err := dynamodbattribute.MarshalMap(r)
	if err != nil {
		panic(fmt.Sprintf("failed to DynamoDB marshal Record, %v", err))
	}
	req := svc.PutItemRequest(&dynamodb.PutItemInput{
		TableName: aws.String("test"),
		Item:      item,
	})
	if _, err = req.Send(); err != nil {
		return err
	}
	return nil
}

func main() {
	if len(cfg.Region) == 0 {
		// Set Region to us-east-1 as default.
		cfg.Region = endpoints.UsEast1RegionID
	}
	if err = createNewExtraction(); err != nil {
		panic(err.Error())
	}
}
But it's returning the error:
panic: ValidationException: One or more parameter values were invalid: Missing the key id in the item
status code: 400, request id: F3VCQSGRIG5GM5PEJE7T5M4CEVVV4KQNSO5AEMVJF66Q9ASUAAJG
goroutine 1 [running]:
main.main()
/Users/user/go/src/Test2/test.go:56 +0x102
exit status 2
I already tried declaring Id, id, and ID in my Record struct, but it doesn't work.
The strange thing is: I got this code from the official documentation (I'm updating it to work with aws-sdk-go-v2).
Thank you in advance.
I do not know Go, but I had similar problems in Node.js.
Make sure the item you put in the table contains the partition key and the sort key; they are case-sensitive.
EDIT:
• It is a Go issue: the item is not built properly when the DynamoDB column names are lowercase.
• Consider redefining the Record structure (see this link for details):
type Record struct {
	ID   string   `json:"id"`
	URLs []string `json:"url"`
}
where id and url are the column names in the DynamoDB table.
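If you'd rather not overload the json tags, the dynamodbattribute marshaler also honors dynamodbav struct tags, which take precedence over json tags when both are present; a hedged equivalent of the struct above:

// dynamodbav tags map struct fields to the lowercase DynamoDB column names.
type Record struct {
	ID   string   `dynamodbav:"id"`
	URLs []string `dynamodbav:"url"`
}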