How to serialize DynamoDB LastEvaluatedKey to string - amazon-web-services

I have a requirement to return the LastEvaluatedKey from a Query output in the response of a paginated API call, so that users can request the next page by passing the LastEvaluatedKey back.
Is it possible to convert it to a string with aws-sdk-go-v2?
I have tried to marshal and unmarshal it with encoding/json, but it does not work:
lek := map[string]types.AttributeValue{
	"num":  &types.AttributeValueMemberN{Value: "1"},
	"text": &types.AttributeValueMemberS{Value: "text"},
}
barray, err := json.Marshal(lek)
if err != nil {
	fmt.Println(err)
}
lekDecoded := map[string]types.AttributeValue{}
err = json.Unmarshal(barray, &lekDecoded)
if err != nil {
	fmt.Println(err)
}
This always fails to decode back into map[string]types.AttributeValue.

LastEvaluatedKey is provided in plain text in the response. If you want to encode it, you can do so using base64, for example. You can choose any type of encoding, so long as you decode it before supplying it back to the next paged request.
Encode LEK
aws dynamodb scan \
--table-name test \
--limit 1 \
--query LastEvaluatedKey | base64
ewogICAgImlkIjogewogICAgICAgICJTIjogIjR3OG5pWnBpTXk2cXoxbW50RkE1dSIKICAgIH0KfQo
Decode encoded LEK
echo ewogICAgImlkIjogewogICAgICAgICJTIjogIjR3OG5pWnBpTXk2cXoxbW50RkE1dSIKICAgIH0KfQo | base64 --decode
{
"id": {
"S": "4w8niZpiMy6qz1mntFA5u"
}
}
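If you want to do the round trip in Go rather than through the CLI, note that the direct json.Marshal/Unmarshal approach fails because types.AttributeValue is an interface, so encoding/json has no concrete type to decode into. A minimal sketch of a workaround, assuming the feature/dynamodb/attributevalue package from aws-sdk-go-v2 (the encodeLEK/decodeLEK helper names are mine):
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"

	"github.com/aws/aws-sdk-go-v2/feature/dynamodb/attributevalue"
	"github.com/aws/aws-sdk-go-v2/service/dynamodb/types"
)

// encodeLEK turns the LastEvaluatedKey into an opaque page token.
func encodeLEK(lek map[string]types.AttributeValue) (string, error) {
	var plain map[string]interface{}
	// Convert the AttributeValue map into plain Go types that
	// encoding/json can handle.
	if err := attributevalue.UnmarshalMap(lek, &plain); err != nil {
		return "", err
	}
	b, err := json.Marshal(plain)
	if err != nil {
		return "", err
	}
	return base64.StdEncoding.EncodeToString(b), nil
}

// decodeLEK reverses encodeLEK so the result can be passed as
// ExclusiveStartKey on the next Query.
func decodeLEK(token string) (map[string]types.AttributeValue, error) {
	b, err := base64.StdEncoding.DecodeString(token)
	if err != nil {
		return nil, err
	}
	var plain map[string]interface{}
	if err := json.Unmarshal(b, &plain); err != nil {
		return nil, err
	}
	return attributevalue.MarshalMap(plain)
}

func main() {
	lek := map[string]types.AttributeValue{
		"num":  &types.AttributeValueMemberN{Value: "1"},
		"text": &types.AttributeValueMemberS{Value: "text"},
	}
	token, _ := encodeLEK(lek)
	fmt.Println(token) // safe to return to the API caller
	decoded, _ := decodeLEK(token)
	fmt.Println(decoded)
}
One caveat: numeric attributes pass through float64 in this sketch, so very large N values could lose precision; if that matters, serialize each concrete member type explicitly.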

Related

Why extra string in AWS CloudWatch logs generated by Lambda function?

I have a Golang-based Lambda function which does some work and logs information during execution.
I'm using the zap/zapcore Golang logger as below:
func NewLogger(logLevel string) (*zap.SugaredLogger, error) {
	encoderConfig := zapcore.EncoderConfig{
		TimeKey:        "Time",
		LevelKey:       "Level",
		NameKey:        "Name",
		CallerKey:      "Caller",
		MessageKey:     "Msg",
		StacktraceKey:  "St",
		EncodeLevel:    zapcore.CapitalColorLevelEncoder,
		EncodeTime:     zapcore.ISO8601TimeEncoder,
		EncodeDuration: zapcore.StringDurationEncoder,
		EncodeCaller:   zapcore.ShortCallerEncoder,
	}
	consoleEncoder := zapcore.NewConsoleEncoder(encoderConfig)
	consoleOut := zapcore.Lock(os.Stdout)
	var level zap.AtomicLevel
	err := level.UnmarshalText([]byte(logLevel))
	if err != nil {
		level.UnmarshalText([]byte(defaultLogLevel))
	}
	core := zapcore.NewTee(zapcore.NewCore(
		consoleEncoder,
		consoleOut,
		level,
	))
	logger := zap.New(core)
	Logger = logger.Sugar()
	return Logger, nil
}
However, when the logs show up in CloudWatch, I see extra characters in them.
2022-06-30T21:52:43.310-07:00 2022-07-01T04:52:43.310Z [34mINFO[0m Process my event
Why are the logs generated with [34m and [0m strings in them?
Do I really need to include a timestamp in the logs, given that CloudWatch already adds a timestamp to every log entry?
To disable colors, remove the ANSI escape codes by switching to the non-color level encoder:
- EncodeLevel: zapcore.CapitalColorLevelEncoder
+ EncodeLevel: zapcore.CapitalLevelEncoder
I don't think you need to repeat the time information. If you need more metrics later you'll probably log as JSON, and CloudWatch already records a timestamp you can use when querying.
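For example, a minimal color-free JSON setup could look like this (a sketch; the key names are illustrative, and the time key is omitted since CloudWatch records its own timestamp):
package main

import (
	"os"

	"go.uber.org/zap"
	"go.uber.org/zap/zapcore"
)

func main() {
	// Plain (non-color) level encoding and no TimeKey: CloudWatch
	// attaches its own ingestion timestamp to every log line.
	encoderConfig := zapcore.EncoderConfig{
		LevelKey:       "Level",
		MessageKey:     "Msg",
		EncodeLevel:    zapcore.CapitalLevelEncoder, // no ANSI color codes
		EncodeDuration: zapcore.StringDurationEncoder,
	}
	core := zapcore.NewCore(
		zapcore.NewJSONEncoder(encoderConfig),
		zapcore.Lock(os.Stdout),
		zap.NewAtomicLevelAt(zap.InfoLevel),
	)
	logger := zap.New(core).Sugar()
	defer logger.Sync()

	logger.Info("Process my event") // emits e.g. {"Level":"INFO","Msg":"Process my event"}
}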

AWS Golang CreateSecret() ResourceExistsException on new unique key name

Not sure what is going on; this code worked once yesterday. Now no matter what value I use, AWS returns an error that the secret already exists, but that's impossible.
2020/04/17 19:10:30 error ResourceExistsException: The operation failed because the secret /gog1/RandomSiteName3 already exists.
_, err = PutParam("/gog1/RandomSiteName3", "test", true, EventGuid)
if err != nil {
	log.Printf("error writing secret: %v ", err)
	return
}
func PutParam(paramName string, paramValue string, encrypt bool, guid string) (output string, err error) {
	svc := secretsmanager.New(AWSSession)
	input := &secretsmanager.CreateSecretInput{
		// ClientRequestToken: aws.String(guid),
		// Description:        aws.String("My test database secret created with the CLI"),
		Name:         aws.String(paramName),
		SecretString: aws.String(paramValue),
	}
	fmt.Printf("putting secret key: %v", paramName)
	_, err = svc.CreateSecret(input)
	if err != nil {
		return "", err
	}
	return
}
It was due to an S3 trigger firing in a loop:
NOTE: If writing to the bucket that triggers the notification, this
could cause an execution loop. For example, if the bucket triggers a
Lambda function each time an object is uploaded, and the function
uploads an object to the bucket, then the function indirectly triggers
itself. To avoid this, use two buckets, or configure the trigger to
only apply to a prefix used for incoming objects.
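As a sketch of the second option, the trigger can be scoped to a key prefix so objects the function writes elsewhere in the bucket cannot re-trigger it. This assumes aws-sdk-go v1; the bucket name, function ARN, and "incoming/" prefix are placeholders:
package main

import (
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3"
)

func main() {
	svc := s3.New(session.Must(session.NewSession()))
	// Only objects created under "incoming/" will invoke the function.
	_, err := svc.PutBucketNotificationConfiguration(&s3.PutBucketNotificationConfigurationInput{
		Bucket: aws.String("my-bucket"), // placeholder
		NotificationConfiguration: &s3.NotificationConfiguration{
			LambdaFunctionConfigurations: []*s3.LambdaFunctionConfiguration{{
				LambdaFunctionArn: aws.String("arn:aws:lambda:us-east-1:123456789012:function:my-fn"), // placeholder
				Events:            []*string{aws.String("s3:ObjectCreated:*")},
				Filter: &s3.NotificationConfigurationFilter{
					Key: &s3.KeyFilter{
						FilterRules: []*s3.FilterRule{{
							Name:  aws.String("prefix"),
							Value: aws.String("incoming/"),
						}},
					},
				},
			}},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}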

Docker ImagePush failing with "no basic auth credentials"

I'm attempting to use the docker go-sdk to push an image to AWS ECR.
This is the code I'm using to push the image.
where tag = ".dkr.ecr.us-east-1.amazonaws.com/api:mytag"
func Push(c context.Context, tag string, credentials string) error {
	cli, err := client.NewClient(apiSocket, apiVersion, nil, apiHeaders)
	if err != nil {
		return err
	}
	fmt.Println(credentials)
	resp, err := cli.ImagePush(c, tag, types.ImagePushOptions{
		RegistryAuth: credentials,
	})
	if err != nil {
		panic(err)
	}
	io.Copy(os.Stdout, resp)
	resp.Close()
	return nil
}
But I keep getting this response:
{"status":"The push refers to repository [<id>.dkr.ecr.us-east-1.amazonaws.com/api]"}
{"status":"Preparing","progressDetail":{},"id":"23432919a50a"}
{"status":"Preparing","progressDetail":{},"id":"9387ad10e44c"}
{"status":"Preparing","progressDetail":{},"id":"e2a4679276bf"}
{"status":"Preparing","progressDetail":{},"id":"31c5c8035e63"}
{"status":"Preparing","progressDetail":{},"id":"a73789d39a06"}
{"status":"Preparing","progressDetail":{},"id":"f36942254806"}
{"status":"Preparing","progressDetail":{},"id":"4a2596f9aa79"}
{"status":"Preparing","progressDetail":{},"id":"5cf3066ccdbc"}
{"status":"Preparing","progressDetail":{},"id":"76a1661c28fc"}
{"status":"Preparing","progressDetail":{},"id":"beefb6beb20f"}
{"status":"Preparing","progressDetail":{},"id":"df64d3292fd6"}
{"status":"Waiting","progressDetail":{},"id":"beefb6beb20f"}
{"status":"Waiting","progressDetail":{},"id":"df64d3292fd6"}
{"errorDetail":{"message":"no basic auth credentials"},"error":"no basic auth credentials"}
Any ideas?
Notes:
I've verified that the credentials string I'm passing in is a base64 encoded user:pass for the ECR registry.
I've verified that the ECR credentials I'm getting are from the same AWS Region where I'm attempting to push the image.
I found out in a GitHub comment that RegistryAuth actually needs to be a base64-encoded JSON string with username and password fields. Ugh. This is undocumented in the Docker repository.
RegistryAuth = "{ \"username\": \"myusername\", \"password\": \"mypassword\", \"email\": \"myemail\" }"
Relevant GitHub comment.
It is working for me now.
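As a sketch of the full conversion for ECR: the token returned by GetAuthorizationToken is base64 of "AWS:password", which must be decoded, split, and re-encoded as the JSON document ImagePush expects. This assumes types.AuthConfig from github.com/docker/docker/api/types (newer SDK versions move it to api/types/registry); ecrToRegistryAuth is a hypothetical helper:
package main

import (
	"encoding/base64"
	"encoding/json"
	"fmt"
	"strings"

	"github.com/docker/docker/api/types"
)

// ecrToRegistryAuth converts the base64 "AWS:password" token returned by
// ECR's GetAuthorizationToken into the base64url-encoded JSON document
// that ImagePush expects in RegistryAuth.
func ecrToRegistryAuth(ecrToken string) (string, error) {
	raw, err := base64.StdEncoding.DecodeString(ecrToken)
	if err != nil {
		return "", err
	}
	parts := strings.SplitN(string(raw), ":", 2)
	if len(parts) != 2 {
		return "", fmt.Errorf("unexpected token format")
	}
	buf, err := json.Marshal(types.AuthConfig{
		Username: parts[0], // "AWS" for ECR
		Password: parts[1],
	})
	if err != nil {
		return "", err
	}
	return base64.URLEncoding.EncodeToString(buf), nil
}

func main() {
	// Hypothetical token; in practice it comes from ecr.GetAuthorizationToken.
	auth, err := ecrToRegistryAuth(base64.StdEncoding.EncodeToString([]byte("AWS:mypassword")))
	if err != nil {
		panic(err)
	}
	fmt.Println(auth)
}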

Forward on JSON received from SNS to Lambda - GoLang

I am trying to achieve the following:
CloudWatch alarm details are received as JSON by a Lambda.
The Lambda looks at the JSON to determine if 'NewStateValue' == "ALARM".
If it does, forward the whole JSON received from the SNS out via another SNS topic.
I am most of the way towards achieving this and I have the following code:
package main

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sns"
)

func handler(ctx context.Context, snsEvent events.SNSEvent) {
	for _, record := range snsEvent.Records {
		snsRecord := record.SNS
		// To add additional fields to publish to the logs, add "snsRecord.<fieldname>"
		fmt.Printf("Message = %s \n", snsRecord.Message)
		var event CloudWatchAlarm
		err := json.Unmarshal([]byte(snsRecord.Message), &event)
		if err != nil {
			fmt.Println("There is an error: " + err.Error())
		}
		fmt.Printf("Test Message = %s \n", event.NewStateValue)
		if event.NewStateValue == "ALARM" {
			svc := sns.New(session.New())
			// params will be sent to the Publish call; these are the bare minimum params to send a message.
			params := &sns.PublishInput{
				Message:  aws.String("message"), // This is the message itself (can be XML / JSON / text - anything you want)
				TopicArn: aws.String("my arn"),  // Get this from the Topic in the AWS console.
			}
			resp, err := svc.Publish(params) // Call to publish the message
			if err != nil {                  // Check for errors
				// Print the error; cast err to awserr.Error to get the Code and
				// Message from an error.
				fmt.Println(err.Error())
				return
			}
			// Pretty-print the response data.
			fmt.Println(resp)
		}
	}
}

func main() {
	lambda.Start(handler)
}
Currently this sends an email to the address set up in the SNS topic linked to the ARN above. However, I would like the email to include the full, ideally formatted, JSON received by the first SNS. I have the CloudWatch JSON structure defined in another file; it is referenced by var event CloudWatchAlarm.
From AWS SDK for Go docs:
type PublishInput struct {
	// The message you want to send.
	//
	// If you are publishing to a topic and you want to send the same message to
	// all transport protocols, include the text of the message as a String value.
	// If you want to send different messages for each transport protocol, set the
	// value of the MessageStructure parameter to json and use a JSON object for
	// the Message parameter.
So, params would be:
params := &sns.PublishInput{
	Message:          aws.String(myjson), // the JSON data you want to send
	TopicArn:         aws.String("my arn"),
	MessageStructure: aws.String("json"),
}
WARNING
Notice the help paragraph says "use a JSON object for the Message parameter"; such a JSON object must have keys that correspond to the supported transport protocols. That means:
{
	"default": json_message,
	"email": json_message
}
It will send json_message using both the default and email transports.
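A minimal sketch of that in Go, wrapping the raw message from the question's snsRecord under protocol keys (forwardAlarm is a hypothetical helper; the topic ARN is a placeholder):
package main

import (
	"encoding/json"
	"log"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/sns"
)

// forwardAlarm wraps the raw SNS message under protocol keys so every
// transport receives the full JSON, then republishes it.
func forwardAlarm(svc *sns.SNS, topicArn, rawMessage string) error {
	wrapper, err := json.Marshal(map[string]string{
		"default": rawMessage, // fallback for all protocols
		"email":   rawMessage, // what email subscribers receive
	})
	if err != nil {
		return err
	}
	_, err = svc.Publish(&sns.PublishInput{
		Message:          aws.String(string(wrapper)),
		TopicArn:         aws.String(topicArn),
		MessageStructure: aws.String("json"),
	})
	return err
}

func main() {
	svc := sns.New(session.Must(session.NewSession()))
	if err := forwardAlarm(svc, "my arn", `{"NewStateValue":"ALARM"}`); err != nil {
		log.Fatal(err)
	}
}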

Multiple file upload using S3

I'd like to upload files to my S3 bucket via the AWS Golang SDK. I have a web server listening to POST requests and I'm expecting to receive multiple files of any type.
Using the SDK, the S3 struct PutObjectInput expects Body to be of type io.ReadSeeker, and I'm not sure how to extract the content from the uploaded files and in turn satisfy the io.ReadSeeker interface.
images := r.MultipartForm.File
for _, files := range images {
	for _, f := range files {
		// In my handler, I can loop over the files
		// and see the content
		fmt.Println(f.Header)
		_, err = svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
			Bucket: aws.String("bucket"),
			Key:    aws.String("key"),
			Body:   FILE_CONTENT_HERE,
		})
	}
}
Use the FileHeader.Open method to get an io.ReadSeeker.
f, err := f.Open()
if err != nil {
	// handle error
}
_, err = svc.PutObjectWithContext(ctx, &s3.PutObjectInput{
	Bucket: aws.String("bucket"),
	Key:    aws.String("key"),
	Body:   f,
})
Open returns a multipart.File, which satisfies the io.ReadSeeker interface.
Use the S3 Manager's Uploader.Upload method, http://docs.aws.amazon.com/sdk-for-go/api/service/s3/s3manager/#Uploader.Upload. We have an example at http://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/s3-example-basic-bucket-operations.html#s3-examples-bucket-ops-upload-file-to-bucket.
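A minimal sketch of the Uploader approach, assuming aws-sdk-go v1; the bucket name, key prefix, and listen address are placeholders. Because Uploader accepts any io.Reader, no Seek is required:
package main

import (
	"fmt"
	"net/http"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

// uploadHandler streams every file in the multipart form to S3.
func uploadHandler(w http.ResponseWriter, r *http.Request) {
	if err := r.ParseMultipartForm(32 << 20); err != nil {
		http.Error(w, err.Error(), http.StatusBadRequest)
		return
	}
	uploader := s3manager.NewUploader(session.Must(session.NewSession()))
	for _, files := range r.MultipartForm.File {
		for _, fh := range files {
			f, err := fh.Open()
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			// Uploader handles multipart uploads and accepts an io.Reader.
			_, err = uploader.Upload(&s3manager.UploadInput{
				Bucket: aws.String("bucket"),                 // placeholder
				Key:    aws.String("uploads/" + fh.Filename), // placeholder prefix
				Body:   f,
			})
			f.Close()
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
		}
	}
	fmt.Fprintln(w, "uploaded")
}

func main() {
	http.HandleFunc("/upload", uploadHandler)
	http.ListenAndServe(":8080", nil) // placeholder address
}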