Why extra strings in AWS CloudWatch logs generated by a Lambda function?

I have a Golang-based Lambda function which does some work and logs information during execution.
I'm using the zap logger (zapcore) as below:
func NewLogger(logLevel string) (*zap.SugaredLogger, error) {
    encoderConfig := zapcore.EncoderConfig{
        TimeKey:        "Time",
        LevelKey:       "Level",
        NameKey:        "Name",
        CallerKey:      "Caller",
        MessageKey:     "Msg",
        StacktraceKey:  "St",
        EncodeLevel:    zapcore.CapitalColorLevelEncoder,
        EncodeTime:     zapcore.ISO8601TimeEncoder,
        EncodeDuration: zapcore.StringDurationEncoder,
        EncodeCaller:   zapcore.ShortCallerEncoder,
    }
    consoleEncoder := zapcore.NewConsoleEncoder(encoderConfig)
    consoleOut := zapcore.Lock(os.Stdout)

    // Note: the zero value of zap.AtomicLevel is not usable; it must be
    // created with zap.NewAtomicLevel(), or UnmarshalText will panic.
    level := zap.NewAtomicLevel()
    if err := level.UnmarshalText([]byte(logLevel)); err != nil {
        level.UnmarshalText([]byte(defaultLogLevel))
    }

    core := zapcore.NewTee(zapcore.NewCore(
        consoleEncoder,
        consoleOut,
        level,
    ))
    logger := zap.New(core)
    Logger = logger.Sugar()
    return Logger, nil
}
However, when the logs land in CloudWatch, I see extra characters in them:
2022-06-30T21:52:43.310-07:00 2022-07-01T04:52:43.310Z [34mINFO[0m Process my event
Why are the logs generated with the [34m and [0m strings in them?
Do I really need to include a timestamp in my logs, given that CloudWatch already adds a timestamp to each log event?

To disable colors you can remove the ANSI escape codes by switching the level encoder:
- EncodeLevel: zapcore.CapitalColorLevelEncoder
+ EncodeLevel: zapcore.CapitalLevelEncoder
I don't think you need to repeat the time information either. If you need more metrics later you'll probably log as JSON, and there will already be time information when querying from CloudWatch.
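Putting both suggestions together, here is a minimal sketch of a CloudWatch-friendly encoder config; the omitted TimeKey and the plain CapitalLevelEncoder are the only changes to the question's config:

// Sketch: a console encoder without ANSI colors and without a timestamp,
// assuming CloudWatch supplies the event time.
encoderConfig := zapcore.EncoderConfig{
    LevelKey:       "Level",
    NameKey:        "Name",
    CallerKey:      "Caller",
    MessageKey:     "Msg",
    StacktraceKey:  "St",
    EncodeLevel:    zapcore.CapitalLevelEncoder, // no ANSI escape codes
    EncodeDuration: zapcore.StringDurationEncoder,
    EncodeCaller:   zapcore.ShortCallerEncoder,
    // TimeKey/EncodeTime omitted: zap skips the time field when TimeKey is empty.
}

With this config the [34m/[0m sequences disappear and the duplicated timestamp goes away.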

Related

Can't get custom metrics data when using OpenCensus

I am following this guide: monitoring_opencensus_metrics_quickstart-go. I also tried the approach from this answer.
Code here:
package main

import (
    "context"
    "fmt"
    "log"
    "os"
    "path"
    "time"

    "google.golang.org/api/option"

    "contrib.go.opencensus.io/exporter/stackdriver"
    "go.opencensus.io/stats"
    "go.opencensus.io/stats/view"
    "golang.org/x/exp/rand"
)

var (
    // The task latency in milliseconds.
    latencyMs = stats.Float64("task_latency", "The task latency in milliseconds", "ms")
)

func main() {
    ctx := context.Background()
    v := &view.View{
        Name:        "task_latency_distribution",
        Measure:     latencyMs,
        Description: "The distribution of the task latencies",
        Aggregation: view.Distribution(0, 100, 200, 400, 1000, 2000, 4000),
    }
    if err := view.Register(v); err != nil {
        log.Fatalf("Failed to register the view: %v", err)
    }
    exporter, err := stackdriver.NewExporter(stackdriver.Options{
        ProjectID: os.Getenv("GOOGLE_CLOUD_PROJECT"),
        MonitoringClientOptions: []option.ClientOption{
            option.WithCredentialsFile(path.Join("./.gcp/stackdriver-monitor-admin.json")),
        },
    })
    if err != nil {
        log.Fatal(err)
    }
    view.RegisterExporter(exporter)
    view.SetReportingPeriod(60 * time.Second)

    // Flush must be called before main() exits to ensure metrics are recorded.
    defer exporter.Flush()

    if err := exporter.StartMetricsExporter(); err != nil {
        log.Fatalf("Error starting metric exporter: %v", err)
    }
    defer exporter.StopMetricsExporter()

    // Record 100 fake latency values between 0 and 5 seconds.
    for i := 0; i < 100; i++ {
        ms := float64(5*time.Second/time.Millisecond) * rand.Float64()
        fmt.Printf("Latency %d: %f\n", i, ms)
        stats.Record(ctx, latencyMs.M(ms))
        time.Sleep(1 * time.Second)
    }

    fmt.Println("Done recording metrics")
}
I run the above code locally, NOT in a GCE, GAE, or GKE environment.
In the Metrics Explorer web UI, here is the metric query condition:
Resource Type: Consumed API
Metric: custom.googleapis.com/opencensus/task_latency_distribution
Full query:
fetch consumed_api
| metric 'custom.googleapis.com/opencensus/task_latency_distribution'
| align delta(1m)
| every 1m
| group_by [],
[value_task_latency_distribution_aggregate:
aggregate(value.task_latency_distribution)]
The service account has the Monitoring Admin role.
But I get "No data is available for the selected time frame."
It works for me, but I've tended to do things slightly differently (a sketch follows this list):
export GOOGLE_APPLICATION_CREDENTIALS=path/to/creds instead of MonitoringClientOptions{};
Previously (!?) I had problems with distributions with a 0 bucket; try removing that initial (0) bucket and try again;
Drop the resource.type from Metrics Explorer; per APIs Explorer, if anything, this should be global.
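For what it's worth, here is a minimal sketch of the first two suggestions applied to the question's code (bucket boundaries without the leading 0, credentials taken from the environment; everything else unchanged):

// Sketch: assumes GOOGLE_APPLICATION_CREDENTIALS is exported, so no
// MonitoringClientOptions are needed.
v := &view.View{
    Name:        "task_latency_distribution",
    Measure:     latencyMs,
    Description: "The distribution of the task latencies",
    Aggregation: view.Distribution(100, 200, 400, 1000, 2000, 4000), // initial 0 bucket removed
}
exporter, err := stackdriver.NewExporter(stackdriver.Options{
    ProjectID: os.Getenv("GOOGLE_CLOUD_PROJECT"),
    // Credentials are discovered via Application Default Credentials.
})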
Google APIs Explorer is an excellent way to diagnose Stackdriver API challenges. You can use it to list metrics and to list timeseries (replace your-project-id and update the 2 interval values):
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.metricDescriptors/list?apix=true&apix_params=%7B%22name%22%3A%22projects%2Fyour-project-id%22%2C%22filter%22%3A%22metric.type%3D%5C%22custom.googleapis.com%2Fopencensus%2Ftask_latency_distribution%5C%22%22%7D
https://cloud.google.com/monitoring/api/ref_v3/rest/v3/projects.timeSeries/list?apix=true&apix_params=%7B%22name%22%3A%22projects%2Fyour-project-id%22%2C%22aggregation.alignmentPeriod%22%3A%22%2B300s%22%2C%22aggregation.crossSeriesReducer%22%3A%22REDUCE_MEAN%22%2C%22aggregation.perSeriesAligner%22%3A%22ALIGN_DELTA%22%2C%22filter%22%3A%22metric.type%3D%5C%22custom.googleapis.com%2Fopencensus%2Ftask_latency_distribution%5C%22%22%2C%22interval.endTime%22%3A%222020-09-08T23%3A59%3A59Z%22%2C%22interval.startTime%22%3A%222020-09-08T00%3A00%3A00Z%22%7D
Using Chrome's Developer Console, you can find one of Stackdriver's calls to the API to more easily reproduce it, e.g.
filter: metric.type="custom.googleapis.com/opencensus/task_latency_distribution"
aggregation.crossSeriesReducer: REDUCE_MEAN
aggregation.alignmentPeriod: +60s
aggregation.perSeriesAligner: ALIGN_DELTA
secondaryAggregation.crossSeriesReducer: REDUCE_NONE
interval.startTime: 2020-09-08T00:00:00Z
interval.endTime: 2020-09-08T23:59:59Z

AWS S3 - Golang SDK - SignatureDoesNotMatch

I'm looking to integrate an S3 bucket with an API I'm developing, and I'm running into this error wherever I go:
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403
I have done the following:
Installed the SDK & AWS CLI, and configured AWS
Double- (triple-) checked the spelling of the key, secret key, and bucket permissions
Attempted with the credentials file, .env, and even hard-coding the values directly
Tested with the AWS CLI (THIS WORKS), so I believe I can rule out permissions and keys as a whole
I'm testing by trying to list buckets; here is the code, taken directly from the AWS documentation:
sess := session.Must(session.NewSessionWithOptions(session.Options{ // <--- DEBUGGER SET HERE
    SharedConfigState: session.SharedConfigEnable,
}))
svc := s3.New(sess)
result, err := svc.ListBuckets(nil)
if err != nil {
    exitErrorf("Unable to list buckets, %v", err)
}
for _, b := range result.Buckets {
    fmt.Printf("* %s created on %s\n", aws.StringValue(b.Name), aws.TimeValue(b.CreationDate))
}
Using the debugger, I can see the session's config as the program runs; the issue is potentially here:
config
  -> credentials
    -> creds
      -> v
        -> Access Key = ""
        -> Secret Access Key = ""
        -> Token = ""
      -> provider
        -> value
          -> Access Key (with value)
          -> Secret Access Key (with value)
          -> Token (with value)
I personally cannot find any documentation regarding "creds" / "v", and I don't know if this is causing the issue. As I mentioned, I can use the AWS CLI to upload into the bucket, and even when I hard-code my access key etc. into the Go SDK I receive this error.
Thank you for any thoughts, greatly appreciated.
I just compiled your code and it's executing OK ... one of the many ways to supply credentials to your binary is to populate these env vars:
export AWS_ACCESS_KEY_ID=AKfoobarT2IJEAU4
export AWS_SECRET_ACCESS_KEY=oa6oT0Xxt4foobarbambazdWFCb
export AWS_REGION=us-west-2
That is all you need when using the env var approach (your values are available in the AWS console browser).
The big picture is to create a wrapper shell script (bash) which contains the above three lines to populate the env vars, and then, in the same shell script, execute the Golang binary (typically you compile the Go binary in some preliminary step). In my case I store the values of my three env vars in encrypted files which the shell script decrypts just before it calls the above export commands.
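If you want to rule out the environment entirely while debugging, the v1 SDK also accepts credentials directly in code; a minimal sketch with placeholder values (never commit real keys) would be:

// Sketch: static credentials, for debugging only; values are placeholders.
// Requires "github.com/aws/aws-sdk-go/aws/credentials".
sess, err := session.NewSession(&aws.Config{
    Region:      aws.String("us-west-2"),
    Credentials: credentials.NewStaticCredentials("AKfoobarT2IJEAU4", "oa6oT0Xxt4foobarbambazdWFCb", ""),
})
if err != nil {
    exitErrorf("Unable to create session, %v", err)
}
svc := s3.New(sess)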
Sometimes it's helpful to drop back and just use the equivalent AWS command-line commands to get yourself into the ballpark ... from a terminal run
aws s3 ls s3://cairo-mombasa-zaire --region us-west-2
which can also use those same env vars shown above
For completeness, here is your code with boilerplate added ... this runs OK and lists out the buckets:
package main

import (
    "fmt"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
    // "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func exitErrorf(msg string, args ...interface{}) {
    fmt.Fprintf(os.Stderr, msg+"\n", args...)
    os.Exit(1)
}

func main() {
    region_env_var := "AWS_REGION"
    curr_region := os.Getenv(region_env_var)
    if curr_region == "" {
        exitErrorf("ERROR - failed to get region from env var %v", region_env_var)
    }
    fmt.Println("here is region ", curr_region)

    // Load session from shared config
    sess := session.Must(session.NewSessionWithOptions(session.Options{
        SharedConfigState: session.SharedConfigEnable,
    }))

    svc := s3.New(sess)
    result, err := svc.ListBuckets(nil)
    if err != nil {
        exitErrorf("Unable to list buckets, %v", err)
    }
    for _, b := range result.Buckets {
        fmt.Printf("* %s created on %s\n", aws.StringValue(b.Name), aws.TimeValue(b.CreationDate))
    }
}
numBytes, err := downloader.Download(tempFile,
    &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(fileName),
    },
)
In my case the bucket value was wrong; it was missing a literal "/" at the end. Adding that fixed my problem.
The error I got: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403
If anyone else happens to have this problem:
The issue was regarding environment variables, much like Scott suggests above; however, it was due to the lack of
export AWS_SDK_LOAD_CONFIG="true"
If this environment variable is not present, the Golang SDK will not look for a credentials file. Along with this, I instantiated environment variables for both my keys, which allowed the connection to succeed.
To recap:
If you're attempting to use the shared credentials folder, you must use the above-noted environment variable to enable it.
If you're using environment variables only, you shouldn't be affected by this problem.
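As far as I know, the programmatic equivalent of that environment variable in the v1 SDK is the SharedConfigEnable option, which the question's code already uses; a sketch that should not depend on AWS_SDK_LOAD_CONFIG at all:

// Sketch: force shared config loading in code instead of via
// AWS_SDK_LOAD_CONFIG=true.
sess := session.Must(session.NewSessionWithOptions(session.Options{
    SharedConfigState: session.SharedConfigEnable,
}))
svc := s3.New(sess)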

How to deploy REST API to AWS lambda using go-iris framework

I have created a REST API using the Go Iris framework. Now I want to deploy these APIs on AWS with a Lambda function. I am using MySQL as the database. Is it possible to deploy my Go executable file on AWS Lambda, or do I need to modify my code according to the AWS Lambda specifications? I am trying to find a solution but not getting much information.
Here is one of my API endpoints.
package main

import (
    "database/sql"

    "github.com/kataras/iris"
    "github.com/kataras/iris/middleware/logger"
    "github.com/kataras/iris/middleware/recover"
)

type Reward struct {
    Id          int    `json:"reward_id"`
    LotteryID   int    `json:"lottery_id"`
    RewardName  string `json:"reward_name"`
    Description string `json:"reward_description"`
    Asset       int    `json:"reward_asset"`
    AssetName   string `json:"reward_asset_name"`
}

func dbConn() (db *sql.DB) {
    dbDriver := "mysql"
    dbUser := "xxx"
    dbPass := "xxx"
    dbName := "xxx"
    db, err := sql.Open(xxxxxxxxx)
    if err != nil {
        panic(err.Error())
    }
    return db
}

func newApp() *iris.Application {
    app := iris.New()
    app.Logger().SetLevel("debug")
    app.Use(recover.New())
    app.Use(logger.New())

    db := dbConn()

    app.Get("/reward/{reward_id:int}", func(ctx iris.Context) {
        id1 := ctx.Params().GetIntDefault("reward_id", 0)
        stmtOut, err := db.Prepare("select id, lottery_id, reward_name, reward_description, reward_asset, reward_asset_name from rewards_table where id = ?")
        if err != nil {
            panic(err.Error())
        }
        defer stmtOut.Close()

        var id, lotteryId, rewardAsset int
        var rewardName, rewardDescription, rewardAssetName string
        err1 := stmtOut.QueryRow(id1).Scan(&id, &lotteryId, &rewardName, &rewardDescription, &rewardAsset, &rewardAssetName)
        if err1 != nil {
            panic(err1.Error())
        }

        reward := Reward{
            Id:          id,
            LotteryID:   lotteryId,
            RewardName:  rewardName,
            Description: rewardDescription,
            Asset:       rewardAsset,
            AssetName:   rewardAssetName,
        }
        ctx.JSON(&reward)
    })
    return app
}

func main() {
    app := newApp()
    app.Run(iris.Addr(":8080"), iris.WithoutServerError(iris.ErrServerClosed), iris.WithOptimizations)
}
I have a few more API endpoints which do basic CRUD operations. I am thinking about using AWS Lambda and AWS API Gateway.
do I need to modify my code according to the AWS Lambda specifications?
Yes. Your code for Lambda will need to have a handler:
AWS Lambda function handler in Go
This is the entry point to your function.
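For illustration, here is a minimal sketch of such a handler using the official github.com/aws/aws-lambda-go package; the API Gateway proxy event types are my assumption about how the function would be fronted:

package main

import (
    "context"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

// handler is the Lambda entry point; with an API Gateway proxy
// integration, each HTTP request arrives as one event.
func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    return events.APIGatewayProxyResponse{
        StatusCode: 200,
        Body:       `{"message":"ok"}`,
    }, nil
}

func main() {
    // Replaces app.Run(...): Lambda invokes handler; no HTTP server runs.
    lambda.Start(handler)
}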
Also, it seems that your Go program is a web server built on Iris. If that is the case, you won't be able to use it as-is anyway, as you can't invoke a Lambda from the internet the way you would a regular server.
Also, a Lambda runs for at most 15 minutes, so its use as a server would be very limited.

Retrieve List of AWS Config Rule Names using AWS Golang SDK

AWS Config has a set of Managed Rules, and I am trying to use the Golang AWS SDK's DescribeConfigRules API to retrieve the list of AWS Config Managed Rule names and other details.
It seems like every request receives a response of 25 rules and a NextToken for the next set of results. What I am having trouble understanding is: how do I use this NextToken to retrieve the next set of results?
Here is what I have so far.
package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/configservice"
)

func main() {
    // Create an AWS session
    sess, err := session.NewSession(&aws.Config{
        Region:      aws.String("us-west-2"),
        Credentials: credentials.NewSharedCredentials("", "my-aws-profile"),
    })

    // Create a ConfigService client from just a session.
    configsvc := configservice.New(sess)

    rules := (*configservice.DescribeConfigRulesInput)(nil)
    configrulesoutput, err := configsvc.DescribeConfigRules(rules)
    if err != nil {
        log.Fatal(err)
    }
    for _, rule := range configrulesoutput.ConfigRules {
        fmt.Println("Rule: ", *rule.ConfigRuleName)
    }
}
The above code successfully prints the first 25 rules received in the response. However, I am not sure how to use the NextToken received in the response to get the next set of results.
Sample response:
{
  ConfigRules: [
    {
      ConfigRuleArn: "ConfigRuleARN",
      ConfigRuleId: "config-rule-ppwclr",
      ConfigRuleName: "cloudtrail-enabled",
      ConfigRuleState: "ACTIVE",
      Description: "Checks whether AWS CloudTrail is enabled in your AWS account. Optionally, you can specify which S3 bucket, SNS topic, and Amazon CloudWatch Logs ARN to use.",
      InputParameters: "{}",
      MaximumExecutionFrequency: "TwentyFour_Hours",
      Source: {
        Owner: "AWS",
        SourceIdentifier: "CLOUD_TRAIL_ENABLED"
      }
    },
    { Rule 2 }, ... { Rule 25 }
  ],
  NextToken: "nexttoken"
}
The code extracts the rule names from the response, and the output is as below.
Rule: cloudtrail-enabled
Rule: restricted-ssh
Rule: securityhub-access-keys-rotated
Rule: securityhub-autoscaling-group-elb-healthcheck-required
Rule: securityhub-cloud-trail-cloud-watch-logs-enabled
Rule: securityhub-cloud-trail-encryption-enabled
Rule: securityhub-cloud-trail-log-file-validation-enabled
Rule: securityhub-cloudtrail-enabled
Rule: securityhub-cmk-backing-key-rotation-enabled
Rule: securityhub-codebuild-project-envvar-awscred-check
Rule: securityhub-codebuild-project-source-repo-url-check
Rule: securityhub-ebs-snapshot-public-restorable-check
Rule: securityhub-ec2-managedinstance-patch-compliance
Rule: securityhub-ec2-security-group-attached-to-eni
Rule: securityhub-eip-attached
Rule: securityhub-elasticsearch-encrypted-at-rest
Rule: securityhub-elasticsearch-in-vpc-only
Rule: securityhub-iam-password-policy-ensure-expires
Rule: securityhub-iam-password-policy-lowercase-letter-check
Rule: securityhub-iam-password-policy-minimum-length-check
Rule: securityhub-iam-password-policy-number-check
Rule: securityhub-iam-password-policy-prevent-reuse-check
Rule: securityhub-iam-password-policy-symbol-check
Rule: securityhub-iam-password-policy-uppercase-letter-check
Rule: securityhub-iam-policy-no-statements-with-admin-access
End goal: using the Golang AWS SDK, extract the AWS Config Managed Rule details and put them into an Excel file using Excelize, to review which AWS Config rules we want enabled.
Thanks for your help in advance.
--- New, based on @Adrian's comment and doc reference ---
As per the doc:
type DescribeConfigRulesInput struct {
    // The names of the AWS Config rules for which you want details. If you do not
    // specify any names, AWS Config returns details for all your rules.
    ConfigRuleNames []*string `type:"list"`

    // The nextToken string returned on a previous page that you use to get the
    // next page of results in a paginated response.
    NextToken *string `type:"string"`

    // contains filtered or unexported fields
}
So here is what I am trying. Specifying nil should give me back all rules; nextToken is a blank string for the first call.
configsvc := configservice.New(sess)

rules := (*configservice.DescribeConfigRulesInput)(nil)
nextToken := ""
rules.SetNextToken(nextToken)
getConfigRulesFunc(configsvc, rules)

// getConfigRulesFunc function
func getConfigRulesFunc(cfgsvc *configservice.ConfigService, ruleset *configservice.DescribeConfigRulesInput) {
    configrulesoutput, err := cfgsvc.DescribeConfigRules(ruleset)
    if err != nil {
        log.Fatal(err)
    }
    for i, r := range configrulesoutput.ConfigRules {
        fmt.Println("Rule: ", i, ""+*r.ConfigRuleName)
    }
    if *configrulesoutput.NextToken != "" {
        ruleset := (*configservice.DescribeConfigRulesInput)(nil)
        ruleset.SetNextToken(*configrulesoutput.NextToken)
        getConfigRulesFunc(cfgsvc, ruleset)
    }
}
The above code compiles fine, but here is the runtime error, I believe because of the nil:
configsvc type: *configservice.ConfigService
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x13c7ed2]
goroutine 1 [running]:
github.com/aws/aws-sdk-go/service/configservice.(*DescribeConfigRulesInput).SetNextToken(...)
/Users/user/go/src/github.com/aws/aws-sdk-go/service/configservice/api.go:12230
main.main()
/Users/user/golang/awsgotest/awsgotest.go:26 +0x232
OK, I finally figured it out with the help of a very kind Alex Diehl via this ticket, https://github.com/aws/aws-sdk-go/issues/3293, on the official aws-sdk-go repo.
I would still say the AWS SDK for Go definitely lacks simple examples, for configservice at the least, on recommended usage.
Here is the code that works. It also shows how to use a simple recursive function in Go to follow NextToken for pagination of API results that span multiple pages, especially for APIs that do not have built-in paginators.
Also note that the DescribeConfigRules API does not list all AWS Managed Config Rules, only the Config rules enabled for your account.
package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/configservice"
)

var i int = 0

func main() {
    sess, err := session.NewSession(&aws.Config{
        Region:      aws.String("us-west-2"),
        Credentials: credentials.NewSharedCredentials("", "my-profile"),
    })
    if err != nil {
        log.Fatal(err)
    }

    // Create a ConfigService client from just a session.
    configsvc := configservice.New(sess)
    fmt.Printf("configsvc type: %T\n", configsvc)

    // Use a zero-valued struct, not a nil pointer, so SetNextToken has
    // something to write to.
    rules := &configservice.DescribeConfigRulesInput{}
    getConfigRulesFunc(configsvc, rules)
}

func getConfigRulesFunc(cfgsvc *configservice.ConfigService, ruleset *configservice.DescribeConfigRulesInput) {
    configrulesoutput, err := cfgsvc.DescribeConfigRules(ruleset)
    if err != nil {
        log.Fatal(err)
    }
    for _, r := range configrulesoutput.ConfigRules {
        fmt.Println("Rule: ", i, ""+*r.ConfigRuleName)
        i = i + 1
    }
    // NextToken is nil on the last page; check the pointer, not the string.
    if configrulesoutput.NextToken != nil {
        fmt.Println("In if nexttoken is not empty")
        fmt.Println("Print NextToken: ", *configrulesoutput.NextToken)
        ruleset := &configservice.DescribeConfigRulesInput{}
        ruleset.SetNextToken(*configrulesoutput.NextToken)
        getConfigRulesFunc(cfgsvc, ruleset)
    }
}
The parts originally marked in bold (the struct-literal input and the nil check on NextToken) were the ones giving me grief on how to use the NextToken following best practices, at least for the Go SDK for AWS.
FYI, you could have looked in the AWS Go guide, as there is a section on pagination: https://docs.aws.amazon.com/sdk-for-go/v1/developer-guide/making-requests.html#using-pagination-methods.
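For APIs that do have built-in paginators, the guide's pattern is the *Pages variant of the call. DescribeConfigRules has no such helper (which is what the recursion above works around), so here is a sketch of the pattern using S3's ListObjectsV2Pages instead; svc is assumed to be an *s3.S3 client and the bucket name is a placeholder:

// Sketch: built-in pagination via a Pages method (v1 SDK).
err := svc.ListObjectsV2Pages(
    &s3.ListObjectsV2Input{Bucket: aws.String("my-example-bucket")},
    func(page *s3.ListObjectsV2Output, lastPage bool) bool {
        for _, obj := range page.Contents {
            fmt.Println(*obj.Key)
        }
        return !lastPage // returning false stops pagination early
    },
)
if err != nil {
    log.Fatal(err)
}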

S3 objects not expiring using the Golang SDK

Using the AWS Golang SDK, I'm attempting to set an expiration date for some of the objects that I'm uploading. I'm pretty sure that the header is being set correctly; however, when logging into S3 and viewing the properties of the new object, it doesn't appear to have an expiration date.
Below is a snippet of how I'm uploading objects
exp := time.Now()
exp = exp.Add(time.Hour * 24)

svc := s3.New(session.New(config))
_, err = svc.PutObject(&s3.PutObjectInput{
    Bucket:        aws.String("MyBucketName"),
    Key:           aws.String("201700689.zip"),
    Body:          fileBytes,
    ContentLength: aws.Int64(size),
    ContentType:   aws.String(fileType),
    Expires:       &exp,
})
And here is what I see when logging into the site (screenshot omitted): no expiration date is shown on the object's properties.
Any idea what is going on here? Thanks
Well, Expires is just the wrong field:
// The date and time at which the object is no longer cacheable.
What you want is Object Expiration, which is set as a bucket rule rather than per object.
Basically, you add a Lifecycle rule (on the bucket properties) specifying:
Each rule has the following attributes:
Prefix – Initial part of the key name, (e.g. logs/), or the entire key name. Any object in the bucket with a matching prefix will be subject to this expiration rule. An empty prefix will match all objects in the bucket.
Status – Either Enabled or Disabled. You can choose to enable rules from time to time to perform deletion or garbage collection on your buckets, and leave the rules disabled at other times.
Expiration – Specifies an expiration period for the objects that are subject to the rule, as a number of days from the object’s creation date.
Id – Optional, gives a name to the rule.
This rule will then be evaluated daily and any expired objects will be removed.
See https://aws.amazon.com/blogs/aws/amazon-s3-object-expiration/ for a more in-depth explanation.
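If you'd rather set the rule from Go than click through the console, here is a minimal sketch with the v1 SDK; the bucket name, rule ID, and prefix are placeholders, and svc is assumed to be an *s3.S3 client as in the question:

// Sketch: expire objects under "tmp/" one day after creation.
_, err := svc.PutBucketLifecycleConfiguration(&s3.PutBucketLifecycleConfigurationInput{
    Bucket: aws.String("MyBucketName"),
    LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
        Rules: []*s3.LifecycleRule{{
            ID:         aws.String("expire-tmp"),
            Status:     aws.String("Enabled"),
            Filter:     &s3.LifecycleRuleFilter{Prefix: aws.String("tmp/")},
            Expiration: &s3.LifecycleExpiration{Days: aws.Int64(1)},
        }},
    },
})
if err != nil {
    // handle the error
}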
One way to expire objects in S3 using the Golang SDK is to tag your upload with something like
Tagging: aws.String("temp=true")
Then go to the S3 Bucket Management Console and set a lifecycle rule targeting that specific tag.
You can configure the time frame for expiring the object when creating the rule in the lifecycle configuration.
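For reference, a sketch of attaching that tag at upload time with the v1 SDK (bucket, key, and body are placeholders):

// Sketch: tag the upload so a tag-scoped lifecycle rule can expire it.
_, err := svc.PutObject(&s3.PutObjectInput{
    Bucket:  aws.String("MyBucketName"),
    Key:     aws.String("201700689.zip"),
    Body:    fileBytes,
    Tagging: aws.String("temp=true"), // matched by the rule's tag filter
})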
You need to set s3.PresignOptions.Expires (this answer uses the v2 SDK), like this:
func PreSignPutObject(cfg aws.Config, bucket, objectKey string) (string, error) {
    client := s3.NewFromConfig(cfg)
    psClient := s3.NewPresignClient(client)
    input := &s3.PutObjectInput{
        Bucket: &bucket,
        Key:    &objectKey,
    }
    resp, err := psClient.PresignPutObject(context.Background(), input, func(options *s3.PresignOptions) {
        options.Expires = 3600 * time.Second
    })
    if err != nil {
        return "", err
    }
    return resp.URL, nil
}