How do I use aws-sdk-go-v2 with localstack? - amazon-web-services

I'm trying to migrate from aws-sdk-go to aws-sdk-go-v2. I'm using LocalStack locally to mimic some AWS services such as SQS and S3, and I'm not sure how to configure the new SDK to use the LocalStack endpoint instead of the real one.
For example, in the v1 SDK I can point it to localstack by setting the endpoint here:
session.Must(session.NewSession(&aws.Config{
    Region:   aws.String("us-east-1"),
    Endpoint: aws.String("http://localstack:4566"),
}))
But how do I do this in the v2 SDK? I think I need to set a parameter in the config, but I don't see any option to specify the endpoint.

So if you go trudging through the LocalStack Python code, principally this file, you'll see:
https://github.com/localstack/localstack/blob/25ba1de8a8841af27feab54b8d55c80ac46349e2/localstack/services/edge.py#L115
I then needed to override the Authorization header when using the v2 AWS Go SDK in order to add the correct structure.
I picked the structure up from running the AWS CLI tool and trace-logging the LocalStack Docker container:
'Authorization': 'AWS4-HMAC-SHA256 Credential=AKIAR2X5NRNSRTCOJHCI/20210827/eu-west-1/sns/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=c69672d38631752ede15d90e7047a5183ebf3707a228decf6ec26e97fdbd02aa',
In Go I then needed to override the HTTP client to add that header in:
type s struct {
    cl http.Client
}

func (s s) Do(r *http.Request) (*http.Response, error) {
    r.Header.Add("authorization", "AWS4-HMAC-SHA256 Credential=AKIAR2X5NRNSRTCOJHCI/20210827/eu-west-1/sns/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=33fa777a3bb1241f30742419b8fab81945aa219050da6e29b34db16053661000")
    return s.cl.Do(r)
}
func NewSNS(endpoint, topicARN string) (awsPubSub, error) {
    cfg := aws.Config{
        EndpointResolver: aws.EndpointResolverFunc(func(service, region string) (aws.Endpoint, error) {
            return aws.Endpoint{
                PartitionID:       "aws",
                URL:               "http://localhost:4566",
                SigningRegion:     "eu-west-1",
                HostnameImmutable: true,
                // Source: aws.EndpointSourceCustom,
            }, nil
        }),
        HTTPClient: s{http.Client{}},
    }
....
It was very time-consuming and painful and I'd love to know a better way, but this works for the time being...
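For what it's worth, a simpler setup may avoid the header override entirely: give the SDK dummy static credentials so it produces a properly structured SigV4 header on its own. This is a minimal, untested sketch and not from the answer above; the "test" credentials, the region, and the ListTopics call are illustrative assumptions.
package main

import (
    "context"
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/sns"
)

func main() {
    // Dummy credentials: LocalStack normally only checks that the request is
    // signed, so any access key / secret pair should do.
    cfg := aws.Config{
        Region:      "eu-west-1",
        Credentials: credentials.NewStaticCredentialsProvider("test", "test", ""),
        EndpointResolverWithOptions: aws.EndpointResolverWithOptionsFunc(
            func(service, region string, options ...interface{}) (aws.Endpoint, error) {
                return aws.Endpoint{
                    URL:               "http://localhost:4566",
                    SigningRegion:     "eu-west-1",
                    HostnameImmutable: true,
                }, nil
            }),
    }
    client := sns.NewFromConfig(cfg)

    out, err := client.ListTopics(context.Background(), &sns.ListTopicsInput{})
    if err != nil {
        log.Fatal(err)
    }
    for _, t := range out.Topics {
        fmt.Println(aws.ToString(t.TopicArn))
    }
}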

It depends on the service that you use.
In order to initialize a Glue client:
cfg, err := config.LoadDefaultConfig(context.Background())
if err != nil {
    panic(err)
}
glueConnection := glue.New(glue.Options{Credentials: cfg.Credentials, Region: cfg.Region})
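If it helps, here is a small usage sketch continuing from the client above; the GetDatabases call is only an illustration I've added, not something from the question.
out, err := glueConnection.GetDatabases(context.Background(), &glue.GetDatabasesInput{})
if err != nil {
    panic(err)
}
// Print the name of each Glue database the client can see.
for _, db := range out.DatabaseList {
    fmt.Println(aws.ToString(db.Name))
}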

The equivalent of your code in the SDK v2 is:
cfg, err := config.LoadDefaultConfig(
    ctx,
    config.WithRegion("us-east-1"),
    config.WithEndpointResolverWithOptions(aws.EndpointResolverWithOptionsFunc(
        func(service, region string, options ...interface{}) (aws.Endpoint, error) {
            return aws.Endpoint{URL: "http://localhost:4566"}, nil
        }),
    ),
)
Using LoadDefaultConfig you will not need to specify the region if you have already set it in your AWS config. You can read more in the AWS SDK v2 docs.
You can find the above example in the package docs.
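To tie it together, here is a hedged sketch of building an SQS client from that config and listing queues; SQS is just one of the services mentioned in the question, and the imports for sqs, fmt and log are assumed.
client := sqs.NewFromConfig(cfg)

// List the queues LocalStack knows about, as a quick smoke test.
out, err := client.ListQueues(ctx, &sqs.ListQueuesInput{})
if err != nil {
    log.Fatal(err)
}
for _, url := range out.QueueUrls {
    fmt.Println(url)
}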

Related

How do I configure S3ForcePathStyle with AWS golang v2 SDK?

I'm writing files to and reading files from S3 using the AWS Go v2 SDK. Locally I am using LocalStack and thus need to set the S3ForcePathStyle parameter, but I can't find where to set this parameter in the config.
This is what my config looks like:
conf, err = config.LoadDefaultConfig(
    context.TODO(),
    config.WithRegion("us-east-1"),
    config.WithEndpointResolver(
        aws.EndpointResolverFunc(func(service, region string) (aws.Endpoint, error) {
            return aws.Endpoint{
                PartitionID:   "aws",
                URL:           "http://localstack:4566",
                SigningRegion: "us-east-1",
            }, nil
        }),
    ),
)
Where can I pass in S3ForcePathStyle = true?
It seems I was looking in the wrong place. The documentation here explains that in aws-sdk-go-v2 the service-specific configuration flags were moved to the individual service client option types, ironically to improve discoverability.
I should set UsePathStyle like this:
client := s3.NewFromConfig(conf, func(o *s3.Options) {
    o.UsePathStyle = true
})
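As a quick usage check against LocalStack: with path-style addressing the client targets http://localstack:4566/<bucket>/<key> rather than a virtual-hosted-style URL. The bucket name, key and body below are made-up placeholders, and the context, strings, log and aws imports are assumed.
_, err = client.PutObject(context.TODO(), &s3.PutObjectInput{
    Bucket: aws.String("my-bucket"),
    Key:    aws.String("hello.txt"),
    Body:   strings.NewReader("hello from localstack"),
})
if err != nil {
    log.Fatal(err)
}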

Enable Data API for Amazon Aurora

We have a running AWS Aurora cluster (not the serverless version).
I was already able to connect to the DB externally via Querious (a GUI for SQL).
When using the Golang RDS SDK I get the following error message:
HttpEndpoint is not enabled for cluster sample-db-cluster. Please refer to https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html#data-api.troubleshooting
This link tells me to activate the Data API.
Problem: this link and everything else I have found so far relates to serverless Aurora, and I could not find any way to enable this for my Aurora instance.
I also tried to enable the Data API via the CLI:
aws rds modify-db-cluster --db-cluster-identifier my-cluster-id --enable-http-endpoint --region us-east-1
This did not work!
Below is my Go code to connect to Aurora:
package main

import (
    "fmt"
    "log"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/rdsdataservice"
)

func main() {
    sess := getSession()
    SQLStatement := `SELECT * FROM testTable`
    fmt.Println("SQLStatement", SQLStatement)

    rdsdataservice_client := rdsdataservice.New(sess)
    req, resp := rdsdataservice_client.ExecuteStatementRequest(&rdsdataservice.ExecuteStatementInput{
        Database:    aws.String("my-database-name"),
        ResourceArn: aws.String("arn:aws:rds:us-east-1:XXXXXXXXXXX:cluster:XXXXXXXX"),
        SecretArn:   aws.String("arn:aws:secretsmanager:us-east-1:XXXXXXXXXXX:secret:XXXXXXXX"),
        Sql:         aws.String(SQLStatement),
    })
    err1 := req.Send()
    if err1 == nil {
        fmt.Println("Response:", resp)
    } else {
        fmt.Println("error:", err1) // Produces the mentioned error
    }
}

func getSession() *session.Session {
    var sess *session.Session
    var err error
    if os.Getenv("aws_access_key_id") != "" && os.Getenv("aws_secret_access_key") != "" && os.Getenv("aws_region") != "" { // explicit credentials
        creds := credentials.NewStaticCredentials(os.Getenv("aws_access_key_id"), os.Getenv("aws_secret_access_key"), "")
        sess, err = session.NewSession(&aws.Config{
            Region:      aws.String("us-east-1"),
            Credentials: creds,
        })
        if err != nil {
            log.Println("Error cred")
        }
    } else {
        sess = session.Must(session.NewSession()) // credentials are passed implicitly by role lambda-news-parser-executor (defined in IAM)
    }
    return sess
}
I could not find any way to enable this for my Aurora instance
This is because it is not supported. Data API is only for Serverless Aurora.
OK! I found the issue:
The github.com/aws/aws-sdk-go/service/rdsdataservice package is only usable with serverless Aurora, not the "normal" instances.
Link here
Package rdsdataservice provides the client and types for making API requests to AWS RDS DataService.
Amazon RDS provides an HTTP endpoint to run SQL statements on an Amazon Aurora Serverless DB cluster. To run these statements, you work with the Data Service API.
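If you need to query a provisioned (non-serverless) cluster from Go, the usual route is a direct database connection to the cluster endpoint rather than the Data API. Below is a minimal, untested sketch assuming Aurora MySQL and the go-sql-driver/mysql driver; the DSN (user, password, endpoint, database name) is a placeholder.
package main

import (
    "database/sql"
    "fmt"
    "log"

    _ "github.com/go-sql-driver/mysql" // MySQL driver registers itself via init()
)

func main() {
    // Placeholder DSN: user:password@tcp(cluster-endpoint:3306)/dbname
    dsn := "admin:secret@tcp(sample-db-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com:3306)/my-database-name"
    db, err := sql.Open("mysql", dsn)
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    rows, err := db.Query("SELECT * FROM testTable")
    if err != nil {
        log.Fatal(err)
    }
    defer rows.Close()

    cols, _ := rows.Columns()
    fmt.Println("columns:", cols)
}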

What's the correct way of loading AWS credentials from a role in a Fargate task using the AWS SDK for Go?

I have the following snippet:
awsCredentials := credentials.NewChainCredentials(
    []credentials.Provider{
        &ec2rolecreds.EC2RoleProvider{
            Client: ec2metadata.New(newSession, aws.NewConfig()),
        },
        &credentials.SharedCredentialsProvider{},
        &credentials.EnvProvider{},
    })
which works fine whenever the code is running on an EC2 instance or when the access/secret keys are passed through environment variables (used for local testing).
However, this code fails when running on ECS+Fargate with NoCredentialProviders: no valid providers in chain. I checked the environment variables of the running container and it has the expected AWS_CONTAINER_CREDENTIALS_RELATIVE_URI, so the credentials.EnvProvider should read it.
So, my question is, what's the correct way of reading these credentials? The problem I'm facing is not about a lack of permissions (which would indicate an error in the policy / role), but that the code is not able to get the credentials at all.
UPDATE
I have narrowed this down to the use of ec2rolecreds.
Using this simple example:
package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
    "github.com/aws/aws-sdk-go/aws/ec2metadata"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    newSession, err := session.NewSession()
    if err != nil {
        log.Fatal(err)
    }
    awsCredentials := credentials.NewChainCredentials(
        []credentials.Provider{
            &ec2rolecreds.EC2RoleProvider{
                Client: ec2metadata.New(newSession, aws.NewConfig()),
            },
            &credentials.SharedCredentialsProvider{},
            &credentials.EnvProvider{},
        })
    sess, err := session.NewSession(&aws.Config{
        Region:      aws.String("us-east-1"),
        Credentials: awsCredentials},
    )
    if err != nil {
        log.Fatal(err)
    }
    // Create S3 service client
    svc := s3.New(sess)
    result, err := svc.ListBuckets(nil)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("Buckets:")
    for _, b := range result.Buckets {
        fmt.Printf("* %s created on %s\n",
            aws.StringValue(b.Name), aws.TimeValue(b.CreationDate))
    }
}
If I remove ec2rolecreds, everything works fine both locally and in ECS+Fargate.
However, if I run this code as is, I get the same error: NoCredentialProviders: no valid providers in chain.
The solution is to initialize clients using sessions instead of credentials, i.e.:
conf := aws.NewConfig().WithRegion("us-east-1")
sess := session.Must(session.NewSession(conf))
svc := s3.New(sess)
// others:
// svc := sqs.New(sess)
// svc := dynamodb.New(sess)
// ...
Because, as @Ay0 points out, the default credential chain already includes both EnvProvider and RemoteCredProvider.
In case you still need the credentials, you can use:
creds := stscreds.NewCredentials(sess, "myRoleARN")
as the documentation points out. Notice that the role's policy must allow the sts:AssumeRole action. For more information, see the stscreds.NewCredentials(...) docs.
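A hedged sketch of wiring those assumed-role credentials into a client; the role ARN below is a placeholder:
// Build a session first, then assume the role and pass the resulting
// credentials to the service client.
sess := session.Must(session.NewSession(aws.NewConfig().WithRegion("us-east-1")))
creds := stscreds.NewCredentials(sess, "arn:aws:iam::123456789012:role/myRole")
svc := s3.New(sess, &aws.Config{Credentials: creds})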
So, a session can be configured using a Config object.
Reading through the specs of this object, it says for Credentials:
// The credentials object to use when signing requests. Defaults to a
// chain of credential providers to search for credentials in environment
// variables, shared credential file, and EC2 Instance Roles.
Credentials *credentials.Credentials
The defaults already cover what my snippet was doing, so I removed the whole awsCredentials block and now it works fine everywhere: locally, EC2, Fargate...
UPDATE
To expand the answer: the reason removing the awsCredentials block made this work is that, if you check the SDK's code, https://github.com/aws/aws-sdk-go/blob/master/aws/defaults/defaults.go#L107, the default credential chain checks both EnvProvider and RemoteCredProvider.
By overriding the default credential chain, the SDK was no longer able to look for credentials in RemoteCredProvider, which is the provider that handles the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI / AWS_CONTAINER_CREDENTIALS_FULL_URI environment variables.
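If you really do need a hand-built chain rather than the defaults, a sketch (untested, based on the defaults package linked above) that keeps the remote provider in the chain might look like this:
import (
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/defaults"
)

// ecsAwareCredentials keeps the remote provider that serves ECS task roles
// and EC2 instance roles inside the custom chain.
func ecsAwareCredentials() *credentials.Credentials {
    def := defaults.Get()
    return credentials.NewChainCredentials([]credentials.Provider{
        &credentials.EnvProvider{},
        &credentials.SharedCredentialsProvider{},
        // Resolves credentials from AWS_CONTAINER_CREDENTIALS_RELATIVE_URI /
        // AWS_CONTAINER_CREDENTIALS_FULL_URI or the EC2 metadata service.
        defaults.RemoteCredProvider(*def.Config, def.Handlers),
    })
}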

AWS S3 - Golang SDK - SignatureDoesNotMatch

I'm looking to integrate an S3 bucket with an API I'm developing, and I'm running into this error wherever I go:
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403
I have done the following:
Installed the SDK & AWS CLI, and configured AWS
Double- (triple-) checked the spelling of the key & secret key & the bucket permissions
Tried the credentials file, .env, and even hard-coding the values directly
Tested with the AWS CLI (THIS WORKS), so I believe I can rule out permissions and keys as a whole.
I'm testing by trying to list buckets; here is the code, taken directly from the AWS documentation:
sess := session.Must(session.NewSessionWithOptions(session.Options{ // <--- DEBUGGER SET HERE
    SharedConfigState: session.SharedConfigEnable,
}))
svc := s3.New(sess)
result, err := svc.ListBuckets(nil)
if err != nil {
    exitErrorf("Unable to list buckets, %v", err)
}
for _, b := range result.Buckets {
    fmt.Printf("* %s created on %s\n", aws.StringValue(b.Name), aws.TimeValue(b.CreationDate))
}
Using the debugger, I can see the session's config values as the program runs; the issue is potentially here:
config
  -> credentials
     -> creds
        -> v
           -> Access Key = ""
           -> Secret Access Key = ""
           -> Token = ""
     -> provider
        -> value
           -> Access Key With Value
           -> Secret Access Key With Value
           -> Token With Value
I personally cannot find any documentation regarding "creds" / "v", and I don't know if this is causing the issue. As I mentioned, I can use the AWS CLI to upload into the bucket, and even when I hard-code my access key etc. into the Go SDK I receive this error.
Thank you for any thoughts, greatly appreciated.
I just compiled your code and it's executing OK ... one of the many ways to supply credentials to your binary is to populate these env vars:
export AWS_ACCESS_KEY_ID=AKfoobarT2IJEAU4
export AWS_SECRET_ACCESS_KEY=oa6oT0Xxt4foobarbambazdWFCb
export AWS_REGION=us-west-2
that is all you need when using the env var approach (your values are available in the AWS console in the browser)
the big picture is to create a wrapper shell script (bash) which contains the above three lines to populate the env vars that supply credentials, then in the same shell script execute the Go binary (typically you compile the Go code in some preliminary step) ... in my case I store the values of my three env vars in encrypted files which the shell script decrypts just before it calls the above export commands
sometimes it's helpful to drop kick and just use the equivalent aws command line commands to get yourself into the ballpark ... from a terminal run
aws s3 ls s3://cairo-mombasa-zaire --region us-west-2
which can also use those same env vars shown above
for completeness, here is your code with boilerplate added ... this runs OK and lists out the buckets:
package main

import (
    "fmt"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
    // "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func exitErrorf(msg string, args ...interface{}) {
    fmt.Fprintf(os.Stderr, msg+"\n", args...)
    os.Exit(1)
}

func main() {
    region_env_var := "AWS_REGION"
    curr_region := os.Getenv(region_env_var)
    if curr_region == "" {
        exitErrorf("ERROR - failed to get region from env var %v", region_env_var)
    }
    fmt.Println("here is region ", curr_region)

    // Load session from shared config
    sess := session.Must(session.NewSessionWithOptions(session.Options{
        SharedConfigState: session.SharedConfigEnable,
    }))

    svc := s3.New(sess)
    result, err := svc.ListBuckets(nil)
    if err != nil {
        exitErrorf("Unable to list buckets, %v", err)
    }
    for _, b := range result.Buckets {
        fmt.Printf("* %s created on %s\n", aws.StringValue(b.Name), aws.TimeValue(b.CreationDate))
    }
}
numBytes, err := downloader.Download(tempFile,
    &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(fileName),
    },
)
In my case the bucket value was wrong; it was missing a literal "/" at the end. Adding that fixed my problem.
The error I got - err: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403,
If anyone else happens to have this problem:
The issue was regarding environment variables, much like Scott suggests above; however, it was due to the lack of
export AWS_SDK_LOAD_CONFIG="true"
If this environment variable is not present, the Go SDK will not look for a credentials file. Along with this, I set environment variables for both of my keys, which allowed the connection to succeed.
To recap:
If you're attempting to use the shared credentials file, you must use the above-noted environment variable to enable it.
If you're using environment variables, you shouldn't be affected by this problem.

How to deploy REST API to AWS lambda using go-iris framework

I have created a REST API using the Go Iris framework. Now I want to deploy these APIs on AWS with a Lambda function. I am using MySQL as the database. Is it possible to deploy my Go executable file on AWS Lambda, or do I need to modify my code according to AWS Lambda specifications? I am trying to find a solution, but not getting much information.
Here is one of my API endpoints.
package main

import (
    "database/sql"

    "github.com/kataras/iris"
    "github.com/kataras/iris/middleware/logger"
    "github.com/kataras/iris/middleware/recover"
)

type Reward struct {
    Id          int    `json:"reward_id"`
    LotteryID   int    `json:"lottery_id"`
    RewardName  string `json:"reward_name"`
    Description string `json:"reward_description"`
    Asset       int    `json:"reward_asset"`
    AssetName   string `json:"reward_asset_name"`
}

func dbConn() (db *sql.DB) {
    dbDriver := "mysql"
    dbUser := "xxx"
    dbPass := "xxx"
    dbName := "xxx"
    db, err := sql.Open(xxxxxxxxx)
    if err != nil {
        panic(err.Error())
    }
    return db
}

func newApp() *iris.Application {
    app := iris.New()
    app.Logger().SetLevel("debug")
    app.Use(recover.New())
    app.Use(logger.New())

    db := dbConn()

    app.Get("/reward/{reward_id:int}", func(ctx iris.Context) {
        id1 := ctx.Params().GetIntDefault("reward_id", 0)
        stmtOut, err := db.Prepare("select id, lottery_id, reward_name, reward_description, reward_asset, reward_asset_name from rewards_table where id = ?")
        if err != nil {
            panic(err.Error())
        }
        defer stmtOut.Close()

        var id, lotteryId, rewardAsset int
        var rewardName, rewardDescription, rewardAssetName string
        err1 := stmtOut.QueryRow(id1).Scan(&id, &lotteryId, &rewardName, &rewardDescription, &rewardAsset, &rewardAssetName)
        if err1 != nil {
            panic(err.Error())
        }
        reward := Reward{
            Id:          id,
            LotteryID:   lotteryId,
            RewardName:  rewardName,
            Description: rewardDescription,
            Asset:       rewardAsset,
            AssetName:   rewardAssetName,
        }
        ctx.JSON(&reward)
    })
    return app
}

func main() {
    app := newApp()
    app.Run(iris.Addr(":8080"), iris.WithoutServerError(iris.ErrServerClosed), iris.WithOptimizations)
}
I have a few more API endpoints which do basic CRUD operations. I am thinking about using AWS Lambda and AWS API Gateway.
do I need to modify my code according to AWS Lambda specifications?
Yes. Your code for Lambda will need to have a handler:
AWS Lambda function handler in Go
This is the entry point to your function.
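For illustration, a minimal sketch of what a handler for the reward lookup could look like behind API Gateway. The JSON body, the path-parameter wiring and the omission of the database code are my assumptions, not part of the question.
package main

import (
    "context"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

// handler receives an API Gateway proxy event instead of an HTTP request
// on a listening port.
func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    rewardID := req.PathParameters["reward_id"]
    return events.APIGatewayProxyResponse{
        StatusCode: 200,
        Body:       `{"requested_reward_id": "` + rewardID + `"}`,
    }, nil
}

func main() {
    lambda.Start(handler)
}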
Also, it seems that your Go program is a web server built on Iris. If this is the case, you won't be able to use it anyway, as you can't invoke Lambda from the internet the way you would a regular server.
Also, Lambda runs for a maximum of 15 minutes, so its use as a server would be very limited.