Enable the Data API for Amazon Aurora

We have a running AWS Aurora cluster (not the Serverless version).
I was already able to connect to the DB externally via Querious (a GUI for SQL).
When using the Go RDS SDK I get the following error message:
HttpEndpoint is not enabled for cluster sample-db-cluster. Please refer to https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html#data-api.troubleshooting
This link tells me to activate the Data API.
Problem: this link, and everything else I have found so far, relates to Aurora Serverless, and I could not find any way to enable this for my Aurora instance.
I also tried to enable the Data API via the CLI:
aws rds modify-db-cluster --db-cluster-identifier my-cluster-id --enable-http-endpoint --region us-east-1
This did not work!
Below is my go code to connect to Aurora:
package main

import (
    "fmt"
    "log"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/rdsdataservice"
)

func main() {
    sess := getSession()
    SQLStatement := `SELECT * FROM testTable`
    fmt.Println("SQLStatement", SQLStatement)
    rdsdataserviceClient := rdsdataservice.New(sess)
    req, resp := rdsdataserviceClient.ExecuteStatementRequest(&rdsdataservice.ExecuteStatementInput{
        Database:    aws.String("my-database-name"),
        ResourceArn: aws.String("arn:aws:rds:us-east-1:XXXXXXXXXXX:cluster:XXXXXXXX"),
        SecretArn:   aws.String("arn:aws:secretsmanager:us-east-1:XXXXXXXXXXX:secret:XXXXXXXX"),
        Sql:         aws.String(SQLStatement),
    })
    err := req.Send()
    if err == nil {
        fmt.Println("Response:", resp)
    } else {
        fmt.Println("error:", err) // produces the error mentioned above
    }
}

func getSession() *session.Session {
    var sess *session.Session
    var err error
    if os.Getenv("aws_access_key_id") != "" && os.Getenv("aws_secret_access_key") != "" && os.Getenv("aws_region") != "" { // explicit credentials
        creds := credentials.NewStaticCredentials(os.Getenv("aws_access_key_id"), os.Getenv("aws_secret_access_key"), "")
        sess, err = session.NewSession(&aws.Config{
            Region:      aws.String("us-east-1"),
            Credentials: creds,
        })
        if err != nil {
            log.Println("error creating session with static credentials:", err)
        }
    } else {
        sess = session.Must(session.NewSession()) // credentials are passed implicitly by the role lambda-news-parser-executor (defined in IAM)
    }
    return sess
}

I could not find any way to enable this for my Aurora instance
This is because it is not supported: the Data API is only available for Aurora Serverless.
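You can confirm which mode your cluster runs in with `aws rds describe-db-clusters`; `--enable-http-endpoint` only takes effect when `EngineMode` is `serverless`. Below is a minimal sketch that extracts the engine mode from the CLI's JSON output (the cluster identifier and sample output here are hypothetical):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// describeOutput models the describe-db-clusters output (only the fields we need).
type describeOutput struct {
	DBClusters []struct {
		DBClusterIdentifier string `json:"DBClusterIdentifier"`
		EngineMode          string `json:"EngineMode"`
	} `json:"DBClusters"`
}

// engineMode extracts the EngineMode of the named cluster from
// `aws rds describe-db-clusters` JSON output; returns "" if not found.
func engineMode(cliJSON []byte, clusterID string) string {
	var out describeOutput
	if err := json.Unmarshal(cliJSON, &out); err != nil {
		return ""
	}
	for _, c := range out.DBClusters {
		if c.DBClusterIdentifier == clusterID {
			return c.EngineMode
		}
	}
	return ""
}

func main() {
	// Sample output for a provisioned (non-serverless) cluster.
	sample := []byte(`{"DBClusters":[{"DBClusterIdentifier":"sample-db-cluster","EngineMode":"provisioned"}]}`)
	fmt.Println(engineMode(sample, "sample-db-cluster")) // provisioned, so the Data API cannot be enabled
}
```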

OK! I found the issue:
the github.com/aws/aws-sdk-go/service/rdsdataservice package is only usable with Aurora Serverless, not "normal" instances.
Link here
Package rdsdataservice provides the client and types for making API requests to AWS RDS DataService.
Amazon RDS provides an HTTP endpoint to run SQL statements on an Amazon Aurora Serverless DB cluster. To run these statements, you work with the Data Service API.

Related

Connect to AWS Neptune with Golang GremlinGo

I am currently trying to set up a connection to AWS Neptune via Go, but it's not working. I am able to connect to AWS itself, but when I try to connect to the Neptune DB it says "no successful connections could be made: dial tcp 172.31.4.48:8182: i/o timeout". I am using the gremlingo module as in this code:
package main

import (
    "fmt"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/neptune"
    gremlingo "github.com/apache/tinkerpop/gremlin-go/v3/driver"
)

func main() {
    sess, err := session.NewSession(&aws.Config{
        Region:      aws.String("us-east-1"),
        Credentials: credentials.NewStaticCredentials("AWS-id key", "aws secret id key", ""),
    })
    if err != nil {
        fmt.Println("Couldn't create new session")
        return
    }
    neptune.New(sess)
    driverRemoteConnection, err := gremlingo.NewDriverRemoteConnection("wss://database-1-instance-1.asdasdasd.us-east-1.neptune.amazonaws.com:8182/gremlin",
        func(settings *gremlingo.DriverRemoteConnectionSettings) {
            settings.TraversalSource = "g"
        })
    if err != nil {
        fmt.Println(err)
        return
    }
    // Cleanup
    defer driverRemoteConnection.Close()
    // Creating graph traversal
    g := gremlingo.Traversal_().WithRemote(driverRemoteConnection)
    // Perform traversal
    results, err := g.V().Limit(2).ToList()
    if err != nil {
        fmt.Println(err)
        return
    }
    // Print results
    for _, r := range results {
        fmt.Println(r.GetString())
    }
}
I wasn't quite sure what the problem was, so I tried to connect to the cluster itself, and when that didn't work I tried to connect to the writer instance.
Thank you very much for your help.
Best regards
Amazon Neptune runs inside a VPC and does not expose a public endpoint. Code that sends queries must have access to that VPC. This could be as simple as running the code on an EC2 instance in the same VPC, but there are many other ways to grant access to a VPC, such as load balancers, VPC peering, Direct Connect, and others.
An easy way to check whether your code can reach the database is to send an HTTP request to the /status API from the same point of origin and see if it works.

How do I use aws-sdk-go-v2 with localstack?

I'm trying to migrate from aws-sdk-go to aws-sdk-go-v2. But I am using LocalStack locally to mimic some AWS services such as SQS and S3. I'm not sure how to configure the new SDK to use the LocalStack endpoint instead of the real one.
For example, in the v1 SDK I can point it to localstack by setting the endpoint here:
session.Must(session.NewSession(&aws.Config{
    Region:   aws.String("us-east-1"),
    Endpoint: aws.String("http://localstack:4566"),
}))
But how do I do this in the v2 SDK? I think I need to set some param in the config, but I don't see any option to specify the endpoint.
So if you go trudging through the python code, principally this, you'll see:
https://github.com/localstack/localstack/blob/25ba1de8a8841af27feab54b8d55c80ac46349e2/localstack/services/edge.py#L115
I then needed to overwrite the Authorization header when using the v2 AWS Go SDK in order to add the correct structure.
I picked the structure up by running the AWS CLI tool and trace-logging the LocalStack Docker container:
'Authorization': 'AWS4-HMAC-SHA256 Credential=AKIAR2X5NRNSRTCOJHCI/20210827/eu-west-1/sns/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=c69672d38631752ede15d90e7047a5183ebf3707a228decf6ec26e97fdbd02aa',
In Go I then needed to override the HTTP client to add that header:
type s struct {
    cl http.Client
}

func (s s) Do(r *http.Request) (*http.Response, error) {
    r.Header.Add("authorization", "AWS4-HMAC-SHA256 Credential=AKIAR2X5NRNSRTCOJHCI/20210827/eu-west-1/sns/aws4_request, SignedHeaders=content-type;host;x-amz-date, Signature=33fa777a3bb1241f30742419b8fab81945aa219050da6e29b34db16053661000")
    return s.cl.Do(r)
}

func NewSNS(endpoint, topicARN string) (awsPubSub, error) {
    cfg := aws.Config{
        EndpointResolver: aws.EndpointResolverFunc(func(service, region string) (aws.Endpoint, error) {
            return aws.Endpoint{
                PartitionID:       "aws",
                URL:               "http://localhost:4566",
                SigningRegion:     "eu-west-1",
                HostnameImmutable: true,
                // Source: aws.EndpointSourceCustom,
            }, nil
        }),
        HTTPClient: s{http.Client{}},
    }
    ....
....
It was very time consuming and painful and I'd love to know a better way, but this works for the time being...
It depends on the service that you use.
To initialize a Glue client:
cfg, err := config.LoadDefaultConfig(context.Background())
if err != nil {
    panic(err)
}
glueConnection := glue.New(glue.Options{Credentials: cfg.Credentials, Region: cfg.Region})
The equivalent of your code in the SDK v2 is:
cfg, err := config.LoadDefaultConfig(
    ctx,
    config.WithRegion("us-east-1"),
    config.WithEndpointResolverWithOptions(aws.EndpointResolverWithOptionsFunc(
        func(service, region string, options ...interface{}) (aws.Endpoint, error) {
            return aws.Endpoint{URL: "http://localhost:4566"}, nil
        }),
    ),
)
Using LoadDefaultConfig, you will not need to specify the region if you have already set it up in your AWS config. You can read more in the AWS SDK v2 docs.
You can find the above example in the package docs.

What's the correct way of loading AWS credentials from role in a Fargate task using AWS SDK Go?

I have the following snippet:
awsCredentials := credentials.NewChainCredentials(
    []credentials.Provider{
        &ec2rolecreds.EC2RoleProvider{
            Client: ec2metadata.New(newSession, aws.NewConfig()),
        },
        &credentials.SharedCredentialsProvider{},
        &credentials.EnvProvider{},
    })
which works fine whenever the code is running on an EC2 instance or when the access/secret keys are passed through variables (used for local testing).
However, this code fails when running on ECS+Fargate with NoCredentialProviders: no valid providers in chain. I checked the environment variables of the running container and it has the expected AWS_CONTAINER_CREDENTIALS_RELATIVE_URI, so credentials.EnvProvider should read it.
So, my question is: what's the correct way of reading these credentials? The problem I'm facing is not a lack of permissions (which would indicate an error in the policy/role), but that the code is not able to get the credentials at all.
UPDATE
I have narrowed this down to the use of ec2rolecreds.
Using this simple example:
package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/credentials/ec2rolecreds"
    "github.com/aws/aws-sdk-go/aws/ec2metadata"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    newSession, err := session.NewSession()
    if err != nil {
        log.Fatal(err)
    }
    awsCredentials := credentials.NewChainCredentials(
        []credentials.Provider{
            &ec2rolecreds.EC2RoleProvider{
                Client: ec2metadata.New(newSession, aws.NewConfig()),
            },
            &credentials.SharedCredentialsProvider{},
            &credentials.EnvProvider{},
        })
    sess, err := session.NewSession(&aws.Config{
        Region:      aws.String("us-east-1"),
        Credentials: awsCredentials,
    })
    if err != nil {
        log.Fatal(err)
    }
    // Create S3 service client
    svc := s3.New(sess)
    result, err := svc.ListBuckets(nil)
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("Buckets:")
    for _, b := range result.Buckets {
        fmt.Printf("* %s created on %s\n",
            aws.StringValue(b.Name), aws.TimeValue(b.CreationDate))
    }
}
If I remove ec2rolecreds, everything works fine both locally and in ECS+Fargate.
However, if I run this code as is, I get the same NoCredentialProviders: no valid providers in chain error.
The solution is to initialize clients using sessions instead of credentials, i.e:
conf := aws.NewConfig().WithRegion("us-east-1")
sess := session.Must(session.NewSession(conf))
svc := s3.New(sess)
// others:
// svc := sqs.New(sess)
// svc := dynamodb.New(sess)
// ...
Because, as @Ay0 points out, the default credential chain already includes both EnvProvider and RemoteCredProvider.
In case you still need the credentials, you can use:
creds := stscreds.NewCredentials(sess, "myRoleARN")
as the documentation points out. Note that the role's policy must allow the sts:AssumeRole action. For more information, see the stscreds.NewCredentials(...) docs.
So, a session can be configured using a Config object.
Reading through the specs of this object, it says for Credentials:
// The credentials object to use when signing requests. Defaults to a
// chain of credential providers to search for credentials in environment
// variables, shared credential file, and EC2 Instance Roles.
Credentials *credentials.Credentials
The defaults are already what my snippet was doing, so I removed all the awsCredentials block and now it's working fine everywhere. Locally, EC2, Fargate...
UPDATE
To expand the answer: the reason removing the awsCredentials block makes this work is that, if you check the SDK's code, https://github.com/aws/aws-sdk-go/blob/master/aws/defaults/defaults.go#L107, the default credential chain checks both EnvProvider and RemoteCredProvider.
By overriding the default credential chain, the code was unable to look for credentials in RemoteCredProvider, which is the provider that handles the AWS_CONTAINER_CREDENTIALS_RELATIVE_URI and AWS_CONTAINER_CREDENTIALS_FULL_URI environment variables.

How to deploy REST API to AWS lambda using go-iris framework

I have created a REST API using the Go Iris framework. Now I want to deploy these APIs on AWS with a Lambda function. I am using MySQL as the database. Is it possible to deploy my Go executable to AWS Lambda, or do I need to modify my code according to the AWS Lambda specifications? I have been trying to find a solution, but haven't found much information.
Here is one of my API end point.
package main

import (
    "database/sql"

    "github.com/kataras/iris"
    "github.com/kataras/iris/middleware/logger"
    "github.com/kataras/iris/middleware/recover"
)

type Reward struct {
    Id          int    `json:"reward_id"`
    LotteryID   int    `json:"lottery_id"`
    RewardName  string `json:"reward_name"`
    Description string `json:"reward_description"`
    Asset       int    `json:"reward_asset"`
    AssetName   string `json:"reward_asset_name"`
}

func dbConn() (db *sql.DB) {
    dbDriver := "mysql"
    dbUser := "xxx"
    dbPass := "xxx"
    dbName := "xxx"
    db, err := sql.Open(xxxxxxxxx)
    if err != nil {
        panic(err.Error())
    }
    return db
}

func newApp() *iris.Application {
    app := iris.New()
    app.Logger().SetLevel("debug")
    app.Use(recover.New())
    app.Use(logger.New())
    db := dbConn()
    app.Get("/reward/{reward_id:int}", func(ctx iris.Context) {
        id1 := ctx.Params().GetIntDefault("reward_id", 0)
        stmtOut, err := db.Prepare("select id, lottery_id, reward_name, reward_description, reward_asset, reward_asset_name from rewards_table where id = ?")
        if err != nil {
            panic(err.Error())
        }
        defer stmtOut.Close()
        var id, lotteryId, rewardAsset int
        var rewardName, rewardDescription, rewardAssetName string
        err1 := stmtOut.QueryRow(id1).Scan(&id, &lotteryId, &rewardName, &rewardDescription, &rewardAsset, &rewardAssetName)
        if err1 != nil {
            panic(err1.Error())
        }
        reward := Reward{
            Id:          id,
            LotteryID:   lotteryId,
            RewardName:  rewardName,
            Description: rewardDescription,
            Asset:       rewardAsset,
            AssetName:   rewardAssetName,
        }
        ctx.JSON(&reward)
    })
    return app
}

func main() {
    app := newApp()
    app.Run(iris.Addr(":8080"), iris.WithoutServerError(iris.ErrServerClosed), iris.WithOptimizations)
}
I have few more API endpoints which do basic CRUD operations. I am thinking about using AWS lambda and AWS API Gateway.
should I need to modify my code according to AWS lambda specifications?
Yes. Your code for Lambda will need a handler:
AWS Lambda function handler in Go
This is the entry point to your function.
Also, it seems that your Go program is a web server built on Iris. If that is the case, you won't be able to use it anyway, as you can't invoke a Lambda from the internet the way you would a regular server.
Also, a Lambda runs for at most 15 minutes, so its use as a server would be very limited.

Access DynamoDB in Golang Fargate Task

I'm trying to access DynamoDB from my Fargate task, which is written in Go. All I get is a timeout. What am I missing?
I'm using the CloudFormation templates from AWS Labs (here), plus a task role that allows full DynamoDB access. It's the simplest public-subnet template plus the Fargate one.
I tried adding a VPC endpoint, but it made no difference.
Running the task on my machine works.
Running a Python (Flask) task that does (more or less) the same works both locally and on AWS. It's the same setup; I just changed the task image.
This is the code:
package main

import (
    "context"
    "fmt"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws/endpoints"
    "github.com/aws/aws-sdk-go-v2/aws/external"
    "github.com/aws/aws-sdk-go-v2/service/dynamodb"
    "github.com/gin-gonic/gin"
)

var db *dynamodb.Client

func init() {
    cfg, err := external.LoadDefaultAWSConfig()
    if err != nil {
        panic("unable to load SDK config, " + err.Error())
    }
    cfg.Region = endpoints.UsEast2RegionID
    db = dynamodb.New(cfg)
}

func main() {
    fmt.Println("go!")
    router := gin.New()
    router.Use(gin.Recovery())
    router.GET("/ping", func(c *gin.Context) { c.JSON(200, gin.H{"msg": "pong"}) })
    router.GET("/pong", func(c *gin.Context) {
        req := db.ListTablesRequest(&dynamodb.ListTablesInput{})
        ctx := context.Background()
        ctx, cancelFn := context.WithTimeout(ctx, time.Second*5)
        defer cancelFn()
        res, err := req.Send(ctx)
        if err != nil {
            c.JSON(400, gin.H{"msg": "Fail", "error": err.Error()})
            return
        }
        c.JSON(200, gin.H{"msg": fmt.Sprint(res)})
    })
    router.Run()
}
Timeout:
helles:v2> curl xyz.us-east-2.elb.amazonaws.com/pong
{"error":"RequestCanceled: request context canceled\ncaused by: context deadline exceeded","msg":"Fail"}
Expected:
helles:v2> curl 127.0.0.1:8080/pong
{"msg":"{\n TableNames: [\"OneTable\",\"OtherTable\"]\n}"}
Python for comparison:
#!/usr/bin/env python3
from flask import Flask
import boto3

dynamodb = boto3.client("dynamodb")
app = Flask(__name__)

@app.route("/ping")
def index():
    return "pong"

@app.route("/pong")
def pong():
    return dynamodb.list_tables()

if __name__ == "__main__":
    app.run(debug=True, host="0.0.0.0", port=8080)
The result is a bit different, with metadata added, but the table names are there.
Thanks
Answering my own question.
The problem was the Docker base image I was using. My Dockerfile was:
FROM scratch
ADD ./build/api/api /
EXPOSE 8080
ENTRYPOINT ["/api"]
With a statically linked executable.
Changing FROM scratch to FROM gcr.io/distroless/base made it work.
My guess is that the application/DynamoDB client wasn't able to resolve the service address without the pieces missing from the base image.
Thanks @Dude0001.
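For reference, if you want to stay on scratch, the piece most often missing for the AWS SDK is the CA certificate bundle (needed for TLS to the AWS endpoints). A sketch of a multi-stage build that copies it in (the builder image and paths assume a Debian-based Go image):

```dockerfile
# Builder stage: any image with Go and CA certificates installed.
FROM golang:1.21 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /api .

# Runtime stage: scratch plus the pieces the AWS SDK needs for TLS.
FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /api /api
EXPOSE 8080
ENTRYPOINT ["/api"]
```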
A timeout is often a network issue. Have you checked the security groups used by both the ECS task and DynamoDB? You need to make sure you have rules set up to egress out of ECS and ingress into DynamoDB on the correct ports.
You said you set up an endpoint for DynamoDB in the VPC. It's not clear from your post whether you are trying to connect to a private endpoint in a private VPC or to go through the public internet. If you are going through the public internet, you also need to check that your ECS task is in a VPC that has a NAT gateway out to the public internet. It also looks like you are trying to connect through 127.0.0.1 or an ELB DNS name to reach the DynamoDB service, which doesn't make sense to me.