Getting S3_REGION for AWS S3 image upload in Golang

I want to upload an image to AWS S3.
const (
    S3_REGION = ""
    S3_BUCKET = ""
)

func main() {
    // Create a single AWS session (we can reuse this if we're uploading many files)
    s, err := session.NewSession(&aws.Config{Region: aws.String(S3_REGION)})
    if err != nil {
        log.Fatal(err)
    }
    // Upload
    err = AddFileToS3(s, "result.csv")
    if err != nil {
        log.Fatal(err)
    }
}
I am stuck here.
Where do I get the value for S3_REGION in this code?
Source : https://golangcode.com/uploading-a-file-to-s3/

When you log in to the AWS console, the top right shows which region you are logged in to; for example, "Oregon" refers to the "us-west-2" region.
The AWS documentation has a "Regions and Endpoints" table that maps each region name to its region code.
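As a minimal sketch (aws-sdk-go v1; the bucket name is just a placeholder), you can either hard-code that region code or let the SDK resolve the region from the AWS_REGION environment variable or the shared config file:
package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
)

const (
    S3_REGION = "us-west-2"         // region code matching your bucket, e.g. "Oregon" -> us-west-2
    S3_BUCKET = "my-example-bucket" // hypothetical bucket name
)

func main() {
    // Option 1: hard-code the region code.
    s, err := session.NewSession(&aws.Config{Region: aws.String(S3_REGION)})
    if err != nil {
        log.Fatal(err)
    }
    _ = s

    // Option 2: let the SDK read the region from AWS_REGION or ~/.aws/config.
    s2, err := session.NewSessionWithOptions(session.Options{
        SharedConfigState: session.SharedConfigEnable,
    })
    if err != nil {
        log.Fatal(err)
    }
    _ = s2
}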

Related

AWS stscreds SDK to refresh credentials for cross account assume roles

I have set up cross-account reading of a Kinesis stream, but I get a security-token-expired error while the Kinesis client is reading records. I used STS assume-role to assume roleA in accountA, then used roleA's credentials to assume roleB, and finally returned the Kinesis client, so no refresh logic is applied and the credentials expire after 1 hour by default. I looked up stscreds.AssumeRoleProvider and the docs say it will refresh the credentials, but I have no idea how to refresh the first credential for the assumed roleA and then refresh the second credential for the assumed roleB. Or is it better to call the method again to reinitialize the Kinesis client?
Here is the code block.
cfg, err := config.LoadDefaultConfig(
    context.TODO(),
    config.WithRegion("us-west-2"),
)
if err != nil {
    return nil, err
}

// First hop: assume roleA in accountA.
stsclient := sts.NewFromConfig(cfg)
assumingcnf, err := config.LoadDefaultConfig(
    context.TODO(),
    config.WithRegion("us-west-2"),
    config.WithCredentialsProvider(aws.NewCredentialsCache(
        stscreds.NewAssumeRoleProvider(
            stsclient,
            roleToAssumeArn1,
        )),
    ),
)
if err != nil {
    return nil, err
}

// Second hop: use roleA's credentials to assume roleB.
stsclient = sts.NewFromConfig(assumingcnf)
cnf, err := config.LoadDefaultConfig(
    context.TODO(),
    config.WithRegion("us-west-2"),
    config.WithCredentialsProvider(aws.NewCredentialsCache(
        stscreds.NewAssumeRoleProvider(
            stsclient,
            roleToAssumeArn2,
        )),
    ),
)
if err != nil {
    return nil, err
}

kClient := kinesis.NewFromConfig(cnf)
return kClient, nil
You should be able to do this with the credential providers that AWS ships. I'm assuming you're using aws-sdk-go-v2.
The code below makes the resulting CredentialsProvider return the cached credentials until they expire; then it calls provider2, which uses sts2 to get new credentials for roleB, and sts2 in turn always asks provider1 first for fresh credentials for roleA.
func createProvider(cfg aws.Config) aws.CredentialsProvider {
    sts1 := sts.NewFromConfig(cfg)
    provider1 := stscreds.NewAssumeRoleProvider(sts1, "roleA")

    sts2 := sts.NewFromConfig(cfg, func(o *sts.Options) { o.Credentials = provider1 })
    provider2 := stscreds.NewAssumeRoleProvider(sts2, "roleB")

    return aws.NewCredentialsCache(provider2)
}
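A minimal usage sketch under the same assumptions (aws-sdk-go-v2; the region and the role ARNs inside createProvider are placeholders): build the base config once, swap in the chained provider, then create the Kinesis client from it.
package main

import (
    "context"

    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/kinesis"
)

// newKinesisClient assumes createProvider (from the answer above) lives in the
// same package, in its own file with its own imports.
func newKinesisClient(ctx context.Context) (*kinesis.Client, error) {
    cfg, err := config.LoadDefaultConfig(ctx, config.WithRegion("us-west-2"))
    if err != nil {
        return nil, err
    }
    // Replace the default credentials with the cached roleA -> roleB chain.
    cfg.Credentials = createProvider(cfg)
    return kinesis.NewFromConfig(cfg), nil
}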

How to deploy REST API to AWS lambda using go-iris framework

I have created a REST API using the Go Iris framework. Now I want to deploy these APIs on AWS with a Lambda function. I am using MySQL as the database. Is it possible to deploy my Go executable to AWS Lambda as-is, or do I need to modify my code to match the AWS Lambda specifications? I have been trying to find a solution, but have not found much information.
Here is one of my API endpoints.
package main

import (
    "database/sql"

    _ "github.com/go-sql-driver/mysql" // MySQL driver registered for database/sql (assumed from dbDriver := "mysql")

    "github.com/kataras/iris"
    "github.com/kataras/iris/middleware/logger"
    "github.com/kataras/iris/middleware/recover"
)

type Reward struct {
    Id          int    `json:"reward_id"`
    LotteryID   int    `json:"lottery_id"`
    RewardName  string `json:"reward_name"`
    Description string `json:"reward_description"`
    Asset       int    `json:"reward_asset"`
    AssetName   string `json:"reward_asset_name"`
}

func dbConn() (db *sql.DB) {
    dbDriver := "mysql"
    dbUser := "xxx"
    dbPass := "xxx"
    dbName := "xxx"
    db, err := sql.Open(xxxxxxxxx)
    if err != nil {
        panic(err.Error())
    }
    return db
}

func newApp() *iris.Application {
    app := iris.New()
    app.Logger().SetLevel("debug")
    app.Use(recover.New())
    app.Use(logger.New())
    db := dbConn()
    app.Get("/reward/{reward_id:int}", func(ctx iris.Context) {
        id1 := ctx.Params().GetIntDefault("reward_id", 0)
        stmtOut, err := db.Prepare("select id, lottery_id, reward_name, reward_description, reward_asset, reward_asset_name from rewards_table where id = ?")
        if err != nil {
            panic(err.Error())
        }
        defer stmtOut.Close()
        var id, lotteryId, rewardAsset int
        var rewardName, rewardDescription, rewardAssetName string
        err1 := stmtOut.QueryRow(id1).Scan(&id, &lotteryId, &rewardName, &rewardDescription, &rewardAsset, &rewardAssetName)
        if err1 != nil {
            panic(err1.Error())
        }
        reward := Reward{
            Id:          id,
            LotteryID:   lotteryId,
            RewardName:  rewardName,
            Description: rewardDescription,
            Asset:       rewardAsset,
            AssetName:   rewardAssetName,
        }
        ctx.JSON(&reward)
    })
    return app
}

func main() {
    app := newApp()
    app.Run(iris.Addr(":8080"), iris.WithoutServerError(iris.ErrServerClosed), iris.WithOptimizations)
}
I have a few more API endpoints that do basic CRUD operations. I am thinking about using AWS Lambda and AWS API Gateway.
Do I need to modify my code to match the AWS Lambda specifications?
Yes. Your code for Lambda will need a handler (see "AWS Lambda function handler in Go"); that is the entry point to your function.
Also, your Go program appears to be a web server built on Iris. If that is the case, you won't be able to use it as-is, because you can't invoke a Lambda from the internet the way you would a regular server.
In addition, a Lambda runs for at most 15 minutes, so its use as a long-running server would be very limited.
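For reference, a minimal handler for the Go runtime looks roughly like this. This is just a sketch: it uses the github.com/aws/aws-lambda-go packages and assumes an API Gateway proxy integration; wiring your existing Iris routes into it is not shown.
package main

import (
    "context"
    "encoding/json"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
)

// handler is the Lambda entry point; API Gateway invokes it with a proxy event.
func handler(ctx context.Context, req events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
    body, err := json.Marshal(map[string]string{"path": req.Path})
    if err != nil {
        return events.APIGatewayProxyResponse{StatusCode: 500}, err
    }
    return events.APIGatewayProxyResponse{
        StatusCode: 200,
        Headers:    map[string]string{"Content-Type": "application/json"},
        Body:       string(body),
    }, nil
}

func main() {
    // lambda.Start replaces app.Run: the Lambda runtime, not your process, owns the event loop.
    lambda.Start(handler)
}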

S3 on EC2 with IAM: Error NoCredentialProviders: no valid providers in chain. Deprecated

My application uses S3 and runs on EC2. An IAM role is configured on the instance, so authentication happens without keys (no access key or secret key).
I am able to upload and download files using the AWS CLI. However, when I try to perform the download operation using aws-sdk-go, I get the error below:
AccessDenied: Access Denied
status code: 403, request id: F945BDB5410E1A00, host id: m74jJ8z/AEzdkaJkWKdIqPEwPIYPZfWnLLfa5UpEwHwaBcXOuXTPY1aw/u/5HGralKg+ewAWEJA=
I followed the official guide at https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/ec2rolecreds/ and this issue https://github.com/aws/aws-sdk-go/issues/430, but I still get the error above.
Below is my code:
s3UploadPath = config.GetString("upload assets to s3.bucket")

s3Config := aws.NewConfig()
s3Config.CredentialsChainVerboseErrors = aws.Bool(true)

sess, err := session.NewSession(s3Config)
if err != nil {
    Logger.Fatal("Error initializing s3 uploader. " + err.Error())
    os.Exit(0)
}

// the upload code
uploader = s3manager.NewUploader(sess)
res, err := uploader.Upload(&s3manager.UploadInput{
    Bucket: aws.String(s3UploadPath),
    Key:    aws.String(filename),
    Body:   f,
})
if err != nil {
    log.Fatal("error on upload. " + err.Error())
}
// then continue with the download code
For what it's worth, the same download and upload operations succeed through the AWS CLI.
Am I doing something wrong?
You don't need to specify credentials when using an IAM role on an EC2 instance.
I see you are getting Access Denied, which means your Go program is able to pick up the EC2 instance-profile credentials but is hitting this error, probably due to missing permissions.
Reading your code, it seems you want to write an object to S3. Can you make sure your IAM role is granted s3:Get*, s3:List*, s3:PutObject, and s3:PutObjectAcl, and that there is no explicit Deny in the S3 bucket policy?
I managed to solve this error by doing two things.
The first was using stscreds.NewCredentials(sess, roleArn) as the credentials on the session:
s3Config := aws.NewConfig()
s3Config.CredentialsChainVerboseErrors = aws.Bool(true)
s3Config.WithLogLevel(aws.LogDebugWithHTTPBody)
s3Config.Region = aws.String(region)
s3Config.WithHTTPClient(&http.Client{
    Transport: &http.Transport{
        Proxy: http.ProxyFromEnvironment,
    },
    Timeout: 10 * time.Second,
})

sess, err := session.NewSession(s3Config)
if err != nil {
    log.Fatal("Error initializing session. " + err.Error())
}

sess.Config.Credentials = stscreds.NewCredentials(sess, arn)
_, err = sess.Config.Credentials.Get()
if err != nil {
    log.Fatal("Error getting role. " + err.Error())
}
The second was defining the NO_PROXY environment variable with the value 169.254.169.254. That is the link-local address of the EC2 instance metadata service, which the SDK calls to fetch the role credentials.
Since my application uses a proxy to reach the S3 endpoint, I needed to exclude that address from proxying.
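A small sketch of that second step, assuming the proxy is picked up via the standard environment variables and http.ProxyFromEnvironment (as in the transport above). The safest place to set NO_PROXY is in the environment before the process starts; setting it in code has to happen before the first HTTP request, since Go caches the proxy environment on first use.
package main

import (
    "log"
    "os"

    "github.com/aws/aws-sdk-go/aws/session"
)

func main() {
    // Exclude the EC2 instance metadata endpoint from proxying so the SDK can
    // still reach the link-local address and fetch the role credentials.
    os.Setenv("NO_PROXY", "169.254.169.254")

    sess, err := session.NewSession()
    if err != nil {
        log.Fatal(err)
    }
    _ = sess // continue with stscreds / s3manager as in the snippet above
}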

Docker ImagePush failing with "no basic auth credentials"

I'm attempting to use the Docker Go SDK to push an image to AWS ECR.
This is the code I'm using to push the image, where tag = ".dkr.ecr.us-east-1.amazonaws.com/api:mytag":
func Push(c context.Context, tag string, credentials string) error {
    cli, err := client.NewClient(apiSocket, apiVersion, nil, apiHeaders)
    if err != nil {
        return err
    }
    fmt.Println(credentials)

    resp, err := cli.ImagePush(c, tag, types.ImagePushOptions{
        RegistryAuth: credentials,
    })
    if err != nil {
        panic(err)
    }

    io.Copy(os.Stdout, resp)
    resp.Close()
    return nil
}
But I keep getting this response:
{"status":"The push refers to repository [<id>.dkr.ecr.us-east-1.amazonaws.com/api]"}
{"status":"Preparing","progressDetail":{},"id":"23432919a50a"}
{"status":"Preparing","progressDetail":{},"id":"9387ad10e44c"}
{"status":"Preparing","progressDetail":{},"id":"e2a4679276bf"}
{"status":"Preparing","progressDetail":{},"id":"31c5c8035e63"}
{"status":"Preparing","progressDetail":{},"id":"a73789d39a06"}
{"status":"Preparing","progressDetail":{},"id":"f36942254806"}
{"status":"Preparing","progressDetail":{},"id":"4a2596f9aa79"}
{"status":"Preparing","progressDetail":{},"id":"5cf3066ccdbc"}
{"status":"Preparing","progressDetail":{},"id":"76a1661c28fc"}
{"status":"Preparing","progressDetail":{},"id":"beefb6beb20f"}
{"status":"Preparing","progressDetail":{},"id":"df64d3292fd6"}
{"status":"Waiting","progressDetail":{},"id":"beefb6beb20f"}
{"status":"Waiting","progressDetail":{},"id":"df64d3292fd6"}
{"errorDetail":{"message":"no basic auth credentials"},"error":"no basic auth credentials"}
Any ideas?
Notes:
I've verified that the credentials string I'm passing in is a base64-encoded user:pass for the ECR registry.
I've verified that the ECR credentials I'm getting are from the same AWS region as the one where I'm attempting to push the image.
I found out from a GitHub comment that RegistryAuth actually needs to be a base64-encoded JSON string with username and password fields. Ugh. This is undocumented in the Docker repository. Before encoding, the JSON looks like this:
RegistryAuth = "{ \"username\": \"myusername\", \"password\": \"mypassword\", \"email\": \"myemail\" }"
It is working for me now.
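A hedged sketch of building that value with the Docker SDK's own types instead of hand-writing the JSON (the username/password would normally come from ecr:GetAuthorizationToken; the values here are placeholders):
package main

import (
    "encoding/base64"
    "encoding/json"

    "github.com/docker/docker/api/types"
)

// encodeRegistryAuth builds the value expected by ImagePushOptions.RegistryAuth:
// a base64-encoded JSON object with username/password fields.
func encodeRegistryAuth(username, password string) (string, error) {
    authConfig := types.AuthConfig{
        Username: username,
        Password: password,
    }
    buf, err := json.Marshal(authConfig)
    if err != nil {
        return "", err
    }
    return base64.URLEncoding.EncodeToString(buf), nil
}
The return value is what you would pass as the credentials argument to the Push function above.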

NoCredentialProviders in AWS S3 in Golang

I am working in Go. I am attempting to upload an image to AWS S3, but I get:
NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
My code is like this:
func firstFunction() {
    // Connect to S3
    AWSsession, err := ConnectAWS()
    if err != nil {
        fmt.Println("Error Connecting to AWS S3")
    }
    GetSingleMedia(AWSsession)
}

func ConnectAWS() (*session.Session, error) {
    // Create S3 session
    AWSsession, err := session.NewSession(&aws.Config{
        Region: aws.String("us-west-2")},
    )
    if err != nil {
        fmt.Println("Error AWS:", err.Error())
    }
    return AWSsession, err
}

func GetSingleMedia(...someparams, AWSsession *session.Session) {
    // o.Blob is correct, this is valid
    data, err := ioutil.ReadAll(bytes.NewReader(o.Blob))
    // Store: bytes.NewReader(o.Blob)
    UploadImage(AWSsession, bytes.NewReader(o.Blob), bucket, "SomeID")
}

func UploadImage(AWSsession *session.Session, reader *bytes.Reader, bucket string, key string) (*s3manager.UploadOutput, error) {
    uploader := s3manager.NewUploader(AWSsession)
    result, err := uploader.Upload(&s3manager.UploadInput{
        Body:   reader,
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    })
    if err != nil {
        fmt.Println("Error uploading img: ", err.Error())
    }
    return result, err
}
Also, I have placed the credentials under /home/myuser/.aws/ (there is a credentials file there). I don't get any error when creating the session, so what could be the problem?
The error is triggered in UploadImage.
EDIT:
Currently in the credentials file I have:
[default]
awsBucket = "someBucket"
awsAccessKey = "SOME_ACCESS_KEY"
awsSecretKey = "SOME_AWS_SECRET_KEY"
Should I change any permission or something?
I would suggest you follow the guide here: http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html
This command will ask you for your access/secret keys and write them in the correct format:
aws configure
It would appear your credentials file is in the wrong format. The correct format would be something like this:
[default]
aws_access_key_id = SOME_ACCESS_KEY
aws_secret_access_key = SOME_AWS_SECRET_KEY
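Once the file is in that format, the default credential chain should pick it up automatically. If you want to be explicit about it (or fail fast with a clear error), a sketch with aws-sdk-go v1 could look like this; the region and profile name are assumptions:
package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/credentials"
    "github.com/aws/aws-sdk-go/aws/session"
)

func main() {
    // Empty filename means the default ~/.aws/credentials; "default" is the profile.
    creds := credentials.NewSharedCredentials("", "default")
    if _, err := creds.Get(); err != nil {
        log.Fatal("credentials not found or malformed: ", err)
    }

    sess, err := session.NewSession(&aws.Config{
        Region:      aws.String("us-west-2"),
        Credentials: creds,
    })
    if err != nil {
        log.Fatal(err)
    }
    _ = sess // pass this session to s3manager.NewUploader as in the question
}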