Unable to get s3.Object ACL using AWS SDK in go - amazon-web-services

I am trying to list the objects of one bucket using the official example
i := 0
err = svc.ListObjectsPages(&s3.ListObjectsInput{
    Bucket: &bucketName,
}, func(p *s3.ListObjectsOutput, last bool) (shouldContinue bool) {
    fmt.Println("Page,", i)
    i++
    for _, obj := range p.Contents {
        fmt.Println("Object:", *obj.Key)
    }
    return true
})
However, I see that the s3.Object type does not have any ACL information associated with it.
How can I get an s3.Object's ACL information?

Refer to this example to learn how to get an Amazon S3 object's ACL information using the AWS SDK for Go V2.
https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/gov2/s3/GetObjectAcl
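The linked example targets V2. With the v1 SDK that the question's code uses, the same information comes from a separate GetObjectAcl call per object. A minimal sketch, slotted into the page callback from the question (note it costs one extra API request per object):

for _, obj := range p.Contents {
    acl, err := svc.GetObjectAcl(&s3.GetObjectAclInput{
        Bucket: &bucketName,
        Key:    obj.Key,
    })
    if err != nil {
        fmt.Println("GetObjectAcl failed for", *obj.Key, ":", err)
        continue
    }
    for _, grant := range acl.Grants {
        // Each grant pairs a grantee with a permission such as FULL_CONTROL or READ.
        fmt.Printf("%s: %v => %s\n", *obj.Key, grant.Grantee, *grant.Permission)
    }
}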

Related

Getting an API error when adding CIDRs into Managed Prefix List on AWS using AWS Go SDK

I am facing a very strange situation here. I currently have a completely new Managed Prefix List provisioned in my AWS account, with no CIDRs registered in it.
My objective is to have those CIDRs loaded by an external service written in Go using the AWS Go SDK. The part of my code that actually loads the CIDR list is shown below:
func (a AWSPrefixListRepository) AddCidrs(cidrs []domain.Cidr, resource string, currentVersion int64) error {
    svc := ec2.New(a.sess)
    _, err := svc.ModifyManagedPrefixList(&ec2.ModifyManagedPrefixListInput{
        CurrentVersion: &currentVersion,
        PrefixListId:   &resource,
        AddEntries:     a.buildAddEntries(cidrs),
    })
    if err != nil {
        return err
    }
    return nil
}

func (a AWSPrefixListRepository) buildAddEntries(cidrs []domain.Cidr) []*ec2.AddPrefixListEntry {
    var addEntries []*ec2.AddPrefixListEntry
    for _, cidr := range cidrs {
        addEntries = append(addEntries, &ec2.AddPrefixListEntry{
            Cidr:        &cidr.PrefixIpv4,
            Description: &cidr.Description,
        })
    }
    return addEntries
}
The problem happens when cidrs []domain.Cidr has more than one item. Then I get the error below
CIDR (99.79.87.237/32) is a duplicate.
That was totally my mistake. The reference &cidr.PrefixIpv4 pointed at a field of the loop variable, which is reused on every iteration, so every entry carried the same PrefixIpv4.
The API message makes sense now. I tracked this down by enabling the SDK's debug log level.
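For anyone else who hits this: before Go 1.22, a range loop reuses a single loop variable, so taking the address of it (or of one of its fields) on every iteration yields pointers to the same memory. A minimal sketch of the usual fix is to copy the variable inside the loop body:

for _, cidr := range cidrs {
    cidr := cidr // shadow the loop variable so each entry points at its own copy
    addEntries = append(addEntries, &ec2.AddPrefixListEntry{
        Cidr:        &cidr.PrefixIpv4,
        Description: &cidr.Description,
    })
}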

AWS S3 - Golang SDK - SignatureDoesNotMatch

I'm looking to integrate an S3 bucket with an API I'm developing, and I'm running into this error wherever I go:
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403
I have done the following:
Installed the SDK & AWS CLI, and ran aws configure
Double (triple) checked the spelling of the key, secret key, and bucket permissions
Attempted with the credentials file, .env, and even hard-coding the values directly
Tested with the AWS CLI (THIS WORKS), so I believe I can rule out permissions and keys as a whole.
I'm testing by trying to list buckets; here is the code, taken directly from the AWS documentation:
sess := session.Must(session.NewSessionWithOptions(session.Options{ // <--- DEBUGGER SET HERE
    SharedConfigState: session.SharedConfigEnable,
}))
svc := s3.New(sess)
result, err := svc.ListBuckets(nil)
if err != nil {
    exitErrorf("Unable to list buckets, %v", err)
}
for _, b := range result.Buckets {
    fmt.Printf("* %s created on %s\n", aws.StringValue(b.Name), aws.TimeValue(b.CreationDate))
}
Using the debugger, I can see the session's config as the program runs; the issue is potentially here:
config
  -> credentials
    -> creds
      -> v
        -> Access Key = ""
        -> Secret Access Key = ""
        -> Token = ""
    -> provider
      -> value
        -> Access Key With Value
        -> Secret Access Key With Value
        -> Token With Value
I personally cannot find any documentation regarding "creds" / "v", and I don't know if this is causing the issue. As I mentioned, I can use the AWS CLI to upload into the bucket, and even when I hard-code my access key etc. into the Go SDK I receive this error.
Thank you for any thoughts, greatly appreciated.
I just compiled your code and it's executing OK ... one of the many ways to supply credentials to your binary is to populate these env vars
export AWS_ACCESS_KEY_ID=AKfoobarT2IJEAU4
export AWS_SECRET_ACCESS_KEY=oa6oT0Xxt4foobarbambazdWFCb
export AWS_REGION=us-west-2
that is all you need when using the env var approach (your values are available in the AWS console)
the big picture is to create a wrapper shell script (bash) which contains the above three lines to populate the env vars to supply credentials, then in the same shell script execute the golang binary (typically you compile the golang in some preliminary process) ... in my case I store the values of my three env vars in encrypted files which the shell script decrypts just before it calls the above export commands
sometimes it's helpful to drop kick and just use the aws command line equivalent commands to get yourself into the ballpark ... from a terminal run
aws s3 ls s3://cairo-mombasa-zaire --region us-west-2
which can also use those same env vars shown above
for completeness here is your code with boilerplate added ... this runs OK and lists out the buckets
package main

import (
    "fmt"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
    // "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func exitErrorf(msg string, args ...interface{}) {
    fmt.Fprintf(os.Stderr, msg+"\n", args...)
    os.Exit(1)
}

func main() {
    region_env_var := "AWS_REGION"
    curr_region := os.Getenv(region_env_var)
    if curr_region == "" {
        exitErrorf("ERROR - failed to get region from env var %v", region_env_var)
    }
    fmt.Println("here is region ", curr_region)

    // Load session from shared config
    sess := session.Must(session.NewSessionWithOptions(session.Options{
        SharedConfigState: session.SharedConfigEnable,
    }))
    svc := s3.New(sess)

    result, err := svc.ListBuckets(nil)
    if err != nil {
        exitErrorf("Unable to list buckets, %v", err)
    }
    for _, b := range result.Buckets {
        fmt.Printf("* %s created on %s\n", aws.StringValue(b.Name), aws.TimeValue(b.CreationDate))
    }
}
numBytes, err := downloader.Download(tempFile,
    &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(fileName),
    },
)
In my case the bucket value was wrong; it was missing a literal "/" at the end. Adding that fixed my problem.
The error I got: err: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403
If anyone else happens to have this problem,
The issue was regarding environment variables, much like Scott suggests above; however, it was due to the lack of
export AWS_SDK_LOAD_CONFIG="true"
If this environment variable is not present, the Go SDK will not look for the shared credentials file. Along with this, I set environment variables for both of my keys, which allowed the connection to succeed.
To recap:
If you're attempting to use the shared credentials folder, you must set the above-noted environment variable to enable it.
If you're using environment variables, you shouldn't be affected by this problem.
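For completeness, a third option with the v1 SDK is to skip the shared files entirely and pass static credentials in code via credentials.NewStaticCredentials (from github.com/aws/aws-sdk-go/aws/credentials); a sketch, reusing the placeholder values from the answer above:

sess, err := session.NewSession(&aws.Config{
    Region: aws.String("us-west-2"),
    // Static credentials bypass both the shared files and the env-var lookup.
    Credentials: credentials.NewStaticCredentials(
        "AKfoobarT2IJEAU4",            // access key ID (placeholder)
        "oa6oT0Xxt4foobarbambazdWFCb", // secret access key (placeholder)
        "",                            // session token; empty for long-lived keys
    ),
})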

S3 on EC2 with IAM: Error NoCredentialProviders: no valid providers in chain. Deprecated

My application uses S3 and runs on EC2. An IAM role is configured on the instance, so auth happens keyless (without an access key and secret key).
I'm able to upload and download files using the AWS CLI. However, when I try to perform a download operation using aws-sdk-go, I get the error below:
AccessDenied: Access Denied
status code: 403, request id: F945BDB5410E1A00, host id: m74jJ8z/AEzdkaJkWKdIqPEwPIYPZfWnLLfa5UpEwHwaBcXOuXTPY1aw/u/5HGralKg+ewAWEJA=
I followed the official guide from https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/ec2rolecreds/ and from this issue https://github.com/aws/aws-sdk-go/issues/430 but got the error above.
Below is my code:
s3UploadPath = config.GetString("upload assets to s3.bucket")

s3Config := aws.NewConfig()
s3Config.CredentialsChainVerboseErrors = aws.Bool(true)

session, err := session.NewSession(s3Config)
if err != nil {
    Logger.Fatal("Error initializing s3 uploader. " + err.Error())
    os.Exit(0)
}

// the upload code
uploader = s3manager.NewUploader(session)
res, err := uploader.Upload(&s3manager.UploadInput{
    Bucket: aws.String(s3UploadPath),
    Key:    aws.String(filename),
    Body:   f,
})
if err != nil {
    log.Fatal("error on upload. " + err.Error())
}
// then continue with the download code
Attached is a screenshot showing that the download and upload operations succeed through the AWS CLI.
Am I doing it wrong?
You don't need to specify credentials when using an IAM role on an EC2 instance.
I see you are getting Access Denied, which means your Go program is able to pick up the EC2 profile creds, but it's probably getting this error due to a lack of permissions.
Reading your code, it seems you want to write an object to S3. Can you make sure you have granted s3:Get*, s3:List*, s3:PutObject, and s3:PutObjectAcl to your IAM role, and that there is no explicit Deny in the S3 bucket policy?
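A quick way to confirm the first half of that (that the SDK really resolved the instance role) is to ask the credential chain what it picked up. With a session like the one built in the question (called sess here), ProviderName reports which provider supplied the keys:

creds, err := sess.Config.Credentials.Get()
if err != nil {
    log.Fatal("no credentials resolved: " + err.Error())
}
// Prints e.g. "EC2RoleProvider" when the instance profile supplied the keys.
fmt.Println("provider in use:", creds.ProviderName)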
I managed to solve this error by doing two things.
The first is using stscreds.NewCredentials(sess, roleArn) as the credentials during session creation.
s3Config := aws.NewConfig()
s3Config.CredentialsChainVerboseErrors = aws.Bool(true)
s3Config.WithLogLevel(aws.LogDebugWithHTTPBody)
s3Config.Region = aws.String(region)
s3Config.WithHTTPClient(&http.Client{
    Transport: &http.Transport{
        Proxy: http.ProxyFromEnvironment,
    },
    Timeout: 10 * time.Second,
})

sess, err := session.NewSession(s3Config)
if err != nil {
    log.Fatal("Error initializing session. " + err.Error())
}

sess.Config.Credentials = stscreds.NewCredentials(sess, roleArn)
_, err = sess.Config.Credentials.Get()
if err != nil {
    log.Fatal("Error getting role. " + err.Error())
}
And the second thing is defining the NO_PROXY environment variable with the value 169.254.169.254. That particular IP is the link-local address AWS uses for the EC2 instance metadata service, and since my application uses a proxy to communicate with the S3 server, I need to exclude that IP from proxying.
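In a wrapper script like the one described in the earlier answer, that is a single line before launching the binary:

export NO_PROXY=169.254.169.254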

AWS presigned url acl public read invalid signature

I have a private bucket, and I want to create a pre-signed URL that allows a user to upload a file within the time limit and sets the ACL to public read-only.
When creating a PutObjectRequest like the one below, it works fine and I can PUT the file, no problem. When I add ACL: aws.String("public-read"), I get a 'signature doesn't match' error and the PUT fails. Here is a sample of the URL the Go SDK is generating:
https://<MY-BUCKET>.s3.eu-west-2.amazonaws.com/<MY-KEY>?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=<AWS_ACCESS_KEY>/20170505/eu-west-2/s3/aws4_request&X-Amz-Date=20170505T793528Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host;x-amz-acl&X-Amz-Signature=2584062aaa76545665bfed7204fcf0dfe233a45016f698e7e8a11c34a5a7921e
I have tried with the root AWS user and a normal user.
I have tried with a bucket policy and without, and with a bucket policy plus an IAM policy of FULL S3 access and without; basically all combinations. Any time I add the ACL field, the signature error appears.
I am not sure if it's related to the Go SDK or to the AWS service. Can someone advise on what I should do?
svc := s3.New(session.New(&aws.Config{Region: aws.String("eu-west-2")}))
req, _ := svc.PutObjectRequest(&s3.PutObjectInput{
    ACL:    aws.String("public-read"),
    Bucket: aws.String("MY BUCKET NAME"),
    Key:    aws.String("MY KEY"),
})
str, err := req.Presign(15 * time.Minute)
It was an error on the AWS service end; the URL was not being signed correctly.
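One thing worth checking with URLs like this: the sample above lists x-amz-acl in X-Amz-SignedHeaders, which means the client that consumes the URL must send that exact header with the PUT, or S3 computes a different signature. A sketch of a matching upload using net/http (presignedURL and fileContents are placeholders):

req, err := http.NewRequest(http.MethodPut, presignedURL, bytes.NewReader(fileContents))
if err != nil {
    log.Fatal(err)
}
// Must match the ACL the URL was signed with, or the signature check fails.
req.Header.Set("x-amz-acl", "public-read")
resp, err := http.DefaultClient.Do(req)
if err != nil {
    log.Fatal(err)
}
defer resp.Body.Close()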

S3 objects not expiring using the Golang SDK

Using the AWS Golang SDK, I'm attempting to set an expiration date for some of the objects that I'm uploading. I'm pretty sure the header is being set correctly; however, when logging into S3 and viewing the properties of the new object, it doesn't appear to have an expiration date.
Below is a snippet of how I'm uploading objects
exp := time.Now()
exp = exp.Add(time.Hour * 24)
svc := s3.New(session.New(config))
_, err = svc.PutObject(&s3.PutObjectInput{
    Bucket:        aws.String("MyBucketName"),
    Key:           aws.String("201700689.zip"),
    Body:          fileBytes,
    ContentLength: aws.Int64(size),
    ContentType:   aws.String(fileType),
    Expires:       &exp,
})
And here is what I see when logging into the site
Any idea what is going on here? Thanks
Well, Expires is just the wrong field. Its doc comment reads:
// The date and time at which the object is no longer cacheable.
What you want is Object Expiration, which is set as a bucket rule, not per object.
Basically, you add a Lifecycle rule (in the bucket properties). Each rule has the following attributes:
Prefix – Initial part of the key name, (e.g. logs/), or the entire key name. Any object in the bucket with a matching prefix will be subject to this expiration rule. An empty prefix will match all objects in the bucket.
Status – Either Enabled or Disabled. You can choose to enable rules from time to time to perform deletion or garbage collection on your buckets, and leave the rules disabled at other times.
Expiration – Specifies an expiration period for the objects that are subject to the rule, as a number of days from the object’s creation date.
Id – Optional, gives a name to the rule.
This rule will then be evaluated daily and any expired objects will be removed.
See https://aws.amazon.com/blogs/aws/amazon-s3-object-expiration/ for a more in-depth explanation.
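If you'd rather create the rule from Go than click through the console, a sketch using the v1 SDK's PutBucketLifecycleConfiguration could look like this (it assumes an existing session sess; the bucket name, rule ID, and prefix are hypothetical):

svc := s3.New(sess)
_, err := svc.PutBucketLifecycleConfiguration(&s3.PutBucketLifecycleConfigurationInput{
    Bucket: aws.String("MyBucketName"),
    LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
        Rules: []*s3.LifecycleRule{
            {
                ID:     aws.String("expire-tmp-after-one-day"),
                Status: aws.String("Enabled"),
                // An empty prefix would match every object in the bucket.
                Filter: &s3.LifecycleRuleFilter{Prefix: aws.String("tmp/")},
                // Days counts from each object's creation date; rules are evaluated daily.
                Expiration: &s3.LifecycleExpiration{Days: aws.Int64(1)},
            },
        },
    },
})
if err != nil {
    log.Fatal("failed to set lifecycle rule: " + err.Error())
}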
One way to expire objects in S3 using the Go SDK is to tag your upload with something like
Tagging: aws.String("temp=true")
Then go to the S3 bucket management console and set a lifecycle rule targeting that specific tag.
You can configure the time frame for expiring the object while creating the rule in Lifecycle.
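Slotted into the question's upload call, that looks roughly like this (same placeholder names as the question):

_, err = svc.PutObject(&s3.PutObjectInput{
    Bucket: aws.String("MyBucketName"),
    Key:    aws.String("201700689.zip"),
    Body:   fileBytes,
    // URL-encoded key=value pairs; a tag-filtered lifecycle rule matches on this.
    Tagging: aws.String("temp=true"),
})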
You need to set s3.PresignOptions.Expires, like this:
import (
    "context"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func PreSignPutObject(cfg aws.Config, bucket, objectKey string) (string, error) {
    client := s3.NewFromConfig(cfg)
    psClient := s3.NewPresignClient(client)
    input := &s3.PutObjectInput{
        Bucket: &bucket,
        Key:    &objectKey,
    }
    resp, err := psClient.PresignPutObject(context.Background(), input, func(options *s3.PresignOptions) {
        options.Expires = 3600 * time.Second
    })
    if err != nil {
        return "", err
    }
    return resp.URL, nil
}