Using the AWS Go SDK, I'm attempting to set an expiration date for some of the objects that I'm uploading. I'm pretty sure the header is being set correctly; however, when logging into S3 and viewing the properties of the new object, it doesn't appear to have an expiration date.
Below is a snippet of how I'm uploading objects:
exp := time.Now().Add(24 * time.Hour)

svc := s3.New(session.New(config))
_, err = svc.PutObject(&s3.PutObjectInput{
    Bucket:        aws.String("MyBucketName"),
    Key:           aws.String("201700689.zip"),
    Body:          fileBytes,
    ContentLength: aws.Int64(size),
    ContentType:   aws.String(fileType),
    Expires:       &exp,
})
And here is what I see when logging into the site
Any idea what is going on here? Thanks
Well, Expires is just the wrong field:
// The date and time at which the object is no longer cacheable.
What you want is Object Expiration, which can be set as a bucket rule, not per object.
Basically, you add a Lifecycle rule (on the bucket properties) specifying:
Each rule has the following attributes:
Prefix – Initial part of the key name (e.g. logs/), or the entire key name. Any object in the bucket with a matching prefix will be subject to this expiration rule. An empty prefix will match all objects in the bucket.
Status – Either Enabled or Disabled. You can choose to enable rules from time to time to perform deletion or garbage collection on your buckets, and leave the rules disabled at other times.
Expiration – Specifies an expiration period for the objects that are subject to the rule, as a number of days from the object’s creation date.
Id – Optional, gives a name to the rule.
This rule will then be evaluated daily and any expired objects will be removed.
See https://aws.amazon.com/blogs/aws/amazon-s3-object-expiration/ for a more in-depth explanation.
One way to expire objects in S3 using the Go SDK is to tag your upload with something like
Tagging: aws.String("temp=true")
Then go to the S3 Management Console and set a Lifecycle rule targeting that specific tag.
You can configure the expiration time frame while creating the Lifecycle rule.
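For reference, the Tagging field is a URL query-encoded string ("key1=value1&key2=value2"), so multiple tags can be built safely with the standard library (the temp tag matches the example above; the ttl tag is illustrative):

```go
package main

import (
	"fmt"
	"net/url"
)

// encodeTags builds the URL query-encoded string expected by the
// Tagging field of PutObjectInput.
func encodeTags(tags map[string]string) string {
	v := url.Values{}
	for k, val := range tags {
		v.Set(k, val)
	}
	return v.Encode() // keys are sorted and values are escaped
}

func main() {
	fmt.Println(encodeTags(map[string]string{"temp": "true", "ttl": "24h"}))
	// temp=true&ttl=24h
}
```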
You need to set s3.PresignOptions.Expires, like this:
func PreSignPutObject(cfg aws.Config, bucket, objectKey string) (string, error) {
    client := s3.NewFromConfig(cfg)
    psClient := s3.NewPresignClient(client)
    input := &s3.PutObjectInput{
        Bucket: &bucket,
        Key:    &objectKey,
    }
    resp, err := psClient.PresignPutObject(context.Background(), input, func(options *s3.PresignOptions) {
        options.Expires = 3600 * time.Second
    })
    if err != nil {
        return "", err
    }
    return resp.URL, nil
}
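Note that PresignOptions.Expires controls how long the presigned URL itself remains valid, not the lifetime of the uploaded object. The requested lifetime is visible in the X-Amz-Expires query parameter of the generated URL, which you can sanity-check with the standard library (the sample URL below is made up; only the query parameter matters):

```go
package main

import (
	"fmt"
	"net/url"
	"strconv"
)

// presignTTL extracts the X-Amz-Expires parameter (in seconds) from a
// SigV4 presigned URL.
func presignTTL(presigned string) (int, error) {
	u, err := url.Parse(presigned)
	if err != nil {
		return 0, err
	}
	return strconv.Atoi(u.Query().Get("X-Amz-Expires"))
}

func main() {
	// Hypothetical presigned URL for illustration.
	sample := "https://example-bucket.s3.amazonaws.com/obj.zip?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Expires=3600"
	secs, err := presignTTL(sample)
	fmt.Println(secs, err) // 3600 <nil>
}
```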
I am trying to list the objects of one bucket using the official example:
i := 0
err = svc.ListObjectsPages(&s3.ListObjectsInput{
    Bucket: &bucketName,
}, func(p *s3.ListObjectsOutput, last bool) (shouldContinue bool) {
    fmt.Println("Page,", i)
    i++
    for _, obj := range p.Contents {
        fmt.Println("Object:", *obj.Key)
    }
    return true
})
However I see that the s3.Object type does not have any ACL information associated with it.
How can I get the s3.Object's ACL information?
Refer to this example to learn how to get an Amazon S3 object's ACL information using the AWS SDK for Go V2.
https://github.com/awsdocs/aws-doc-sdk-examples/tree/main/gov2/s3/GetObjectAcl
I'm looking to integrate an S3 bucket with an API I'm developing, and I'm running into this error wherever I go:
SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403
I have done the following
Installed SDK & AWS CLI, and AWS configured
Double(triple) checked spelling of key & secret key & bucket permissions
Attempted with credentials document, .env, and even hard coding the values directly
Tested with the AWS CLI (THIS WORKS), so I believe I can rule out permissions and keys as the cause.
I'm testing by trying to list buckets; here is the code, taken directly from the AWS documentation:
sess := session.Must(session.NewSessionWithOptions(session.Options{ // <--- DEBUGGER SET HERE
    SharedConfigState: session.SharedConfigEnable,
}))

svc := s3.New(sess)

result, err := svc.ListBuckets(nil)
if err != nil {
    exitErrorf("Unable to list buckets, %v", err)
}

for _, b := range result.Buckets {
    fmt.Printf("* %s created on %s\n", aws.StringValue(b.Name), aws.TimeValue(b.CreationDate))
}
Using the debugger, I can see the session's config as the program runs; the issue is potentially here:

config
  -> credentials
    -> creds
      -> v
        -> Access Key = ""
        -> Secret Access Key = ""
        -> Token = ""
    -> provider
      -> value
        -> Access Key With Value
        -> Secret Access Key With Value
        -> Token With Value
I personally cannot find any documentation regarding "creds" / "v", and I don't know if this is causing the issue. As I mentioned, I can use the AWS CLI to upload into the bucket, yet even when I hard-code my access key etc. into the Go SDK I receive this error.
Thank you for any thoughts, greatly appreciated.
I just compiled your code and it's executing OK ... one of the many ways to supply credentials to your binary is to populate these env vars:
export AWS_ACCESS_KEY_ID=AKfoobarT2IJEAU4
export AWS_SECRET_ACCESS_KEY=oa6oT0Xxt4foobarbambazdWFCb
export AWS_REGION=us-west-2
That is all you need when using the env var approach (your values are available in the AWS console browser).
The big picture is to create a wrapper shell script (bash) which contains the above three lines to populate the env vars supplying credentials, and then in the same shell script execute the Go binary (typically you compile the Go binary in some preliminary process). In my case I store the values of my three env vars in encrypted files which the shell script decrypts just before it calls the above export commands.
Sometimes it's helpful to drop back and just use the equivalent aws command line commands to get yourself into the ballpark ... from a terminal run
aws s3 ls s3://cairo-mombasa-zaire --region us-west-2
which can also use those same env vars shown above
For completeness, here is your code with boilerplate added ... this runs OK and lists out the buckets:
package main

import (
    "fmt"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
    // "github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func exitErrorf(msg string, args ...interface{}) {
    fmt.Fprintf(os.Stderr, msg+"\n", args...)
    os.Exit(1)
}

func main() {
    region_env_var := "AWS_REGION"
    curr_region := os.Getenv(region_env_var)
    if curr_region == "" {
        exitErrorf("ERROR - failed to get region from env var %v", region_env_var)
    }
    fmt.Println("here is region ", curr_region)

    // Load session from shared config
    sess := session.Must(session.NewSessionWithOptions(session.Options{
        SharedConfigState: session.SharedConfigEnable,
    }))

    svc := s3.New(sess)

    result, err := svc.ListBuckets(nil)
    if err != nil {
        exitErrorf("Unable to list buckets, %v", err)
    }

    for _, b := range result.Buckets {
        fmt.Printf("* %s created on %s\n", aws.StringValue(b.Name), aws.TimeValue(b.CreationDate))
    }
}
numBytes, err := downloader.Download(tempFile,
    &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(fileName),
    },
)
In my case the bucket value was wrong: it was missing a literal "/" at the end. Adding that fixed my problem.
The error I got:
err: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403
If anyone else happens to have this problem: the issue was related to environment variables, much like Scott suggested above; however, it was due to the lack of
export AWS_SDK_LOAD_CONFIG="true"
If this environment variable is not present, the Go SDK will not look for a credentials file. Along with this, I set environment variables for both of my keys, which allowed the connection to succeed.
To recap:
If you're attempting to use the shared credentials folder, you must use the above-noted environment variable to enable it.
If you're using environment variables, you shouldn't be affected by this problem.
Earlier I was using "launchpad.net/goamz/s3", but for my new project I am using "github.com/goamz/goamz/s3".
There is a change in the Put method of Bucket: it now has one more param, "options".
region := aws.USEast2
connection := s3.New(AWSAuth, region)

bucket := connection.Bucket("XXXXX") // change this to your bucket name
path := "mypath" // this is the target file and location in S3

// Save image to S3
err = bucket.Put(path, user_content, content_type, s3.ACL("public-read"), options)
Above is my code. Can you help me with what is expected in options and how I can get the value for it?
Options is defined in s3.go:
type Options struct {
    SSE              bool
    Meta             map[string][]string
    ContentEncoding  string
    CacheControl     string
    RedirectLocation string
    ContentMD5       string
    // What else?
    // Content-Disposition string
    //// The following become headers so they are []strings rather than strings... I think
    // x-amz-storage-class []string
}
These options are well documented in the official S3 API docs.
In the simplest case, you can just pass an empty struct, e.g.:

bucket.Put(path, user_content, content_type, s3.ACL("public-read"), s3.Options{})
I'm trying to add expiry days to a file and bucket, but I have this problem:
sudo s3cmd expire s3://<my-bucket>/ --expiry-days=3 expiry-prefix=backup
ERROR: Error parsing xml: syntax error: line 1, column 0
ERROR: not found
ERROR: S3 error: 404 (Not Found)
and this
sudo s3cmd expire s3://<my-bucket>/<folder>/<file> --expiry-day=3
ERROR: Parameter problem: Expecting S3 URI with just the bucket name set instead of 's3:////'
How to add expire days in DO Spaces for a folder or file by using s3cmd?
Consider configuring the bucket's Lifecycle rules.
Lifecycle rules can be used to perform different actions on objects in a Space over the course of their "life." For example, a Space may be configured so that objects in it expire and are automatically deleted after a certain length of time.
In order to configure new lifecycle rules, send a PUT request to ${BUCKET}.${REGION}.digitaloceanspaces.com/?lifecycle
The body of the request should include an XML element named LifecycleConfiguration containing a list of Rule objects.
https://developers.digitalocean.com/documentation/spaces/#get-bucket-lifecycle
The expire option is not implemented on DigitalOcean Spaces.
Thanks to Vitalii's answer for pointing to the API.
However, the API isn't really easy to use, so I've done it via a NodeJS script.
First of all, generate your API keys here: https://cloud.digitalocean.com/account/api/tokens
And put them in the ~/.aws/credentials file (according to the docs):
[default]
aws_access_key_id=your_access_key
aws_secret_access_key=your_secret_key
Now create an empty NodeJS project, run npm install aws-sdk, and use the following script:
const aws = require('aws-sdk');

// Replace with your region endpoint, nyc1.digitaloceanspaces.com for example
const spacesEndpoint = new aws.Endpoint('fra1.digitaloceanspaces.com');
// Replace with your bucket name
const bucketName = 'myHeckingBucket';

const s3 = new aws.S3({endpoint: spacesEndpoint});

s3.putBucketLifecycleConfiguration({
    Bucket: bucketName,
    LifecycleConfiguration: {
        Rules: [{
            ID: "autodelete_rule",
            Expiration: {Days: 30},
            Status: "Enabled",
            Prefix: '/', // Unlike AWS, in DO this parameter is required
        }]
    }
}, function (error, data) {
    if (error)
        console.error(error);
    else
        console.log("Successfully modified bucket lifecycle!");
});
I have a private bucket. I want to create a pre-signed URL that allows a user to upload a file within the time limit, with the ACL set to public-read.
When creating a PutObjectRequest like below, it works fine and I can PUT the file no problem. When I add ACL: aws.String("public-read"), I get the error 'signature doesn't match' and the PUT fails. Here is a sample of the URL the Go SDK is generating:
https://<MY-BUCKET>.s3.eu-west-2.amazonaws.com/<MY-KEY>?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=<AWS_ACCESS_KEY>/20170505/eu-west-2/s3/aws4_request&X-Amz-Date=20170505T793528Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host;x-amz-acl&X-Amz-Signature=2584062aaa76545665bfed7204fcf0dfe233a45016f698e7e8a11c34a5a7921e
I have tried with the root AWS user and a normal user.
I have tried with a bucket policy and without, and with a bucket policy plus an IAM policy of full S3 access and without; basically all combinations. Any time I add the ACL field, the signature error appears.
I am not sure if it's related to the Go SDK or to the AWS service. Can someone advise on what I should do?
svc := s3.New(session.New(&aws.Config{Region: aws.String("eu-west-2")}))

req, _ := svc.PutObjectRequest(&s3.PutObjectInput{
    ACL:    aws.String("public-read"),
    Bucket: aws.String("MY BUCKET NAME"),
    Key:    aws.String("MY KEY"),
})

str, err := req.Presign(15 * time.Minute)
It was an error on the AWS service end; the URL was not being signed.