aws s3 put method params by goamz

Earlier I was using "launchpad.net/goamz/s3",
but for my new project I am using "github.com/goamz/goamz/s3".
There is a change in the Put method of Bucket: it now takes one more parameter, "options".
region := aws.USEast2
connection := s3.New(AWSAuth, region)
bucket := connection.Bucket("XXXXX") // change this to your bucket name
path := "mypath" // this is the target file and location in S3
// Save the image to S3
err = bucket.Put(path, user_content, content_type, s3.ACL("public-read"), options)
Above is my code. Can you help me understand what is expected in options and how I can construct that value?

Options is defined in s3.go:
type Options struct {
    SSE              bool
    Meta             map[string][]string
    ContentEncoding  string
    CacheControl     string
    RedirectLocation string
    ContentMD5       string
    // What else?
    // Content-Disposition string
    // The following become headers so they are []strings rather than strings... I think
    // x-amz-storage-class []string
}
These options are well documented in the official S3 API docs.
In the simplest case, you can just pass an empty Options struct, e.g.:
bucket.Put(path, user_content, content_type, s3.ACL("public-read"), s3.Options{})
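If you do want to set some of the options, a minimal sketch based on the struct definition above could look like this (the field values are illustrative only, not taken from the original question):
opts := s3.Options{
    SSE:             true, // enable server-side encryption
    ContentEncoding: "gzip",
    CacheControl:    "max-age=86400",
    Meta:            map[string][]string{"uploaded-by": {"my-service"}},
}
err = bucket.Put(path, user_content, content_type, s3.ACL("public-read"), opts)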

Related

Get S3 bucket size from aws GO SDK

I am using an Amazon S3 bucket to upload files (using the Go SDK). I have a requirement to charge clients when their directory size exceeds 2 GB.
The directory hierarchy inside the bucket is like: /BUCKET/uploads/CLIENTID/yyyy/mm/dd
I have searched a lot about this but could not find anything.
How can I get the directory size inside a bucket using the SDK?
First of all, /uploads/CLIENTID/yyyy/mm/dd is not a directory in an S3 bucket, but a prefix. The S3 management UI in the AWS Console may trick you into thinking a bucket has subdirectories, just like your computer's file system, but they are prefixes.
Your question really is: how can I get the total size of all objects inside a bucket, with a given prefix?
Hope this code snippet clears your doubts.
package main

import (
    "context"
    "fmt"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

// S3ObjectsSize iterates the objects under a given S3 bucket and prefix and
// sums up their total size in bytes.
// Note: a single ListObjectsV2 call returns at most 1,000 keys; for larger
// prefixes you would need to paginate using the continuation token.
// Use: size, err := S3ObjectsSize("example-bucket-name", "a/b/c", s3client)
func S3ObjectsSize(bucket string, prefix string, s3client S3Client) (int64, error) {
    output, err := s3client.ListObjectsV2(context.TODO(), &s3.ListObjectsV2Input{
        Bucket: aws.String(bucket),
        Prefix: aws.String(prefix),
    })
    if err != nil {
        return -1, fmt.Errorf("cannot ListObjectsV2 in %s/%s: %s", bucket, prefix, err.Error())
    }
    var size int64
    for _, object := range output.Contents {
        size += object.Size
    }
    return size, nil
}

// S3Client is a stub of s3.Client for dependency injection.
type S3Client interface {
    ListObjectsV2(ctx context.Context, params *s3.ListObjectsV2Input, optFns ...func(*s3.Options)) (*s3.ListObjectsV2Output, error)
}
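For completeness, here is a rough usage sketch showing how the function above could be wired to a real client; the bucket name and prefix are placeholders, and it assumes credentials are available via the default chain and the aws-sdk-go-v2 config package:
func main() {
    // import "github.com/aws/aws-sdk-go-v2/config"
    cfg, err := config.LoadDefaultConfig(context.TODO())
    if err != nil {
        panic(err)
    }
    client := s3.NewFromConfig(cfg) // *s3.Client satisfies the S3Client interface above
    size, err := S3ObjectsSize("example-bucket-name", "uploads/CLIENTID/2017/01/01", client)
    if err != nil {
        panic(err)
    }
    fmt.Printf("total size under prefix: %d bytes\n", size)
}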

ListObjectsV2 - Get only folders in an S3 bucket

I am using AWS S3 JS SDK. I have folders within folders in my S3 bucket and I would like to list only folders at a certain level.
This is the structure:
bucket/folder1/folder2/folder3a/file1
bucket/folder1/folder2/folder3a/file2
bucket/folder1/folder2/folder3a/file3
bucket/folder1/folder2/folder3a/...
bucket/folder1/folder2/folder3b/file1
bucket/folder1/folder2/folder3b/file2
bucket/folder1/folder2/folder3b/file3
bucket/folder1/folder2/folder3b/...
bucket/folder1/folder2/folder3c/file1
bucket/folder1/folder2/folder3c/file2
bucket/folder1/folder2/folder3c/file3
bucket/folder1/folder2/folder3c/...
As you can see, at the level of folder 3 I have multiple folders, and each of those folders contains multiple items. I don't care about the items. I just want to list the folder names at level 3. Is there a good way to do this?
The only way I found is to use ListObjectsV2. But this also returns the files, which inflates the result set, so I would need to do manual filtering afterwards. Is there a way to get just the folder names at the API level?
This article answers all my questions. https://realguess.net/2014/05/24/amazon-s3-delimiter-and-prefix/
The solution can be achieved using a combination of prefix and delimiter. In my example, the parameters should contain the following:
const params = {
  Bucket: 'bucket',
  Prefix: 'folder1/folder2/',
  Delimiter: '/',
};
Be sure to not forget the slash at the end of the Prefix parameter.
The list of folders will be in the CommonPrefixes attribute of the response object.
To give you a real life example:
...
const params = {
  Bucket: bucketName,
  Prefix: prefix + '/',
  MaxKeys: 25,
  Delimiter: '/',
};
const command = new ListObjectsV2Command(params);
const results = await s3client.send(command);
const foldersList = results.CommonPrefixes;
...

How to create Presigned put url and use environment variable to set Bucket and Key

I'm using the following code to create a presigned put url:
svc := s3.New(nil)
req, _ := svc.PutObjectRequest(&s3.PutObjectInput{
    Bucket: aws.String("myBucket"),
    Key:    aws.String("myKey"),
})
str, err := req.Presign(15 * time.Minute)
log.Println("The URL is:", str, " err:", err)
But I would like to get the configuration from an environment variable:
CONFIGURATIONS={ "Bucket": "myBucket", "Key": "myKey" }
I have just two weeks of experience with Go, and my background is mainly in Node.js, so I'm sorry if this question is very basic.
To better illustrate, I'm trying to do this... but in Go:
const CONFIGURATIONS = JSON.parse(process.env.CONFIGURATIONS)
const S3 = new AWS.S3()
S3.generatePresignedUrl('putObject', CONFIGURATIONS, callback...)
Thank you very much!
If I understand your question correctly, you need to retrieve the bucket and key name from environment variables. This is pretty easy to do in Go: the os package has a Getenv function for that.
Assuming your env variables are called AWS_BUCKET and AWS_KEY, this is the way to get the values:
bucketName := os.Getenv("AWS_BUCKET")
key := os.Getenv("AWS_KEY")
You can check the os package docs for details.
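If you would rather keep the single JSON-valued CONFIGURATIONS variable from your Node.js snippet, a rough sketch could look like the following (the struct and variable names are illustrative, and it reuses the same v1 SDK calls as your question):
type presignConfig struct {
    Bucket string `json:"Bucket"`
    Key    string `json:"Key"`
}

var conf presignConfig
if err := json.Unmarshal([]byte(os.Getenv("CONFIGURATIONS")), &conf); err != nil {
    log.Fatalf("cannot parse CONFIGURATIONS: %v", err)
}

svc := s3.New(session.Must(session.NewSession())) // "github.com/aws/aws-sdk-go/aws/session"
req, _ := svc.PutObjectRequest(&s3.PutObjectInput{
    Bucket: aws.String(conf.Bucket),
    Key:    aws.String(conf.Key),
})
url, err := req.Presign(15 * time.Minute)
log.Println("The URL is:", url, " err:", err)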

Is it possible to provide credentials for the target Google Cloud Bucket when copying to it in the Java API?

In the below code, I take in an input bucket and blob and an output bucket and blob. The code establishes service account credentials for the blob in the input bucket via the setCredentials method. However, there is no mirrored step in the copyTo method for the output bucket.
def copy(inBucketName: String,
         inBlobName: String,
         outBucketName: String,
         outBlobName: String) = {
  val storage = StorageOptions.newBuilder
    .setCredentials(ServiceAccountCredentials.fromStream(new FileInputStream("key.json")))
    .build
    .getService
  val blobId = BlobId.of(inBucketName, inBlobName)
  val blob = storage.get(blobId)
  if (blob != null) {
    val copyWriter = blob.copyTo(outBucketName, outBlobName)
    val copiedBlob = copyWriter.getResult()
  }
}
I'm concerned that this will cause some authentication issues in the future if each bucket has different service credentials. Looking at the blob.copyTo() API, I can also pass in a BlobSourceOption object: public CopyWriter copyTo(String targetBucket, String targetBlob, BlobSourceOption... options). However, I'm failing to see anywhere in BlobSourceOption where I could assign credentials. Is this credential assignment for the target bucket necessary? And if so, is there a standard way to set it?
This worked with the below snippet, where credentials is the path to the service-account credentials JSON file.
val storage = StorageOptions.newBuilder
  .setCredentials(
    ServiceAccountCredentials.fromStream(
      new FileInputStream(credentials)))
  .build
  .getService

S3 objects not expiring using the Golang SDK

Using the AWS Golang SDK, I'm attempting to set an expiration date for some of the objects that I'm uploading. I'm pretty sure that the header is being set correctly; however, when logging into S3 and viewing the properties of the new object, it doesn't appear to have an expiration date.
Below is a snippet of how I'm uploading objects
exp := time.Now()
exp = exp.Add(time.Hour * 24)

svc := s3.New(session.New(config))
_, err = svc.PutObject(&s3.PutObjectInput{
    Bucket:        aws.String("MyBucketName"),
    Key:           aws.String("201700689.zip"),
    Body:          fileBytes,
    ContentLength: aws.Int64(size),
    ContentType:   aws.String(fileType),
    Expires:       &exp,
})
Any idea what is going on here? Thanks.
Well, Expires is just the wrong field:
// The date and time at which the object is no longer cacheable.
What you want is Object Expiration, which is configured as a bucket lifecycle rule, not per object.
Basically, you add a Lifecycle rule (under the bucket properties). Each rule has the following attributes:
Prefix – Initial part of the key name, (e.g. logs/), or the entire key name. Any object in the bucket with a matching prefix will be subject to this expiration rule. An empty prefix will match all objects in the bucket.
Status – Either Enabled or Disabled. You can choose to enable rules from time to time to perform deletion or garbage collection on your buckets, and leave the rules disabled at other times.
Expiration – Specifies an expiration period for the objects that are subject to the rule, as a number of days from the object’s creation date.
Id – Optional, gives a name to the rule.
This rule will then be evaluated daily and any expired objects will be removed.
See https://aws.amazon.com/blogs/aws/amazon-s3-object-expiration/ for a more in-depth explanation.
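If you would rather create the lifecycle rule from code instead of the console, a rough sketch using the v1 Go SDK could look like this (the bucket name, prefix, and rule ID are placeholders, not taken from the question):
svc := s3.New(session.Must(session.NewSession()))
_, err := svc.PutBucketLifecycleConfiguration(&s3.PutBucketLifecycleConfigurationInput{
    Bucket: aws.String("MyBucketName"),
    LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
        Rules: []*s3.LifecycleRule{
            {
                ID:         aws.String("expire-uploads-after-one-day"),
                Status:     aws.String("Enabled"),
                Filter:     &s3.LifecycleRuleFilter{Prefix: aws.String("uploads/")},
                Expiration: &s3.LifecycleExpiration{Days: aws.Int64(1)},
            },
        },
    },
})
Objects under the matching prefix are then removed by S3's daily expiration process, roughly the configured number of days after their creation date.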
One way to expire objects in S3 using the Go SDK is to tag your upload with something like
Tagging: aws.String("temp=true")
Then go to the S3 bucket management console and set a lifecycle rule targeting that specific tag.
You can configure the time frame for expiring the objects when creating the rule in the Lifecycle section.
If what you want is an expiring presigned URL (rather than the object itself expiring), you need to set s3.PresignOptions.Expires, like this:
func PreSignPutObject(cfg aws.Config, bucket, objectKey string) (string, error) {
    client := s3.NewFromConfig(cfg)
    psClient := s3.NewPresignClient(client)
    input := &s3.PutObjectInput{
        Bucket: &bucket,
        Key:    &objectKey,
    }
    resp, err := psClient.PresignPutObject(context.Background(), input, func(options *s3.PresignOptions) {
        options.Expires = 3600 * time.Second
    })
    if err != nil {
        return "", err
    }
    return resp.URL, nil
}
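A quick usage sketch, assuming the aws-sdk-go-v2 config package and placeholder bucket/key names:
// import "github.com/aws/aws-sdk-go-v2/config"
cfg, err := config.LoadDefaultConfig(context.Background())
if err != nil {
    log.Fatal(err)
}
url, err := PreSignPutObject(cfg, "example-bucket", "example-key")
if err != nil {
    log.Fatal(err)
}
fmt.Println("presigned PUT URL:", url)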