I'm trying to download objects from S3. The following is my code:
func listFile(bucket, prefix string) error {
    svc := s3.New(sess)
    params := &s3.ListObjectsInput{
        Bucket: aws.String(bucket), // Required
        Prefix: aws.String(prefix),
    }
    return svc.ListObjectsPages(params, func(p *s3.ListObjectsOutput, lastPage bool) bool {
        for _, o := range p.Contents {
            log.Println(*o.Key)
            download(bucket, *o.Key)
        }
        return true // keep paging through all results
    })
}
func download(bucket, key string) {
    logDir := conf.Cfg.Section("share").Key("LOG_DIR").MustString(".")
    tmpLogPath := filepath.Join(logDir, bucket, key)

    s3Svc := s3.New(sess)
    downloader := s3manager.NewDownloaderWithClient(s3Svc, func(d *s3manager.Downloader) {
        d.PartSize = 2 * 1024 * 1024 // 2MB per part
    })

    // The key may contain "/", so make sure the destination directory exists.
    if err := os.MkdirAll(filepath.Dir(tmpLogPath), 0755); err != nil {
        log.Fatal(err)
    }
    f, err := os.OpenFile(tmpLogPath, os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()

    if _, err = downloader.Download(f, &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    }); err != nil {
        log.Fatal(err)
    }
}
func main() {
    bucket := "mybucket"
    key := "myprefix"
    if err := listFile(bucket, key); err != nil {
        log.Fatal(err)
    }
}
I can get the object list in listFile(), but a 404 is returned when download() is called. Why?
I had the same problem with recent versions of the library. Sometimes the object key will be prefixed with a "./" that the SDK removes by default, making the download fail.
Try adding this to your aws.Config and see if it helps:
config := aws.Config{
    ...
    DisableRestProtocolURICleaning: aws.Bool(true),
}
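For context, here's a minimal sketch of wiring that flag into a session (this assumes the v1 SDK, github.com/aws/aws-sdk-go; the region is a placeholder):

sess, err := session.NewSession(&aws.Config{
    Region:                         aws.String("us-east-1"), // placeholder region
    DisableRestProtocolURICleaning: aws.Bool(true),          // keep "./"-prefixed keys intact
})
if err != nil {
    log.Fatal(err)
}
svc := s3.New(sess) // this client now requests keys exactly as listed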
I submitted an issue.
I am working with the AWS S3 SDK in GoLang, playing with uploads and downloads to various buckets. I am wondering if there is a simpler way to upload structs or objects directly to a bucket.
I have a struct representing an event:
type Event struct {
    ID        string
    ProcessID string
    TxnID     string
    Inputs    map[string]interface{}
}
I would like to upload it into the S3 bucket, but the code I found in the documentation only works for uploading strings.
func Save(client S3Client, T interface{}, key string) bool {
    svc := client.S3clientObject
    input := &s3.PutObjectInput{
        Body:   aws.ReadSeekCloser(strings.NewReader("testing this one")),
        Bucket: aws.String(GetS3Bucket()),
        Key:    aws.String(GetObjectKey(T, key)),
        Metadata: map[string]*string{
            "metadata1": aws.String("value1"),
            "metadata2": aws.String("value2"),
        },
    }
    _, err := svc.PutObject(input)
    return err == nil
}
This is successful in uploading a basic file to the S3 bucket that, when opened, simply reads "testing this one". Is there a way to upload an actual object to the bucket rather than just a string value?
Any help is appreciated as I am new to Go and S3.
Edit:
This is the code I'm using for the Get function:
func GetIt(client S3Client, T interface{}, key string) interface{} {
    svc := client.S3clientObject
    s3Key := GetObjectKey(T, key)
    resp, err := svc.GetObject(&s3.GetObjectInput{
        Bucket: aws.String(GetS3Bucket()),
        Key:    aws.String(s3Key),
    })
    if err != nil {
        fmt.Println(err)
        return err
    }
    result := json.NewDecoder(resp.Body).Decode(&T)
    fmt.Println(result)
    return json.NewDecoder(resp.Body).Decode(&T)
}

func main() {
    client := b.CreateS3Client()
    event := b.CreateEvent()
    GetIt(client, event, key)
}
Encode the value as bytes and upload the bytes. Here's how to encode the value as JSON bytes:
func Save(client S3Client, value interface{}, key string) error {
    p, err := json.Marshal(value)
    if err != nil {
        return err
    }
    input := &s3.PutObjectInput{
        Body: aws.ReadSeekCloser(bytes.NewReader(p)),
        …
    }
    …
}
Call Save with the value you want to upload:
value := &Event{ID: "an id", …}
err := Save(…, value, …)
if err != nil {
    // handle error
}
There are many possible encodings, including gob, XML, JSON, msgpack, and so on. The best encoding format will depend on your application's requirements.
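For example, here's a sketch of the same Save using encoding/gob instead of JSON (it reuses the S3Client and GetS3Bucket helpers from the question, which are assumptions about your codebase, not SDK APIs):

func SaveGob(client S3Client, value interface{}, key string) error {
    // Note: gob needs concrete types; interface-typed values (such as the
    // entries of Inputs map[string]interface{}) must be registered up front
    // with gob.Register.
    var buf bytes.Buffer
    if err := gob.NewEncoder(&buf).Encode(value); err != nil {
        return err
    }
    input := &s3.PutObjectInput{
        Body:   aws.ReadSeekCloser(bytes.NewReader(buf.Bytes())),
        Bucket: aws.String(GetS3Bucket()),
        Key:    aws.String(key),
    }
    _, err := client.S3clientObject.PutObject(input)
    return err
}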
Reverse the process when getting an object:
func GetIt(client S3Client, T interface{}, key string) error {
    svc := client.S3clientObject
    resp, err := svc.GetObject(&s3.GetObjectInput{
        Bucket: aws.String(GetS3Bucket()),
        Key:    aws.String(key),
    })
    if err != nil {
        return err
    }
    defer resp.Body.Close() // don't leak the response body
    return json.NewDecoder(resp.Body).Decode(T)
}
Call GetIt with a pointer to the destination value:
var value model.Event
err := GetIt(client, &value, key)
if err != nil {
    // handle error
}
fmt.Println(value) // prints the decoded value.
The example cited here shows that S3 allows you to upload anything that implements the io.Reader interface. The example uses strings.NewReader to create an io.Reader that knows how to provide the specified string to the caller. Your job (according to AWS here) is to figure out how to adapt whatever you need to store into an io.Reader.
You can JSON-encode the value and store the resulting bytes directly, like this:
package main

import (
    "bytes"
    "encoding/json"
    "log"
)

type Event struct {
    ID        string
    ProcessID string
    TxnID     string
    Inputs    map[string]interface{}
}

func main() {
    event := Event{ID: "example"} // the value you want to store

    // To prepare the object for writing
    b, err := json.Marshal(event)
    if err != nil {
        log.Fatal(err)
    }

    // pass this reader into aws.ReadSeekCloser(...)
    reader := bytes.NewReader(b)
    _ = reader
}
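From there the upload itself is a small step. A hedged sketch (svc is assumed to be a v1 *s3.S3 client, and the bucket and key are placeholders, not values from the question):

// uploadEvent marshals the event and PUTs the JSON bytes to S3.
func uploadEvent(svc *s3.S3, event Event) error {
    b, err := json.Marshal(event)
    if err != nil {
        return err
    }
    _, err = svc.PutObject(&s3.PutObjectInput{
        Body:   aws.ReadSeekCloser(bytes.NewReader(b)),
        Bucket: aws.String("my-bucket"),     // placeholder
        Key:    aws.String("events/1.json"), // placeholder
    })
    return err
}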
I'm implementing a function to download a file from an S3 bucket. This worked fine when the bucket was private and I set the credentials:
os.Setenv("AWS_ACCESS_KEY_ID", "test")
os.Setenv("AWS_SECRET_ACCESS_KEY", "test")
However, I made the S3 bucket public as described here, and now I want to download it without credentials.
func DownloadFromS3Bucket(bucket, item, path string) {
    file, err := os.Create(filepath.Join(path, item))
    if err != nil {
        fmt.Printf("Error in downloading from file: %v \n", err)
        os.Exit(1)
    }
    defer file.Close()

    sess, _ := session.NewSession(&aws.Config{
        Region: aws.String(constants.AWS_REGION)},
    )

    // Create a downloader with the session and custom options
    downloader := s3manager.NewDownloader(sess, func(d *s3manager.Downloader) {
        d.PartSize = 64 * 1024 * 1024 // 64MB per part
        d.Concurrency = 6
    })

    numBytes, err := downloader.Download(file,
        &s3.GetObjectInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(item),
        })
    if err != nil {
        fmt.Printf("Error in downloading from file: %v \n", err)
        os.Exit(1)
    }

    fmt.Println("Download completed", file.Name(), numBytes, "bytes")
}
But now I'm getting an error.
Error in downloading from file: NoCredentialProviders: no valid providers in chain. Deprecated.
For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Any idea how to download it without credentials?
We can set Credentials: credentials.AnonymousCredentials when creating the session. The following is the working code.
func DownloadFromS3Bucket(bucket, item, path string) {
    file, err := os.Create(filepath.Join(path, item))
    if err != nil {
        fmt.Printf("Error in downloading from file: %v \n", err)
        os.Exit(1)
    }
    defer file.Close()

    sess, _ := session.NewSession(&aws.Config{
        Region: aws.String(constants.AWS_REGION), Credentials: credentials.AnonymousCredentials},
    )

    // Create a downloader with the session and custom options
    downloader := s3manager.NewDownloader(sess, func(d *s3manager.Downloader) {
        d.PartSize = 64 * 1024 * 1024 // 64MB per part
        d.Concurrency = 6
    })

    numBytes, err := downloader.Download(file,
        &s3.GetObjectInput{
            Bucket: aws.String(bucket),
            Key:    aws.String(item),
        })
    if err != nil {
        fmt.Printf("Error in downloading from file: %v \n", err)
        os.Exit(1)
    }

    fmt.Println("Download completed", file.Name(), numBytes, "bytes")
}
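If you're on the v2 SDK (github.com/aws/aws-sdk-go-v2) instead, the equivalent is anonymous credentials on the loaded config. A sketch; the region here is a placeholder:

cfg, err := config.LoadDefaultConfig(context.TODO(),
    config.WithRegion("us-east-1"), // placeholder region
    config.WithCredentialsProvider(aws.AnonymousCredentials{}),
)
if err != nil {
    log.Fatal(err)
}
client := s3.NewFromConfig(cfg) // issues unsigned requests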
I am attempting to use S3 MultipartUpload to concatenate files in an S3 bucket. If you have several files larger than 5MB (the last file can be smaller), you can concatenate them in S3 into a larger file. It's basically the equivalent of using cat to merge files together. When I attempt to do this in Go, I get:
An error occurred (AccessDenied) when calling the UploadPartCopy operation: Access Denied
The code looks kind of like this:
mpuOut, err := s3CreateMultipartUpload(&S3.CreateMultipartUploadInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(concatenatedFile),
})
if err != nil {
    return err
}

var ps []*S3.CompletedPart
for i, part := range parts { // parts is a list of paths to things in s3
    partNumber := int64(i) + 1
    upOut, err := s3UploadPartCopy(&S3.UploadPartCopyInput{
        Bucket:     aws.String(bucket),
        CopySource: aws.String(part),
        Key:        aws.String(concatenatedFile),
        UploadId:   aws.String(*mpuOut.UploadId),
        PartNumber: aws.Int64(partNumber),
    })
    if err != nil {
        return err // <- fails here
    }
    ps = append(ps, &S3.CompletedPart{
        ETag:       upOut.CopyPartResult.ETag,
        PartNumber: aws.Int64(partNumber),
    })
}
_, err = s3CompleteMultipartUpload(&S3.CompleteMultipartUploadInput{
    Bucket:          aws.String(bucket),
    Key:             aws.String(concatenatedFile),
    MultipartUpload: &S3.CompletedMultipartUpload{Parts: ps},
    UploadId:        aws.String(*mpuOut.UploadId),
})
if err != nil {
    return err
}
When it runs, it blows up with the error above. The permissions on the bucket are wide open. Any ideas?
Ok, so the problem is that when you are doing an UploadPartCopy, the CopySource parameter isn't just the path within the S3 bucket: you have to put the bucket name at the front of the path, even if the source is in the same bucket. Derp.
mpuOut, err := s3CreateMultipartUpload(&S3.CreateMultipartUploadInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(concatenatedFile),
})
if err != nil {
    return err
}

var ps []*S3.CompletedPart
for i, part := range parts { // parts is a list of paths to things in s3
    partNumber := int64(i) + 1
    upOut, err := s3UploadPartCopy(&S3.UploadPartCopyInput{
        Bucket:     aws.String(bucket),
        CopySource: aws.String(fmt.Sprintf("%s/%s", bucket, part)), // <- ugh
        Key:        aws.String(concatenatedFile),
        UploadId:   aws.String(*mpuOut.UploadId),
        PartNumber: aws.Int64(partNumber),
    })
    if err != nil {
        return err
    }
    ps = append(ps, &S3.CompletedPart{
        ETag:       upOut.CopyPartResult.ETag,
        PartNumber: aws.Int64(partNumber),
    })
}
_, err = s3CompleteMultipartUpload(&S3.CompleteMultipartUploadInput{
    Bucket:          aws.String(bucket),
    Key:             aws.String(concatenatedFile),
    MultipartUpload: &S3.CompletedMultipartUpload{Parts: ps},
    UploadId:        aws.String(*mpuOut.UploadId),
})
if err != nil {
    return err
}
This wasted about an hour of my life, so I figured I'd try to save someone else the trouble.
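One more hedged note: if any step fails after CreateMultipartUpload succeeds, the incomplete upload (and its already-copied parts) lingers in the bucket until it's aborted. A sketch of the cleanup, using the same s3* wrapper naming as above (the s3AbortMultipartUpload wrapper is an assumption mirroring the others):

// On failure, abort the multipart upload so the parts don't linger.
if _, abortErr := s3AbortMultipartUpload(&S3.AbortMultipartUploadInput{
    Bucket:   aws.String(bucket),
    Key:      aws.String(concatenatedFile),
    UploadId: mpuOut.UploadId,
}); abortErr != nil {
    log.Printf("abort multipart upload failed: %v", abortErr)
}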
I want to delete the key (bucket/userID/fileName) using aws-sdk-go, but this code doesn't delete the userID key.
config := model.NewConfig()
sess, _ := session.NewSession(&aws.Config{
    Region: aws.String(config.AWSS3Region)},
)
svc := s3.New(sess)

input := &s3.DeleteObjectInput{
    Bucket: aws.String(config.AWSS3Bucket),
    Key:    aws.String(userID + "/"),
}
result, err := svc.DeleteObject(input)
I can delete bucket/userID/fileName but I can't delete bucket/userID.
Here is my code to delete objects from S3 (this uses the v2 SDK).
import (
    "context"
    "fmt"
    "log"
    "net/http"
    "os"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/credentials"
    "github.com/aws/aws-sdk-go-v2/service/s3"
    "github.com/labstack/echo/v4"
)

func configS3() *s3.Client {
    creds := credentials.NewStaticCredentialsProvider(os.Getenv("S3_ACCESS_KEY_ID"), os.Getenv("S3_SECRET_ACCESS_KEY"), "")
    cfg, err := config.LoadDefaultConfig(context.TODO(),
        config.WithCredentialsProvider(creds),
        config.WithRegion(os.Getenv("S3_REGION")),
    )
    if err != nil {
        log.Fatal(err)
    }
    return s3.NewFromConfig(cfg)
}
func DeleteImageFromS3(echoCtx echo.Context) error {
    awsClient := configS3()

    input := &s3.DeleteObjectInput{
        Bucket: aws.String("mybucket"),
        Key:    aws.String("pic.jpg"),
    }
    _, err := awsClient.DeleteObject(context.TODO(), input)
    if err != nil {
        fmt.Println("Got an error deleting item:")
        fmt.Println(err)
        return err
    }

    return echoCtx.JSON(http.StatusOK, "Object Deleted Successfully")
}
For further reference, see https://aws.github.io/aws-sdk-go-v2/docs/code-examples/s3/deleteobject/
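For completeness, a quick sketch of wiring the handler into an Echo route (the route path and port are placeholders, not from the original):

func main() {
    e := echo.New()
    e.DELETE("/images", DeleteImageFromS3) // placeholder route
    e.Logger.Fatal(e.Start(":8080"))       // placeholder port
}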
I have a request handler named Download which I want to access a large file from Amazon S3 and push it to the user's browser. My goals are:
To record some request information before granting the user access to the file
To avoid buffering too much of the file in memory, since files may be large.
Here is what I've explored so far:
func Download(w http.ResponseWriter, r *http.Request) {
    sess := session.New(&aws.Config{
        Region:           aws.String("eu-west-1"),
        Endpoint:         aws.String("s3-eu-west-1.amazonaws.com"),
        S3ForcePathStyle: aws.Bool(true),
        Credentials:      cred,
    })
    downloader := s3manager.NewDownloader(sess)

    // I can't write directly into the ResponseWriter. It doesn't implement WriteAt.
    // Besides, it doesn't seem like the right thing to do.
    _, err := downloader.Download(w, &s3.GetObjectInput{
        Bucket: aws.String(BUCKET),
        Key:    aws.String(filename),
    })
    if err != nil {
        log.Error(4, err.Error())
        return
    }
}
I'm wondering if there isn't a better approach (given the goals I'm trying to achieve).
Any suggestions are welcome. Thank you in advance :-)
If you do want to stream the file through your service (rather than having the client download it directly from S3, as recommended in the accepted answer):
import (
    ...
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/s3"
)

func StreamDownloadHandler(w http.ResponseWriter, r *http.Request) {
    sess, awsSessErr := session.NewSession(&aws.Config{
        Region:      aws.String("eu-west-1"),
        Credentials: credentials.NewStaticCredentials("my-aws-id", "my-aws-secret", ""),
    })
    if awsSessErr != nil {
        http.Error(w, fmt.Sprintf("Error creating aws session %s", awsSessErr.Error()), http.StatusInternalServerError)
        return
    }

    result, err := s3.New(sess).GetObject(&s3.GetObjectInput{
        Bucket: aws.String("my-bucket"),
        Key:    aws.String("my-file-id"),
    })
    if err != nil {
        http.Error(w, fmt.Sprintf("Error getting file from s3 %s", err.Error()), http.StatusInternalServerError)
        return
    }
    defer result.Body.Close() // close the object stream when done

    w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", "my-file.csv"))
    w.Header().Set("Cache-Control", "no-store")

    bytesWritten, copyErr := io.Copy(w, result.Body)
    if copyErr != nil {
        http.Error(w, fmt.Sprintf("Error copying file to the http response %s", copyErr.Error()), http.StatusInternalServerError)
        return
    }
    log.Printf("Download of \"%s\" complete. Wrote %s bytes", "my-file.csv", strconv.FormatInt(bytesWritten, 10))
}
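If you'd rather keep using the s3manager downloader here, one known workaround for the WriteAt problem is to drop Concurrency to 1, so parts arrive strictly in order, and adapt the ResponseWriter with a small io.WriterAt shim. A sketch; the SequentialWriterAt type below is something you define yourself, not part of the SDK:

// SequentialWriterAt satisfies io.WriterAt by ignoring the offset.
// This is only safe when the downloader's Concurrency is 1, because
// parts are then written strictly in order.
type SequentialWriterAt struct {
    w io.Writer
}

func (s SequentialWriterAt) WriteAt(p []byte, off int64) (int, error) {
    return s.w.Write(p)
}

func StreamWithDownloader(w http.ResponseWriter, sess *session.Session, bucket, key string) error {
    downloader := s3manager.NewDownloader(sess, func(d *s3manager.Downloader) {
        d.Concurrency = 1 // required: the shim above ignores offsets
    })
    _, err := downloader.Download(SequentialWriterAt{w: w}, &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    })
    return err
}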
If the file is potentially large, you don't want it to go through your own server.
The best approach (in my opinion) is to have the user download it directly from S3.
You can do this by generating a presigned url:
func Download(w http.ResponseWriter, r *http.Request) {
    ...
    sess := session.New(&aws.Config{
        Region:           aws.String("eu-west-1"),
        Endpoint:         aws.String("s3-eu-west-1.amazonaws.com"),
        S3ForcePathStyle: aws.Bool(true),
        Credentials:      cred,
    })
    s3svc := s3.New(sess)

    req, _ := s3svc.GetObjectRequest(&s3.GetObjectInput{
        Bucket: aws.String(BUCKET),
        Key:    aws.String(filename),
    })
    url, err := req.Presign(5 * time.Minute)
    if err != nil {
        // handle error
    }
    http.Redirect(w, r, url, http.StatusTemporaryRedirect)
}
The presigned url is only valid for a limited time (5 minutes in this example, adjust to your needs) and takes the user directly to S3. No need to worry about downloads anymore!