S3 images downloading instead of displaying when uploading with Golang

I'm trying to upload an image to AWS S3. The images save to the bucket, but when I click on them (their URL) they download instead of displaying. In the past this has happened because the Content-Type wasn't set to image/jpeg, but I verified that it is set correctly this time.
Here's my code:
func UploadImageToS3(file os.File) error {
    fi, err := file.Stat() // get FileInfo
    if err != nil {
        return errors.New("Couldn't get FileInfo")
    }
    size := fi.Size()
    buffer := make([]byte, size)
    file.Read(buffer)
    tempFileName := "images/picturename.jpg" // key to save under
    putObject := &s3.PutObjectInput{
        Bucket:        aws.String("mybucket"),
        Key:           aws.String(tempFileName),
        ACL:           aws.String("public-read"),
        Body:          bytes.NewReader(buffer),
        ContentLength: aws.Int64(int64(size)),
        // verified is properly getting image/jpeg
        ContentType: aws.String(http.DetectContentType(buffer)),
    }
    _, err = AwsS3.PutObject(putObject)
    if err != nil {
        log.Fatal(err.Error())
        return err
    }
    return nil
}
I also tried building my s3.PutObjectInput like this:
putObject := &s3.PutObjectInput{
    Bucket:               aws.String("mybucket"),
    Key:                  aws.String(tempFileName),
    ACL:                  aws.String("public-read"),
    Body:                 bytes.NewReader(buffer),
    ContentLength:        aws.Int64(int64(size)),
    ContentType:          aws.String(http.DetectContentType(buffer)),
    ContentDisposition:   aws.String("attachment"),
    ServerSideEncryption: aws.String("AES256"),
    StorageClass:         aws.String("INTELLIGENT_TIERING"),
}
What am I doing wrong here?

Figured it out.
Not totally sure why, but I needed to separate all the values.
var size int64 = fi.Size()
buffer := make([]byte, size)
file.Read(buffer)
fileBytes := bytes.NewReader(buffer)
fileType := http.DetectContentType(buffer)
path := "images/test.jpeg"
params := &s3.PutObjectInput{
    Bucket:        aws.String("mybucket"),
    Key:           aws.String(path),
    Body:          fileBytes,
    ContentLength: aws.Int64(size),
    ContentType:   aws.String(fileType),
}
_, err = AwsS3.PutObject(params)
If anyone knows why this works and the previous code doesn't, please share.
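One guess (not confirmed above): the second attempt sets ContentDisposition to "attachment", which explicitly tells the browser to download the object rather than render it. A minimal sketch, reusing the buffer, size, and AwsS3 names from the snippets above, that asks the browser to display the image inline instead:
func uploadInline() error {
    fileBytes := bytes.NewReader(buffer)
    params := &s3.PutObjectInput{
        Bucket:             aws.String("mybucket"),
        Key:                aws.String("images/test.jpeg"),
        Body:               fileBytes,
        ContentLength:      aws.Int64(size),
        ContentType:        aws.String(http.DetectContentType(buffer)), // e.g. image/jpeg
        ContentDisposition: aws.String("inline"),                       // "attachment" forces a download prompt
    }
    _, err := AwsS3.PutObject(params)
    return err
}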

Related

AccessDenied being encountered while using UploadPartCopy to MultiPartUpload in Golang

I am attempting to use S3 MultipartUpload to concat files in an S3 bucket. If you have several files >5MB (the last file can be smaller), you can concatenate them in S3 into a larger file. It's basically the equivalent of using cat to merge files together. When I attempt to do this in Go, I get:
An error occurred (AccessDenied) when calling the UploadPartCopy operation: Access Denied
The code looks kind of like this:
mpuOut, err := s3CreateMultipartUpload(&S3.CreateMultipartUploadInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(concatenatedFile),
})
if err != nil {
    return err
}
var ps []*S3.CompletedPart
for i, part := range parts { // parts is a list of paths to things in s3
    partNumber := int64(i) + 1
    upOut, err := s3UploadPartCopy(&S3.UploadPartCopyInput{
        Bucket:     aws.String(bucket),
        CopySource: aws.String(part),
        Key:        aws.String(concatenatedFile),
        UploadId:   aws.String(*mpuOut.UploadId),
        PartNumber: aws.Int64(partNumber),
    })
    if err != nil {
        return err // <- fails here
    }
    ps = append(ps, &S3.CompletedPart{
        ETag:       upOut.CopyPartResult.ETag,
        PartNumber: aws.Int64(partNumber),
    })
}
_, err = s3CompleteMultipartUpload(&S3.CompleteMultipartUploadInput{
    Bucket:          aws.String(bucket),
    Key:             aws.String(concatenatedFile),
    MultipartUpload: &S3.CompletedMultipartUpload{Parts: ps},
    UploadId:        aws.String(*mpuOut.UploadId),
})
if err != nil {
    return err
}
When it runs, it blows up with the error above. The permissions on the bucket are wide open. Any ideas?
Ok, so the problem is that when you are doing an UploadPartCopy, the CopySource parameter isn't just the path within the S3 bucket. You have to put the bucket name at the front of the path, even if the source is in the same bucket. Derp.
mpuOut, err := s3CreateMultipartUpload(&S3.CreateMultipartUploadInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(concatenatedFile),
})
if err != nil {
    return err
}
var ps []*S3.CompletedPart
for i, part := range parts { // parts is a list of paths to things in s3
    partNumber := int64(i) + 1
    upOut, err := s3UploadPartCopy(&S3.UploadPartCopyInput{
        Bucket:     aws.String(bucket),
        CopySource: aws.String(fmt.Sprintf("%s/%s", bucket, part)), // <- ugh
        Key:        aws.String(concatenatedFile),
        UploadId:   aws.String(*mpuOut.UploadId),
        PartNumber: aws.Int64(partNumber),
    })
    if err != nil {
        return err
    }
    ps = append(ps, &S3.CompletedPart{
        ETag:       upOut.CopyPartResult.ETag,
        PartNumber: aws.Int64(partNumber),
    })
}
_, err = s3CompleteMultipartUpload(&S3.CompleteMultipartUploadInput{
    Bucket:          aws.String(bucket),
    Key:             aws.String(concatenatedFile),
    MultipartUpload: &S3.CompletedMultipartUpload{Parts: ps},
    UploadId:        aws.String(*mpuOut.UploadId),
})
if err != nil {
    return err
}
This just wasted about an hour of my life, so I figure I would try to save someone else the trouble.

Unable to download a byte range from S3 object using AWS Go SDK

I'm trying to download a particular data chunk from S3. Following is the code snippet.
func DownloadFromS3() ([]byte, error) {
    retries := 5
    awsSession := session.Must(session.NewSessionWithOptions(session.Options{
        SharedConfigState: session.SharedConfigEnable,
        Config: aws.Config{
            MaxRetries: &retries,
            LogLevel:   aws.LogLevel(aws.LogDebugWithHTTPBody),
        },
    }))
    // Create S3 service client
    serviceS3 := s3.New(awsSession)
    d := s3manager.NewDownloaderWithClient(serviceS3, func(d *s3manager.Downloader) {
        d.Concurrency = 10 // should be ignored
        d.PartSize = 1     // should be ignored
    })
    w := &aws.WriteAtBuffer{}
    _, err := d.Download(w, &s3.GetObjectInput{
        Bucket: aws.String("mybucket"),
        Key:    aws.String("key1"),
        Range:  aws.String("bytes=0-9"),
    })
    if err != nil {
        return nil, err
    }
    return w.Bytes(), err
}
But this keeps downloading part by part until the entire object is retrieved, instead of downloading only the specified byte range. Am I missing any configuration here?
Looks like an issue with the Go SDK; try s3.GetObject instead of the downloader.
This was an issue with an older version of the AWS Go SDK. It has since been fixed: https://github.com/aws/aws-sdk-go/pull/1311
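For reference, a minimal sketch of the s3.GetObject approach suggested above, reusing the same "mybucket"/"key1" names (and assuming the usual aws, s3, and io/ioutil imports); GetObject honours the Range header directly and returns only the requested bytes:
func DownloadRangeFromS3(svc *s3.S3) ([]byte, error) {
    // Ask S3 for just the first ten bytes of the object.
    out, err := svc.GetObject(&s3.GetObjectInput{
        Bucket: aws.String("mybucket"),
        Key:    aws.String("key1"),
        Range:  aws.String("bytes=0-9"),
    })
    if err != nil {
        return nil, err
    }
    defer out.Body.Close()
    return ioutil.ReadAll(out.Body)
}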

Copy S3 Object with MultiPartUpload

I need to rename quite a lot of objects in AWS S3. For small objects the following snippet works flawlessly:
input := &s3.CopyObjectInput{
    Bucket:     aws.String(bucket),
    Key:        aws.String(targetPrefix),
    CopySource: aws.String(source),
}
_, err = svc.CopyObject(input)
if err != nil {
    panic(errors.Wrap(err, "error copying object"))
}
I am running into the S3 size limitation for larger objects. I understand I need to copy the object using a multipart upload. This is what I have tried so far:
multiPartUpload, err := svc.CreateMultipartUpload(
    &s3.CreateMultipartUploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(targetPrefix), // targetPrefix is the new name
    },
)
if err != nil {
    panic(errors.Wrap(err, "could not create MultiPartUpload"))
}
resp, err := svc.UploadPartCopy(
    &s3.UploadPartCopyInput{
        UploadId:   multiPartUpload.UploadId,
        Bucket:     aws.String(bucket),
        Key:        aws.String(targetPrefix),
        CopySource: aws.String(source),
        PartNumber: aws.Int64(1),
    },
)
if err != nil {
    panic(errors.Wrap(err, "error copying multipart object"))
}
log.Printf("copied: %v", resp)
The golang SDK bails out on me with:
InvalidRequest: The specified copy source is larger than the maximum allowable size for a copy source: 5368709120
I have also tried the following approach but I do not get any parts listed here:
multiPartUpload, err := svc.CreateMultipartUpload(
    &s3.CreateMultipartUploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(targetPrefix), // targetPrefix is the new name
    },
)
if err != nil {
    panic(errors.Wrap(err, "could not create MultiPartUpload"))
}
err = svc.ListPartsPages(
    &s3.ListPartsInput{
        Bucket:   aws.String(bucket),       // Required
        Key:      aws.String(targetPrefix), // Required
        UploadId: multiPartUpload.UploadId, // Required
    },
    // Iterate over all parts in the `CopySource` object
    func(parts *s3.ListPartsOutput, lastPage bool) bool {
        log.Printf("parts:\n%v\n%v", parts, parts.Parts)
        // parts.Parts is an empty slice
        for _, part := range parts.Parts {
            log.Printf("copying %v part %v", source, part.PartNumber)
            resp, err := svc.UploadPartCopy(
                &s3.UploadPartCopyInput{
                    UploadId:   multiPartUpload.UploadId,
                    Bucket:     aws.String(bucket),
                    Key:        aws.String(targetPrefix),
                    CopySource: aws.String(source),
                    PartNumber: part.PartNumber,
                },
            )
            if err != nil {
                panic(errors.Wrap(err, "error copying object"))
            }
            log.Printf("copied: %v", resp)
        }
        return true
    },
)
if err != nil {
    panic(errors.Wrap(err, "something went wrong with ListPartsPages!"))
}
What am I doing wrong, or am I misunderstanding something?
I think that ListPartsPages is the wrong direction, because it works on "Multipart Uploads", which is a different entity than an S3 "Object". So you're listing the already-uploaded parts of the multipart upload you just created, which has none yet.
Your first example is close to what's needed, but you need to manually split the original file into parts, with the range of each part specified by UploadPartCopyInput's CopySourceRange. At least that's my take from reading the documentation.
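To make that concrete, here is a hedged sketch of the range-based copy (the svc, bucket, source, and targetPrefix names are reused from the snippets above, with the same github.com/pkg/errors import; sourceSize would come from e.g. a HeadObject call; the bytes=first-last range format and the 5 GB per-part copy limit are from the S3 API docs):
const maxPartSize = int64(5 * 1024 * 1024 * 1024) // upper bound per copied part

// copyLargeObject copies source ("bucket/key") to targetPrefix in bucket by
// splitting the source into byte ranges and issuing one UploadPartCopy per
// range. sourceSize is the size of the source object in bytes.
func copyLargeObject(svc *s3.S3, bucket, source, targetPrefix string, sourceSize int64) error {
    mpu, err := svc.CreateMultipartUpload(&s3.CreateMultipartUploadInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(targetPrefix),
    })
    if err != nil {
        return errors.Wrap(err, "could not create MultiPartUpload")
    }
    var completed []*s3.CompletedPart
    var partNumber int64 = 1
    for offset := int64(0); offset < sourceSize; offset += maxPartSize {
        // Last byte of this part, clamped to the end of the object.
        last := offset + maxPartSize - 1
        if last > sourceSize-1 {
            last = sourceSize - 1
        }
        resp, err := svc.UploadPartCopy(&s3.UploadPartCopyInput{
            UploadId:        mpu.UploadId,
            Bucket:          aws.String(bucket),
            Key:             aws.String(targetPrefix),
            CopySource:      aws.String(source),
            CopySourceRange: aws.String(fmt.Sprintf("bytes=%d-%d", offset, last)),
            PartNumber:      aws.Int64(partNumber),
        })
        if err != nil {
            return errors.Wrap(err, "error copying part")
        }
        completed = append(completed, &s3.CompletedPart{
            ETag:       resp.CopyPartResult.ETag,
            PartNumber: aws.Int64(partNumber),
        })
        partNumber++
    }
    _, err = svc.CompleteMultipartUpload(&s3.CompleteMultipartUploadInput{
        Bucket:          aws.String(bucket),
        Key:             aws.String(targetPrefix),
        UploadId:        mpu.UploadId,
        MultipartUpload: &s3.CompletedMultipartUpload{Parts: completed},
    })
    return errors.Wrap(err, "could not complete MultiPartUpload")
}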

Golang Aws S3 NoSuchKey: The specified key does not exist

I'm trying to download Objects from S3, the following is my code:
func listFile(bucket, prefix string) error {
    svc := s3.New(sess)
    params := &s3.ListObjectsInput{
        Bucket: aws.String(bucket), // Required
        Prefix: aws.String(prefix),
    }
    return svc.ListObjectsPages(params, func(p *s3.ListObjectsOutput, lastPage bool) bool {
        for _, o := range p.Contents {
            log.Println(*o.Key)
            download(bucket, *o.Key)
        }
        return true // continue to the next page
    })
}

func download(bucket, key string) {
    logDir := conf.Cfg.Section("share").Key("LOG_DIR").MustString(".")
    tmpLogPath := filepath.Join(logDir, bucket, key)
    s3Svc := s3.New(sess)
    downloader := s3manager.NewDownloaderWithClient(s3Svc, func(d *s3manager.Downloader) {
        d.PartSize = 2 * 1024 * 1024 // 2MB per part
    })
    f, err := os.OpenFile(tmpLogPath, os.O_CREATE|os.O_WRONLY, 0644)
    if err != nil {
        log.Fatal(err)
    }
    defer f.Close()
    if _, err = downloader.Download(f, &s3.GetObjectInput{
        Bucket: aws.String(bucket),
        Key:    aws.String(key),
    }); err != nil {
        log.Fatal(err)
    }
}

func main() {
    bucket := "mybucket"
    key := "myprefix"
    listFile(bucket, key)
}
I can get the object list in listFile(), but a 404 is returned when download is called. Why?
I had the same problem with recent versions of the library. Sometimes the object key is prefixed with "./", which the SDK strips by default, making the download fail.
Try adding this to your aws.Config and see if it helps:
config := aws.Config{
    ...
    DisableRestProtocolURICleaning: aws.Bool(true),
}
I submitted an issue.
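For completeness, a minimal sketch of how that flag can be wired into the session used above (DisableRestProtocolURICleaning is a real aws.Config field; the rest of the setup is assumed to match your existing code):
// With URI cleaning disabled, object keys such as "./myfile.log" are sent
// to S3 verbatim instead of being normalised to "myfile.log".
sess := session.Must(session.NewSession(&aws.Config{
    DisableRestProtocolURICleaning: aws.Bool(true),
}))
svc := s3.New(sess)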

AWS S3 large file reverse proxying with golang's http.ResponseWriter

I have a request handler named Download which I want to access a large file from Amazon S3 and push it to the user's browser. My goals are:
To record some request information before granting the user access to the file
To not buffer the file into memory too much. Files may become too large.
Here is what I've explored so far:
func Download(w http.ResponseWriter, r *http.Request) {
    sess := session.New(&aws.Config{
        Region:           aws.String("eu-west-1"),
        Endpoint:         aws.String("s3-eu-west-1.amazonaws.com"),
        S3ForcePathStyle: aws.Bool(true),
        Credentials:      cred,
    })
    downloader := s3manager.NewDownloader(sess)
    // I can't write directly into the ResponseWriter. It doesn't implement WriteAt.
    // Besides, it doesn't seem like the right thing to do.
    _, err := downloader.Download(w, &s3.GetObjectInput{
        Bucket: aws.String(BUCKET),
        Key:    aws.String(filename),
    })
    if err != nil {
        log.Error(4, err.Error())
        return
    }
}
I'm wondering if there isn't a better approach (given the goals I'm trying to achieve).
Any suggestions are welcome. Thank you in advance :-)
If you do want to stream the file through your service (rather than have the user download it directly from S3, as recommended in the accepted answer):
import (
    ...
    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/service/s3"
)

func StreamDownloadHandler(w http.ResponseWriter, r *http.Request) {
    sess, awsSessErr := session.NewSession(&aws.Config{
        Region:      aws.String("eu-west-1"),
        Credentials: credentials.NewStaticCredentials("my-aws-id", "my-aws-secret", ""),
    })
    if awsSessErr != nil {
        http.Error(w, fmt.Sprintf("Error creating aws session %s", awsSessErr.Error()), http.StatusInternalServerError)
        return
    }
    result, err := s3.New(sess).GetObject(&s3.GetObjectInput{
        Bucket: aws.String("my-bucket"),
        Key:    aws.String("my-file-id"),
    })
    if err != nil {
        http.Error(w, fmt.Sprintf("Error getting file from s3 %s", err.Error()), http.StatusInternalServerError)
        return
    }
    defer result.Body.Close() // avoid leaking the response body
    w.Header().Set("Content-Disposition", fmt.Sprintf("attachment; filename=\"%s\"", "my-file.csv"))
    w.Header().Set("Cache-Control", "no-store")
    bytesWritten, copyErr := io.Copy(w, result.Body)
    if copyErr != nil {
        http.Error(w, fmt.Sprintf("Error copying file to the http response %s", copyErr.Error()), http.StatusInternalServerError)
        return
    }
    log.Printf("Download of \"%s\" complete. Wrote %s bytes", "my-file.csv", strconv.FormatInt(bytesWritten, 10))
}
If the file is potentially large, you don't want it to go through your own server.
The best approach (in my opinion) is to have the user download it directly from S3.
You can do this by generating a presigned url:
func Download(w http.ResponseWriter, r *http.Request) {
    ...
    sess := session.New(&aws.Config{
        Region:           aws.String("eu-west-1"),
        Endpoint:         aws.String("s3-eu-west-1.amazonaws.com"),
        S3ForcePathStyle: aws.Bool(true),
        Credentials:      cred,
    })
    s3svc := s3.New(sess)
    req, _ := s3svc.GetObjectRequest(&s3.GetObjectInput{
        Bucket: aws.String(BUCKET),
        Key:    aws.String(filename),
    })
    url, err := req.Presign(5 * time.Minute)
    if err != nil {
        // handle error
    }
    http.Redirect(w, r, url, http.StatusTemporaryRedirect)
}
The presigned url is only valid for a limited time (5 minutes in this example, adjust to your needs) and takes the user directly to S3. No need to worry about downloads anymore!