Generate Torrent from Bucket via Go SDK (Amazon S3) - amazon-web-services

I'm trying to figure out a way to generate torrent files from a bucket, using the AWS SDK for Go.
I'm using a pre-signed URL (since it's a private bucket):
svc := s3.New(session.New(config))
req, _ := svc.GetObjectRequest(&s3.GetObjectInput{
    Bucket: aws.String("bucketName"),
    Key:    aws.String("key"),
})
// sign the url
url, err := req.Presign(120 * time.Minute)
From the docs, the syntax to generate a torrent is:
GET /ObjectName?torrent HTTP/1.1
Host: BucketName.s3.amazonaws.com
Date: date
Authorization: authorization string
How do I add the ?torrent parameter to a pre-signed URL in Go?

Try the GetObjectTorrent method on the AWS SDK for Go. It returns the .torrent file in its response; here is example code:
svc := s3.New(session.New())
input := &s3.GetObjectTorrentInput{
    Bucket: aws.String("bucketName"),
    Key:    aws.String("key"),
}
result, err := svc.GetObjectTorrent(input)
if err != nil {
    log.Fatal(err)
}
fmt.Println(result)
Please see http://docs.aws.amazon.com/sdk-for-go/api/service/s3/#S3.GetObjectTorrent for more details. Hope it helps.

Related

Signed url for gcp bucket object fails with access denied

When I generate a signed download URL with a service account for an object in a GCP storage bucket, I expect it to be usable by anyone without authentication. However, I keep getting "Anonymous caller does not have storage.objects.get access to the Google Cloud Storage object". What am I doing wrong?
url, err := gcs.SignedURL(bktName, so.Name(), &gcs.SignedURLOptions{
    GoogleAccessID: serviceAccountName,
    Method:         "GET",
    Expires:        time.Now().Add(duration),
    ContentType:    md.RenditionMetadata[0].ContentType,
    Headers:        []string{fmt.Sprintf("x-goog-meta-filename: %s", md.RenditionMetadata[0].FileName)},
    SignBytes: func(b []byte) ([]byte, error) {
        signedBlob, err := iam.SignBlob(s.GoogleIamService(), serviceAccountName, b)
        if err != nil {
            return nil, err
        }
        return []byte(signedBlob), nil
    },
})
The service account I'm using has Storage Object Creator and Storage Object Viewer roles ...
Follow the guide for creating a signed URL to download an object:
https://cloud.google.com/storage/docs/samples/storage-generate-signed-url-v4
Alternatively, you can use gsutil to create a signed URL:
https://cloud.google.com/storage/docs/gsutil/commands/signurl
If you specify headers when creating a signed URL, you must also send them when requesting the generated URL (e.g. with curl); otherwise you get the access-denied error above.

Cannot presign URL for years?

Signature Version 4 allows a maximum of one week. In Python I worked around it by using the old 's3' signature:
s3_client = boto3.client(
    's3',
    aws_access_key_id=access_key,
    aws_secret_access_key=secret_key,
    config=botocore.client.Config(signature_version='s3')
)
return s3_client.generate_presigned_url(
    'get_object',
    Params={
        'Bucket': bucket_name,
        'Key': key
    },
    ExpiresIn=400000000)  # this is about the max: ~ten years
But for Go I found only func (*Request) Presign:
req, _ := s3Client.GetObjectRequest(&s3.GetObjectInput{
    Bucket: aws.String(bucketName),
    Key:    &key,
})
tenYears := time.Now().AddDate(10, 0, 0).Sub(time.Now())
url, err := req.Presign(tenYears)
The HTTP response for such a URL is:
AuthorizationQueryParametersError: X-Amz-Expires must be less than a week (in seconds); that is, the given X-Amz-Expires must be less than 604800 seconds.
No way to presign URL in Go using AWS SDK for years?
If you want to pre-sign a URL for longer than a week, then pre-signed URLs are not the right fit for your use case: under Signature Version 4 the limit really is one week (X-Amz-Expires must be at most 604800 seconds).
Pre-signed URLs are typically used to serve content from S3 to authenticated users only.

Calling AppSync Mutation from Lambda with Golang

I'm trying to invoke a mutation from a Lambda (specifically one written in Go). I used AWS_IAM as the authentication method of my AppSync API, and I gave the appsync:GraphQL permission to my Lambda.
However, after looking at the docs here: https://docs.aws.amazon.com/sdk-for-go/api/service/appsync/
I can't find any documentation on how to invoke AppSync from the library. Can anyone point me in the right direction here?
P.S. I don't want to query or subscribe or anything else from Lambda; it's just a mutation.
Thanks!
------UPDATE-------
Thanks to @thomasmichaelwallace for pointing me to https://godoc.org/github.com/machinebox/graphql
Now the problem is: how can I sign the request from that package using AWS v4?
I found a way using a plain http.Request and AWS v4 signing. (Thanks to @thomasmichaelwallace for pointing this method out.)
// Request/variable types implied by the JSON body AppSync expects.
type AppSyncPublish struct {
    Query     string       `json:"query"`
    Variables PublishInput `json:"variables"`
}

type PublishInput struct {
    UserID string `json:"userid"`
}

client := new(http.Client)

// construct the query
query := AppSyncPublish{
    Query: `
        mutation ($userid: ID!) {
          publishMessage(
            userid: $userid
          ){
            userid
          }
        }
    `,
    Variables: PublishInput{
        UserID: "wow",
    },
}
b, err := json.Marshal(&query)
if err != nil {
    fmt.Println(err)
}

// construct the request object
req, err := http.NewRequest("POST", os.Getenv("APPSYNC_URL"), bytes.NewReader(b))
if err != nil {
    fmt.Println(err)
}
req.Header.Set("Content-Type", "application/json")

// get AWS credentials
config := aws.Config{
    Region: aws.String(os.Getenv("AWS_REGION")),
}
sess := session.Must(session.NewSession(&config))

// sign the request with SigV4; the service name for AppSync is "appsync"
signer := v4.NewSigner(sess.Config.Credentials)
if _, err := signer.Sign(req, bytes.NewReader(b), "appsync", "ap-southeast-1", time.Now()); err != nil {
    fmt.Println(err)
}

// fire!
response, err := client.Do(req)
if err != nil {
    fmt.Println(err)
}
defer response.Body.Close()

// print the response
buf := new(bytes.Buffer)
buf.ReadFrom(response.Body)
fmt.Println(buf.String())
The problem is that that API/library is designed to help you create and update AppSync instances (it is the control-plane API).
If you want to actually invoke them, you need to POST to the GraphQL endpoint.
The easiest way to test is to sign in to the AWS AppSync console, press the 'Queries' button in the sidebar, and enter and run your mutation.
I'm not great with Go, but from what I can see there are GraphQL client libraries for Go (e.g. https://godoc.org/github.com/machinebox/graphql).
If you are using IAM, you'll need to sign your request with a v4 signature (see this article for details: https://docs.aws.amazon.com/general/latest/gr/signing_aws_api_requests.html)

How to implement aws ses SendRawEmail with attachment in golang

I need to implement Amazon SES SendRawEmail with an attachment in Go.
I tried the following code:
session, err := session.NewSession()
svc := ses.New(session, &aws.Config{Region: aws.String("us-west-2")})
source := aws.String("XXX <xxx@xxx.com>")
destinations := []*string{aws.String("xxx <xxx@xxx.com>")}
message := ses.RawMessage{Data: []byte(` From: xxx <xxx@xxx.com>\nTo: xxx <xxx@xxx.com>\nSubject: Test email (contains an attachment)\nMIME-Version: 1.0\nContent-type: Multipart/Mixed; boundary="NextPart"\n\n--NextPart\nContent-Type: text/plain\n\nThis is the message body.\n\n--NextPart\nContent-Type: text/plain;\nContent-Disposition: attachment; filename="sample.txt"\n\nThis is the text in the attachment.\n\n--NextPart--`)}
input := ses.SendRawEmailInput{Source: source, Destinations: destinations, RawMessage: &message}
output, err := svc.SendRawEmail(&input)
But in the mail I receive, the content I put in the message shows up as the body instead of as an attachment. What exactly is wrong?
Refer to the AWS example for sending raw email with an attachment.
Suggestion: the easiest way is to compose the email, get it as bytes, and send those to SES as in the referenced example.
Use the library gopkg.in/gomail.v2 to compose your message with the attachment, then call its WriteTo method:
var emailRaw bytes.Buffer
emailMessage.WriteTo(&emailRaw)

// when creating the RawMessage instance
RawMessage: &ses.RawMessage{
    Data: emailRaw.Bytes(),
},
Good luck!
EDIT (for the comment): compose the email:
msg := gomail.NewMessage()
msg.SetHeader("From", "alex@example.com")
msg.SetHeader("To", "bob@example.com", "cora@example.com")
msg.SetHeader("Subject", "Hello!")
msg.SetBody("text/html", "Hello <b>Bob</b> and <i>Cora</i>!")
msg.Attach("/home/Alex/lolcat.jpg")

var emailRaw bytes.Buffer
msg.WriteTo(&emailRaw)

message := ses.RawMessage{Data: emailRaw.Bytes()}
// The rest is the same as in the question.
If you're trying to attach a file from bytes:
msg.Attach("report.pdf", gomail.SetCopyFunc(func(w io.Writer) error {
    _, err := w.Write(reportData)
    return err
}))

Saving file to S3 using aws-sdk-go

I'm having a bit of trouble saving a file in Go with the AWS S3 SDK (https://github.com/awslabs/aws-sdk-go).
This is what I have:
import (
    "bytes"
    "fmt"

    "github.com/awslabs/aws-sdk-go/aws"
    "github.com/awslabs/aws-sdk-go/aws/awsutil"
    "github.com/awslabs/aws-sdk-go/service/s3"
)

func main() {
    cred := aws.DefaultChainCredentials
    cred.Get() // I'm using environment-variable credentials and yes, I checked that they are set

    svc := s3.New(&aws.Config{Region: "us-west-2", Credentials: cred, LogLevel: 1})

    params := &s3.PutObjectInput{
        Bucket: aws.String("my-bucket-123"),
        Key:    aws.String("test/123/"),
        Body:   bytes.NewReader([]byte("testing!")),
    }
    resp, err := svc.PutObject(params)
    if err != nil {
        fmt.Println(err)
    }
    fmt.Printf("response %s", awsutil.StringValue(resp))
}
I keep receiving a 301 Moved Permanently response.
Edit: I created the bucket manually.
Edit #2: Example response:
---[ RESPONSE ]--------------------------------------
HTTP/1.1 301 Moved Permanently
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Tue, 05 May 2015 18:42:03 GMT
Server: AmazonS3
POST sign is http as well.
According to Amazon:
Amazon S3 supports virtual-hosted-style and path-style access in all regions. The path-style syntax, however, requires that you use the region-specific endpoint when attempting to access a bucket. For example, if you have a bucket called mybucket that resides in the EU, you want to use path-style syntax, and the object is named puppy.jpg, the correct URI is http://s3-eu-west-1.amazonaws.com/mybucket/puppy.jpg. You will receive a "PermanentRedirect" error, an HTTP response code 301, and a message indicating what the correct URI is for your resource if you try to access a bucket outside the US Standard region with path-style syntax that uses either of the following:
http://s3.amazonaws.com
An endpoint for a region different from the one where the bucket resides, for example, http://s3-eu-west-1.amazonaws.com for a bucket that was created in the US West (Northern California) region
I think the problem is that you are trying to access a bucket in the wrong region. Your request is going here:
https://my-bucket-123.s3-us-west-2.amazonaws.com/test/123
So make sure that my-bucket-123 is actually in us-west-2. (I tried this with my own bucket and it worked fine.)
I also verified that it uses HTTPS by wrapping the calls (the SDK's log message is just wrong):
type LogReadCloser struct {
    io.ReadCloser
}

func (lr *LogReadCloser) Read(p []byte) (int, error) {
    n, err := lr.ReadCloser.Read(p)
    log.Println(string(p[:n]))
    return n, err
}

type LogRoundTripper struct {
    http.RoundTripper
}

func (lrt *LogRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
    log.Println("REQUEST", req)
    res, err := lrt.RoundTripper.RoundTrip(req)
    log.Println("RESPONSE", res, err)
    if err != nil {
        return res, err
    }
    res.Body = &LogReadCloser{res.Body}
    return res, err
}
And then:
svc := s3.New(&aws.Config{
    Region:      "us-west-2",
    Credentials: cred,
    LogLevel:    0,
    HTTPClient:  &http.Client{Transport: &LogRoundTripper{http.DefaultTransport}},
})
I think you're better off using the S3 Uploader. Here's an example from my code. It's a web app using the Gin framework: I take a file from a web form, upload it to S3, and keep the URL so other pages can display the image:
// Create an S3 uploader
uploader := s3manager.NewUploader(sess)

// Upload
result, err := uploader.Upload(&s3manager.UploadInput{
    Bucket: aws.String(bucket),
    Key:    aws.String(fileHeader.Filename),
    Body:   f,
})
if err != nil {
    c.HTML(http.StatusBadRequest, "create-project.html", gin.H{
        "ErrorTitle":   "S3 Upload Failed",
        "ErrorMessage": err.Error()})
} else {
    // Success; result.Location is the URL of the uploaded object.
    fmt.Println("Successfully uploaded to", result.Location)
}
You can find an entire example here, explained step by step:
https://www.matscloud.com/docs/cloud-sdk/go-and-s3/