AWS presigned URL with ACL public-read: invalid signature

I have a private bucket. I want to create a presigned URL that allows a user to upload a file within the time limit and sets the ACL to public-read.
When creating a PutObjectRequest like below, it works fine and I can PUT the file with no problem. When I add ACL: aws.String("public-read"), I get a 'signature doesn't match' error and the PUT fails. Here is a sample of the URL the Go SDK is generating:
https://<MY-BUCKET>.s3.eu-west-2.amazonaws.com/<MY-KEY>?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=<AWS_ACCESS_KEY>/20170505/eu-west-2/s3/aws4_request&X-Amz-Date=20170505T793528Z&X-Amz-Expires=900&X-Amz-SignedHeaders=host;x-amz-acl&X-Amz-Signature=2584062aaa76545665bfed7204fcf0dfe233a45016f698e7e8a11c34a5a7921e
I have tried with the root AWS user and with a normal user.
I have tried with and without a bucket policy, and with and without an IAM policy granting full S3 access; basically all combinations. Any time I add the ACL field, the signature error appears.
I am not sure if it's related to the Go SDK or to the AWS service. Can someone advise on what I should do?
svc := s3.New(session.New(&aws.Config{Region: aws.String("eu-west-2")}))
req, _ := svc.PutObjectRequest(&s3.PutObjectInput{
    ACL:    aws.String("public-read"),
    Bucket: aws.String("MY BUCKET NAME"),
    Key:    aws.String("MY KEY"),
})
str, err := req.Presign(15 * time.Minute)

It was an error on the AWS service end; the URL was not being signed correctly.
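For anyone else hitting the same signature error: because x-amz-acl is listed in X-Amz-SignedHeaders, the request that actually uploads the file has to send that header with exactly the value that was signed, otherwise S3 rejects the signature. A minimal client-side sketch, assuming the presigned URL comes from the code above (the helper name and payload are made up for illustration):

package main

import (
    "bytes"
    "fmt"
    "log"
    "net/http"
)

// uploadWithPresignedURL PUTs data to a presigned URL that was generated
// with ACL public-read, so the matching x-amz-acl header must be sent.
func uploadWithPresignedURL(presignedURL string, data []byte) error {
    req, err := http.NewRequest(http.MethodPut, presignedURL, bytes.NewReader(data))
    if err != nil {
        return err
    }
    // Must match the ACL value that was signed into the URL.
    req.Header.Set("x-amz-acl", "public-read")

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    fmt.Println("upload status:", resp.Status)
    return nil
}

func main() {
    if err := uploadWithPresignedURL("<PRESIGNED-URL>", []byte("hello")); err != nil {
        log.Fatal(err)
    }
}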

Related

Access Denied when listing objects using the Amazon S3 client

From an AWS Lambda I want to list objects inside an S3 bucket. When testing the function locally, I'm getting an Access Denied error.
public async Task<string> FunctionHandler(string input, ILambdaContext context)
{
    var secretKey = "***";
    var uid = "***";
    var bucketName = "my-bucket-name";

    AmazonS3Client s3Client = new AmazonS3Client(uid, secretKey);
    ListObjectsRequest listObjectsRequest = new ListObjectsRequest();
    listObjectsRequest.BucketName = bucketName;
    var listObjectsResponse = await s3Client.ListObjectsAsync(listObjectsRequest);
    // exception is thrown
    ...
}
Amazon.S3.AmazonS3Exception: Access Denied
  at Amazon.Runtime.Internal.HttpErrorResponseExceptionHandler.HandleExceptionStream(IRequestContext requestContext, IWebResponseData httpErrorResponse, HttpErrorResponseException exception, Stream responseStream)
  at Amazon.Runtime.Internal.HttpErrorResponseExceptionHandler.HandleExceptionAsync(IExecutionContext executionContext, HttpErrorResponseException exception) ....
The bucket I'm using in this example, "my-bucket-name", is publicly accessible and it has the permissions shown in the screenshot.
Any idea?
First of all, IAM policies are the preferred way to control access to S3 buckets.
For S3 permissions it is always very important to distinguish between bucket-level actions and object-level actions, and also who is calling that action. In your code I can see that you use ListObjects, which is a bucket-level action, so that is OK.
https://docs.aws.amazon.com/AmazonS3/latest/API/API_ListObjectsV2.html
What did catch my eye is the following:
var secretKey = "***";
var uid = "***";
var bucketName = "my-bucket-name";
AmazonS3Client s3Client = new AmazonS3Client(uid, secretKey);
That means you are accessing the bucket as an authenticated AWS identity rather than anonymously. But even in your screenshot you can see that "Authenticated users group (anyone with an AWS account)" does not have any permissions assigned.
If you already have a role or user, I would suggest giving the read-bucket permissions to that particular role (user) via an IAM policy. But adding a read ACL for authenticated AWS users should help as well.

S3 on EC2 with IAM: Error NoCredentialProviders: no valid providers in chain. Deprecated

My application uses S3 and runs on EC2. An IAM role is configured on the instance, so authentication happens keyless (without an access key and secret key).
I'm able to upload and download files using the AWS CLI. However, when I try to perform the download operation using aws-sdk-go, I get the error below:
AccessDenied: Access Denied
status code: 403, request id: F945BDB5410E1A00, host id: m74jJ8z/AEzdkaJkWKdIqPEwPIYPZfWnLLfa5UpEwHwaBcXOuXTPY1aw/u/5HGralKg+ewAWEJA=
I followed the official guide from https://docs.aws.amazon.com/sdk-for-go/api/aws/credentials/ec2rolecreds/ and from this issue https://github.com/aws/aws-sdk-go/issues/430 but got the error above.
Below is my code:
s3UploadPath = config.GetString("upload assets to s3.bucket")

s3Config := aws.NewConfig()
s3Config.CredentialsChainVerboseErrors = aws.Bool(true)

session, err := session.NewSession(s3Config)
if err != nil {
    Logger.Fatal("Error initializing s3 uploader. " + err.Error())
    os.Exit(0)
}

// the upload code
uploader = s3manager.NewUploader(session)
res, err := uploader.Upload(&s3manager.UploadInput{
    Bucket: aws.String(s3UploadPath),
    Key:    aws.String(filename),
    Body:   f,
})
if err != nil {
    log.Fatal("error on upload. " + err.Error())
}
// then continue with the download code
Attached is a screenshot showing that the download and upload operations succeed through the AWS CLI.
Am I doing it wrong?
You don't need to specify credentials when using an IAM role on an EC2 instance.
I see you are getting Access Denied, which means your Go program is able to pick up the EC2 instance profile credentials, but it is probably getting this error due to a lack of permissions.
Reading your code, it seems you want to write an object to S3. Can you make sure you have given s3:Get*, s3:List*, s3:PutObject and s3:PutObjectAcl to your IAM role, and that there is no explicit Deny in the S3 bucket policy?
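If you want to confirm which identity the Go credential chain actually picked up on the instance before touching policies, a quick STS call prints it. A minimal aws-sdk-go sketch (the region is a placeholder):

package main

import (
    "fmt"
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/sts"
)

func main() {
    sess, err := session.NewSession(&aws.Config{Region: aws.String("eu-west-1")})
    if err != nil {
        log.Fatal(err)
    }

    // GetCallerIdentity requires no IAM permissions and shows which
    // user/role the SDK credential chain resolved (it should be the
    // EC2 instance role here).
    out, err := sts.New(sess).GetCallerIdentity(&sts.GetCallerIdentityInput{})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println("account:", aws.StringValue(out.Account))
    fmt.Println("caller:", aws.StringValue(out.Arn))
}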
I managed to solve this error by doing two things.
The first one is using stscreds.NewCredentials(session, roleArn) as the credentials during session creation.
s3Config := aws.NewConfig()
s3Config.CredentialsChainVerboseErrors = aws.Bool(true)
s3Config.WithLogLevel(aws.LogDebugWithHTTPBody)
s3Config.Region = aws.String(region)
s3Config.WithHTTPClient(&http.Client{
    Transport: &http.Transport{
        Proxy: http.ProxyFromEnvironment,
    },
    Timeout: 10 * time.Second,
})

sess, err := session.NewSession(s3Config)
if err != nil {
    log.Fatal("Error initializing session. " + err.Error())
}

sess.Config.Credentials = stscreds.NewCredentials(sess, arn)
_, err = sess.Config.Credentials.Get()
if err != nil {
    log.Fatal("Error getting role. " + err.Error())
}
The second thing is defining the NO_PROXY environment variable with the value 169.254.169.254. That particular IP is the link-local address used for the EC2 instance metadata service.
Since my application uses a proxy to communicate with the S3 server, I need to exclude that IP from it.
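For completeness, a sketch of how that exclusion can be done from the program itself; this assumes it runs before the process makes its first HTTP request, since net/http reads the proxy environment only once (exporting NO_PROXY in the service's environment works just as well):

package main

import (
    "log"
    "os"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
)

func main() {
    // Keep requests to the EC2 instance metadata endpoint off the proxy
    // so the credential chain can reach it.
    os.Setenv("NO_PROXY", "169.254.169.254")

    s3Config := aws.NewConfig()
    s3Config.CredentialsChainVerboseErrors = aws.Bool(true)

    sess, err := session.NewSession(s3Config)
    if err != nil {
        log.Fatal("Error initializing session. " + err.Error())
    }
    _ = sess // continue with the uploader/downloader as in the snippets above
}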

Digital Ocean Spaces | Add expire date for files with s3cmd

I am trying to add expiry days to a file and a bucket, but I have this problem:
sudo s3cmd expire s3://<my-bucket>/ --expiry-days=3 expiry-prefix=backup
ERROR: Error parsing xml: syntax error: line 1, column 0
ERROR: not found
ERROR: S3 error: 404 (Not Found)
and this
sudo s3cmd expire s3://<my-bucket>/<folder>/<file> --expiry-day=3
ERROR: Parameter problem: Expecting S3 URI with just the bucket name set instead of 's3:////'
How do I add expiry days in DO Spaces for a folder or file using s3cmd?
Consider configuring Bucket's Lifecycle Rules
Lifecycle rules can be used to perform different actions on objects in a Space over the course of their "life." For example, a Space may be configured so that objects in it expire and are automatically deleted after a certain length of time.
In order to configure new lifecycle rules, send a PUT request to ${BUCKET}.${REGION}.digitaloceanspaces.com/?lifecycle
The body of the request should include an XML element named LifecycleConfiguration containing a list of Rule objects.
https://developers.digitalocean.com/documentation/spaces/#get-bucket-lifecycle
The expire option is not implemented on Digital Ocean Spaces
Thanks to Vitalii's answer for pointing to the API.
However, the API isn't really easy to use, so I've done it via a NodeJS script.
First of all, generate your API keys here: https://cloud.digitalocean.com/account/api/tokens
And put them in the ~/.aws/credentials file (according to the docs):
[default]
aws_access_key_id=your_access_key
aws_secret_access_key=your_secret_key
Now create an empty NodeJS project, run npm install aws-sdk, and use the following script:
const aws = require('aws-sdk');

// Replace with your region endpoint, nyc1.digitaloceanspaces.com for example
const spacesEndpoint = new aws.Endpoint('fra1.digitaloceanspaces.com');

// Replace with your bucket name
const bucketName = 'myHeckingBucket';

const s3 = new aws.S3({endpoint: spacesEndpoint});

s3.putBucketLifecycleConfiguration({
    Bucket: bucketName,
    LifecycleConfiguration: {
        Rules: [{
            ID: "autodelete_rule",
            Expiration: {Days: 30},
            Status: "Enabled",
            Prefix: '/', // Unlike AWS in DO this parameter is required
        }]
    }
}, function (error, data) {
    if (error)
        console.error(error);
    else
        console.log("Successfully modified bucket lifecycle!");
});

S3 objects not expiring using the Golang SDK

Using the AWS Golang SDK, I'm attempting to set an expiration date for some of the objects that I'm uploading. I'm pretty sure the header is being set correctly; however, when logging into S3 and viewing the properties of the new object, it doesn't appear to have an expiration date.
Below is a snippet of how I'm uploading objects
exp := time.Now()
exp = exp.Add(time.Hour * 24)

svc := s3.New(session.New(config))
_, err = svc.PutObject(&s3.PutObjectInput{
    Bucket:        aws.String("MyBucketName"),
    Key:           aws.String("201700689.zip"),
    Body:          fileBytes,
    ContentLength: aws.Int64(size),
    ContentType:   aws.String(fileType),
    Expires:       &exp,
})
And here is what I see when logging into the site
Any idea what is going on here? Thanks
Well, Expires is just the wrong field:
// The date and time at which the object is no longer cacheable.
What you want is Object Expiration, which is set as a bucket rule rather than per object.
Basically, you add a Lifecycle rule (on the bucket properties) specifying:
Each rule has the following attributes:
Prefix – Initial part of the key name, (e.g. logs/), or the entire key name. Any object in the bucket with a matching prefix will be subject to this expiration rule. An empty prefix will match all objects in the bucket.
Status – Either Enabled or Disabled. You can choose to enable rules from time to time to perform deletion or garbage collection on your buckets, and leave the rules disabled at other times.
Expiration – Specifies an expiration period for the objects that are subject to the rule, as a number of days from the object’s creation date.
Id – Optional, gives a name to the rule.
This rule will then be evaluated daily and any expired objects will be removed.
See https://aws.amazon.com/blogs/aws/amazon-s3-object-expiration/ for a more in-depth explanation.
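If you would rather create such a rule from code than from the console, the same attributes map onto the PutBucketLifecycleConfiguration call. A minimal aws-sdk-go sketch (the bucket name, region, rule ID, and prefix below are placeholders, not taken from the question):

package main

import (
    "log"

    "github.com/aws/aws-sdk-go/aws"
    "github.com/aws/aws-sdk-go/aws/session"
    "github.com/aws/aws-sdk-go/service/s3"
)

func main() {
    sess := session.Must(session.NewSession(&aws.Config{Region: aws.String("us-east-1")}))
    svc := s3.New(sess)

    // One rule: objects under uploads/ expire one day after creation.
    _, err := svc.PutBucketLifecycleConfiguration(&s3.PutBucketLifecycleConfigurationInput{
        Bucket: aws.String("MyBucketName"),
        LifecycleConfiguration: &s3.BucketLifecycleConfiguration{
            Rules: []*s3.LifecycleRule{
                {
                    ID:         aws.String("expire-uploads"),
                    Status:     aws.String("Enabled"),
                    Filter:     &s3.LifecycleRuleFilter{Prefix: aws.String("uploads/")},
                    Expiration: &s3.LifecycleExpiration{Days: aws.Int64(1)},
                },
            },
        },
    })
    if err != nil {
        log.Fatal(err)
    }
}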
One way to expire objects in S3 using the Golang SDK is to tag your upload with something like
Tagging: aws.String("temp=true")
Then go to the S3 bucket management console and set a lifecycle rule targeting that specific tag.
You can configure the time frame for expiring the object during the creation of the lifecycle rule.
You need to set s3.PresignOptions.Expires, like this:
import (
    "context"
    "time"

    "github.com/aws/aws-sdk-go-v2/aws"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func PreSignPutObject(cfg aws.Config, bucket, objectKey string) (string, error) {
    client := s3.NewFromConfig(cfg)
    psClient := s3.NewPresignClient(client)

    input := &s3.PutObjectInput{
        Bucket: &bucket,
        Key:    &objectKey,
    }

    resp, err := psClient.PresignPutObject(context.Background(), input, func(options *s3.PresignOptions) {
        options.Expires = 3600 * time.Second
    })
    if err != nil {
        return "", err
    }
    return resp.URL, nil
}

How do I set permissions so an IAM user can upload to an Amazon S3 bucket in C#?

I've created a user in IAM, and attached 2 managed policies: AmazonS3FullAccess and AdministratorAccess. I would like to upload files to an S3 bucket called "pscfront".
I am using the following code to do the upload:
AWSCredentials credentials = new BasicAWSCredentials(Constants.AmazonWebServices.AccessKey, Constants.AmazonWebServices.SecretKey);

using (var client = new AmazonS3Client(credentials, RegionEndpoint.USEast1))
{
    var loc = client.GetBucketLocation("s3.amazonaws.com/pscfront");

    var tu = new TransferUtility(client);
    tu.Upload(filename, Constants.AmazonWebServices.BucketName, keyName);
}
This fails with the exception "AccessDenied" (inner exception "The remote server returned an error: (403) Forbidden.") at the call to GetBucketLocation, or at the tu.Upload call if I comment that line out.
Any idea what gives?
smdh
Nothing wrong with the permissions -- I was setting the bucket name incorrectly. You just pass the plain bucket name, "pscfront" in this case (for example, client.GetBucketLocation("pscfront")).