Amazon S3 upload works with one set of credentials but not the other

I have two S3 buckets under two different accounts. The permissions and CORS settings of both buckets are the same.
The regions of the two buckets are as follows (the first one works):
Region: Asia Pacific (Singapore) (ap-southeast-1) works
Region: US East (Ohio) (us-east-2) does not work
I created an upload script with Node.js and supplied the region plus the following:
Key: __XXXX__,
secret: __XXXXX____,
bucket: _____XXXX__
'x-amz-acl': 'public-read',
ACL: 'public-read'
The code works fine with the first account, and the uploaded files are publicly accessible. But with the second account (region us-east-2), the script runs successfully and returns a URL too, yet when I look in the bucket there is no upload, and the URL says permission denied, which means the resource is not available. The strange things are:
Why is a URL returned if the file was not uploaded to the bucket?
Why does the same code not work for the other account?
I tried the AWS documentation too, but it seems like it's not written for humans like me. Help will be highly appreciated.

the script runs successfully and returns a URL too, but when I look in the bucket there is no upload
If you really can see no resources in the bucket, then either the upload really failed (I've seen too many scripts that just ignore any error response) or the upload went to a different place than you expect. Care to share the script?
and the URL says permission denied, which means the resource is not available
Unfortunately, that's something you have to find out yourself. If the object is in the bucket, has public access, and has the correct CORS settings, then maybe the URL is not correct.
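Since the original script isn't shown, here is a minimal sketch of the kind of upload that does not swallow errors, using the AWS SDK for JavaScript v2; the bucket name, key, and file path are placeholders, not the asker's actual values:

const AWS = require('aws-sdk');
const fs = require('fs');

// Credentials are assumed to come from the environment or ~/.aws/credentials.
// The region must match the bucket's region (us-east-2 for the failing bucket).
const s3 = new AWS.S3({ region: 'us-east-2' });

s3.upload({
  Bucket: 'my-second-bucket',          // placeholder
  Key: 'uploads/example.jpg',          // placeholder
  Body: fs.createReadStream('./example.jpg'),
  ACL: 'public-read'
}, (err, data) => {
  if (err) {
    // A script that ignores err can still print a URL it computed itself,
    // even though nothing was actually stored in the bucket.
    console.error('Upload failed:', err.code, err.message);
    return;
  }
  console.log('Uploaded to', data.Location);
});

If the second account's run actually hits an error such as AccessDenied, it will now surface here instead of silently "succeeding".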

Related

Error in object URL of static website in S3 bucket: Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature

I have created a static website in an S3 bucket on AWS, with two files in the bucket: index.html and error.html. When I open index.html and click on the object URL in AWS, it gives the error below:
This XML file does not appear to have any style information associated with it. The document tree is shown below.
<Error>
<Code>InvalidArgument</Code>
<Message>Requests specifying Server Side Encryption with AWS KMS managed keys require AWS Signature Version 4.</Message>
<ArgumentName>Authorization</ArgumentName>
<ArgumentValue>null</ArgumentValue>
<RequestId>R69TKNDJTYZ8E0SW</RequestId>
<HostId>OAOZKRsA6ATOgH6jBr5jO1fS0zi+GSh4at34nLq8V/Ug8Icvuy8c6NOlCoNqqjpBcORg8bDlzJ0=</HostId>
</Error>
I have checked every possible solution but nothing works. My bucket has public access; below my bucket name it is written in red: Publicly accessible. But I still could not find what the issue is.
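The error itself points at the cause: the object is encrypted with SSE-KMS, and KMS-encrypted objects cannot be fetched anonymously; the request must be signed with Signature Version 4. As a hedged illustration (the bucket name and key below are placeholders), a SigV4-presigned URL generated with the AWS SDK for JavaScript v2 would be accepted where the bare object URL is not:

const AWS = require('aws-sdk');

// signatureVersion: 'v4' is what the error message is asking for.
const s3 = new AWS.S3({ region: 'us-east-1', signatureVersion: 'v4' });

const url = s3.getSignedUrl('getObject', {
  Bucket: 'my-static-site-bucket', // placeholder
  Key: 'index.html',
  Expires: 3600                    // link validity in seconds
});
console.log(url);

Note that a public static website is accessed anonymously, so objects meant to be served publicly generally can't carry SSE-KMS encryption.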

How to access aws s3 current bucketlist content info

I have been provided with the access and secret key for an Amazon S3 container. No more details were provided other than to drop some files into some specific folder.
I downloaded the Amazon CLI and also the Amazon SDK. So far, there seems to be no way for me to check the bucket name or list the folders where I'm supposed to drop my files. Every single command seems to require knowledge of a bucket name.
Trying to list with aws s3 ls gives me the error:
An error occurred (AccessDenied) when calling the ListBuckets operation: Access Denied
Is there a way to list the content of my current location? (I'm guessing the credentials I was given are linked directly to a bucket.) I'd like to see at least the folders where I'm supposed to drop my files, but the SDK client for the console app I'm building seems to always require a bucket name.
Was I provided incomplete info or limited rights?
Do you know the bucket name or not? If you don't, and you don't have permission for ListAllMyBuckets and GetBucketLocation on * and ListBucket on the bucket in question, then you can't get the bucket name. That's how it is supposed to work. If you know the bucket, then you can run aws s3 ls s3://bucket-name/ to get the objects in the bucket.
Note that S3 buckets don't have the concept of a "folder". It's user-interface "sugar" to make it look like folders and files. Internally, there is just the key and the object.
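To make that concrete: once the bucket name is known (and the credentials carry s3:ListBucket on it), listing a "folder" is just listing keys under a prefix. A minimal sketch with the AWS SDK for JavaScript v2; the bucket name and prefix are placeholders:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.listObjectsV2({
  Bucket: 'the-bucket-you-were-given', // placeholder
  Prefix: 'drop-zone/',                // placeholder "folder"
  Delimiter: '/'                       // makes the flat key space look hierarchical
}, (err, data) => {
  if (err) return console.error(err.code); // AccessDenied here means no s3:ListBucket
  data.CommonPrefixes.forEach(p => console.log('folder:', p.Prefix));
  data.Contents.forEach(o => console.log('object:', o.Key));
});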
Looks like it was just not possible without enhanced rights or the actual bucket name. I was able to procure both later on from the client and complete the task. Thanks for the comments.

AWS SageMaker ClientError: Data download failed: PermanentRedirect (301)

ClientError: Data download failed:PermanentRedirect (301): The bucket is in this region: us-west-1. Please use this region to retry the request
Found it on my own.
The S3 bucket I was using is in a different region. I used a different S3 bucket, and everything works well now.
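For anyone hitting the same 301: the redirect means the request was sent to the wrong region for that bucket. If you're unsure where a bucket lives, you can ask S3 directly; a small sketch with the AWS SDK for JavaScript v2 (the bucket name is a placeholder):

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.getBucketLocation({ Bucket: 'my-training-data' /* placeholder */ }, (err, data) => {
  if (err) return console.error(err);
  // An empty LocationConstraint is the legacy encoding for us-east-1.
  console.log('Bucket region:', data.LocationConstraint || 'us-east-1');
});

Then use a bucket in that region (or, as here, run the SageMaker job in the bucket's region).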

AWS CloudFront with Signed URL: 403 Access Denied

I'm configuring an environment with an Amazon S3 bucket for storage of media files and Amazon CloudFront for restricted distribution purposes.
The access to those media files needs to be private and should be done via a signed URL. So I created the S3 bucket in the South America (São Paulo) region and uploaded some test files. Then I created a CloudFront distribution with that bucket as the origin, with its Bucket Access restricted. I created a new OAI (Origin Access Identity) and also selected the option Yes, Update Bucket Policy so that it auto-configures the S3 bucket policies.
I'm only using the default Behavior and it's configured with HTTP and HTTPS viewer protocol policy and GET, HEAD allowed methods. Restrict Viewer Access (Use Signed URLs or Signed Cookies) is set and the Trusted Signer is set to Self.
Here are some images to clarify the setup (screenshots of the S3 Bucket Policy, the Distribution's Origin, and the Distribution's Behavior).
I'm getting an HTTP 403 while trying to access the signed URL generated with either awscli or cfsign.pl:
<Error>
<Code>AccessDenied</Code>
<Message>Access denied</Message>
</Error>
Is there something missing that I don't know? It looks like I did everything the docs said to do.
I received the same Access Denied error and spent the last couple of hours trying to figure out what was going on. I finally realized that the Expires parameter was set in the past, since I was using my local time instead of UTC. Make sure to set Expires in the future according to UTC.
In my case the problem was with the URL I was passing to the URL-signing code (I was using the AWS SDK for Node.js):
cloudFront.getSignedUrl({
  url: `${distributionUrl}/${encodeURI(key)}`,
  expires: Math.floor(new Date().getTime() / 1000) + 60 * 60 // one hour from now, in UTC epoch seconds
})
Note the encodeURI; I was not doing that. The resulting signed URL would still have URI components encoded, BUT it would have an invalid signature, thus causing the 403 error.
EDIT: ...and you have to wrap it in url.format(), like this:
const url = require('url');

cloudFront.getSignedUrl({
  url: url.format(`${distributionUrl}/${encodeURI(key)}`),
  expires: Math.floor(new Date().getTime() / 1000) + 60 * 60
})
I guess they should be doing that in the SDK.
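For context, here is roughly what the full signing setup looks like with the AWS SDK for JavaScript v2; the key-pair ID, private key file, distribution domain, and object key below are all placeholders:

const AWS = require('aws-sdk');
const fs = require('fs');
const url = require('url');

// The key pair must be an *active* CloudFront key pair; signing with a
// deleted key pair also yields 403 Access Denied, as described below.
const cloudFront = new AWS.CloudFront.Signer(
  'APKAEXAMPLEKEYID',                             // placeholder key-pair ID
  fs.readFileSync('./pk-cloudfront.pem', 'utf8')  // placeholder private key file
);

const distributionUrl = 'https://d1234example.cloudfront.net'; // placeholder
const key = 'media/my video.mp4';                              // placeholder

const signedUrl = cloudFront.getSignedUrl({
  url: url.format(`${distributionUrl}/${encodeURI(key)}`),
  expires: Math.floor(Date.now() / 1000) + 60 * 60 // one hour from now, UTC epoch seconds
});
console.log(signedUrl);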
After recreating both the Amazon S3 bucket and the Amazon CloudFront distribution, I was still experiencing the issue. After a session with my rubber duck, I found out that the private key file I was using belonged to a deleted CloudFront key pair.
Now that I'm using the correct key to sign things, everything is working fine. That doesn't explain why the first bucket and distribution weren't working, because in that specific case I was using the same set of configurations and the right private key file.
I also encountered the same issue. You probably have to re-generate the CloudFront key pair.

What URL should I use for Amazon CloudFront content?

This has also been posted in the AWS forum. But it's languishing a bit (and I'm in a hurry to solve this problem).
This question is with regard to a 'download' distribution, not 'streaming'.
I've recently signed up and created an Amazon S3 bucket and then created an Amazon CloudFront (CF) distribution out of that bucket. Here's the relevant info:
Bucket Name: stella_media
Folder In Bucket: visia
Which results in a working URL (with public-read access) like this: http://s3.amazonaws.com/stella_media/visia/720_125M_Zero_Dark_Thirty.mp4
So you'll see that if you use the above URL (and you're not using Firefox) it loads the MP4 video. All the media in my bucket is set to public-read.
My Problem Is With Access To The Same Content Via CloudFront:
And my CF distribution has the following properties:
Delivery Method: download
Distribution Status: deployed
Price Class: US & Europe
State: Enabled
Domain Name: d2322fq9z81lph.cloudfront.net
However, when I use the URL provided to me when I set up my CF distribution on that bucket, I get a "NoSuchKey" error: http://d2322fq9z81lph.cloudfront.net/stella_media/visia/720_125M_Zero_Dark_Thirty.mp4
<Error>
<Code>NoSuchKey</Code>
<Message>The specified key does not exist.</Message>
<Key>stella_media/visia/720_125M_Zero_Dark_Thirty.mp4</Key>
<RequestId>5E2FA36884444757</RequestId>
<HostId>
HiphTLuv4P2tiJemBRNvIGsq8DRbSCMocdAvm4oto9NVnnKNHuDZWuFHz+xyCt6B
</HostId>
</Error>
So ... exactly what URL am I supposed to be using to point to my video, or is there some sort of permissions setting that I've overlooked to make the content in my CF distribution public?
Thanks for any help.
For the benefit of any others who may come across this, I've figured it out.
Apparently CloudFront URLs DO NOT INCLUDE THE BUCKET NAME. So it plays out like so:
S3 URL
http://s3.amazonaws.com/stella_media/visia/720_125M_Zero_Dark_Thirty.mp4
CloudFront URL
http://d2322fq9z81lph.cloudfront.net/visia/720_125M_Zero_Dark_Thirty.mp4
Hopefully that saves anyone else from going out of their mind trying to figure out what's wrong.
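If it helps to see the mapping programmatically, here is a trivial sketch using the bucket and distribution domain from the example above:

// Path-style S3 URL:  http://s3.amazonaws.com/<bucket>/<key>
// CloudFront URL:     http://<distribution-domain>/<key>   (no bucket name)
function cloudFrontUrl(s3PathStyleUrl, bucket, distributionDomain) {
  const key = new URL(s3PathStyleUrl).pathname.replace(`/${bucket}/`, '');
  return `http://${distributionDomain}/${key}`;
}

console.log(cloudFrontUrl(
  'http://s3.amazonaws.com/stella_media/visia/720_125M_Zero_Dark_Thirty.mp4',
  'stella_media',
  'd2322fq9z81lph.cloudfront.net'
));
// -> http://d2322fq9z81lph.cloudfront.net/visia/720_125M_Zero_Dark_Thirty.mp4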