S3 TVM Issue – getting access denied

I'm trying to let my iOS app upload to S3 using credentials it gets from a slightly modified anonymous token vending machine.
The policy statement my token vending machine returns is:
{"Statement":
[
{"Effect":"Allow",
"Action":"s3:*",
"Resource":"arn:aws:s3:::my-bucket-test",
"Condition": {
"StringLike": {
"s3:prefix": "66-*"
}
}
},
{"Effect":"Deny","Action":"sdb:*","Resource":["arn:aws:sdb:us-east-1:MYACCOUNTIDHERE:domain/__USERS_DOMAIN__","arn:aws:sdb:us-east-1:MYACCOUNTIDHERE:domain/TokenVendingMachine_DEVICES"]},
{"Effect":"Deny","Action":"iam:*","Resource":"*"}
]
}
The object I'm trying to put is in that same bucket, with the key 66-3315F11E-84FA-417F-9C32-AC4BE364AD99.natural.mp4.
As far as I understand this should work, but the upload fails with an access denied error. Is there anything wrong with my policy statement?

The s3:prefix condition key doesn't apply to object operations; for those you restrict access through the resource ARN instead. I'd also recommend restricting the S3 actions. Here is a recommended policy, based on the one from an article on an S3 Personal File Store. Feel free to remove the ListBucket statement if it doesn't make sense for your app.
{"Statement":
[
{"Effect":"Allow",
"Action":["s3:PutObject","s3:GetObject","s3:DeleteObject"],
"Resource":"arn:aws:s3:::my-bucket-test/66-*",
},
{"Effect":"Allow",
"Action":"s3:ListBucket",
"Resource":"arn:aws:s3:::my-bucket-test",
"Condition":{
"StringLike":{
"s3:prefix":"66-*"
}
}
},
{"Effect":"Deny","Action":"sdb:*","Resource":["arn:aws:sdb:us-east-1:MYACCOUNTIDHERE:domain/__USERS_DOMAIN__","arn:aws:sdb:us-east-1:MYACCOUNTIDHERE:domain/TokenVendingMachine_DEVICES"]},
{"Effect":"Deny","Action":"iam:*","Resource":"*"}
]
}
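For completeness, here is a minimal sketch (not from the original answer) of how the temporary credentials vended by the TVM might be used to upload an object under the allowed prefix; it assumes Python with boto3, and the credential field names, bucket, and file path are placeholders:

import boto3

# Temporary credentials as returned by the token vending machine (placeholder values).
creds = {
    "accessKey": "ASIA...",
    "secretKey": "...",
    "securityToken": "...",
}

# Build an S3 client from the vended session credentials.
s3 = boto3.client(
    "s3",
    aws_access_key_id=creds["accessKey"],
    aws_secret_access_key=creds["secretKey"],
    aws_session_token=creds["securityToken"],
)

# The key must start with "66-" to match the Allow statement above.
with open("video.mp4", "rb") as body:
    s3.put_object(
        Bucket="my-bucket-test",
        Key="66-3315F11E-84FA-417F-9C32-AC4BE364AD99.natural.mp4",
        Body=body,
    )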

Related

AWS S3 Bucket Policy for CORS

I am trying to figure out how to follow these instructions to set up an S3 bucket on AWS. I have been trying most of the year but still can't make sense of the AWS documentation. I think this github repo readme may have been prepared when the AWS S3 interface looked different (there is no CORS setting in the S3 bucket permissions form now).
I asked this question earlier this year, and I have tried using the upvoted answer to create a bucket policy, exactly as shown in that answer:
[
  {
    "AllowedHeaders": [
      "*"
    ],
    "AllowedMethods": [
      "POST",
      "GET",
      "PUT",
      "DELETE",
      "HEAD"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": []
  }
]
When I try this, I get an error that says:
Ln 1, Col 0: Data Type Mismatch: The text does not match the expected JSON data type Object. Learn more
The Learn More link goes to a page that suggests I can resolve this error by updating the text to use the supported data type. I do not know what that means in this context, and I don't know how to find out which condition keys require which type of data.
Resolving the error
Update the text to use the supported data type.
For example, the Version global condition key requires a String data
type. If you provide a date or an integer, the data type won't match.
Can anyone help with current instructions for how to create the CORS permissions that are consistent with the spirit of the instructions in the github readme for the repo?
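As an aside not taken from the original thread: the JSON above is a CORS configuration rather than a bucket policy, which is why the policy editor rejects it; in the current console it belongs under the bucket's Permissions > Cross-origin resource sharing (CORS) section. A minimal sketch of applying the same rules programmatically, assuming Python with boto3 and a placeholder bucket name:

import boto3

cors_rules = [
    {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["POST", "GET", "PUT", "DELETE", "HEAD"],
        "AllowedOrigins": ["*"],
        "ExposeHeaders": [],
    }
]

s3 = boto3.client("s3")

# Writes the CORS configuration (not the bucket policy) for the bucket.
s3.put_bucket_cors(
    Bucket="my-example-bucket",  # placeholder bucket name
    CORSConfiguration={"CORSRules": cors_rules},
)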

Google Cloud : There were concurrent policy changes. Please retry the whole read-modify-write with exponential backoff

POST METHOD URL
https://cloudresourcemanager.googleapis.com/v1/projects/project-name:setIamPolicy
Request:
{
  "resource": "projects/project-name",
  "policy": {
    "bindings": [
      {
        "role": "roles/resourcemanager.organizationAdmin",
        "members": [
          "user:test12345678@domain.com"
        ]
      }
    ],
    "etag": "BwWWja0YfJA=",
    "version": 3
  }
}
Response:
{
  "error": {
    "code": 409,
    "message": "There were concurrent policy changes. Please retry the whole read-modify-write with exponential backoff.",
    "status": "ABORTED"
  }
}
The documentation recommends using the read-modify-write pattern to update a policy for a resource:
1. Reading the current policy by calling getIamPolicy().
2. Editing the returned policy, either by using a text editor or programmatically, to add or remove any desired members and their role grants.
3. Writing the updated policy by calling setIamPolicy().
Looks like in your case the policy you're trying to set and the policy that is currently active on the resource have diverged. One of the ways this can happen is if you did:
1. getIamPolicy() > policy.json
2. addIamPolicyBinding() or removeIamPolicyBinding()
3. setIamPolicy() policy.json
The policy on the resource after step 2 is out of sync with what step 3 is trying to set, so the call throws an exception. To confirm, compare the etag field in the policy you're trying to set with the etag currently on the resource; there should be a mismatch.
This means that more than one change was performed at the same time. You should try to perform only one policy change request at a time.
Implementing exponential backoff should help with this error: wait on the order of 2^n seconds plus a random number of milliseconds before each retry, increasing n with every attempt, then retry the request.
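A minimal sketch (not from the original answers) of that read-modify-write loop with exponential backoff, assuming Python with google-api-python-client and Application Default Credentials; the project ID, role, and member are placeholders:

import random
import time

from googleapiclient import discovery
from googleapiclient.errors import HttpError

PROJECT = "project-name"  # placeholder project ID
crm = discovery.build("cloudresourcemanager", "v1")

def add_binding_with_retry(role, member, max_attempts=5):
    for attempt in range(max_attempts):
        # 1. Read the current policy (includes the current etag).
        policy = crm.projects().getIamPolicy(resource=PROJECT, body={}).execute()

        # 2. Modify the freshly-read policy in memory.
        policy.setdefault("bindings", []).append({"role": role, "members": [member]})

        try:
            # 3. Write it back; the etag from step 1 guards against concurrent edits.
            return crm.projects().setIamPolicy(
                resource=PROJECT, body={"policy": policy}
            ).execute()
        except HttpError as err:
            if err.resp.status != 409:
                raise
            # Concurrent change detected: back off exponentially and re-read.
            time.sleep(2 ** attempt + random.random())
    raise RuntimeError("Gave up after repeated concurrent policy changes")

add_binding_with_retry(
    "roles/resourcemanager.organizationAdmin", "user:test12345678@domain.com"
)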
I was able to fix this issue by removing
"etag": "BwWWja0YfJA=",
"version": 3
from the template when using the gcloud projects set-iam-policy command. It will ask you to confirm overwriting the existing policy before committing the changes.

Cloud Recording issue

I am trying the Agora Cloud Recording API, recording into an AWS S3 bucket. The calls appear to go through fine. When I call stop record, I get a success message. I have reproduced part of it here:
{
  insertId: "5d66423d00012ad9d6d02f2b"
  labels: {
    clone_id: "00c61b117c803f45c35dbd46759dc85f8607177c3234b870987ba6be86fec0380c162a"
  }
  textPayload: "Stop cloud recording success. FileList : 01ce51a4a640ecrrrrhxabd9e9d823f08_tdeaon_20197121758.m3u8, uploading status: backuped"
  timestamp: "2019-08-28T08:58:37.076505Z"
}
It shows the status 'backuped'. As per the Agora documentation, this means the files were uploaded to Agora's cloud; within about 5 minutes they are then supposed to be uploaded into my AWS S3 bucket.
I am not seeing this file in my AWS bucket. I have tested the bucket secret key; the same key works fine for another application. I have also verified the CORS settings.
Please suggest how I could debug further.
Make sure you are inputting your S3 credentials correctly within the storageConfig settings:
"storageConfig":{
"vendor":{{StorageVendor}},
"region":{{StorageRegion}},
"bucket":"{{Bucket}}",
"accessKey":"{{AccessKey}}",
"secretKey":"{{SecretKey}}"
}
Agora offers a Postman collection to make testing easier: https://documenter.getpostman.com/view/6319646/SVSLr9AM?version=latest
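A rough sketch (not from the original answer) of where storageConfig sits in a cloud recording start request, assuming Python with requests; the endpoint path, the Basic-auth customer key/secret, the numeric vendor/region codes, and the omitted recordingConfig fields are all assumptions to verify against the Agora Cloud Recording docs:

import requests

APP_ID = "your-agora-app-id"              # placeholder
RESOURCE_ID = "resource-id-from-acquire"  # placeholder, returned by the acquire call
CUSTOMER_KEY = "customer-key"             # placeholder RESTful API credentials
CUSTOMER_SECRET = "customer-secret"       # placeholder

start_url = (
    f"https://api.agora.io/v1/apps/{APP_ID}/cloud_recording/"
    f"resourceid/{RESOURCE_ID}/mode/mix/start"
)

body = {
    "cname": "my-channel",
    "uid": "527841",  # a dedicated, random uid for the recording client (not your main client's uid)
    "clientRequest": {
        "storageConfig": {
            "vendor": 1,   # numeric vendor code for AWS S3 -- confirm against the Agora docs
            "region": 0,   # numeric AWS region code -- confirm against the Agora docs
            "bucket": "my-recording-bucket",
            "accessKey": "AKIA...",
            "secretKey": "...",
        }
    },
}

resp = requests.post(start_url, json=body, auth=(CUSTOMER_KEY, CUSTOMER_SECRET))
print(resp.status_code, resp.json())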
I had faced this issue due to a wrong uid.
The recording uid needs to be a random id that is used for joining by the recording client which 'resides' in the cloud. I had passed my main client's id.
Two other causes I had faced:
S3 credentials
S3 CORS settings: go to the AWS S3 permissions and set the allowed CORS headers.
EDIT:
It could be something like this on the S3 side:
[
  {
    "AllowedHeaders": [
      "Authorization",
      "*"
    ],
    "AllowedMethods": [
      "HEAD",
      "POST"
    ],
    "AllowedOrigins": [
      "*"
    ],
    "ExposeHeaders": [
      "ETag",
      "x-amz-meta-custom-header",
      "x-amz-storage-class"
    ],
    "MaxAgeSeconds": 5000
  }
]

Why is my S3 lifecycle policy not taking effect?

I have an S3 lifecycle policy to delete objects after 3 days, and I am using a prefix. My problem is that the policy works for all but one sub-directory. For example, let's say my bucket looks like this:
s3://my-bucket/myPrefix/env=dev/
s3://my-bucket/myPrefix/env=stg/
s3://my-bucket/myPrefix/env=prod/
When I check the stg and prod directories, there are no objects older than 3 days. However, when I check the dev directory, there are objects a lot older than that.
Note - there is a huge difference in the volume of data in dev compared to the other two; dev holds a lot more logs.
My initial thought was that it was taking longer for eventual consistency to show what was deleted and what wasn't, but that theory is gone considering the time that has passed.
The issue seems related to the amount of data in this location under the prefix compared to the others, but I'm not sure what I can do to resolve this. Should I have another policy specific to this location, or is there somewhere I can check to see what is causing the failure? I did not see anything in CloudTrail for this event.
Here is my policy:
{
  "Rules": [
    {
      "Expiration": {
        "Days": 3
      },
      "ID": "Delete Object When Stale",
      "Prefix": "myPrefix/",
      "Status": "Enabled"
    }
  ]
}
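Not part of the original question, but one place to check which rule is actually active on the bucket is the lifecycle configuration S3 itself reports. A quick sketch, assuming Python with boto3 and a placeholder bucket name:

import boto3

s3 = boto3.client("s3")

# Shows the lifecycle rules S3 currently has for the bucket,
# including each rule's Prefix/Filter and Status.
config = s3.get_bucket_lifecycle_configuration(Bucket="my-bucket")
for rule in config["Rules"]:
    print(rule["ID"], rule["Status"], rule.get("Prefix") or rule.get("Filter"))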

Simple example to restrict access to Cloudfront(S3) files from some users but not others

I'm just getting started with permissions on AWS S3 and Cloudfront so please take it easy on me.
Two main questions:
I'd like to allow access to some users (e.g., those that are logged in) but not others. I assume I need to be using ACLs instead of a bucket policy, since the former is more customizable in that you can identify the user in the URL with query parameters. First of all, is this correct? Can someone point me to the plainest-English description of how to do this on a file-by-file, user-by-user basis? The documentation on ACLs confuses the heck out of me.
I'd also like to restrict access such that people can only view content on my-site.com and not your-site.com. Unfortunately, the S3 documentation's example bucket policy for this has no effect on access for my demo bucket (see code below, slightly adapted from the AWS docs). Moreover, if I need to focus foremost on allowing user-by-user access, do I even want to be defining a bucket policy?
I realize I'm not even touching on how to make this work in the context of Cloudfront (the ultimate goal), but any thoughts on questions 1 and 2 would be greatly appreciated, and mentioning Cloudfront would be a bonus at this point.
{
  "Version": "2008-10-17",
  "Id": "http referer policy example",
  "Statement": [
    {
      "Sid": "AllowPublicRead",
      "Effect": "Allow",
      "Principal": {
        "AWS": "*"
      },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-bucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://mysite.com/*",
            "https://www.mysite.com/*"
          ]
        }
      }
    }
  ]
}
To restrict access to the CDN and serve what we call "private content", you need to use the API to generate signed URLs, and you can define the expiration of the URL. More information is here.
You can use an Origin Access Identity (as explained here) to prevent the content from being served outside Cloudfront.
I thought I had some code from a past project to share here, but I don't. At least I was able to dig into my bookmarks and find one of the references that helped me in the process, and there is another post here on Stack Overflow that mentions the same reference. See below for the link to the reference and to the post.
http://improve.dk/how-to-set-up-and-serve-private-content-using-s3/
Cloudfront private content + signed urls architecture
Well, it is two years old, so you might have to change it a little bit here and there, but you'll get the idea.
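Not from the original answers: a minimal sketch of generating a CloudFront signed URL in Python with botocore's CloudFrontSigner, assuming a CloudFront key pair ID, a local private-key PEM file, and a distribution URL that are all placeholders:

from datetime import datetime, timedelta

from botocore.signers import CloudFrontSigner
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

KEY_PAIR_ID = "APKAEXAMPLE"           # placeholder CloudFront key pair / public key ID
PRIVATE_KEY_FILE = "private_key.pem"  # placeholder path to the matching private key

def rsa_signer(message):
    # Sign the policy with the CloudFront private key (SHA-1, as CloudFront expects).
    with open(PRIVATE_KEY_FILE, "rb") as f:
        key = serialization.load_pem_private_key(f.read(), password=None)
    return key.sign(message, padding.PKCS1v15(), hashes.SHA1())

signer = CloudFrontSigner(KEY_PAIR_ID, rsa_signer)

# URL that expires in 1 hour; only users you hand this URL to can fetch the object.
url = signer.generate_presigned_url(
    "https://d111111abcdef8.cloudfront.net/private/video.mp4",
    date_less_than=datetime.utcnow() + timedelta(hours=1),
)
print(url)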