S3 downloads not appearing in local folder

Noob issue here: I'm using the CLI to download an entire bucket of images from S3 (76 files in various image formats), but they do not appear in my local folder after the command executes. After a few attempts, I received the error below:
"download failed: s3://cpskitchenaidimages/KA/KA - 5KSM2APC_1.tif to ./KA - 5KSM2APC_1.tif [Errno 28] No space left on device"
Command run:
aws s3 sync s3://cpskitchenaidimages/KA .
My local directory is "C:\Users\Darren\Downloads\KA".
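For comparison, a sketch of the same sync with the destination made explicit (assuming "C:\Users\Darren\Downloads\KA" is the intended target; the command above writes into whatever directory the shell is currently in, which may not be the folder being checked):
aws s3 sync s3://cpskitchenaidimages/KA "C:\Users\Darren\Downloads\KA"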
The bucket is publicly accessible via the below policy:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::cpskitchenaidimages/*"
        },
        {
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::cpskitchenaidimages"
        }
    ]
}

Related

How can I download a local copy of an S3 snapshot of an AWS Postgres DB?

There is a snapshot of a Postgres DB in S3, but the download button is grayed out... If I navigate to each file in each table, I am able to download the .gz.parquet files individually, but that is crazy.
So I installed the AWS CLI, configured a default user, and tried to run aws s3 cp s3://<your-bucket-name>/<your-snapshot-name> <local-path>, but I always get:
fatal error: An error occurred (404) when calling the HeadObject operation: Key <your-snapshot-name> does not exist
But it does exist: I can see it on the AWS website, and I can see the root folder if I run aws s3 ls.
So I tried aws s3 cp --recursive s3://<your-bucket-name>/<your-snapshot-name> <local-path>, and it goes through all the folders and copies them to my computer, but they're all empty, and I get the following error for every folder it goes through:
An error occurred (AccessDenied) when calling the GetObject operation: The ciphertext refers to a customer master key that does not exist, does not exist in this region, or you are not allowed to access.
The permissions I'm using are a generic (what I thought was) all-access policy for S3:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:*",
                "s3-object-lambda:*"
            ],
            "Resource": "*"
        }
    ]
}
Plus this one from here:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetBucketLocation",
                "s3:ListAllMyBuckets"
            ],
            "Resource": "*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket"
            ],
            "Resource": [
                "arn:aws:s3:::snapshots"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject",
                "s3:GetObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::snapshots/*"
            ]
        }
    ]
}
What am I missing here?
The first error you are experiencing is probably because the aws s3 cp command works on objects, not directories. A good way to copy a whole directory (including subdirectories) is to use aws s3 sync.
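For example, keeping the question's placeholders, the whole snapshot prefix can be pulled down with:
aws s3 sync s3://<your-bucket-name>/<your-snapshot-name> <local-path>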
The second error mentions "customer master key". This probably refers to a KMS key that was used to encrypt the file when it was created by Amazon RDS. Try giving yourself kms:* permissions (although you probably only need kms:Decrypt) and you should be able to read the file.
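A minimal sketch of such an IAM statement (Resource is left as "*" here only because the key ARN isn't known; scoping it to the specific KMS key is tighter):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "kms:Decrypt",
            "Resource": "*"
        }
    ]
}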

Resize Images on the Fly with AWS Lambda and Amazon API Gateway

I followed the tutorial on this page HERE, but when I try to get a resized picture I get an "Access Denied" error:
Good: https://xxxx.amazonaws.com/mybucket/test.jpg
Error: https://xxxx.amazonaws.com/mybucket/300x300/test.jpg (access denied)
<Error>
<Code>AccessDenied</Code>
<Message>Access Denied</Message>
</Error>
Below are my settings:
Bucket policy editor
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket/*"
        }
    ]
}
When I created the trigger, I selected Security: OPEN. I'm just confused about YOUR_API_HOSTNAME_HERE. In the example, is the API hostname h3ll0w0rld?
The GetObject action is not enough. You should give Lambda permission to list the bucket's contents as well. Also notice the Resource section that I put in.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AddPerm",
            "Effect": "Allow",
            "Action": [
                "s3:Get*",
                "s3:List*"
            ],
            "Principal": { "Service": "lambda.amazonaws.com" },
            "Resource": [
                "arn:aws:s3:::mybucket/*",
                "arn:aws:s3:::mybucket"
            ]
        }
    ]
}
@AbdennourTOUMI you're right. The bucket policy must be:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::bucketNAME/*"
        }
    ]
}
Are you sure that your bucket contains a 300x300 folder containing the file? Because, as @Michael-sqlbot said, this error can indicate that the file does not exist.
Yes, in the example the API hostname is h3ll0w0rld.execute-api.us-west-2.amazonaws.com.
To get the resized picture, you need to use your static website hosting endpoint: http://bucket_name.s3-website-us-west-2.amazonaws.com/300x300/test.jpg
A 300x300 folder containing 'test.jpg' will then be created in your bucket.
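For context, the tutorial wires this up with a website-hosting redirection rule: a request for a key that doesn't exist yet returns a 404, which S3 redirects to the API, and the Lambda function creates the resized object and redirects back. A sketch of such a rule in the JSON form the S3 console accepts (the hostname and key prefix here are assumptions, not values copied from the tutorial):
[
    {
        "Condition": {
            "HttpErrorCodeReturnedEquals": "404"
        },
        "Redirect": {
            "HostName": "h3ll0w0rld.execute-api.us-west-2.amazonaws.com",
            "HttpRedirectCode": "307",
            "Protocol": "https",
            "ReplaceKeyPrefixWith": "default/resize?key="
        }
    }
]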

AWS S3 sharing access to static website - 403 access denied

I've configured my bucket policy (for a static website hosted on an S3 bucket) so that another account can perform actions on this bucket. The policy looks something like this:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::mybucket.com/*"
        },
        {
            "Sid": "Example permissions",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::00000000000:user/username"
            },
            "Action": "s3:ListBucket",
            "Resource": "arn:aws:s3:::mybucket.com"
        },
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::000000000000:user/username"
            },
            "Action": "s3:*",
            "Resource": "arn:aws:s3:::mybucket.com/*"
        }
    ]
}
The first object in "Statement" specifies that this bucket should be readable by the public, so that anyone can access the site (I am using Route 53 as well).
The second account is able to upload files to the bucket; however, once he uploads a file, access to that file is restricted: if he uploads index.html to the top-level directory of the bucket, navigating to the website produces a 403 access denied error.
I have looked into IAM roles, which I think may be related, but I would appreciate any help with this.
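One common cause of exactly this symptom, offered as an educated guess rather than a confirmed diagnosis: objects uploaded by a different account are owned by that account, so the bucket's public-read statement does not apply to them. The uploading account can hand the bucket owner control at upload time with a canned ACL, e.g.:
aws s3 cp index.html s3://mybucket.com/index.html --acl bucket-owner-full-control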

Inconsistent upload/PUT access to Amazon AWS S3 with custom permissions

I have an application that uploads videos to an S3 bucket, and then creates a custom policy to allow another user (for the Zencoder service) to grab the files, and upload the transcoded files back into the bucket.
Below is the current custom policy I give to the user during transcoding. Basically I give full read permission to the entire bucket, but I only allow the user to PUT files into a specific nested folder.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowUserToListContentsOfBucket",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:ListBucketMultipartUploads",
                "s3:GetBucketLocation"
            ],
            "Resource": [
                "arn:aws:s3:::MY-BUCKET"
            ]
        },
        {
            "Sid": "AllowUserToListContentsOfBucketFolders",
            "Effect": "Allow",
            "Action": [
                "s3:ListBucketMultipartUploads",
                "s3:GetObjectAcl",
                "s3:GetObject"
            ],
            "Resource": [
                "arn:aws:s3:::MY-BUCKET/*"
            ]
        },
        {
            "Sid": "AllowUserS3ActionsOfSpecificFolder",
            "Effect": "Allow",
            "Action": [
                "s3:PutObjectAcl",
                "s3:PutObject"
            ],
            "Resource": [
                "arn:aws:s3:::MY-BUCKET/some/nested/folder/*"
            ]
        }
    ]
}
This works for the most part, but of the ~1,000 files transferred by Zencoder, one or two usually fail with a 403 Forbidden error. I'm not sure why, since files are transferred correctly both before and after the error.
Is there any reason Amazon AWS S3 / IAM would send a 403 Access Denied when such a permission is provided?
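One speculative possibility (an assumption, not a confirmed cause): large files go over as multipart uploads, and the policy above grants neither s3:AbortMultipartUpload nor s3:ListMultipartUploadParts on the object ARNs, so an upload that needs to list its parts or abort and retry could be refused. A sketch of an additional statement covering that case:
{
    "Sid": "AllowMultipartHousekeeping",
    "Effect": "Allow",
    "Action": [
        "s3:AbortMultipartUpload",
        "s3:ListMultipartUploadParts"
    ],
    "Resource": [
        "arn:aws:s3:::MY-BUCKET/some/nested/folder/*"
    ]
}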

403 when trying to view S3 files / images

I have the S3 URL saved to my Mongoose object; then, on the client side, I'm attempting to use this S3 URL as an src.
I keep getting a 403.
I've looked at a few similar questions, which state I need to configure my permissions / policy.
I've done that:
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "UploadFile",
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::acct#:root"
            },
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::whiskey-upload/*"
        },
        {
            "Sid": "crossdomainAccess",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::whiskey-upload/crossdomain.xml"
        },
        {
            "Sid": "AllowPublicRead",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::whiskey-upload/*"
        }
    ]
}
Any clue on what else I may be doing wrong?
If the src has an http://www. prefix, it won't work; I have encountered this problem before. You can test this directly: take an image src pointing to the S3 bucket and try to view the image in a web browser both with and without the www. prefix, and you might understand better.
But if it is the plain S3 URL, it should work. Please show me the src URL you have so I can debug the issue.
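For reference, the two standard S3 object URL forms, either of which should work as an src (the key images/photo.jpg is a hypothetical example):
https://whiskey-upload.s3.amazonaws.com/images/photo.jpg (virtual-hosted-style)
https://s3.amazonaws.com/whiskey-upload/images/photo.jpg (path-style, legacy)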