S3 bucket policy to list multiple objects in public bucket - amazon-web-services

I have set up a public bucket in S3 and copied multiple objects into it. In this case they are jpeg photos.
I want to share all these objects with anonymous public users (friends), but I want to send them one static website address for the bucket and for the objects to show up as a list (or at least show all the images) when they click on that one address link.
Is it possible to display the objects this way to public users who don't have an AWS account?
The alternative I know of is to send them a unique link to each of the objects in the bucket (which would take forever!).
Any advice would be helpful.

S3 doesn't have anything built-in to do a "directory index" like nginx and Apache can do. It can be done with AWS Lambda, though.
I built a rudimentary image index with Lambda; you might be able to adapt it to solve your problem.

Yes.
You can host a static website inside an S3 bucket: http://docs.aws.amazon.com/AmazonS3/latest/dev/WebsiteHosting.html
Just generate a static HTML page with links to all the photos, upload it to the bucket, configure the bucket for static website hosting, and share the link to that page.
Or, for the extra lazy :) https://github.com/rgrp/s3-bucket-listing
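If you don't want to hand-write that HTML page, a small shell loop can generate it. A minimal sketch, assuming a hypothetical bucket name my-photos-bucket (the demo setup lines stand in for the photos you already have):

```shell
#!/bin/bash
# Demo setup: a couple of placeholder photos (in real use these are the
# files you are uploading to the bucket).
mkdir -p photos && touch photos/cat.jpg photos/dog.jpg

BUCKET_URL="https://my-photos-bucket.s3.amazonaws.com"  # hypothetical bucket

# Write an index.html with one link per .jpg file.
{
  echo "<html><body><h1>Photos</h1><ul>"
  for f in photos/*.jpg; do
    name=$(basename "$f")
    echo "  <li><a href=\"$BUCKET_URL/$name\">$name</a></li>"
  done
  echo "</ul></body></html>"
} > index.html
```

Upload the resulting index.html to the bucket, enable static website hosting, and that single URL is the one you share.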

Thanks for your answers, they helped me find a really simple solution. On a different forum I found that someone has written a script you just upload straight into your bucket, and it puts all the objects into a simple list… genius!
This is the link:
http://regexp.s3.amazonaws.com/list.html
So for the less techy people (like me): you literally upload the file at that link into your bucket. You don't even have to download it to your PC first; just copy and paste the URL into the upload file path.
When I uploaded it, the file appeared in the S3 bucket as list.html
Make sure the file is readable and you've set the ACL appropriately. And make sure your bucket has a policy that allows anyone to access it.
Your bucket's objects (content) are then listed at the URL below.
http://<your bucket name>.s3.amazonaws.com/list.html
Replace <your bucket name> with the actual name of your bucket.
And you should be able to click on that link and see the list of objects in your bucket. Once you get your head around it, it is actually very simple.
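For the bucket policy mentioned above, a minimal public-read policy looks like the following sketch (substitute your bucket name; note this grants anonymous read on every object, so only use it for content you truly want public):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::<your bucket name>/*"
    }
  ]
}
```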

Related

How to safely share files via amazon S3 bucket

I need to share ~10K files with ~10K people (one-to-one matching) and I would like to use Amazon S3 for this (by giving public access to these files).
The caveat is that I do not want anyone to be able to download all these files together. What are the right permissions for this?
Currently, my plan is:
Create non-public buckets (foo)
Name each file with a long string so one cannot guess the link (bar)
Make all files public
Share links of the form https://foo.s3.amazonaws.com/bar
It seems that by having a non-public bucket, I ensure that no one can list the files in my bucket and hence no one can guess the names of the files inside. Is that correct?
I would approach this using pre-signed URLs, as they allow you to grant access at the object level even while your bucket and objects are kept private. This means the only way to access an object in the bucket is with the link you provide to each individual user.
Therefore, to follow best practice, you should block all public access and make all objects private. This prevents anyone from listing the bucket's objects.
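The "block all public access" setting can be applied in the console or with the aws s3api put-public-access-block command; the configuration it takes looks like this (all four flags enabled):

```json
{
  "BlockPublicAcls": true,
  "IgnorePublicAcls": true,
  "BlockPublicPolicy": true,
  "RestrictPublicBuckets": true
}
```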
To automate this, you could upload the files named after each user, or after some other identifying string like an ID number. You can then loop over those identifiers to generate a pre-signed URL for each file, giving the user a limited time to retrieve it without granting them access to the bucket as a whole.
I use bash, so that's the example I'll give, but there's probably a similar PowerShell solution too.
The easiest way to do this would be with the aws-cli:
aws s3 presign s3://<YOUR-BUCKET-NAME>/<userIdNumber>.file \
--expires-in 604800
Put all of your user IDs (or whatever you've used to identify your users' files) in a text file, and loop over them with bash to generate all your pre-signed URLs like so:
Contents of users.txt:
user1
user2
user3
user4
user5
The loop:
while read -r user ; do
echo "$user" ;
aws s3 presign "s3://my-bucket/$user.file" --expires-in 604800 ;
done < users.txt
This should spit out a list of usernames, with a URL below each one. Just send each user their link and they will be able to get their document.

Storing of S3 Keys vs URLs

I have some functionality that uploads Documents to an S3 Bucket.
The key names are programmatically generated via some proprietary logic for the layout/naming convention needed.
The result of my S3 upload command is the URL itself. So, it's in the format of
REGION/BUCKET/KEY
I was planning on storing that full url into my DB so that users can access their uploads.
Given that REGION and BUCKET probably wouldn't change, does it make sense to just store the KEY - and then dynamically generate the full url when the client needs it?
Just want to know what the desired pattern here is and what others do. Thanks!
Storing the full URL is a bad idea. As you said in the question, the region and bucket are already known, so storing the full URL is a waste of disk space. Also, if in the future you want to migrate your assets to a different bucket, maybe in a different region, having full URLs stored in the DB just makes things harder.
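A sketch of the "store only the key" approach, with hypothetical region, bucket, and key values — the key is the only part that goes in the database, and the URL is rebuilt on demand:

```shell
#!/bin/bash
# Only KEY would be stored in the database; REGION and BUCKET live in
# application config. All values here are hypothetical.
REGION="us-east-1"
BUCKET="my-app-uploads"
KEY="documents/2024/report.pdf"

# Rebuild the full virtual-hosted-style URL when the client needs it:
URL="https://${BUCKET}.s3.${REGION}.amazonaws.com/${KEY}"
echo "$URL"
```

If you later move the assets to a different bucket or region, only the config values change; every key stored in the DB stays valid.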

What is the best practice to restrict data in the cloud

I tried to use DigitalOcean Spaces, which is like AWS S3, to give certain users the ability to view a file such as a video. I can only give them a custom link (to one file, not a whole directory) that is valid for a defined period of time.
I would like to know the best practice in the cloud for sharing files privately, only with registered users.
You can create a private S3 bucket and, using the SDK, create a pre-signed URL for a file and set the expiry time of the link.
https://docs.aws.amazon.com/AmazonS3/latest/dev/ShareObjectPreSignedURL.html

AWS CloudFront Behavior

I've been setting up AWS Lambda functions for S3 events. I want to restructure my bucket, but that's not possible in place, so I set up a new bucket the way I want and will migrate old objects and send new ones there. I wanted to keep some of the structure the same under a given base folder name: old-bucket/images and new-bucket/images. I set up CloudFront to serve from old-bucket/images now, but I wanted to add new-bucket/images as well. I thought the Behaviors tab would let me check new-bucket/images first, then old-bucket/images. Alas, that didn't work: if the object wasn't found in the first, that was the end of the line.
Am I misunderstanding how behaviors work? Has anyone attempted anything like this?
That is expected behavior. An origin tells Amazon CloudFront where to obtain the data to serve to users, and a behavior maps a path pattern (prefix, suffix, etc.) to a particular origin.
For example, you could serve old-bucket/* from one Amazon S3 bucket while serving new-bucket/* from a different bucket.
However, there is no capability to fall back to a different origin if a file is not found.
You could check for the existence of files before serving the link, and then provide a different link depending upon where the files are stored. Otherwise, you'll need to put all of your files in the location that matches the link you are serving.
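The "check for existence before serving the link" idea can be sketched in shell with aws s3api head-object. The bucket names here are hypothetical, and a configured AWS CLI is assumed:

```shell
#!/bin/bash
# Return the URL for a key, preferring the new bucket and falling back
# to the old one when the object has not been migrated yet.
object_url() {
  local key="$1"
  if aws s3api head-object --bucket new-bucket --key "$key" >/dev/null 2>&1; then
    echo "https://new-bucket.s3.amazonaws.com/$key"
  else
    echo "https://old-bucket.s3.amazonaws.com/$key"
  fi
}
```

Usage would be, e.g., object_url images/photo.jpg, run server-side at the point where you render the link.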

How to set Amazon S3 Bucket default image

I have a bucket on S3 with some public images in it. If I browse to the folder without specifying a file name, it serves me an image.
So using a link like this, I am still getting an image back.
https://s3.amazonaws.com/bucket_name/folder_name/
The image served is one of mine that I've obviously uploaded at some point in the past but I don't recall ever setting it as a folder default. Is there an option somewhere to do this?
Thanks.
I suspect that /bucket_name/folder_name/ is itself an object key. In S3 there is no true concept of folders: a / in an object key can be displayed as a folder, but it is still just part of the key. You most likely have an object whose key is literally folder_name/, and that is the image the URL serves.
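A quick way to see this: keys are flat strings, and a key can even end in /, which is exactly the kind of object a "folder URL" serves. A sketch with hypothetical keys:

```shell
#!/bin/bash
# Three distinct objects; the first one's key is literally "folder_name/".
keys="folder_name/
folder_name/photo1.jpg
folder_name/photo2.jpg"

# Requesting https://s3.amazonaws.com/bucket_name/folder_name/ returns the
# object whose key matches that path exactly:
echo "$keys" | grep -x "folder_name/"
```

If you don't want that object served, you can remove it with aws s3api delete-object --bucket bucket_name --key "folder_name/" (hypothetical bucket name), and the "folder" will still appear in listings as long as other keys share the prefix.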