I'm new to AWS S3, and I'm studying the S3 SDK now.
If I would like to put a file on S3, there are two ways:
1) Using the SDK client's $s3->putObject method
2) Using the s3:// protocol
What's the difference between the two?
Thank you! :)
Accessing AWS services via an SDK will make fully-authenticated API calls. Such calls require IAM credentials. They are the best way to interact with AWS services. Some commands, such as creating buckets, are only available via API calls.
Amazon S3 has the additional ability to provide access to objects via normal HTTP/HTTPS requests. For example, if an object is public, it can be accessed via https://s3.amazonaws.com/bucket-name/path/object
This means that content from Amazon S3 can be incorporated into web pages via <a> and <img> tags.
If you wish to use such links to access private objects, then the URL will need additional authentication information attached. This is known as an Amazon S3 pre-signed URL.
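For example, here is a minimal sketch of generating a pre-signed URL with the Boto3 SDK (the bucket and key names are placeholders):

```python
import boto3

s3 = boto3.client('s3')

# Generate a URL that grants GET access to a private object for 1 hour.
# 'my-bucket' and 'path/object' are placeholder names.
url = s3.generate_presigned_url(
    ClientMethod='get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'path/object'},
    ExpiresIn=3600,  # seconds
)
print(url)  # anyone with this URL can fetch the object until it expires
```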
Related
Suppose you have to share data with a third party over the internet and the data is stored in AWS. What would be the most secure and easy way to do this?
Since sending email is not very secure, I thought of creating an S3 bucket and running an SFTP server (with AWS Transfer Family) in front of it. Is there a better solution in AWS to achieve this?
This depends on how you want to "share data" and where that data resides.
Let's say you have an object in Amazon S3 that you would like to make available. There are several options for sharing access:
You could create an Amazon S3 pre-signed URL, which provides time-limited access to a private object. This is similar to storing something in Dropbox and using the "Get Link" command to obtain a special URL that provides access to the object.
If the other people have their own AWS Account, you could share a specific bucket or an object with them. This has the benefit that you could put objects in a bucket and they can retrieve any of them whenever they wish.
You could write a web application that requires users to authenticate and then gives them the ability to access objects in Amazon S3. This would be similar to a photo-sharing website, where people login and can access/share photos. You would be responsible for writing this application and managing the authentication.
Update
Based on the information you provided (S3, few users, automated), the easiest method would probably be to have the other users sign up for AWS, or to provide them with IAM access credentials from your own AWS Account (not recommended if you have a large number of such users).
You can grant permission for them to access your data and they could use the AWS Command-Line Interface (CLI) to access/download the data. This can be automated with the aws s3 cp and aws s3 sync commands.
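If they prefer to script it rather than use the CLI, the same access works from any SDK. A minimal Boto3 sketch, assuming you issued them an access key (the credentials and bucket/key names shown are placeholders):

```python
import boto3

# Credentials issued from your AWS Account (placeholders shown here;
# in practice they would come from a profile or environment variables)
s3 = boto3.client(
    's3',
    aws_access_key_id='AKIA...',       # placeholder
    aws_secret_access_key='...',       # placeholder
)

# Download one shared object; equivalent to: aws s3 cp s3://shared-bucket/report.csv .
s3.download_file('shared-bucket', 'report.csv', 'report.csv')
```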
Use Case:
I want to be able to:
Upload images and audio files from my backend to an S3 bucket
List and view/play content on my backend
Return the objects' URLs in API responses
Mobile apps can view/play the URLs, with or without authentication on the mobile side
Is that possible without making the S3 bucket public?
Is that possible without making the S3 bucket public?
Yes, it should be possible. Since you are using an EC2 instance for the backend, you can set up an instance role to give your backend application private, secure access to S3. In the role, you would allow S3 read/write. This way, if your application uses the AWS SDK, you can seamlessly access S3 without making the bucket public.
Regarding the links to the objects, the general approach is to return S3 pre-signed URLs. These allow temporary access to your objects without the need for public access. The alternative is to share your objects through CloudFront, as explained in Amazon S3 + Amazon CloudFront: A Match Made in the Cloud. In either case, the bucket can remain private.
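A minimal sketch of the backend side of this (on EC2, Boto3 resolves credentials from the instance role, so none appear in the code; the bucket name is a placeholder):

```python
import boto3

# No explicit credentials: on EC2, Boto3 resolves them from the instance role
s3 = boto3.client('s3')

BUCKET = 'my-media-bucket'  # placeholder bucket name

def upload_and_link(local_path, key, expires=900):
    """Upload a media file, then return a time-limited URL for the API response."""
    s3.upload_file(local_path, BUCKET, key)
    return s3.generate_presigned_url(
        ClientMethod='get_object',
        Params={'Bucket': BUCKET, 'Key': key},
        ExpiresIn=expires,
    )

# The mobile app can view/play the returned URL directly, with no AWS
# authentication on its side, until the URL expires.
```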
I am trying to setup an S3 bucket that faces the public internet so that an external process can make PUT requests to upload data to the bucket.
I am trying to limit access so that only signed requests are accepted and authenticated. I found instructions in the developer guide on creating the signature in the external process, but I can't find instructions on how to set up the S3 bucket.
What are the necessary steps to set up an S3 bucket, with customer-managed encryption keys, to allow only authenticated PUT REST requests?
As I understand it: I need to create a user with a role that can access the KMS key & S3 bucket. But I can't find instructions on the S3 bucket setup. I'm looking for specifics related to public access settings, ACLs, etc.
Note: the external process cannot use an AWS SDK, only standard REST requests (e.g. curl).
Thank you in advance for your consideration and response.
You probably want a combination of S3, API Gateway and Lambda:
https://docs.aws.amazon.com/AmazonS3/latest/API/API_PutObject.html
API Gateway can be used to secure access based on a client certificate:
https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-setup-ssl-certificate.html
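One way to wire that up, sketched below: a Lambda function behind API Gateway hands the external process a pre-signed PUT URL, which it can then use with plain curl. The bucket name and KMS key alias are placeholders, and the function assumes API Gateway has already authenticated the caller:

```python
import json
import boto3

s3 = boto3.client('s3')

def handler(event, context):
    # API Gateway is assumed to have already authenticated the caller
    # (e.g. via the client certificate or a Lambda authorizer).
    params = event.get('queryStringParameters') or {}
    key = params.get('key', 'upload.bin')
    url = s3.generate_presigned_url(
        ClientMethod='put_object',
        Params={
            'Bucket': 'my-upload-bucket',      # placeholder bucket
            'Key': key,
            'ServerSideEncryption': 'aws:kms',
            'SSEKMSKeyId': 'alias/my-cmk',     # placeholder customer-managed key
        },
        ExpiresIn=300,
    )
    return {'statusCode': 200, 'body': json.dumps({'upload_url': url})}

# The external process can then upload with standard curl; the encryption
# header must match what was signed into the URL, e.g.:
#   curl -X PUT --upload-file data.bin \
#        -H "x-amz-server-side-encryption: aws:kms" "<upload_url>"
```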
Is it possible to use EBS like S3? By that I mean can you allow users to download files from a link like you can in S3?
The reason for this is that my videos NEED to be on the same domain/server to work correctly. I am creating a Virtual Reality video website; however, iOS does not support cross-origin resource sharing through WebGL (which is used to create VR).
Because of this, my S3 bucket file system will not work, as it will be classed as cross-origin. Looking into EBS briefly, it seems that it attaches to your instances as local storage, which would get past the cross-origin problem I am facing.
Would it be simply like a folder on my web server that could be reached at 'www.domain.com/ebs-file-system/videos/video.mp4'?
Thanks in advance for any comments.
Amazon S3 CORS
You can configure your Amazon S3 bucket to support Cross-Origin Resource Sharing (CORS):
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support in Amazon S3, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
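The CORS rules can be set in the console or programmatically. A minimal Boto3 sketch, with a placeholder bucket name and origin:

```python
import boto3

s3 = boto3.client('s3')

# Allow the site (placeholder origin) to GET objects cross-origin
s3.put_bucket_cors(
    Bucket='my-video-bucket',  # placeholder
    CORSConfiguration={
        'CORSRules': [{
            'AllowedOrigins': ['https://www.example.com'],  # your site's origin
            'AllowedMethods': ['GET', 'HEAD'],
            'AllowedHeaders': ['*'],
            'MaxAgeSeconds': 3000,
        }]
    },
)
```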
CloudFront Behaviours
Another option is to use Amazon CloudFront, which can present multiple systems as a single URL. For example, example.com/video could point to an S3 bucket, while example.com/stream could point to a web server. This should circumvent CORS problems.
See:
Format of URLs for CloudFront Objects
Values that You Specify When You Create or Update a Web Distribution
Worst Case
Worst case, you could serve everything via your EC2 instance. You could copy your S3 content to the instance (e.g. using the AWS Command-Line Interface (CLI) aws s3 sync command, or scripted as sketched below) and serve it to your users. However, this negates the benefits that Amazon S3 provides.
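A rough Boto3 stand-in for that copy step (download-only, no timestamp comparison; bucket, prefix, and destination are placeholders):

```python
import os
import boto3

s3 = boto3.client('s3')

def download_prefix(bucket, prefix, dest):
    """Rough, download-only stand-in for `aws s3 sync` (no timestamp checks)."""
    paginator = s3.get_paginator('list_objects_v2')
    for page in paginator.paginate(Bucket=bucket, Prefix=prefix):
        for obj in page.get('Contents', []):
            if obj['Key'].endswith('/'):
                continue  # skip folder marker objects
            target = os.path.join(dest, obj['Key'])
            os.makedirs(os.path.dirname(target), exist_ok=True)
            s3.download_file(bucket, obj['Key'], target)

download_prefix('my-video-bucket', 'videos/', '/var/www/html')  # placeholder names
```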
I have data from multiple users inside a single S3 account. My desktop app has an authentication system which lets the app know who the user is and which folder to access on S3, but the desktop app holds the access credentials for the whole S3 bucket.
Somebody told me this is not secure, since a hacker could intercept the requests from the app to S3 and use the credentials to download all the data.
Is this true? And if so, how can I avoid it? (He said I need a client-server setup in the AWS cloud, but this isn't clear to me...)
By the way, I am using the Boto Python library to access S3.
thanks
I just found this:
Don't store your AWS secret key in the app. A determined hacker would be able to find it eventually. One idea is that you have a web service hosted somewhere whose sole purpose is to sign the clients' S3 requests using the secret key; those requests are then relayed to the S3 service. Therefore you get your users to authenticate against your web service using credentials that you control. To re-iterate: the clients talk directly to S3, but get their requests "rubber-stamped"/approved by you.
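One way to realize that idea is with pre-signed URLs. A minimal sketch using Flask and Boto3 (the endpoint, token lookup, and bucket name are all hypothetical; the secret key lives only on the server):

```python
import boto3
from flask import Flask, abort, request

app = Flask(__name__)
s3 = boto3.client('s3')  # the secret key lives only on this server

def user_folder(token):
    """Hypothetical lookup: map an app auth token to the user's S3 prefix."""
    return {'demo-token': 'users/alice/'}.get(token)

@app.route('/sign')
def sign():
    folder = user_folder(request.headers.get('X-Auth-Token', ''))
    if folder is None:
        abort(401)              # unknown user
    key = request.args.get('key', '')
    if not key.startswith(folder):
        abort(403)              # confine each user to their own folder
    # "Rubber-stamp" the request: hand back a short-lived pre-signed URL
    return s3.generate_presigned_url(
        ClientMethod='get_object',
        Params={'Bucket': 'my-app-bucket', 'Key': key},  # placeholder bucket
        ExpiresIn=300,
    )
```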
I don't see S3 as necessarily a flat structure, if you use filesystem notation ("folder/subfolder/file.ext") for the keys.
Vanity URLs are supported by S3; see http://docs.amazonwebservices.com/AmazonS3/2006-03-01/VirtualHosting.html. Basically, the URL "http://s3.amazonaws.com/mybucket/myfile.ext" becomes "http://mybucket.s3.amazonaws.com/myfile.ext", and you can then set up a CNAME in your DNS that maps "www.myname.com" to "mybucket.s3.amazonaws.com", which results in "http://www.myname.com/myfile.ext".
Perfect timing! AWS just announced a feature yesterday that's likely to help you here: Variables in IAM policies.
What you would do is create an IAM user for each of your users. This allows you to have a separate access key and secret key for each user. Then you would assign a policy that restricts access to a portion of the bucket based on username, as sketched below. (The announcement that I linked to above has a good example of this use case.)
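A sketch of that policy attached with Boto3. It is shown here as an IAM group policy, though the same ${aws:username} variable also works in a bucket policy; the bucket and group names are placeholders:

```python
import json
import boto3

iam = boto3.client('iam')

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject"],
        # Each IAM user is confined to the prefix matching their username
        "Resource": "arn:aws:s3:::my-app-bucket/${aws:username}/*"
    }]
}

# Attach the same inline policy to a group that all app users belong to
iam.put_group_policy(
    GroupName='app-users',              # placeholder group
    PolicyName='per-user-s3-prefix',
    PolicyDocument=json.dumps(policy),
)
```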