URL expiration clarification for uploading a file through an S3 pre-signed URL - amazon-web-services

Let's assume we generate a pre-signed URL to upload a file, with an expiration time of 15 seconds, and we start uploading a large file. Must the upload be completed within 15 seconds of the URL's generation, or can it run beyond that as long as it starts within the 15-second window?

The upload action must start before the expiry time; there is no known restriction on how long the upload takes to complete after it starts. Since the S3 service evaluates the permissions for the upload when the action starts, it is not affected by the time taken to actually transfer the file.
In your case, given the file size, note that if the upload fails for any reason, users won't be able to retry once the 15 seconds have elapsed.
Below are more details on this point, from the "Uploading objects using presigned URLs" documentation:
That is, you must start the action before the expiration date and time. If the action consists of multiple steps, such as a multipart upload, all steps must be started before the expiration, otherwise you will receive an error when Amazon S3 attempts to start a step with an expired URL.
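For reference, here is a minimal sketch of generating such a short-lived presigned upload URL with boto3; the bucket name, key, and file path are placeholders:

```python
import boto3
import requests

s3 = boto3.client("s3")

# Generate a presigned PUT URL that expires 15 seconds after generation.
# Bucket and key names here are placeholders.
url = s3.generate_presigned_url(
    ClientMethod="put_object",
    Params={"Bucket": "my-example-bucket", "Key": "uploads/large-file.bin"},
    ExpiresIn=15,
)

# The PUT must *start* within those 15 seconds; a transfer already in
# flight is not cut off when the URL expires mid-upload.
with open("large-file.bin", "rb") as f:
    requests.put(url, data=f).raise_for_status()
```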

Related

AWS CloudFront still caching when set to "CachingDisabled"

We are using an S3 bucket to hold zip files that customers have created and made ready to download. We are using CloudFront only to handle SSL, and we have caching disabled.
The customer receives an email with a link to download their zip file, and that works great. An S3 lifecycle rule removes the file after two weeks. Now, if they add more photos to their account and re-request their zip file, the new version overwrites the current zip file, so the link is exactly the same. But when they download, they get the previous zip file, not the new one.
Additionally, after the two weeks the file is removed, and when they try to download it they get an error that basically says they need to log in and re-request their photos. So they generate a new zip file, but their link still gives them the error message.
I could have the Lambda that creates the zip file invalidate it in CloudFront on creation, but I didn't think I needed to invalidate since we aren't caching?
Below is the screenshot of the caching policy I have selected in CloudFront
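(For what it's worth, if you do end up invalidating from the Lambda that writes the zip, a minimal sketch with boto3 could look like the following; the distribution ID and object path are hypothetical placeholders.)

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

# Hypothetical distribution ID and object path; substitute your own.
cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/zips/customer-123.zip"]},
        # CallerReference must be unique per invalidation request.
        "CallerReference": str(time.time()),
    },
)
```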

Remove Incomplete Multipart Upload files from AWS S3

We have a file-storage application (like Dropbox) which uses an AWS S3 bucket.
We have different plans for end users, such as free and silver/paid, depending on the size of the file.
Sometimes a user's upload is interrupted partway through for reasons such as:
1 - the user cancels the upload midway
2 - a network glitch between the user's internet connection and AWS S3
In these cases, if for example a user tries to upload a 1 GB file and cancels it in the middle of the process, 50% (0.5 GB) of the file has already been uploaded to S3.
That partially uploaded file then sits in the S3 bucket, occupying space, and we have to pay for that 0.5 GB.
If the upload is killed by the end user or interrupted by a network issue, I want the already-uploaded part of the file to be deleted from S3, either at that moment or after some time.
How can I define a lifecycle rule for the S3 bucket to accomplish this requirement?
You can create a new rule for incomplete multipart uploads using the Console:
1) Start by opening the console and navigating to the desired bucket
2) Then click on Properties, open up the Lifecycle section, and click on Add rule.
3) Decide on the target (the whole bucket or the prefixed subset of your choice) and then click on Configure Rule.
4) Then enable the new rule and select the desired expiration period.
5) As a best practice, we recommend that you enable this setting even if you are not sure that you are actually making use of multipart uploads. Some applications will default to the use of multipart uploads when uploading files above a particular, application-dependent, size.
You can set up a rule to remove delete markers for expired objects that have no previous versions in the same way.
You can refer to this AWS blog post for more details.
Note: if you are on the new console, select the bucket --> click Management (4th tab) --> select the Lifecycle tab (1st) --> click the Add Lifecycle Rule button.
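If you prefer to configure this programmatically rather than through the console, a minimal boto3 sketch (the bucket name is a placeholder) could look like this:

```python
import boto3

s3 = boto3.client("s3")

# Abort incomplete multipart uploads (and free their stored parts)
# 7 days after initiation. The bucket name is a placeholder.
s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "abort-incomplete-multipart-uploads",
                "Status": "Enabled",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)
```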

Amazon S3: Do not allow client to modify already uploaded images?

We are using S3 for our image upload process, and we approve all the images that are uploaded to our website. The process is:
Clients upload images to S3 from JavaScript at a given path (using a token).
Once we get the URL back from S3, we save the S3 path in our database, with an 'isApproved' flag set to false, in the photos table.
Once an image is approved by one of our executives, it starts displaying on our website.
The problem is that the user may change the image (to some obscene image) after the approval process, using the generated token. Can we somehow stop users from modifying the images like this?
One temporary fix is to shorten the token's lifetime, e.g. to 5 minutes, and approve the images only after that interval.
I saw this, but it didn't help, as versioning also replaces the already-uploaded image and moves the previously uploaded image to a new versioned path.
Any better solutions?
You should create a workflow around the uploaded images. The process would be:
The client uploads the image
This triggers an Amazon S3 event notification to you/your system
If you approve the image, move it to the public bucket that is serving your content
If you do not approve the image, delete it
This could be an automated process using an AWS Lambda function to update your database and flag photos for approval, or it could be done manually after receiving an email notification via Amazon SNS. The choice is up to you.
The benefit of this method is that the approved copy lives in a bucket the client's upload token cannot write to, so nothing can be substituted once it has been approved.
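As an illustration only, the approval step of such a Lambda could look roughly like this; the bucket names and the approval flag are hypothetical placeholders:

```python
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket names for this sketch.
STAGING_BUCKET = "uploads-staging"
PUBLIC_BUCKET = "images-public"

def handle_review(key: str, approved: bool) -> None:
    """Publish an approved image, or discard a rejected one.

    In a real system, `approved` would come from your review workflow.
    """
    if approved:
        # Copy into the bucket that serves your site; the client's
        # token only grants access to the staging bucket.
        s3.copy_object(
            Bucket=PUBLIC_BUCKET,
            Key=key,
            CopySource={"Bucket": STAGING_BUCKET, "Key": key},
        )
    # Either way, remove the staged original.
    s3.delete_object(Bucket=STAGING_BUCKET, Key=key)
```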

Regarding boto and AWS: while uploading to S3 as a multipart upload, I am not able to get the result from get_all_multipart_uploads

I have a system in which we upload videos to AWS via multipart upload. I have set this process up as a workflow-manager task. When the process completes, I update my database with the status of the payload as complete.
If the payload is not in completed status even after 24 hours, I should delete the associated parts of the multipart upload from S3.
Here is what I have:
1. The video details (name)
2. The bucket to which I will be uploading the video
When I run the command bucket.get_all_multipart_uploads() I am not getting the asset which I uploaded to the system, i.e. I don't find the name of the video I put on S3. I am pretty new to this. Can anyone point me to the proper documentation and explain how to identify the uploads that are hanging on S3?
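(One thing worth noting: a multipart upload only appears in that listing while it is still in progress; once it completes, it is no longer listed, which may be why a finished video's key is missing. As a sketch using the newer boto3 API, listing and aborting uploads stuck for more than 24 hours could look like this; the bucket name is a placeholder.)

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
BUCKET = "my-video-bucket"  # placeholder

# Only in-progress (never completed/aborted) multipart uploads are listed.
cutoff = datetime.now(timezone.utc) - timedelta(hours=24)
response = s3.list_multipart_uploads(Bucket=BUCKET)

for upload in response.get("Uploads", []):
    if upload["Initiated"] < cutoff:
        # Abort stale uploads so their stored parts stop accruing charges.
        s3.abort_multipart_upload(
            Bucket=BUCKET,
            Key=upload["Key"],
            UploadId=upload["UploadId"],
        )
```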

Can I recover lost information about an S3 multipart upload?

In this multipart upload example, one needs to save the upload ID and a set of etags corresponding to each uploaded part until the upload is "closed." If I lose my upload ID, I guess I can recover it by looking through open multipart uploads with ListMultipartUploads, but what if I lose an etag? Can those be recovered somehow, or must I abort the whole transfer and start over?
Once you have retrieved the upload ID from ListMultipartUploads, you can then use ListParts to get the list of parts (and their ETags) that have been completed for this upload. You can use this information to restart your upload from the last completed part.
Multipart Upload API and Permissions
Example of resuming multipart uploads using AWS SDK for iOS
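A minimal boto3 sketch of that recovery, with placeholder bucket and key names:

```python
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "my-example-bucket", "videos/big-file.mp4"  # placeholders

# Recover the upload ID for our key from the open multipart uploads.
uploads = s3.list_multipart_uploads(Bucket=BUCKET).get("Uploads", [])
upload_id = next(u["UploadId"] for u in uploads if u["Key"] == KEY)

# Recover the ETags of the parts that have already completed.
parts = s3.list_parts(Bucket=BUCKET, Key=KEY, UploadId=upload_id).get("Parts", [])
etags = [{"PartNumber": p["PartNumber"], "ETag": p["ETag"]} for p in parts]

# After uploading any remaining parts with upload_part(...) and
# appending their ETags, close the upload:
s3.complete_multipart_upload(
    Bucket=BUCKET,
    Key=KEY,
    UploadId=upload_id,
    MultipartUpload={"Parts": etags},
)
```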