Best practices for uploading a file to S3 and metadata to RDS? - amazon-web-services

Context
I'm building a mock service to learn AWS. I want a user to be able to upload a sound file (which other users can listen to). To do this, the sound file needs to be uploaded to S3, and metadata such as the file name, the name of the uploader, the length, and the S3 key needs to go to RDS. It is preferable that the user uploads directly to S3 with a signed URL instead of doubling the data transferred by first uploading it to my server and from there to S3.
Ideally this would be transactional, but from what I have gathered no such functionality is provided. To implement this while minimizing the risk that the file is successfully uploaded to S3 but the metadata is not written to RDS (and vice versa), my best guess is as follows:
My solution
With words:
First, I attempt to upload the file to S3 with a key (a UUID) that I generate locally or server-side. If this succeeds, I make a request to my API to write the metadata, including the key, to RDS. If that request fails, I remove the object from S3.
With code:
uuid = get_uuid_from_server();
s3Client.putObject({ /* Bucket, Body, ... */ Key: uuid }, function (err, data) {
  if (err) {
    reject(err);
  } else {
    resolve(data);
    // Upload metadata to RDS through an API call to the EC2 server.
    // Remove the S3 object with key `uuid` if that call is unsuccessful.
  }
});
As I'm still learning, my approaches are seldom best practice, but I was unable to find any good information on this particular problem. Is my approach/solution above in line with best practices?
Bonus question: is it beneficial for security purposes to generate the file's key (uuid) server-side instead of client-side?

Here are two approaches you can pick from, assuming the client is a web browser or a mobile app.
1. Use your server as a proxy to S3.
Your server acts as a proxy between your clients and S3. You have full control of the upload flow, can restrict the supported file types, and can inspect file contents (for example, to make sure the file is a valid sound file) before uploading to S3.
2. Use your server to create pre-signed upload URLs
In this approach, your client first asks the server to create one pre-signed URL (or several, for a multipart upload). The client then uploads directly to S3 using those URLs. Your server can save those URLs so it can keep track of the upload later.
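As a minimal sketch of the server side (Python with boto3; the bucket name and the UUID-based key scheme are assumptions, not part of the question), generating a pre-signed PUT URL could look like this:

import uuid
import boto3

s3 = boto3.client("s3")
BUCKET = "my-sound-uploads"  # hypothetical bucket name

def create_upload_url():
    """Create an object key server-side and return a pre-signed PUT URL for it."""
    key = str(uuid.uuid4())
    url = s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": BUCKET, "Key": key},
        ExpiresIn=900,  # URL is valid for 15 minutes
    )
    # Persist (key, url, status="PENDING") in your database here so the
    # upload can be reconciled later.
    return {"key": key, "upload_url": url}

The server generating the key also answers the bonus question: the client never gets to choose where in the bucket it writes.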
To be notified when the upload finishes successfully or unsuccessfully, you can either
(1) Ask clients to call another API, e.g. /ack, after the upload finishes for a particular signed URL. If this API is not called after some time, e.g. 1 hour, you can check with S3 and delete the file accordingly. You can do this because you stored the signed URL in your DB at the start of the upload.
or
(2) Make use of S3 event notifications. You can configure the ObjectCreated event in S3, which fires whenever an object is created, send all such events to an SQS queue, and have your server process each event from there. This way, you do not rely on clients to update your server after an upload finishes; S3 will notify your server of every successful upload.
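A rough sketch of option (2) in Python with boto3, assuming a hypothetical queue URL and a save_metadata() placeholder that writes the row to RDS:

import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.eu-west-1.amazonaws.com/123456789012/upload-events"  # hypothetical

def save_metadata(bucket, key, size):
    """Placeholder: insert/update the row for this key in RDS."""
    ...

def poll_upload_events():
    """Consume S3 ObjectCreated events from SQS and record them in the database."""
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        body = json.loads(msg["Body"])
        for record in body.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            size = record["s3"]["object"]["size"]
            save_metadata(bucket, key, size)
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])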

Related

Why is lambda getting randomly timed out while trying to read the head object for a key on S3 bucket?

I am working on a feature where a user can upload multiple files, which need to be parsed and converted to PDF if required. For that I'm using AWS, and when the user selects N files for upload, the following happens:
The client browser is connected to an AWS WebSocket API which is responsible for sending back the parsed data to respective clients later.
A signed URL for S3 is obtained from the web server, and all of the user's files are uploaded to an S3 bucket using it.
As soon as each file is uploaded, a lambda function is triggered for it, which fetches the object for that file to get its content and some metadata needed to associate the file with the respective client.
Once the files are parsed, the response data is sent back to the respective connected clients via the WebSocket and the browser JS catches the event data and renders it.
The issue I'm facing is that the lambda function randomly times out at the line that fetches the object for the file (either head_object or get_object). This happens for roughly 50% of the files (usually I test by sending 15 files at once, and 6-7 of them fail).
import urllib.parse

import boto3

s3 = boto3.client("s3")

def lambda_handler(event, context):
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(event["Records"][0]["s3"]["object"]["key"], encoding="utf-8")
    # This call (or head_object) gets stuck for roughly 50% of the files
    response = s3.get_object(Bucket=bucket, Key=key)
What I have observed is that even if head_object or get_object is called for a file that already exists on S3, instead of for the file whose upload triggered the lambda, it still times out at the same rate.
But if the objects are fetched in bulk via a local script using boto3, then they are all fetched in under a second for 15 files (a minimal sketch of such a script follows).
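For reference, a minimal sketch of such a local bulk check with boto3 (the bucket name and key list are placeholders):

import boto3

s3 = boto3.client("s3")
BUCKET = "my-upload-bucket"  # placeholder bucket name

def fetch_all(keys):
    """Head every key sequentially; run locally this finishes in well under a second for 15 keys."""
    for key in keys:
        meta = s3.head_object(Bucket=BUCKET, Key=key)
        print(key, meta["ContentLength"])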
I have also tried using my own AWS access key ID and secret key in the lambda to rule out any issue caused by the temporary credentials.
So it seems that multiple lambda instances are having trouble getting the S3 objects in parallel, which shouldn't happen, as AWS is supposed to scale well.
What should be done to get around it?

How to return the response to the front end while processing a CSV file from S3 using Lambda?

I'm new to AWS Lambda, Cognito, AWS API Gateway and serverless computing.
My user uploads a CSV file, and I want to insert its contents into Amazon RDS and return a success or failure response to the user.
I understand that I can upload the file to S3 (using Cognito identity pools) and then write a lambda that triggers on the S3 upload and inserts the data from the CSV into Amazon RDS. I want to show the success or failure response from the lambda to the user.
One way I thought about is:
After the upload to S3, show a message: "Upload successful. File processing".
Then redirect the user to a file list page and show the status of the file there.
Meanwhile, my lambda function will insert the file name into a file table with the status column set to "IN PROGRESS", and update its status depending on the success/failure of the CSV insert.
I will keep polling the file table every 10 seconds or so, and update the status shown on the file list page for the recent file when its status changes in the file table.
Is there a better way to do this using AWS serverless computing?
Going with the serverless approach is good. If you're trying to deliver real-time notifications, you can take a look at using API Gateway with WebSocket APIs.
You would enhance your suggestion by replacing the 10-second poll with an open WebSocket connection instead.
Once the file is processed, your Lambda would push a message over the WebSocket connection, and that is how you would notify the customer.
This is how real-time notification systems and instant-messenger-style applications tend to work.
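A minimal sketch of that notification step in Python with boto3, assuming a hypothetical WebSocket API endpoint and that the client's connection ID was stored when it connected:

import json
import boto3

# Hypothetical endpoint of the API Gateway WebSocket API (the stage's connection-management URL)
apigw = boto3.client(
    "apigatewaymanagementapi",
    endpoint_url="https://abc123.execute-api.eu-west-1.amazonaws.com/prod",
)

def notify_client(connection_id, file_name, status):
    """Push the processing result for a file to the connected browser."""
    apigw.post_to_connection(
        ConnectionId=connection_id,
        Data=json.dumps({"file": file_name, "status": status}).encode("utf-8"),
    )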

What is the correct way to set up S3 for loading content in the browser?

I want to do the following: a user in a browser types some text, and after pressing a 'Save' button, the text should be saved to a file (for example content.txt) in a folder (for example /username_text) at the root of an S3 bucket.
Also, when the user visits the same page again, I want them to be able to load the content from S3 and continue working on the file, and then save it back to S3 when they are done.
Probably important to mention, but I plan on using NodeJS for my back-end...
My question now is: what is the best way to set up this storing-and-retrieving flow? Do I create an API Gateway + Lambda function to GET and POST files through that? Or do I, for example, use the aws-sdk in Node to directly push and pull files from S3? Or is there a better way to do this?
I looked at the following two guides:
Using AWS S3 Buckets in a NodeJS App – Codebase – Medium
Image Upload and Retrieval from S3 Using AWS API Gateway and Lambda
Welcome to StackOverflow!
I think you are worrying too much about the not-so-important stuff. S3 is nothing but a storage system. You could have decided to store the content of these files in DynamoDB, RDS, etc. What would you do if you stored their contents in those real databases? You'd fetch the data and display it to the user, wouldn't you?
This is what you need to do with S3! S3 is a smart choice for your scenario because your "file" can grow very big, and S3 is a great place for storing files. However, apparently you're not actually storing files (think of .pdf, .mp4, .mov, etc.); you're essentially only storing human-readable text.
So here's one approach on how to solve your problem:
FETCHING FILE CONTENT
User logs in
You fetch the user's personal information based on some token. You can store all the metadata in DynamoDB, where, given a user_id, you fetch all the "files" for this user. These "files" (metadata only) would hold the bucket and key of the actual file on S3.
You use the getObject API from S3 to fetch the file based on your query and display the body of your file to your user in a RESTful way. Your response should look something like this:
{
  "content": "some content"
}
SAVING FILE CONTENT
User logs in
The user writes anything in a form and submits it. In your Lambda function, you grab the content of this form and process it. This request should look something like this:
{
  "file_id": "some-id",
  "user_id": "some-id",
  "content": "some-content"
}
If the file_id exists, update the content in S3. Otherwise, upload a new file to S3 and then create a new entry in DynamoDB. You'd then, of course, have to check whether the user submitting the changes actually owns the file; if you're using UUIDs it shouldn't be too much of a problem, but it's still worth checking in case an ID is leaked somehow.
This way, you don't need to worry about uploading/downloading files, which are CPU-intensive tasks, so you can keep your costs low and use very little RAM in your functions (128 MB should be more than enough); after all, you're now only serving text. Not only will this simplify your design, it will also make things simpler both in API Gateway and in your code, as you won't have to deal with binary types. The most you'll do is convert the buffer from S3 to a string when serving some content, which should be completely fine.
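As a rough sketch of both flows (shown in Python with boto3 rather than the NodeJS the question mentions; the bucket name and the way file_id maps to the S3 key are assumptions), the Lambda handlers could look something like this:

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "user-text-content"  # hypothetical bucket name

def get_content(event, context):
    """Return the text content of a user's file as JSON."""
    # file_key would come from the metadata you looked up (e.g. in DynamoDB)
    file_key = event["pathParameters"]["file_id"]
    obj = s3.get_object(Bucket=BUCKET, Key=file_key)
    text = obj["Body"].read().decode("utf-8")
    return {"statusCode": 200, "body": json.dumps({"content": text})}

def save_content(event, context):
    """Overwrite (or create) the user's file with the submitted text."""
    payload = json.loads(event["body"])
    file_key = payload["file_id"]
    s3.put_object(Bucket=BUCKET, Key=file_key, Body=payload["content"].encode("utf-8"))
    # Create/update the corresponding metadata entry in DynamoDB here.
    return {"statusCode": 200, "body": json.dumps({"file_id": file_key})}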
EDIT
On your question regarding whether you should upload it from the browser or not, I suggest you take a look into this answer where I cover the pros/cons of doing it via API Gateway vs from the Browser.

How to check an AWS S3 key for existence with the AWS CPP SDK?

I use the S3 C++ SDK and have the following scenario:
I get some information sent from a client to my server (client wants to download from S3)
With the information sent I create a S3 key
I want to check if the key exists (has a file) on the S3
I create a presigned URL that allows the client to download a file from S3
Send URL to client
Client downloads the file
Before I execute step 4, I want to check whether the key really exists on S3. The client can't download a file that does not exist anyway.
I have an AWS::S3Client object. Do I really need to create a TransferManager for this or is there a simple way to handle this with the client object?
The client itself does not have a relation to S3 so I can't check it there. The server has to do all the work.
I found a working solution:
// Build a HeadObject request for the key in question
auto client = Aws::MakeShared<Aws::S3::S3Client>("client", getCredentials(), getClientConfig());
Aws::S3::Model::HeadObjectRequest request;
request.WithBucket(<bucketname>).WithKey(<s3key>);

// HEAD the object; IsSuccess() tells you whether the key exists on S3
const auto response = client->HeadObject(request);
const bool keyExists = response.IsSuccess();
Issue an authenticated HTTP HEAD request against the object. You can use:
HeadObject
HeadObjectAsync
To quote:
The HEAD operation retrieves metadata from an object without returning the object itself. This operation is useful if you're only interested in an object's metadata. To use HEAD, you must have READ access to the object.
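For what it's worth, the equivalent existence check done server-side in Python with boto3 (bucket and key names here are placeholders) would look roughly like this:

import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

def key_exists(bucket, key):
    """Return True if the key exists in the bucket, False if S3 answers 404."""
    try:
        s3.head_object(Bucket=bucket, Key=key)
        return True
    except ClientError as err:
        if err.response["Error"]["Code"] in ("404", "NoSuchKey", "NotFound"):
            return False
        raise  # some other error (permissions, throttling, ...)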

Amazon S3: Do not allow client to modify already uploaded images?

We are using S3 for our image upload process. We approve all the images that are uploaded to our website. The process is as follows:
Clients upload images to S3 from JavaScript at a given path (using a token).
Once we get back the URL from S3, we save the S3 path in our database with an isApproved flag set to false in the photos table.
Once the image is approved by our executive, it starts displaying on our website.
The problem is that the user may replace the image (with some obscene image) after the approval process, using the generated token. Can we somehow stop users from modifying the images like this?
One temporary fix is to shorten the token lifetime, e.g. to 5 minutes, and only approve images after that interval has passed.
I saw this, but it didn't help, as versioning also replaces the already-uploaded image and moves the previously uploaded image to a new versioned path.
Any better solutions?
You should create a workflow around the uploaded images. The process would be:
The client uploads the image
This triggers an Amazon S3 event notification to you/your system
If you approve the image, move it to the public bucket that is serving your content
If you do not approve the image, delete it
This could be an automated process using an AWS Lambda function to update your database and flag photos for approval, or it could be done manually after receiving an email notification via Amazon SNS. The choice is up to you.
The benefit of this method is that nothing can be substituted once approved.
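As a minimal sketch of the approve/reject step in Python with boto3 (the bucket names and how the review decision arrives are assumptions): the image stays in a private staging bucket until it is reviewed, and approval copies it to the public bucket, so the original upload token can no longer affect what is served.

import boto3

s3 = boto3.client("s3")
STAGING_BUCKET = "uploads-pending-review"  # hypothetical private bucket clients upload to
PUBLIC_BUCKET = "images-public"            # hypothetical bucket served on the website

def review_image(key, approved):
    """Copy an approved image to the public bucket, or discard it if rejected."""
    if approved:
        s3.copy_object(
            Bucket=PUBLIC_BUCKET,
            Key=key,
            CopySource={"Bucket": STAGING_BUCKET, "Key": key},
        )
        # Update the photos table here: isApproved = true, pointing at the public bucket.
    # Remove the staged object either way, so the upload token can't be reused against it.
    s3.delete_object(Bucket=STAGING_BUCKET, Key=key)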