Why does aws s3 getObject execute slowly even with small files?

I am relatively new to Amazon Web Services. A problem came up while I was coding my new web app. I am currently storing profile pictures in an S3 bucket.
I don't want these profile pictures to be seen by the public, only by authorized members. So I have a PHP file that works as follows:
This PHP file executes getObject and sends out a header to show the picture, but only if the user is allowed to see it. I query the database and also check the session to make sure that the currently logged-in user has access to the picture. All is working fine, but it takes around 500 milliseconds for the GET request to execute, even on small files (40 kB). It gets even slower on bigger files, or if I embed the PHP file in an img tag multiple times with different query-string values.
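Roughly, the logic is equivalent to the sketch below (illustrative only; my real code is PHP with the AWS SDK, and the bucket name and authorization check here are just placeholders):

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-profile-pictures"  # placeholder bucket name

    def user_may_view(user_id, picture_key):
        # Placeholder for the real check: session validation + database lookup.
        return True

    def serve_profile_picture(user_id, picture_key):
        """Return (status, headers, body) for the requested picture."""
        if not user_may_view(user_id, picture_key):
            return 403, {}, b""
        # This S3 round trip is the part that takes ~400-500 ms from localhost.
        obj = s3.get_object(Bucket=BUCKET, Key=picture_key)
        return 200, {"Content-Type": obj["ContentType"]}, obj["Body"].read()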
I need to mention that I'm testing this in a localhost environment with an Apache web server.
Could the problem be that getObject is optimized to be run from an EC2 instance, so that if I tested this on EC2 the response time would be much better?
My S3 bucket is in the London region, and I'm testing from Hungary with a good internet connection, so I'm not sure whether this response time is what I should expect.
I read that other people have had similar issues, but from my understanding the time it takes S3 to transfer files to an EC2 instance should be minimal, as they are all in the cloud and the latency between these and the other AWS services should be minimal (at least if they are in the same region).
Please don’t tell me in comments that I should just make my bucket public and embed the direct link to the file as it is not a viable option for obvious reasons. I also don’t want to generate pre-signed urls for various reasons.
I also tested this without querying the database, so that essentially the only logic in my code is getting the object and showing it to the user. Even then I get 400+ millisecond response times.
I also tried using doesObjectExist() and I still need to wait around 300-400 milliseconds for that to give me a response.
(Screenshot: multiple GET requests to the same PHP file used as an image source.)
UPDATE
I tested it on my EC2 instance and got a much better response time. I tested it with multiple files and all is fine. It seems that if you use getObject from localhost, the time it takes to connect to S3 and fetch the data multiplies.
Thank you for the answers!

Related

How to add custom authentication to aws s3 download of large files

I'm trying to figure out how to implement these requirements for S3 downloads:
Signed URL (links should become invalid after some amount of time).
Download only 1 time - any other requests to the same URL should fail.
Need to restrict downloads to the user/browser who made the request to generate the signed URL - no other user should be able to download.
Be able to deal with large files (ideally, streaming, just like when someone downloads directly from a standard S3 access point).
Things that I've tried:
S3 Object Lambda + Access Point
Generating a pre-signed URL to the Object Lambda access point works well.
Making use of S3 object metadata to store download state and restrict downloads to just 1 time also works well.
No way to access the user-agent or requestor's IP.
Large files are a problem. The timeout has been configured to 15 minutes (the max), but the request still times out much earlier. This was done with NodeJS.
Lambda + Lambda URL
A pre-signed URL is generated and passed to the Lambda URL as an encoded param; the Lambda makes the request if auth/validation passes. This approach seems to work fine.
Can use the same approach of leveraging S3 object metadata to limit downloads to just 1 time (roughly sketched below).
User-agent and requestor IP are available, which is great.
Large files are a problem. I've tried NodeJS and it behaves the same as the S3 Object Lambda (it eventually times out, even earlier than the configured time). I also implemented the Java streaming handler, but it dies with an "out of memory" error, even when I bump the memory up to 3GB (the file is only 1GB, and I thought streaming would get around the memory problem anyway). I've tried several ways to stream (Java 11), but it really seems like the streaming handler is not actually streaming, but buffering somewhere outside of the Lambda.
I'm now unsure if AWS lambda will be able to handle all of these requirements, but I would really like to know if others might have ideas, or if I'm missing something.
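For reference, the metadata-based one-time check mentioned above is roughly the following (a sketch in Python for brevity; my actual handlers are NodeJS/Java, the bucket name and metadata key are placeholders, and the head/copy sequence is not atomic, so concurrent requests would need extra care):

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-download-bucket"  # placeholder

    def issue_one_time_url(key, expires_in=300):
        """Mark the object as not-yet-downloaded and hand out a short-lived URL."""
        s3.copy_object(
            Bucket=BUCKET, Key=key,
            CopySource={"Bucket": BUCKET, "Key": key},
            Metadata={"download-state": "pending"},
            MetadataDirective="REPLACE",
        )
        return s3.generate_presigned_url(
            "get_object", Params={"Bucket": BUCKET, "Key": key}, ExpiresIn=expires_in
        )

    def allow_download(key):
        """Called by the fronting Lambda before serving the request."""
        head = s3.head_object(Bucket=BUCKET, Key=key)
        if head["Metadata"].get("download-state") != "pending":
            return False  # already consumed, or never issued
        # Flip the flag so any further request for the same URL fails.
        s3.copy_object(
            Bucket=BUCKET, Key=key,
            CopySource={"Bucket": BUCKET, "Key": key},
            Metadata={"download-state": "done"},
            MetadataDirective="REPLACE",
        )
        return True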

best practice for streaming images in S3 to clients through a server

I am trying to find the best practice for streaming images from s3 to client's app.
I created a grid-like layout using flutter on a mobile device (similar to instagram). How can my client access all its images?
Here is my current setup: the client opens their profile screen (which contains the grid-like layout of all images sorted by timestamp). This automatically requests all images from the server. My Python 3 backend server uses boto3 to access S3 and DynamoDB tables. The DynamoDB table has a list of all image paths the client uploaded, sorted by timestamp. Once I get the paths, I use them to download all images to my server first and then send them to the client.
Basically my server is the middleman, downloading and then sending the images back to the client. Is this the right way of doing it? It seems that if the client accessed S3 directly, it would be faster, but I'm not sure if that is safe. Plus I don't know how I can give clients access to S3 without giving them AWS credentials...
Any suggestions would be appreciated. Thank you in advance!
What you are doing will work, and it's probably the best option if you are optimising for getting something working quickly, without worrying too much about wasted server resources and unnecessary computation, and if you don't have scalability concerns.
However, if you are worried about scalability, lower latency, and secure access to these image resources, you might want to improve your current architecture.
Once I get the paths, I use that to download all images to my server first and then send it to the client.
This is the first part I would try to get rid of, as you don't really need your backend to download these images and stream them itself. However, it still seems necessary to control access to the resources based on who owns them. I would consider switching to the setup below to improve latency and spend fewer server resources:
Once you get the paths in your backend service, generate presigned URLs for the S3 objects, which will give your client temporary access to these resources (depending on your needs, you can adjust how long a URL remains valid).
Then send these links to your client so that it can stream directly from S3, rather than your server being the middleman.
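Since your backend already uses boto3, generating those links only takes a few lines (a minimal sketch; the bucket name is a placeholder):

    import boto3

    s3 = boto3.client("s3")

    def presign_image_urls(image_keys, bucket="my-image-bucket", expires_in=900):
        """Return short-lived GET URLs the mobile client can fetch directly."""
        return [
            s3.generate_presigned_url(
                "get_object",
                Params={"Bucket": bucket, "Key": key},
                ExpiresIn=expires_in,  # seconds; tune to your needs
            )
            for key in image_keys
        ]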
Once you have this setup working, I would consider using Amazon CloudFront to improve access to your objects through the CDN capabilities that CloudFront gives you, especially if your clients are distributed across different geographical regions. As far as I can see, you can also make CloudFront work with signed URLs.
Is this the right way of doing it? It seems that if the client accesses S3 directly, it'll be faster but I'm not sure if that is safe
Presigned URLs are your way of mitigating uncontrolled access to your S3 objects. You probably need to worry about edge cases, though (e.g. how the client should act when its access to an S3 object has expired, so that users won't notice this, etc.). All of these are costs of making something work at scale, if you have those scalability concerns.

How do I transfer images from public database to Google Cloud Bucket without downloading locally

I have a CSV file with over 10,000 URLs pointing to images on the internet. I want to perform some machine learning tasks on them. I am using Google Cloud Platform infrastructure for this task. My first task is to transfer all these images from the URLs to a GCP bucket, so that I can access them later via Docker containers.
I do not want to download them locally first and then upload them, as that is just too much work; I want to transfer them directly to the bucket. I have looked at Storage Transfer Service, and for my specific case I think I will be using a URL list. Can anyone help me figure out how to proceed? Is this even a possible option?
If yes, how do I generate the MD5 hash that is mentioned here for each URL in my list, and also get the number of bytes of the image at each URL?
As you noted, Storage Transfer Service requires that you provide it with the MD5 of each file. Fortunately, many HTTP servers may provide you with the MD5 of an object without requiring that you download it. Issuing an HTTP HEAD request may result in the server providing you with a Content-MD5 header in its response, which may not be in the form that Storage Transfer service requires, but it can be converted into that form.
The downside here is that web servers are not necessarily going to provide you with that information. There's no way of knowing without checking.
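A rough way to check this, and to build the URL list at the same time, could look like the sketch below (Python; the TsvHttpData header and column order reflect my reading of the Storage Transfer Service docs, so verify them against the current spec before relying on this):

    import csv
    import requests

    def head_info(url):
        """Ask the server for size and MD5 without downloading the body."""
        resp = requests.head(url, allow_redirects=True, timeout=10)
        size = resp.headers.get("Content-Length")
        md5 = resp.headers.get("Content-MD5")  # base64-encoded when the server sends it
        return size, md5

    with open("urls.csv") as src, open("url_list.tsv", "w", newline="") as dst:
        writer = csv.writer(dst, delimiter="\t")
        writer.writerow(["TsvHttpData-1.0"])       # URL-list format header
        for row in csv.reader(src):
            if not row:
                continue
            url = row[0]
            size, md5 = head_info(url)
            if size and md5:
                writer.writerow([url, size, md5])  # URL, byte count, base64 MD5
            else:
                print(f"no size/MD5 from the server for {url}")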
Another option worth considering is to set up one or more GCE instances and run a script from there to download the objects to your GCE instance and from there upload them into GCS. This still involves downloading them "locally," but locally no longer means a place off of Google Cloud, which should speed things up substantially. You can also divide up the work by splitting your CSV file into, say, 10 files with 1000 objects each in them, and setting up 10 GCE instances to do the work.
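On each GCE instance, the per-shard script can stream straight from the URL into GCS without touching the local disk (a sketch; the bucket and file names are placeholders):

    import requests
    from google.cloud import storage  # uses the instance's default credentials

    client = storage.Client()
    bucket = client.bucket("my-image-bucket")  # placeholder bucket name

    def transfer(url, dest_name):
        """Stream one URL into GCS without writing to local disk."""
        resp = requests.get(url, stream=True, timeout=30)
        resp.raise_for_status()
        blob = bucket.blob(dest_name)
        # upload_from_file reads the streamed response body directly.
        blob.upload_from_file(resp.raw, content_type=resp.headers.get("Content-Type"))

    # Process one shard of the split CSV on this instance.
    with open("urls_shard_03.csv") as f:
        for line in f:
            url = line.strip()
            if url:
                transfer(url, url.rsplit("/", 1)[-1])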

Transfer file from AWS S3 to OneDrive with AWS Lambda

A client of ours requested that we have copies of their files on both AWS S3 and OneDrive.
The usual MO: a file is sent from an iOS application to an AWS S3 bucket. This triggers an AWS Lambda function which attaches the file to an email and sends a copy to the client, which they then store on OneDrive. Now we want to skip the email part and transfer the file directly to OneDrive.
All my research so far points to Zapier, CloudRail, or the MS Graph REST API. The problem I'm having is that we want to transfer the file with an AWS Lambda function (Java 8), automagically. Almost all the tutorials and examples on MS Graph need a client to log in manually, with mostly client-side logic. The other methods have more overhead, and we don't want to (unnecessarily) make our stack more complicated than it already is.
I realize this is a very specific case. We are systematically replacing the client's file management system, without disrupting their day-to-day operations too much.
Any conclusive pointers/examples/tutorials to get this done server side would be greatly appreciated.
I'm not sure how well S3 aligns with OneDrive; they are quite different models. OneDrive is provisioned per user, which begs the question: which user would you want to copy this file to? I would think Azure Storage would be a far better fit, as it uses a similar model to S3.
You can use Microsoft Graph API to upload the file to a user's OneDrive. You would need to authenticate the user in order to obtain an Access and Refresh Token. Once this process is done, you can store that Refresh Token and retrieve an updated Access Token as needed.
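The flow boils down to two HTTP calls, roughly as below (sketched in Python for brevity; the same calls work from a Java 8 Lambda, and the tenant, client ID/secret, user, and scope are placeholders that depend on your Azure AD app registration):

    import requests

    TENANT = "your-tenant-id"            # placeholders from your app registration
    CLIENT_ID = "your-client-id"
    CLIENT_SECRET = "your-client-secret"
    TOKEN_URL = f"https://login.microsoftonline.com/{TENANT}/oauth2/v2.0/token"

    def refresh_access_token(refresh_token):
        """Trade the stored refresh token for a fresh access token."""
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "client_id": CLIENT_ID,
            "client_secret": CLIENT_SECRET,
            "scope": "https://graph.microsoft.com/.default",  # adjust to your scopes
        })
        resp.raise_for_status()
        return resp.json()["access_token"]

    def upload_small_file(access_token, user_id, path, data):
        """Simple upload (roughly < 4 MB); larger files need a Graph upload session."""
        url = f"https://graph.microsoft.com/v1.0/users/{user_id}/drive/root:/{path}:/content"
        resp = requests.put(url, headers={"Authorization": f"Bearer {access_token}"}, data=data)
        resp.raise_for_status()
        return resp.json()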
Also with CloudRail it's necessary to authenticate the user, but there are methods to store and use an access token.
The services have two methods, loadAsString and saveAsString, which are used to store and load credentials. You could call loadAsString with your access token; the string can be different from service to service, but it will look something like this: [{"access_token": "YOUR ACCESS TOKEN"}]
To add to this, Microsoft now has a cloud migration tool, www.mover.io, that allows you to sync files and folders from most clouds into Azure Blob, SharePoint or OneDrive directly, i.e. without downloading/uploading via a client machine.
Personally used it only for a one-time sync, but leaving it here for posterity.
The client only has to log in once, so if you already have the client and secret keys, you can do the manual flow once and then save the generated token file together with your code files in AWS. The next time the code is run, it uses the refresh token. Last time I did this I was able to set the refresh token to never expire, but I think Microsoft has since removed that option, and now the token can only last something like 2 or 3 years max.

Log delay in Amazon S3

I have recently started hosting on Amazon S3, and I need the log files to calculate statistics for the "get", "put", and "list" operations on the objects.
I've observed that the log files are organized strangely. I don't know when a log will appear (not immediately; at least 20 minutes after the operation) or how many lines of logs will be contained in one log file.
After that, I need to download these log files and analyse them. But I can't figure out how often I should do this.
Can somebody help? Thanks.
What you describe (log files being made available with delays and in unpredictable order) is exactly the behaviour AWS declares you should expect. This comes from the nature of the distributed system AWS uses to provide the S3 service: the same request may be served each time by a different server (I have seen 5 different IP addresses provided for publishing).
So the only solution is to accept the delay: observe the delay you experience, add some extra time, and learn to live with this total delay (I would expect something like 30 to 60 minutes, but statistics could tell you more).
If you need the log records ordered, you either have to sort them yourself or look for a log processing solution; I have seen applications offered exactly for this purpose.
In case you really need to get your logs with a very short delay, you have to make the logs yourself, and this means writing and running some frontend which gives access to your files on S3 and at the same time does the logging you need.
I run such a solution: users get a username, a password, and the URL of my frontend. When they send a request, I check whether they provided proper credentials and whether they are allowed to see the given resource; if so, I create a temporary URL for that resource, valid for a few minutes, and redirect the request to it.
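The core of that frontend is only a few lines (a simplified sketch in Python/boto3, not my production code; the credential and authorization checks are placeholders):

    import time
    import boto3

    s3 = boto3.client("s3")
    BUCKET = "my-private-bucket"  # placeholder

    def credentials_ok(user, password):
        return True   # placeholder: check your user store

    def may_access(user, key):
        return True   # placeholder: per-user authorization rules

    def handle(user, password, key):
        """Authenticate, write our own log record immediately, then redirect."""
        if not credentials_ok(user, password) or not may_access(user, key):
            return 403, {}
        with open("/var/log/s3-frontend.log", "a") as log:
            log.write(f"{time.time():.0f} GET {key} by {user}\n")  # no 20+ minute delay
        url = s3.generate_presigned_url(
            "get_object", Params={"Bucket": BUCKET, "Key": key}, ExpiresIn=300
        )
        return 302, {"Location": url}  # temporary URL, valid for a few minutes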
But such a frontend costs money (you have to run it somewhere) and is less robust than accessing AWS S3 directly.
Good luck, Lulu.
A lot has changed since the question was originally posted. The delay is still there, but one of the OP's concerns was when to download the logs in order to analyze them.
One option right now would be to leverage Event Notifications: https://docs.aws.amazon.com/AmazonS3/latest/user-guide/setup-event-notification-destination.html
This way, whenever an object is created in the access-logs bucket, you can trigger a notification to SNS, SQS or Lambda, and based on that download and analyze the log files.
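For example, a notification on the access-logs bucket can invoke a small Lambda that pulls each new log file as soon as it lands (a sketch in Python; the analysis itself is left as a placeholder):

    from urllib.parse import unquote_plus
    import boto3

    s3 = boto3.client("s3")

    def handler(event, context):
        """Invoked by an s3:ObjectCreated:* notification on the access-logs bucket."""
        for record in event["Records"]:
            bucket = record["s3"]["bucket"]["name"]
            key = unquote_plus(record["s3"]["object"]["key"])
            body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")
            for line in body.splitlines():
                analyze(line)

    def analyze(log_line):
        # Server access log lines are space-delimited; the operation field looks
        # like REST.GET.OBJECT, REST.PUT.OBJECT, REST.GET.BUCKET, and so on.
        pass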