Setup:
We are running an e-commerce website that consists of CloudFront --> ALB --> EC2. We serve the images from S3 via a CloudFront behavior.
Issue:
Our admin URL looks like example.com/admin. We upload product images through the admin panel as a zip file that goes through CloudFront. Each zip file is around 100-150 MB and contains roughly 100 images. While uploading the zip file we get a 502 gateway error from CloudFront because the request takes more than 30 seconds, which is the default timeout value for CloudFront.
Expected solution:
Is there a way we can bypass CloudFront only for uploading images?
Alternatively, is there a way to increase the timeout value for CloudFront?
Note: Any recommended solutions are highly appreciated.
CloudFront is a CDN service that speeds up your site by caching your static files at edge locations, so it won't help you on the upload side.
In my opinion, for the image upload feature, you should use the AWS SDK to connect directly to S3.
If you want to upload files directly to S3 from the client, I highly suggest using S3 presigned URLs.
You create an endpoint in your API that generates a presigned URL for a given object (myUpload.zip), pass it back to the client, and use that URL to do the upload. It's safe, and you won't have to expose any credentials for uploading. Make sure to set the expiration time to something reasonable (e.g. one hour).
More on presigned URLs here: https://aws.amazon.com/blogs/developer/generate-presigned-url-modular-aws-sdk-javascript/
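A minimal sketch of such an endpoint handler, assuming the AWS SDK for JavaScript v3 on a Node.js backend; the bucket name, region, and function name are placeholders:

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// Return a presigned PUT URL the client can upload to directly.
export async function createUploadUrl(key: string): Promise<string> {
  const command = new PutObjectCommand({
    Bucket: "my-product-images", // placeholder bucket
    Key: key,                    // e.g. "uploads/myUpload.zip"
    ContentType: "application/zip",
  });
  // Expire after one hour, as suggested above.
  return getSignedUrl(s3, command, { expiresIn: 3600 });
}

The client then issues a plain HTTP PUT of the zip file to the returned URL, so the upload never has to pass through CloudFront or your EC2 origin.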
Related
I am trying to build a file upload and download app using AWS API Gateway, AWS Lambda, and S3 for storage.
AWS Lambda caps the payload size at 6 MB and API Gateway at 10 MB.
Therefore we decided to use presigned URLs for uploading and downloading files.
Step 1 - The client sends a list of filenames (say, 5 files) to Lambda.
Step 2 - Lambda creates and returns a list of presigned (PUT) URLs for those files (5 URLs).
Step 3 - The client uploads the files to S3 using the URLs it received.
Note - The filenames are the S3 object keys.
We take a similar approach for downloading files.
Now the issue is latency: it takes quite a long time, and performance suffers.
The question is: is the above approach the only way to do file upload and download with Lambda?
This looks like a case for S3 Transfer Acceleration. You'll still create presigned URLs, but you enable this special setting on the bucket, which will reduce latency.
https://docs.aws.amazon.com/AmazonS3/latest/dev/transfer-acceleration.html
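If you go this route, here is a rough sketch of the change, assuming the AWS SDK for JavaScript v3: keep the same presigned-URL flow, but configure the client to use the accelerate endpoint (the bucket must have Transfer Acceleration enabled). Bucket, region, and function name below are placeholders.

import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({
  region: "us-east-1",          // placeholder region
  useAccelerateEndpoint: true,  // presigned URLs will use <bucket>.s3-accelerate.amazonaws.com
});

export async function createAcceleratedUploadUrl(key: string): Promise<string> {
  const command = new PutObjectCommand({ Bucket: "my-upload-bucket", Key: key }); // placeholder bucket
  return getSignedUrl(s3, command, { expiresIn: 900 }); // 15-minute expiry
}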
Alternatively, you can use CloudFront with an S3 origin to upload/download files. You might have to re-architect your solution, but with CloudFront and the AWS networking backbone, latency can be reduced a lot.
Our media image files are stored under
http://s3.ap-northeast-2.amazonaws.com/{bucket_name}/media/
I want to change the URL to
http://static.example.com/media/ and serve them through Cloudflare / CloudFront if possible.
I've seen tutorials which describe the steps for using S3 as your endpoint or a CDN as your endpoint (https://ruddra.com/posts/aws-boto3-useful-functions/),
but I haven't found one that describes the steps to move from S3 to a CDN.
Specifically,
Do I need to move files from S3 to the CDN manually?
I think the image field itself doesn't have a URL attached to it, and once we move (or connect) the S3 images to the CDN, I believe one can use http://static.example.com instead of http://s3.ap-northeast-2.amazonaws.com/{bucket_name}/.
What about the image URLs stored in the database?
For instance, when you upload an image as part of a posting, the posting HTML might contain the full image URL. These will require DB data migrations, I believe.
To create a custom domain for a bucket you will need to:
create a distribution in CloudFront and set the origin to your bucket
create a record set in Route 53 and point a CNAME record at the CloudFront endpoint you created (a code sketch of this step follows below).
That is it.
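As a rough illustration of the second step, assuming the AWS SDK for JavaScript v3; the hosted zone ID and the distribution domain name are placeholders:

import { Route53Client, ChangeResourceRecordSetsCommand } from "@aws-sdk/client-route-53";

const route53 = new Route53Client({ region: "us-east-1" });

// Upsert a CNAME that points static.example.com at the CloudFront distribution.
export async function pointDomainAtCloudFront(): Promise<void> {
  await route53.send(
    new ChangeResourceRecordSetsCommand({
      HostedZoneId: "Z1234567890ABC", // placeholder hosted zone ID
      ChangeBatch: {
        Changes: [
          {
            Action: "UPSERT",
            ResourceRecordSet: {
              Name: "static.example.com",
              Type: "CNAME",
              TTL: 300,
              ResourceRecords: [{ Value: "d111111abcdef8.cloudfront.net" }], // placeholder distribution domain
            },
          },
        ],
      },
    })
  );
}

Note that the distribution also needs static.example.com listed as an alternate domain name (and an ACM certificate if you serve it over HTTPS).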
I have to upload video files into an S3 bucket from my React web application. I am currently developing a simple React application, and from it I am trying to upload video files into an S3 bucket, so I have considered two approaches for implementing the uploading part.
1) Amazon EC2 instance: From the front end, I hit the API, and the server runs on an Amazon EC2 instance, so I can upload the files into the S3 bucket from the EC2 instance.
2) Amazon API Gateway + Lambda: I send the local files directly into an S3 bucket through API Gateway and a Lambda function by calling the HTTPS URL with the data.
But I am not happy with either method because both are costly. I have to upload files of more than 200 MB into an S3 bucket, and I don't know how to optimize this uploading process. The video uploading part is essential for my application, so I have to be careful with it and make it both performant and cost-effective.
If someone knows a solution, please share it with me; it will be very helpful for continuing my work.
Thanks in advance.
You can upload files directly from your React app to S3 using the AWS JavaScript SDK and Cognito identity pools, and for the optimization part you can use the S3 multipart upload capability to upload a file in multiple parts. I'm providing links to read about it further; a minimal code sketch follows the links.
AWS javascript upload image example
cognito identity pools
multipart upload to S3
Also take a look at the AWS managed upload helper made for the JavaScript SDK:
aws managed upload javascript
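A minimal browser-side sketch pulling these together, assuming the AWS SDK for JavaScript v3; the identity pool ID, region, bucket, and function name are placeholders:

import { S3Client } from "@aws-sdk/client-s3";
import { Upload } from "@aws-sdk/lib-storage";
import { fromCognitoIdentityPool } from "@aws-sdk/credential-providers";

const s3 = new S3Client({
  region: "us-east-1", // placeholder region
  credentials: fromCognitoIdentityPool({
    clientConfig: { region: "us-east-1" },
    identityPoolId: "us-east-1:xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx", // placeholder pool ID
  }),
});

// Managed upload: large files are split into parts and uploaded in parallel.
export async function uploadVideo(file: File): Promise<void> {
  const upload = new Upload({
    client: s3,
    params: { Bucket: "my-video-bucket", Key: `videos/${file.name}`, Body: file }, // placeholder bucket
    partSize: 10 * 1024 * 1024, // 10 MB per part
    queueSize: 4,               // parts uploaded concurrently
  });
  upload.on("httpUploadProgress", (p) => console.log(p.loaded, "/", p.total));
  await upload.done();
}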
In order to bypass EC2, you can use a pre-authenticated POST request to upload your content directly from the browser to the S3 bucket.
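A server-side sketch of that idea, assuming the AWS SDK for JavaScript v3 and its presigned-POST helper; the bucket, key, and size limit are placeholders:

import { S3Client } from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// Returns { url, fields }; the browser sends a multipart/form-data POST to `url`
// with `fields` plus the file itself, without ever holding AWS credentials.
export async function createBrowserUploadForm(key: string) {
  return createPresignedPost(s3, {
    Bucket: "my-video-bucket", // placeholder bucket
    Key: key,
    Conditions: [["content-length-range", 0, 500 * 1024 * 1024]], // cap uploads at 500 MB
    Expires: 900, // URL valid for 15 minutes
  });
}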
I have an application which is a static website builder. Users can create their websites and publish them to their custom domains. I am using Amazon S3 to host these sites and an nginx proxy server to route the requests to the S3 bucket hosting the sites.
I am facing a load time issue. Since S3 is not associated with any specific region and the content is entirely HTML, there shouldn't ideally be any delay. I have a few CSS and JS files which are not too heavy.
What optimization techniques could give better performance? E.g. will setting headers or leveraging caching help? I have added an image of a Pingdom analysis for reference.
Also, I cannot use CloudFront because when a user updates an image, the edge locations take a few minutes before the new image is reflected. It is not an instant update, which makes it unsuitable for me. Any suggestions on improving this?
S3 HTTPS access from a different region is extremely slow, especially the TLS handshake. To solve the problem we built an Nginx S3 proxy, which can be found on the web. S3 is best as an origin source, but not as a transport endpoint.
By the way, try to avoid addressing your bucket as a subdomain; specify the regional(!) S3 endpoint with the long, path-style URL instead, and never use https://s3.amazonaws.com.
A good example that reduces the number of DNS lookups is the following:
https://s3-eu-west-1.amazonaws.com/folder/file.jpg
Your S3 buckets are associated with a specific region that you can choose when you create them. They are not geographically distributed. Please see AWS doc about S3 regions: https://aws.amazon.com/s3/faqs/
As we can see in your screenshot, it looks like your bucket is located in Singapore (ap-southeast-1).
Are your clients located in Asia? If they are not, you should try to create buckets nearer, in order to reduce data access latency.
About CloudFront, it should be possible to use it if you invalidate your objects, or just use new filenames for each modification, as tedder42 suggested.
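If you do go with CloudFront, invalidating an updated object is a single API call; a sketch assuming the AWS SDK for JavaScript v3, with a placeholder distribution ID and function name:

import { CloudFrontClient, CreateInvalidationCommand } from "@aws-sdk/client-cloudfront";

const cloudfront = new CloudFrontClient({ region: "us-east-1" });

// Evict a path (e.g. "/media/banner.jpg") from every edge location.
export async function invalidateImage(path: string): Promise<void> {
  await cloudfront.send(
    new CreateInvalidationCommand({
      DistributionId: "E1234567890ABC", // placeholder distribution ID
      InvalidationBatch: {
        CallerReference: `invalidate-${Date.now()}`, // must be unique per request
        Paths: { Quantity: 1, Items: [path] },
      },
    })
  );
}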
I have a feature on my website where users can upload images. Users can see their own images but not others'. The images are stored on Amazon S3 but uploaded and viewed through my website, which is on ordinary web hosting and not on S3.
I have tried to show the pictures on my website using my private key while the pictures are private on Amazon, but failed.
I found this post: http://blog.learningtree.com/configuring-amazon-s3-to-serve-images which describes how to make the images/files more private even if they are set to public on S3. The site suggests stopping search engines with a robots.txt file and only serving images to people who come from my domain, to stop hot-linking.
Do you think this is enough if I make them public on S3 or should I think about something else?
You can also configure the images on S3 to be private, and then generate pre-signed URLs in your app. That way, you can include an expiry time within the link.
From Authenticating REST Requests in the S3 docs:
For example, if you want to enable a user to download your private data directly from S3, you can insert a pre-signed URL into a web page before giving it to your user.
People can then only use the generated URL for a certain time. If they come through your app, it will always generate a link for some time in the future (say, 15 minutes as an example). If people pass around the links to these images, these links auto-expire.
Most S3 SDKs have higher-level methods to pre-sign those URLs.
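For example, with the AWS SDK for JavaScript v3 this is one call; the bucket name and function name below are placeholders:

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

const s3 = new S3Client({ region: "us-east-1" }); // placeholder region

// Return a GET URL for a private image that stops working after 15 minutes.
export async function signedImageUrl(key: string): Promise<string> {
  const command = new GetObjectCommand({ Bucket: "my-private-images", Key: key }); // placeholder bucket
  return getSignedUrl(s3, command, { expiresIn: 15 * 60 });
}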
Relevant: How secure are presigned URLs in AWS S3? here on SO.