I have a cross-origin resource sharing (CORS) problem on my web application. I'm trying to create a virtual-reality 360-degree video website, but on Safari/iPhone it fails because of CORS. For some reason, Safari/iOS doesn't support CORS on the WebGL that runs the VR.
If my files were all on my EC2 instance it would work fine, as they would come from the same origin. However, because my web app files are on EC2 and my assets are on S3, it's causing an issue.
To get around this, I have been told I can use Amazon's CloudFront to serve files from both my EC2 instance and my S3 bucket while making them look to the browser like they come from the same origin. This would bypass the CORS error I'm getting and the site would run normally.
However, I cannot work out how to do this. Could someone please explain how I would set this up in CloudFront?
Thanks
To achieve this, you will need to set up an Amazon CloudFront distribution with multiple origins. This developer guide walks through it:
http://www.bucketexplorer.com/documentation/amazon-s3--how-to-create-distributions-post-distribution-with-multiple-origin-servers.html
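In case that link goes stale, here's a rough sketch of the same setup with the AWS CLI. All names (the bucket, the EC2 hostname, the `/assets/*` path pattern) are placeholders, and it assumes you have AWS credentials configured:

```shell
# Sketch: one CloudFront distribution with two origins -- an S3 bucket for
# assets and an EC2 web server for the app -- so the browser sees one domain.
cat > dist-config.json <<'EOF'
{
  "CallerReference": "multi-origin-example",
  "Comment": "App from EC2, assets from S3, one domain",
  "Enabled": true,
  "Origins": {
    "Quantity": 2,
    "Items": [
      {
        "Id": "s3-assets",
        "DomainName": "my-assets-bucket.s3.amazonaws.com",
        "S3OriginConfig": { "OriginAccessIdentity": "" }
      },
      {
        "Id": "ec2-app",
        "DomainName": "ec2-host.example.com",
        "CustomOriginConfig": {
          "HTTPPort": 80,
          "HTTPSPort": 443,
          "OriginProtocolPolicy": "http-only"
        }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "ec2-app",
    "ViewerProtocolPolicy": "allow-all",
    "MinTTL": 0,
    "ForwardedValues": { "QueryString": false, "Cookies": { "Forward": "none" } },
    "TrustedSigners": { "Enabled": false, "Quantity": 0 }
  },
  "CacheBehaviors": {
    "Quantity": 1,
    "Items": [
      {
        "PathPattern": "/assets/*",
        "TargetOriginId": "s3-assets",
        "ViewerProtocolPolicy": "allow-all",
        "MinTTL": 0,
        "ForwardedValues": { "QueryString": false, "Cookies": { "Forward": "none" } },
        "TrustedSigners": { "Enabled": false, "Quantity": 0 }
      }
    ]
  }
}
EOF
# Create the distribution; requests to /assets/* go to S3, everything else to EC2.
aws cloudfront create-distribution --distribution-config file://dist-config.json
```

Because both origins sit behind the one `*.cloudfront.net` domain, requests for the videos and for the app pages share an origin, which is exactly what sidesteps the CORS error.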
Related
I use an Amazon S3 bucket and Cloudflare to serve some glb and js files for three.js. I was wondering how I can strictly whitelist only the domains that I want to have access to those files. I have been receiving emails from Amazon about suspicious activity on my storage, and I have indeed seen some requests from unknown referrers.
Example of usage: "mywebsite.com" has a three.js script that loads a 3D model from "somethingsomething.cloudfront.net/models/model.glb". I want only "mywebsite.com" to be able to request that file.
I have been googling for hours but the terminology is killing me. I would appreciate the slightest nudge in the right direction.
Screenshot of referrers on cloudflare
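For what it's worth, the usual nudge here is an S3 bucket policy with an `aws:Referer` condition. A hedged sketch follows (bucket name and domains are placeholders). Two caveats: Referer headers can be spoofed, so this is a deterrent rather than real authentication, and if a CDN sits in front of the bucket, it must forward the Referer header for S3 to see it:

```shell
# Sketch: only allow s3:GetObject when the Referer matches mywebsite.com.
cat > referer-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowGetFromMySite",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::my-models-bucket/*",
      "Condition": {
        "StringLike": {
          "aws:Referer": [
            "https://mywebsite.com/*",
            "https://www.mywebsite.com/*"
          ]
        }
      }
    }
  ]
}
EOF
# Attach the policy to the bucket (requires configured AWS credentials).
aws s3api put-bucket-policy --bucket my-models-bucket --policy file://referer-policy.json
```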
A client of mine has his website domain and hosting with DreamHost. We'd like to use Amazon CloudFront as a CDN, but we don't want to use S3 – we'd like to keep the site files where they are on DreamHost's servers.
I'm pretty sure this is possible, since CloudFront does allow custom origins, and I signed up for CloudFront, but I am unsure how to fill out the form (what to put for the origin name, etc.) even after reading the pop-up help. We are on the bellfountain server of DreamHost.
What I've Tried
I did see the "create amazon cloudfront distribution not using amazon S3 bucket" question, and that is basically what I am after, but it wasn't specific enough for my needs.
I have also tried posting on the CloudFront forum, but that was less than helpful (no one responded after almost a month).
I've scoured Amazon's documentation (which is very thorough, I'll admit), but the most detailed information is for users of S3, and the stuff about using a custom domain again wasn't specific enough for me to figure it out. We do not have a paid support plan.
I tried chatting with DreamHost support, but they didn't even know what Amazon CloudFront was, and couldn't help me fill in the CloudFront information form. I looked around DreamHost's settings, etc. for things with similar names as what was being requested on the CloudFront form, but couldn't find anything.
Pretty much, if you just put in http://www.yourdomain.com, CloudFront figures out the rest, and you can customize from there if you need or want to. Just that one entry, followed by creating the distribution, will set up a CloudFront endpoint that serves the files from your external web server. Make sure you include the 'http://' in front of the URL so it can figure out the rest.
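The command-line equivalent, if you prefer it to the console form, is roughly this (the domain is a placeholder for the DreamHost-hosted site, and it assumes configured AWS credentials):

```shell
# Sketch: create a CloudFront distribution with an external web server
# as a custom origin, using the CLI's shorthand flags.
aws cloudfront create-distribution \
  --origin-domain-name www.yourdomain.com \
  --default-root-object index.html
```

The response includes the assigned `*.cloudfront.net` domain name, which is what you'd point visitors (or a CNAME) at.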
I created a bucket and configured static website hosting.
I want to use SSL so instead of using
http://my-bucket.s3-website.us-east-2.amazonaws.com/
I have to use
https://s3.us-east-2.amazonaws.com/my-bucket/
the problem with this is that the static website hosting endpoint is still http://my-bucket.s3-website.us-east-2.amazonaws.com/
I created a redirection rule on it (basically, if the requested file returns 404 then I call an API), but it's not working because (I assume) the endpoint is the wrong one: when I try to access a file that doesn't exist, instead of getting the redirection configured in the static website hosting, I get Access Denied. How do I deal with this?
Notes: I tried to use s3-website.us-east-2.amazonaws.com/my-bucket/file.jpg but I get redirected to an Amazon page.
You can do this by serving your content through CloudFront and then configuring your CloudFront distribution to use HTTPS.
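A sketch of that with the AWS CLI, reusing the website endpoint from the question. The key detail is that an S3 *website* endpoint only speaks HTTP, so it has to be added as a custom origin with `http-only` to the origin; using the website endpoint rather than the REST endpoint is also what keeps your redirection rules working:

```shell
# Sketch: put CloudFront (with HTTPS for viewers) in front of an
# S3 static-website endpoint. Assumes configured AWS credentials.
cat > s3-site-config.json <<'EOF'
{
  "CallerReference": "s3-website-https-example",
  "Comment": "HTTPS in front of an S3 website endpoint",
  "Enabled": true,
  "Origins": {
    "Quantity": 1,
    "Items": [
      {
        "Id": "s3-website",
        "DomainName": "my-bucket.s3-website.us-east-2.amazonaws.com",
        "CustomOriginConfig": {
          "HTTPPort": 80,
          "HTTPSPort": 443,
          "OriginProtocolPolicy": "http-only"
        }
      }
    ]
  },
  "DefaultCacheBehavior": {
    "TargetOriginId": "s3-website",
    "ViewerProtocolPolicy": "redirect-to-https",
    "MinTTL": 0,
    "ForwardedValues": { "QueryString": false, "Cookies": { "Forward": "none" } },
    "TrustedSigners": { "Enabled": false, "Quantity": 0 }
  }
}
EOF
aws cloudfront create-distribution --distribution-config file://s3-site-config.json
```

Viewers then hit `https://<id>.cloudfront.net/...`, CloudFront fetches over HTTP from the website endpoint, and 404s still trigger the bucket's redirection rules.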
I spent two days getting SSL working for a static website on AWS with a custom domain. Having Googled a lot and stopped by this posting, I finally found this excellent and concise tutorial, Example Walkthroughs - Hosting Websites on Amazon S3, at https://docs.aws.amazon.com/AmazonS3/latest/dev/hosting-websites-on-s3-examples.html. While it seems obvious now, the thing that got SSL working for me was the final step, "Update the Record Sets for Your Domain and Subdomain". The guide is very much to the point, well written, and easy to follow, so I thought this would help others.
Instead of using CloudFront (or other Amazon services besides S3), you can use this tool: https://github.com/igorkasyanchuk/amazon_static_site, which lets you publish a site and serve it through Cloudflare. You get HTTPS too.
To simplify life, you can use the generator and then just edit the config and deploy the files to S3/Cloudflare.
Is it possible to use EBS like S3? By that I mean, can you allow users to download files from a link like you can with S3?
The reason is that my videos NEED to be on the same domain/server to work correctly. I am creating a virtual-reality video website; however, iOS does not support cross-origin resource sharing through WebGL (which is used to create the VR).
Because of this, serving the files from my S3 bucket will not work, as they will be classed as cross-origin. Looking into EBS briefly, it seems it attaches to your instance as local storage, which would get past the cross-origin problem I am facing.
Would it simply be like a folder on my web server that could be reached at 'www.domain.com/ebs-file-system/videos/video.mp4'?
Thanks in advance for any comments.
Amazon S3 CORS
You can configure your Amazon S3 bucket to support Cross-Origin Resource Sharing (CORS):
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support in Amazon S3, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
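A minimal sketch of applying such a configuration with the AWS CLI (the bucket name and allowed origin are placeholders, and it assumes configured credentials):

```shell
# Sketch: allow GET/HEAD requests to the bucket from one specific site.
cat > cors.json <<'EOF'
{
  "CORSRules": [
    {
      "AllowedOrigins": ["https://www.example.com"],
      "AllowedMethods": ["GET", "HEAD"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3000
    }
  ]
}
EOF
aws s3api put-bucket-cors --bucket my-bucket --cors-configuration file://cors.json
```

Note that for WebGL textures the client must also request the asset with `crossorigin="anonymous"` (or the equivalent in your loader) so the browser sends a CORS request at all.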
CloudFront Behaviours
Another option is to use Amazon CloudFront, which can present multiple systems as a single URL. For example, example.com/video could point to an S3 bucket, while example.com/stream could point to a web server. This should circumvent CORS problems.
See:
Format of URLs for CloudFront Objects
Values that You Specify When You Create or Update a Web Distribution
Worst Case
Worst case, you could serve everything via your EC2 instance. You could copy your S3 content to the instance (e.g. using the AWS Command-Line Interface (CLI) aws s3 sync command) and serve it to your users. However, this negates the benefits that Amazon S3 provides.
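That copy step might look like this (bucket name and web root are placeholders):

```shell
# Sketch: mirror the bucket's content into the instance's web root so the
# web server serves it from the same origin as the app. Only changed files
# are transferred on subsequent runs.
aws s3 sync s3://my-assets-bucket /var/www/html/assets
```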
I have an application which is a static website builder. Users can create their websites and publish them to their custom domains. I am using Amazon S3 to host these sites and an nginx proxy server to route requests to the S3 bucket hosting them.
I am facing a load-time issue. As S3 is not (I thought) associated with any specific region and the content is entirely HTML, there shouldn't ideally be any delay. I have a few CSS and JS files which are not too heavy.
What optimization techniques could improve performance? E.g. will setting headers or leveraging caching help? I have added an image of a Pingdom analysis for reference.
Also, I cannot use CloudFront, because when the user updates an image, the edge locations take a few minutes before the new image is reflected. It is not an instant update, which rules it out for me. Any suggestions on improving this?
S3 HTTPS access from a different region is extremely slow, especially the TLS handshake. To solve this problem we built an nginx S3 proxy, which can be found on the web. S3 is best as an origin source, but not as a transport endpoint.
By the way, try to avoid addressing your bucket as a subdomain; specify the S3 regional(!) endpoint instead, using the long version of the endpoint URL. Never use https://s3.amazonaws.com.
A good example that reduces the number of DNS calls is the following:
https://s3-eu-west-1.amazonaws.com/folder/file.jpg
Your S3 buckets are associated with a specific region that you can choose when you create them. They are not geographically distributed. Please see AWS doc about S3 regions: https://aws.amazon.com/s3/faqs/
As we can see in your screenshot, it looks like your bucket is located in Singapore (ap-southeast-1).
Are your clients located in Asia? If they are not, you should try to create buckets nearer, in order to reduce data access latency.
About CloudFront, it should be possible to use it if you invalidate your objects, or just use new filenames for each modification, as tedder42 suggested.
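An invalidation can be issued like this (the distribution ID and path are placeholders, and it assumes configured AWS credentials):

```shell
# Sketch: force edge caches to drop an updated image so the next request
# fetches the fresh copy from the origin.
aws cloudfront create-invalidation \
  --distribution-id E1234EXAMPLE \
  --paths "/images/photo.jpg"
```

Versioned filenames (photo-v2.jpg) avoid both the invalidation cost and the propagation delay, which is why they are usually preferred.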