Using Amazon EBS like S3

Is it possible to use EBS like S3? By that I mean can you allow users to download files from a link like you can in S3?
The reason is that my videos NEED to be on the same domain/server to work correctly. I am creating a virtual reality video website; however, iOS does not support cross-origin resource sharing through WebGL (which is used to create the VR).
Because of this, my S3 bucket file system will not work, as it will be classed as cross-origin. Looking into EBS briefly, it seems that it attaches to your instances as local storage, which would get past the cross-origin problem I am facing.
Would it simply be like a folder on my web server that could be reached at 'www.domain.com/ebs-file-system/videos/video.mp4'?
Thanks in advance for any comments.

Amazon S3 CORS
You can configure your Amazon S3 bucket to support Cross-Origin Resource Sharing (CORS):
Cross-origin resource sharing (CORS) defines a way for client web applications that are loaded in one domain to interact with resources in a different domain. With CORS support in Amazon S3, you can build rich client-side web applications with Amazon S3 and selectively allow cross-origin access to your Amazon S3 resources.
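For example, here is a minimal sketch of applying such a CORS configuration with Python's boto3 (the bucket name and allowed origin below are placeholders):

    import boto3

    s3 = boto3.client('s3')

    # Allow the website's domain (placeholder) to GET objects from the bucket
    cors_configuration = {
        'CORSRules': [{
            'AllowedOrigins': ['https://www.example.com'],
            'AllowedMethods': ['GET', 'HEAD'],
            'AllowedHeaders': ['*'],
            'MaxAgeSeconds': 3000
        }]
    }
    s3.put_bucket_cors(Bucket='my-video-bucket',  # placeholder bucket name
                       CORSConfiguration=cors_configuration)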
CloudFront Behaviours
Another option is to use Amazon CloudFront, which can present multiple systems as a single URL. For example, example.com/video could point to an S3 bucket, while example.com/stream could point to a web server. This should circumvent CORS problems.
See:
Format of URLs for CloudFront Objects
Values that You Specify When You Create or Update a Web Distribution
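As a rough illustration of the idea (this is only a fragment, not a complete CloudFront DistributionConfig, and the origin IDs are placeholders), the path-based routing above corresponds to cache behaviours like these:

    # Illustrative fragment: two cache behaviours that send /video/* to an
    # S3 origin and /stream/* to a web-server origin (IDs are placeholders)
    cache_behaviours = {
        'Quantity': 2,
        'Items': [
            {'PathPattern': 'video/*',
             'TargetOriginId': 's3-video-origin',
             'ViewerProtocolPolicy': 'redirect-to-https'},
            {'PathPattern': 'stream/*',
             'TargetOriginId': 'web-server-origin',
             'ViewerProtocolPolicy': 'redirect-to-https'},
        ]
    }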
Worst Case
Worst case, you could serve everything via your EC2 instance. You could copy your S3 content to the instance (eg using the AWS Command-Line Interface (CLI) aws s3 sync command) and serve it to your users. However, this negates all the benefits that Amazon S3 provides.
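For example (the bucket name and web-root path are placeholders):

    aws s3 sync s3://my-video-bucket/videos /var/www/html/videos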

Related

Protect webfonts via Amazon CloudFront from download

Is it possible to protect data embedded on my website via Amazon CloudFront from hotlinking or other downloads? I am mainly interested in protecting webfonts from being downloaded.
Amazon CloudFront is connected to an S3 bucket
An S3 bucket policy controls the allowed domains for files served via CloudFront
Do you think that could work?
Since you have CloudFront set up and connected to your S3 bucket, you can use CloudFront signed URLs to prevent downloads by the general public.
You can put your fonts in a folder called fonts, for example, and set up a separate behaviour in CloudFront for any path that contains /fonts/; there you can activate Restrict Viewer Access.
In your website, you will need some way to generate the signed URL only when your web page is loaded, and you can set a short expiry time for that URL.
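A minimal sketch of generating such a signed URL with Python's botocore (the key-pair ID, private-key file, and distribution domain are all placeholders):

    from datetime import datetime, timedelta

    import rsa
    from botocore.signers import CloudFrontSigner

    def rsa_signer(message):
        # Sign with the private key of a CloudFront trusted key pair (placeholder file)
        with open('cloudfront_private_key.pem', 'rb') as key_file:
            private_key = rsa.PrivateKey.load_pkcs1(key_file.read())
        return rsa.sign(message, private_key, 'SHA-1')

    signer = CloudFrontSigner('KXXXXXXXXXXXXX', rsa_signer)  # placeholder key-pair ID

    # URL for the /fonts/ behaviour described above, valid for 5 minutes
    signed_url = signer.generate_presigned_url(
        'https://dxxxxxxxxxxxxx.cloudfront.net/fonts/myfont.woff2',
        date_less_than=datetime.utcnow() + timedelta(minutes=5))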

Hiding web content in S3

This is more of a theoretical question for AWS S3 website hosting.
Say I have a website hosted in S3. Obviously I want the content to be public, but I don't want people to be able to download the backend scripts, images, and CSS simply by changing the URL. I want to hide those folders, but if I deny GetObject access in the bucket policy for those folders, the application "breaks" because it can't reach them.
How can I secure my content so that my backend is protected while it sits in an S3 bucket?
You need to access the website via CloudFront with restricted access, using what is known as an Origin Access Identity (OAI). This allows only the CloudFront distribution to access the S3 bucket.
More details can be found in the AWS docs: https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html#private-content-creating-oai
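As a rough sketch, the bucket policy then grants s3:GetObject to the OAI alone, so objects are reachable only through the distribution (the OAI ID and bucket name are placeholders):

    import json

    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "AllowCloudFrontOAIReadOnly",
            "Effect": "Allow",
            "Principal": {
                # Placeholder OAI ID
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXXXXXXXXXXXXX"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::example-bucket/*"
        }]
    }

    boto3.client('s3').put_bucket_policy(Bucket='example-bucket',
                                         Policy=json.dumps(policy))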

Low upload speed to ec2 instance running on another region

I have a few EC2 instances (t2.micro) behind a load balancer in the us-east-1 region (N. Virginia), and my users access the application from South America. This is my current setup mainly because costs are about 50% of what I would pay for the same services here in Brazil.
My uploads all go to S3 buckets, also in the us-east-1 region.
When a user requests a file from my app, I check for permission because the buckets are not public (which is why I need all data to go through EC2 instances), and I stream the file from S3 to the user. The download speeds for users are fine and usually reach the maximum the user's connection can handle, since I have Transfer Acceleration enabled for my buckets.
My issue is uploading files through the EC2 instances. The upload speeds suffer a lot and, in this case, having transfer acceleration enabled on S3 does not help in any way. It feels like I'm being throttled by AWS, because the maximum speed is capped around 1Mb/s.
I could maybe transfer files directly from the user to S3, then update my databases, but that would introduce a few issues to my main workflow.
So, I have two questions:
1) Is it normal for upload speeds to EC2 instances to suffer like that?
2) What options do I have, other than moving all services to South America, closer to my users?
Thanks in advance!
There is no need to 'stream' data from Amazon S3 via an Amazon EC2 instance. Nor is there any need to 'upload' via Amazon EC2.
Instead, you should be using Pre-signed URLs. These are URLs that grant time-limited access to upload to, or download from, Amazon S3.
The way it works is:
Your application verifies whether the user is permitted to upload/download a file
The application then generates a Pre-signed URL with an expiry time (eg 5 minutes)
The application supplies the URL to the client (eg a mobile app) or includes it in an HTML page (as a link for downloads or as a form for uploads)
The user then uploads/downloads the file directly to Amazon S3
The result is a highly scalable system because your EC2 system does not need to be involved in the actual data transfer.
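A minimal sketch with boto3 (bucket and key names are placeholders):

    import boto3

    s3 = boto3.client('s3')

    # Pre-signed download URL, valid for 5 minutes
    download_url = s3.generate_presigned_url(
        'get_object',
        Params={'Bucket': 'my-bucket', 'Key': 'files/report.pdf'},
        ExpiresIn=300)

    # Pre-signed upload URL, valid for 5 minutes
    upload_url = s3.generate_presigned_url(
        'put_object',
        Params={'Bucket': 'my-bucket', 'Key': 'uploads/new-file.bin'},
        ExpiresIn=300)

The client then performs a plain HTTP GET or PUT against these URLs, so the data travels directly between the user and S3.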
See:
Share an Object with Others - Amazon Simple Storage Service
Uploading Objects Using Pre-Signed URLs - Amazon Simple Storage Service

Websites hosted on Amazon S3 loading very slowly

I have an application which is a static website builder. Users can create their websites and publish them to their custom domains. I am using Amazon S3 to host these sites, with an nginx proxy server to route requests to the S3 bucket hosting each site.
I am facing a load-time issue. As S3 specifically is not associated with any region and the content is entirely HTML, there shouldn't ideally be any delay. I have a few CSS and JS files which are not too heavy.
What optimization techniques could improve performance? eg: will setting headers or leveraging caching help? I have added an image of the Pingdom analysis for reference.
Also, I cannot use CloudFront, because when a user updates an image the edge locations take a few minutes before the new image is reflected. It is not an instant update, which restricts its use for me. Any suggestions on improving this?
S3 HTTPS access from a different region is extremely slow, especially the TLS handshake. To solve the problem we built an Nginx S3 proxy, which can be found on the web. S3 is best as an origin source, but not as a transport endpoint.
By the way, avoid addressing your bucket as a subdomain; specify the long form of the regional(!) S3 endpoint URL instead, and never use https://s3.amazonaws.com
A good example that reduces the number of DNS calls is the following:
https://s3-eu-west-1.amazonaws.com/folder/file.jpg
Your S3 buckets are associated with a specific region that you can choose when you create them. They are not geographically distributed. Please see AWS doc about S3 regions: https://aws.amazon.com/s3/faqs/
As we can see in your screenshot, it looks like your bucket is located in Singapore (ap-southeast-1).
Are your clients located in Asia? If they are not, you should try to create buckets nearer, in order to reduce data access latency.
About CloudFront, it should be possible to use it if you invalidate your objects, or just use new filenames for each modification, as tedder42 suggested.
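A minimal sketch of invalidating an updated object with boto3 (the distribution ID and object path are placeholders):

    import time

    import boto3

    cloudfront = boto3.client('cloudfront')
    cloudfront.create_invalidation(
        DistributionId='EXXXXXXXXXXXXX',  # placeholder distribution ID
        InvalidationBatch={
            'Paths': {'Quantity': 1, 'Items': ['/images/updated-image.png']},
            'CallerReference': str(time.time())  # must be unique per request
        })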

How to access resources directly in S3 from Amazon beanstalk application

I have a Java application deployed to Elastic Beanstalk (Tomcat), and the purpose of the application is to serve resources from S3 in zipped bundles. For instance, I have 30 audio files that I zip up and return in the response.
I've used the getObject request from the AWS SDK; however, it's super slow, and I assume it's requesting each object over the network. Is it possible to access the S3 resources directly? The bucket with my resources is located next to the Beanstalk bucket.
Transfer from S3 to EC2 is fast if they are in the same region.
If you still want faster (and reliable) delivery of files, consider keeping the files pre-zipped on S3 and serving them from S3 directly rather than going through your server. You can use the signed URL scheme here, so that the bucket need not be public.
The next level of speed-up is keeping S3 behind CloudFront as an origin server. There the files are cached in locations near your users. See: Serving Private Content through CloudFront