I have a website which I want to host on AWS. The website stores its data in a backend RDBMS/MongoDB and uses PHP/JavaScript/Python, etc.
My website will receive data from users, and I will use that data for analysis. I am open to doing any installation required.
Which best fits my requirement: AWS S3 or AWS EC2?
S3 is just file storage; you can't run dynamic applications (PHP/Python/etc.) or databases on it. You would typically run your application on EC2, run your database on either EC2 or RDS, and store your application's static files on S3.
Related
I am a total beginner and just completed the first version of my web application.
I am using Docker, Nginx, Angular & Django. Note that the backend works on static files and uses a simple database for User Registration.
I want to deploy it to a free cloud solution. I heard that I can use AWS Elastic Beanstalk, but I found both the configuration and the pricing policy a bit complicated.
Question
Can anybody guide me through what to consider, or even better, which selections I have to make, in order to host my web app on AWS free of charge?
PS: I don't know if I have to mention this, but in case the web app attracts a satisfying number of users, I would adjust the implementation so that users can upload and use my services on their own data (and not on the CSV files). In that case, I might have to use other AWS services or migrate to another cloud solution. Just to say that both options are welcome!
You can easily host an Angular app on AWS within the free tier (1 year) limits. I have hosted a handful of Angular apps so far using AWS S3 + AWS CloudFront.
AWS S3 is used to host your static files. You first run ng build --prod, where the Angular compiler generates a /dist folder (in your project directory) containing all the static files (i.e. JS, images, fonts, etc.) required to run your Angular app. Then you upload all those static files to an AWS S3 bucket.
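That upload step can be sketched in Python. This is only a sketch: the bucket name and dist path are assumptions, and the actual upload call (boto3 or the aws CLI) is shown in comments. The point it illustrates is that each object should be stored with the right Content-Type, since S3 serves an object back with whatever Content-Type it was uploaded under:

```python
import mimetypes
from pathlib import Path

def s3_upload_plan(dist_dir):
    """Walk an Angular dist/ folder and pair each file's S3 key with the
    Content-Type it should be uploaded under."""
    plan = []
    for path in sorted(Path(dist_dir).rglob("*")):
        if path.is_file():
            key = path.relative_to(dist_dir).as_posix()
            ctype = mimetypes.guess_type(path.name)[0] or "application/octet-stream"
            plan.append((key, ctype))
    return plan

# The actual upload would then be (requires boto3 and AWS credentials):
#   s3 = boto3.client("s3")
#   for key, ctype in s3_upload_plan("dist/my-app"):
#       s3.upload_file(f"dist/my-app/{key}", "my-bucket", key,
#                      ExtraArgs={"ContentType": ctype})
```

Alternatively, aws s3 sync dist/my-app s3://my-bucket does the equivalent Content-Type guessing for you.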
AWS CloudFront is a cloud caching service (a CDN). As the word "cache" suggests, it caches your static files. By setting up a CloudFront distribution in front of your S3 bucket, you can stay within the monthly 20,000 GET request limit of the free tier, because users' HTTP requests will be served from the CloudFront cache instead of directly from your S3 bucket. The CloudFront free tier gives you 2 million HTTP(S) requests per month.
The good thing about hosting on AWS S3 instead of an EC2 instance (note that Elastic Beanstalk also creates EC2 instances) is that you can have multiple S3 buckets and CloudFront distributions in one free tier account. As long as you stay within the limits (S3: 2,000 PUT requests and 20,000 GET requests; CloudFront: 2 million HTTPS requests), you can end up hosting several apps with one AWS free tier account. With EC2, by contrast, you are practically limited to a single instance, because 31 days x 24 hours = 744 hours, which is only 6 hours shy of the 750-hour monthly limit, unless you schedule your EC2 instances to turn on and off.
There are plenty of guides demonstrating how to do this, here are some of them:
Deploy an Angular app with S3 and CloudFront
Use S3 and CloudFront to host Static Single Page Apps (SPAs) with HTTPs and www-redirects
Assuming you know how to deploy to a Linux machine, Elastic Beanstalk is not useful for your use case (it's for automated provisioning and scaling). I would do the following:
AWS S3 to deploy your Angular app using static website hosting (one possible tutorial: https://medium.com/@peatiscoding/here-is-how-easy-it-is-to-deploy-an-angular-spa-single-page-app-as-a-static-website-using-s3-and-6aa446db38ef)
Amazon ECS (the EC2 Container Service) to deploy your containers.
OR
A plain Amazon EC2 instance (a Linux server) with the entire environment set up with Docker.
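For the Docker route, a compose file along these lines would tie the pieces together. This is a sketch under assumptions: the service names, paths, and the nginx proxy config file are illustrative, not taken from the question:

```yaml
# Hypothetical layout: nginx serves the built Angular bundle and
# proxies /api requests to the Django container.
services:
  web:
    image: nginx:alpine
    ports: ["80:80"]
    volumes:
      - ./dist:/usr/share/nginx/html:ro          # built Angular files
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on: [api]
  api:
    build: ./backend     # a Dockerfile running Django (e.g. via gunicorn)
    expose: ["8000"]
```

Once this runs locally, the same docker compose up works unchanged on an EC2 instance with Docker installed.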
So I am thinking of migrating my website to Amazon S3, since it's super cheap and fast. However, I use PHP and AJAX to submit my contact forms. Would it be possible to host my site on AWS S3 and then send all HTTP POSTs to an EC2 instance?
Yes, this is very well possible. However, if you're running an EC2 instance anyway and your traffic is not enormous, you might as well serve your static files from the EC2 instance.
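One wrinkle worth knowing: because the page would be served from an S3 (or CloudFront) origin while the form posts to EC2, the browser treats the POST as cross-origin, so the EC2 endpoint must answer with CORS headers. Here is a minimal stdlib-Python sketch of such an endpoint; your real handler would be PHP, and all names here are illustrative:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ORIGIN = "*"  # in production, restrict this to your S3/CloudFront site URL

class ContactFormHandler(BaseHTTPRequestHandler):
    def _cors(self):
        self.send_header("Access-Control-Allow-Origin", ALLOWED_ORIGIN)
        self.send_header("Access-Control-Allow-Headers", "Content-Type")

    def do_OPTIONS(self):  # the browser's CORS preflight request
        self.send_response(204)
        self._cors()
        self.end_headers()

    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)            # the submitted form data
        payload = json.dumps({"received": len(body)}).encode()
        self.send_response(200)
        self._cors()
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo quiet
        pass

# To run it:  HTTPServer(("", 8080), ContactFormHandler).serve_forever()
```

The same Access-Control-Allow-Origin header is what a PHP script would need to emit before the S3-hosted page's AJAX call can read its response.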
It is not possible to host a PHP site on AWS S3; only static content such as images, CSS, or JS can be put there.
For dynamic content you have to use an EC2 instance.
https://forums.aws.amazon.com/message.jspa?messageID=453142
Correct Usage of Amazon Web Services S3 for Server-Side Scripting
I have a Java application deployed on Elastic Beanstalk (Tomcat), and its purpose is to serve resources from S3 in zipped bundles. For instance, I have 30 audio files that I zip up and return in the response.
I've used the getObject request from the AWS SDK, but it's super slow; I assume it's requesting each object over the network. Is it possible to access the S3 resources directly? The bucket with my resources is located next to the Beanstalk bucket.
Transfer from S3 to EC2 is fast if they are in the same region.
If you still want faster (and more reliable) delivery of files, consider keeping the files pre-zipped on S3 and serving them from S3 directly rather than going through your server. You can use the signed-URL scheme here, so that the bucket need not be public.
The next level of speed-up is to keep S3 behind CloudFront as an origin server; the files are then cached at edge locations near your users. See Serving Private Content through CloudFront.
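To illustrate what a signed URL is: it is just the object URL plus an expiry time and an HMAC signature computed with your secret key, so the bucket can stay private while the link works for anyone who has it. The sketch below uses the legacy SigV2 query-string form purely because it is short; real code should call the SDK instead (e.g. generatePresignedUrl in the AWS SDK for Java, or generate_presigned_url in boto3), which produces the current SigV4 form:

```python
import base64, hashlib, hmac, time
from urllib.parse import quote

def sign_s3_url(bucket, key, access_key, secret_key, expires_in=300):
    """Build a time-limited GET URL for a private S3 object (legacy SigV2,
    for illustration only)."""
    expires = int(time.time()) + expires_in
    # SigV2 string-to-sign for a plain GET: method, blank MD5/type, expiry, resource
    string_to_sign = f"GET\n\n\n{expires}\n/{bucket}/{key}"
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(), hashlib.sha1).digest()
    signature = quote(base64.b64encode(digest).decode(), safe="")
    return (f"https://{bucket}.s3.amazonaws.com/{quote(key)}"
            f"?AWSAccessKeyId={access_key}&Expires={expires}&Signature={signature}")
```

Handing such a URL to the client lets it download the pre-zipped bundle straight from S3, bypassing the Tomcat server entirely.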
I am trying to explore AWS S3, and I found out that we can store data and get a URL for a file which can be used on a website. My intention, though, is to store files on S3 and have users of my website post and retrieve files to/from S3 without my intervention. I plan to run my server and JSP/Servlet pages on EC2, where Tomcat (and a MySQL server) will be running.
Is this possible, and if yes, how can I achieve it?
Thanks,
SD
Yes, it's possible. A full answer to this question is tantamount to a consulting gig, but some resources that should get you started:
The S3 API
Elastic Beanstalk for your webtier
Amazon RDS for MySQL
I would like to build a web service for an iPhone app. As for file uploads, I'm wondering what the standard procedure and the most cost-effective solution are. As far as I can see, there are two possibilities:
Client > S3: I upload a file from the iPhone to S3 directly (with the AWS SDK)
Client > EC2 > S3: I upload a file to my server (EC2 running Django) and then the server uploads the file to S3 (as detailed in this post)
I'm not planning on modifying the file in any way. I only need to tell the database to add an entry. So if I were to upload a file Client > S3, I'd need to connect to the server anyway in order to create the database entry.
It seems as if EC2 > S3 doesn't cost anything as long as the two are in the same region.
I'd be interested to hear what the advantages and disadvantages are before I start implementing file uploads.
I would definitely do it through S3, for scalability reasons. True, data transfer between S3 and EC2 is fast and cheap, but uploads are long-running, unlike normal web requests, so you may saturate the NIC on your EC2 instance.
Instead, return a GUID to the client, have the client upload to S3 with the key set to the GUID and the Content-Type set appropriately, and then call a web service/Ajax endpoint to create a DB record under the GUID key once the upload completes.
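The GUID flow above can be sketched like this (Python for brevity; the dict stands in for a real database, and the key prefix and function names are illustrative):

```python
import uuid

def new_upload_ticket(content_type):
    """Server side: issue the key the client will upload under."""
    guid = str(uuid.uuid4())
    return {
        "guid": guid,
        "s3_key": f"uploads/{guid}",    # object key in your bucket
        "content_type": content_type,   # the client must set this on its S3 PUT
    }

def complete_upload(db, ticket):
    """Server side, called by the client after its S3 PUT succeeds:
    record the object so the application knows it exists."""
    db[ticket["guid"]] = ticket["s3_key"]   # stand-in for a real DB insert
    return ticket["guid"]
```

The client's actual PUT to S3 happens between these two calls, directly from the device, so the upload bytes never pass through the EC2 instance.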