Deploying an SPA (Angular 2) on S3 vs EC2 - amazon-web-services

I have an Angular application with a Node.js backend (REST API). I am confused between S3 and EC2: which one is better, and what are the pros and cons of deploying to each, considering average load? Help will be highly appreciated.

I figured it out by myself.
S3 is used to store static assets like videos, photos, text files, and files of any other format. It is a highly scalable, reliable, fast, and inexpensive data storage infrastructure.
EC2 is like your own server, but it is in the cloud, so computing capacity can be increased or decreased instantly as the server's needs change.
So my confusion is cleared up as follows:
When we build an Angular 2 application, it generates .js files, which are called bundles in Angular 2 terms. These files can be hosted in an S3 bucket and accessed through CloudFront in front of it, which provides very fast, cached delivery, and the pricing model is pay per request.
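For illustration, a minimal deployment of those bundles might look like this with the AWS CLI (the bucket name, CloudFront distribution ID, and dist/ output folder are placeholders/assumptions):

```
# Build the production bundles (the Angular CLI writes them to dist/ by default)
ng build --prod

# Upload the bundles to the S3 bucket that CloudFront uses as its origin
aws s3 sync dist/ s3://my-angular-bucket --delete

# Invalidate the CloudFront cache so the new bundles are served right away
aws cloudfront create-invalidation --distribution-id E1234567890ABC --paths "/*"
```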
EC2, on the other hand, is like running your own server, and we have to configure the server ourselves, so it is not a good fit for the Angular application. It is a good fit for the Node application, as that one needs to do computation.

You can set up a standard Ubuntu server on EC2 with Nginx to serve your Angular frontend and proxy requests to your Node.js API.
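A rough sketch of that Nginx setup, assuming the Angular build is copied to /var/www/app and the Node.js API listens locally on port 3000 (both are assumptions, adjust to your layout):

```
# Install Nginx on Ubuntu
sudo apt-get update && sudo apt-get install -y nginx

# Site config: serve the Angular bundle, proxy /api to the Node.js backend
sudo tee /etc/nginx/sites-available/app >/dev/null <<'EOF'
server {
    listen 80;
    root /var/www/app;
    index index.html;

    # SPA fallback: unknown paths go to index.html so Angular routing works
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Proxy API calls to the Node.js REST API
    location /api/ {
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
    }
}
EOF

sudo ln -sf /etc/nginx/sites-available/app /etc/nginx/sites-enabled/app
sudo nginx -t && sudo systemctl reload nginx
```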
S3 is file storage, mainly for serving static content and media files (jpg, fonts, mp4, etc.).
Theoretically you can host everything on your EC2 instance, but with S3 it is easier to scale, back up, and migrate your static assets.
You can probably start with one simple EC2 instance running everything; once everything is working fine, you can try moving the static assets to S3.

Related

How to deploy nuxt frontend with express backend on AWS?

I have a Nuxt v2 SSR application as the frontend running on port 3000
I have an express API as the backend running on port 8000
I have a python script that loads data from external APIs and needs to run continuously
Currently all of them are separate projects with their own package.json and what not
How do I deploy this to AWS?
The only thing I have figured out so far is that I may have to deploy the express API as an Elastic Beanstalk application.
Should I have a separate docker-compose file for each, since they are currently separate projects, or should I merge them into one project with a single docker-compose file?
I saw similar questions asked about React; I could really appreciate some direction here for Nuxt.
None of these similar questions are based on Nuxt:
How to deploy separated frontend and backend?
How to deploy a React + NodeJS Express application to AWS?
How to deploy backend and frontend projects if they are separate?
There are a couple of approaches depending on your workload and budget. Let's cover what the options are and which ones apply to the work at hand.
Serverless Approach with Server-side rendering (SSR)
Tutorial
By creating an API Gateway route that invokes NuxtRenderer in a Lambda. The resulting generated HTML/JS/CSS would be pushed to an S3 bucket. The S3 bucket acts as a CloudFront origin. Use the CDN to cache the least-updated items and use the API call to update the cache when needed. This is the cheapest way to deploy, but if you don't have much traffic some customers may experience a brief lag when cache updates hit. Calculator
Serverless Approach via static generation
This is the easiest way to get going. All the routes that do not need dynamic template generation can simply live in an S3 bucket. The simplest approach is to run nuxt generate and then upload the resulting contents of dist to the bucket. There is no Node backend here, which is not what the question asks for, but it's worth mentioning.
This approach is well documented on NuxtJS.org and fits within the free tier.
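A minimal sketch of that flow, assuming a bucket named my-nuxt-site that is allowed to serve public content (the fallback document is an assumption, adjust to whatever your generate step emits):

```
# Generate the static site (Nuxt 2 writes it to dist/ by default)
npx nuxt generate

# Create the bucket and enable static website hosting
aws s3 mb s3://my-nuxt-site
aws s3 website s3://my-nuxt-site --index-document index.html --error-document index.html

# Upload the generated files
aws s3 sync dist/ s3://my-nuxt-site --delete
```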
Serverless w/ Elastic Beanstalk
IMO this approach is unnecessary and slightly dated given the offerings AWS provides in 2022. It still works, of course, but the cost-to-benefit ratio isn't attractive.
To do this you need to use the eb command line tool and set NPM_CONFIG_UNSAFE_PERM=true. AFAIK there is nothing else special that needs to happen for eb to know what to do from there. Calculator
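A hedged sketch of that eb workflow (application, environment, and region names are placeholders):

```
# Initialise the Elastic Beanstalk app on the Node.js platform
eb init my-nuxt-app --platform node.js --region us-east-1

# Create an environment, then set the flag mentioned above
eb create my-nuxt-env
eb setenv NPM_CONFIG_UNSAFE_PERM=true

# Ship subsequent updates
eb deploy
```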
Server-ish Approach(es)
Another alternative is to use Lightsail and a NodeJS server. Lightsail offers a low-cost (though not lower than serverless) full-time NodeJS server. In this case you would clone your project to the server, then set up a systemd unit to keep nodejs running.
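For example, a minimal systemd unit for that (the project path, user, and port are assumptions):

```
sudo tee /etc/systemd/system/nuxt.service >/dev/null <<'EOF'
[Unit]
Description=Nuxt SSR server
After=network.target

[Service]
# Cloned project directory and npm location are assumptions
WorkingDirectory=/home/ubuntu/app
ExecStart=/usr/bin/npm start
Restart=always
User=ubuntu
Environment=NODE_ENV=production PORT=3000

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable --now nuxt
```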
Yet another way to do this, with some scalability, is to use ECS and Docker. In this case you would create a Dockerfile that builds the container, executes npm start, and exposes the port Nuxt runs on to the host. This example shows how to run it using Fargate, which is essentially a serverless version of an EC2 machine. Calculator
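A rough Dockerfile along those lines (the Node base image and port 3000 are assumptions; Nuxt listens on 3000 by default):

```
cat > Dockerfile <<'EOF'
FROM node:16-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build
# Bind to all interfaces so the container port can be mapped to the host
ENV HOST=0.0.0.0 PORT=3000
EXPOSE 3000
CMD ["npm", "start"]
EOF

# Build and test locally before pushing the image to ECR for ECS/Fargate
docker build -t nuxt-app .
docker run -p 3000:3000 nuxt-app
```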
There are several ways to deploy your stack on AWS. I can give you some options, but your best shot if you want to save some costs is to use Lambda functions for your backend, S3 for your frontend, and a scheduled Lambda job for your Python script.
For your backend - https://github.com/vendia/serverless-express
For your nuxt frontend - https://nuxtjs.org/deployments/amazon-web-services
For your python job - https://docs.aws.amazon.com/AmazonCloudWatch/latest/events/RunLambdaSchedule.html
None of this is trivial to execute, but with the links above you'll probably get an idea of how you may implement your solution.
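For the Python script specifically, the scheduled-Lambda piece can be wired up roughly like this (function name, rule name, region, and account ID are placeholders):

```
# Fire every 5 minutes
aws events put-rule --name run-python-job --schedule-expression "rate(5 minutes)"

# Allow the rule to invoke the function
aws lambda add-permission \
  --function-name python-job \
  --statement-id events-invoke \
  --action lambda:InvokeFunction \
  --principal events.amazonaws.com \
  --source-arn arn:aws:events:us-east-1:123456789012:rule/run-python-job

# Point the rule at the function
aws events put-targets --rule run-python-job \
  --targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:python-job"
```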

What is an AWS Lightsail bucket for?

I am new to server stuff.
I'd like to store some images on the server (willing to use Amazon Lightsail) and
load those images from a mobile app and display them.
Can I do this with Amazon Lightsail storage/bucket, and do I need to buy it?
I think I will store just a few images (probably less than 200 images, each image less than 1MB).
Not gonna upload them all at once.
I guess this is a simple question, but for beginners it's not so simple.
Thanks in advance for any comments.
Amazon Lightsail is AWS' easy-to-use virtual private server. Lightsail offers virtual servers, storage, databases, networking, CDN, and monitoring for a low, predictable price.
The idea of the Lightsail service is to provide an easy way for you to host your application without having to handle a lot of settings.
For your case, you simply need an S3 bucket to host the images; I do not think that Lightsail is the best option.
Simply save your files in an S3 bucket and load the images using presigned/public URLs, or via the SDK.
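For example, with the AWS CLI (bucket and key names are placeholders; the mobile app would simply fetch the returned URL):

```
# Upload an image
aws s3 cp ./photo1.jpg s3://my-image-bucket/photo1.jpg

# Generate a time-limited presigned URL (valid for one hour here)
aws s3 presign s3://my-image-bucket/photo1.jpg --expires-in 3600
```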
For more information about the Lightsail service take a look here: What can I do with Lightsail?

AWS 3-tier architecture issues

Guys, I am trying to implement a 3-tier architecture to host a web app on AWS.
The requirements given to me are as follows.
The app will leverage a 3-tier architecture:
A Web Server that will be running on S3
An application tier running on ECS Cluster on Fargate or a fleet of EC2s with ASG (your choice)
A data tier running on RDS Aurora PostgreSQL latest supported version
I understand perfectly what to do on the 2nd and 3rd instructions for the App and Database tier.
What I don't get is the “web server running on S3”. Is it possible to have a web server on S3?
What I know is, I can have a web server running on EC2.
Please, I need some explanation here.
Yes and no. S3 is a static file host, which means that if you have HTML, CSS, and JS files and all you want to do is send those files to the browser, then absolutely, yes, S3 can be used as a file-serving service: https://docs.aws.amazon.com/AmazonS3/latest/userguide/WebsiteHosting.html
However, when your website does some real-time HTML generation, something like SSR (Server Side Rendering), S3 won't cut it. S3 does not process the code in any way and only sends the files as-is to the frontend. In that case, you need a more traditional server on EC2/ECS/EKS.
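If the static path fits your web tier, a minimal sketch looks like this (the bucket name is a placeholder, and the bucket must also allow public reads, e.g. via a bucket policy):

```
# Upload the built frontend
aws s3 sync ./build s3://my-web-tier --delete

# Enable static website hosting; errors fall back to index.html for an SPA
aws s3 website s3://my-web-tier --index-document index.html --error-document index.html
```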

Should I use Docker (nginx) for serving an SPA?

I only have one JavaScript file (bundle.js, packed by webpack) and one HTML file. It's kind of like an SPA.
I'm wondering how I should host this SPA. I already have one clean VM on Amazon EC2.
I was planning to set up Docker (Nginx) on this EC2 instance. However, as I said, this VM is clean, and only this SPA will use it.
So I'm confused by this situation: should I use Docker (nginx) or just install Nginx directly on this EC2 instance to serve this SPA?
The AWS S3 service is capable of serving static files. You just need to upload your files to a bucket, make them public, and note the public URL.
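Concretely, that can be as little as the following (the bucket name is a placeholder, and this assumes the bucket is configured to allow public ACLs):

```
aws s3 mb s3://my-spa-bucket
aws s3 cp index.html s3://my-spa-bucket/ --acl public-read
aws s3 cp bundle.js  s3://my-spa-bucket/ --acl public-read
# The files are then reachable at e.g.
# https://my-spa-bucket.s3.amazonaws.com/index.html
```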
As a side note, containerizing apps and using a microservices architecture will give you advantages, some of which are:
Ease of Upgrade
Fault Containment
Ease of technology change
Increased Security
Efficient Resource Utilization
S3 is cheap enough for static files, almost free compared to EC2, unless you have a backend in place. You can use Cyberduck for S3, and if you want to go the FTP route one day, the same app would give you a common UX for uploading your files.
A Docker setup would be over-engineering for static serving on IaaS, though; if you go that way, you would need to build an image that contains nginx and your files, as in the KyleAMathews/docker-nginx project.
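If you do go the Docker route, the image can stay very small - a sketch, assuming index.html and bundle.js sit in ./dist:

```
cat > Dockerfile <<'EOF'
FROM nginx:alpine
# Copy the SPA (index.html + bundle.js) into nginx's default web root
COPY dist/ /usr/share/nginx/html/
EOF

docker build -t my-spa .
docker run -d -p 80:80 my-spa
```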

How to setup shared persistent storage for multiple AWS EC2 instances?

I have a service hosted on Amazon Web Services. There I have multiple EC2 instances running with the exact same setup and data, managed by an Elastic Load Balancer and scaling groups.
Those instances are web servers running web applications based on PHP. So currently there are the very same files etc. placed on every instance. But when the ELB / scaling group launches a new instance based on load rules etc., the files might not be up-to-date.
Additionally, I'd rather like to use a shared file system for PHP sessions etc. than sticky sessions.
So, my question is, for those reasons and maybe more coming up in the future, I would like to have a shared file system entity which I can attach to my EC2 instances.
What way would you suggest to resolve this? Are there any solutions offered by AWS directly, so I can rely on their services rather than doing it on my own with DRBD and so on? What is the easiest approach? DRBD, NFS, ...? Is S3 also feasible for those purposes?
Thanks in advance.
As mentioned in a comment, AWS has announced EFS (http://aws.amazon.com/efs/) a shared network file system. It is currently in very limited preview, but based on previous AWS services I would hope to see it generally available in the next few months.
In the meantime there are a couple of third party shared file system solutions for AWS such as SoftNAS https://aws.amazon.com/marketplace/pp/B00PJ9FGVU/ref=srh_res_product_title?ie=UTF8&sr=0-3&qid=1432203627313
S3 is possible but not always ideal; the main blocker is that it does not natively support any filesystem protocols - instead, all interactions need to go via the AWS API or HTTP calls. Additionally, when looking at using it for session stores, the 'eventually consistent' model will likely cause issues.
That being said - if all you need is updated resources, you could create a simple script, run either as a cron job or on startup, that downloads the files from S3.
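Such a script could be as simple as the following, run from a boot script or cron (bucket and paths are placeholders):

```
#!/bin/bash
# Pull the latest web files from S3 on boot
aws s3 sync s3://my-app-code/current /var/www/html --delete

# Or re-sync every 5 minutes via a crontab entry like:
# */5 * * * * aws s3 sync s3://my-app-code/current /var/www/html --delete
```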
Finally, in the case of static resources like CSS/images, don't store them on your webserver in the first place - there are plenty of articles covering the benefits of storing and accessing static web resources directly from S3 while keeping the dynamic stuff on your server.
From what we can tell at this point, EFS is expected to provide basic NFS file sharing on SSD-backed storage. Once available, it will be a v1.0 proprietary file system. There is no encryption and it's AWS-only. The data is completely under AWS control.
SoftNAS is a mature, proven advanced ZFS-based NAS Filer that is full-featured, including encrypted EBS and S3 storage, storage snapshots for data protection, writable clones for DevOps and QA testing, RAM and SSD caching for maximum IOPS and throughput, deduplication and compression, cross-zone HA and a 100% up-time SLA. It supports NFS with LDAP and Active Directory authentication, CIFS/SMB with AD users/groups, iSCSI multi-pathing, FTP and (soon) AFP. SoftNAS instances and all storage is completely under your control and you have complete control of the EBS and S3 encryption and keys (you can use EBS encryption or any Linux compatible encryption and key management approach you prefer or require).
The ZFS filesystem is a proven filesystem that is trusted by thousands of enterprises globally. Customers are running more than 600 million files in production on SoftNAS today - ZFS is capable of scaling into the billions.
SoftNAS is cross-platform, and runs on cloud platforms other than AWS, including Azure, CenturyLink Cloud, Faction cloud, VMware vSphere/ESXi, VMware vCloud Air and Hyper-V, so your data is not limited or locked into AWS. More platforms are planned. It provides cross-platform replication, making it easy to migrate data between any supported public cloud, private cloud, or premise-based data center.
SoftNAS is backed by industry-leading technical support from cloud storage specialists (it's all we do), something you may need or want.
Those are some of the more noteworthy differences between EFS and SoftNAS. For a more detailed comparison chart:
https://www.softnas.com/wp/nas-storage/softnas-cloud-aws-nfs-cifs/how-does-it-compare/
If you are willing to roll your own HA NFS cluster, and be responsible for its care, feeding and support, then you can use Linux and DRBD/corosync or any number of other Linux clustering approaches. You will have to support it yourself and be responsible for whatever happens.
There's also GlusterFS. It does well up to 250,000 files (in our testing) and has been observed to suffer from an IOPS brownout when approaching 1 million files, and IOPS blackouts above 1 million files (according to customers who have used it). For smaller deployments it reportedly works reasonably well.
Hope that helps.
CTO - SoftNAS
For keeping your webserver sessions in sync, you can easily switch to Redis or Memcached as your session handler. This is a simple setting in php.ini, and all the webservers can access the same Redis or Memcached server for sessions. You can use Amazon's ElastiCache, which will manage the Redis or Memcached instance for you.
http://phpave.com/redis-as-a-php-session-handler/ <- explains how to setup Redis with PHP pretty easily
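Assuming the phpredis extension is installed and you have an ElastiCache Redis endpoint, the relevant settings look roughly like this (the endpoint and ini path are placeholders; the ini location varies by distro):

```
# Point PHP sessions at the shared Redis server (e.g. an ElastiCache endpoint)
sudo tee /etc/php.d/30-redis-session.ini >/dev/null <<'EOF'
session.save_handler = redis
session.save_path = "tcp://my-redis.abc123.0001.use1.cache.amazonaws.com:6379"
EOF

sudo systemctl restart php-fpm   # or restart Apache, depending on your setup
```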
Keeping your files in sync is a little bit more complicated.
How do I push new code changes to all my webservers?
You could use Git. When you deploy, you can set up multiple servers as remotes and push your branch (master) to all of them, so every new build goes out to every webserver.
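For example, a single Git remote with several push URLs sends one push to every webserver (hosts and repo paths are placeholders; this assumes a bare repo with a checkout hook on each server):

```
# One "production" remote that pushes to every webserver's bare repo
git remote add production ssh://deploy@web1.example.com/var/repos/app.git
git remote set-url --add --push production ssh://deploy@web1.example.com/var/repos/app.git
git remote set-url --add --push production ssh://deploy@web2.example.com/var/repos/app.git

# One push sends master everywhere
git push production master
```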
What about new machines that launch?
I would set up new machines to run an rsync script against a trusted source, your master web server. That way they sync their web folders with the master when they boot and will be identical even if the AMI had old web files in it.
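A boot-time sync script along those lines might look like this (host and paths are placeholders; assumes key-based SSH access to the master):

```
#!/bin/bash
# Sync the web root from the master web server when a new instance boots
MASTER=master.internal.example.com
rsync -az --delete "deploy@${MASTER}:/var/www/html/" /var/www/html/
```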
What about files that change and need to be live updated?
Store any user-uploaded files in S3. If a user uploads a document on Server 1, the file is stored in S3 and its location is stored in a database. Then, if a different user is on Server 2, they can see the same file and access it as if it were on Server 2: the file is retrieved from S3 and served to the client.
GlusterFS is also an open source distributed file system used by many to create shared storage across EC2 instances
Until Amazon EFS hits production, the best approach in my opinion is to build a storage backend exporting NFS from EC2 instances, maybe using Pacemaker/Corosync to achieve HA.
You could create an EBS volume that stores the files and instruct Pacemaker to umount/detach and then attach/mount the EBS volume on the healthy NFS cluster node.
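The steps Pacemaker would carry out for that failover look roughly like this with the AWS CLI (volume ID, instance ID, device name, and mount point are placeholders):

```
# Detach the EBS volume from the failed node...
aws ec2 detach-volume --volume-id vol-0123456789abcdef0

# ...attach it to the healthy NFS node...
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
  --instance-id i-0123456789abcdef0 --device /dev/xvdf

# ...then mount it and re-export the NFS share there
sudo mount /dev/xvdf /export/nfs
sudo exportfs -ra
```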
Hi, we currently use a product called SoftNAS in our AWS environment. It allows us to choose between both EBS- and S3-backed storage. It has built-in replication as well as a high availability option. It may be something you can check out; I believe they offer a free trial you can try out on AWS.
We are using ObjectiveFS and it is working well for us. It uses S3 for storage and is straightforward to set up.
They've also written a doc on how to share files between EC2 instances.
http://objectivefs.com/howto/how-to-share-files-between-ec2-instances