Hosting a dynamic website on AWS with auto scaling

I am confused about how to host a dynamic Laravel website on AWS. Currently, I have an Auto Scaling group configured with MIN 1 and MAX 1. What I'm trying to achieve is to launch a new EC2 server when the current EC2 instance goes down.
What I don't understand is where I should store my website code so that the new EC2 server can obtain it automatically. I have read about storing it in an S3 bucket, but my website is dynamic, so I'm not sure that is suitable.
Any guidance would be appreciated :)

You could copy the entire contents of /var/www/html and store it in an S3 bucket.
Then you can add a bootstrap (user data) script to the EC2 launch configuration that copies the contents of the S3 bucket back to /var/www/html when a new instance starts.
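As a sketch, the user data (bootstrap) script on the launch configuration could look like this; the bucket name and the Apache/Amazon Linux setup are assumptions:

```shell
#!/bin/bash
# EC2 user data: runs once at first boot.
# Assumes the instance profile grants s3:GetObject / s3:ListBucket on the
# bucket, and that "my-site-code" is a placeholder bucket name.
aws s3 sync s3://my-site-code/html/ /var/www/html/ --delete
chown -R apache:apache /var/www/html   # Apache on Amazon Linux assumed
systemctl restart httpd
```

With this in place, every instance the Auto Scaling group launches pulls the latest copy of the site on boot.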

You can use any engine for provisioning cloud environments and applications, such as Ansible or the UrbanCode Deploy blueprint designer, and configure deployment scripts there. Those scripts can fetch the source or build artifacts from a repository, an artifact server (such as JFrog Artifactory), or wherever else you want.

Related

How to deploy my website created with TypeScript on AWS

I have created a website with VS Code in Node.js, using TypeScript.
Now I want to try to deploy it on AWS. I have read so many things about EC2, Cloud9, Elastic Beanstalk, etc.
So I'm totally lost about what to use to deploy my website.
Honestly, I'm a programmer, not a site manager or sysops person.
Right now I have created two EC2 instances: one with a key name and one without.
In Elastic Beanstalk, I have an Upload and Deploy button.
Can someone show me how to turn my project into a valid package to upload and deploy?
I have never deployed a website (normally the sysops did it at my job), so I don't know how to produce a correct distribution package.
Do I need to create both EC2 and Beanstalk?
Thanks
If you go with Elastic Beanstalk, it will take care of creating the EC2 instances for you.
It actually takes care of creating EC2 instances, a DB, load balancers, CloudWatch trails and more. That is pretty much what it does: it bundles multiple AWS services and offers one panel of administration.
To get started with EB you should install the EB CLI.
Then you should:
1. Go to your project directory and run eb init application-name. This starts an EB CLI wizard that asks which region you want to deploy to, what kind of DB you need, and so on.
2. After that, run eb create envname to create a new environment for your newly created application.
3. At this point, head to the EB panel in the AWS console and configure the start command for your app; it is usually something like npm run prod.
4. Because you're using TS, there are a few extra steps before you can deploy. Run npm run build, or whatever command you use for transpiling TS to JS. You'll be deploying the compiled scripts, not your source code.
5. Now you are ready to deploy: run eb deploy. As this is your only env, it should just work; once you have multiple envs you can run eb deploy envname. To list all envs, run eb list.
There are quite a few steps to take care of before deploying, and any of them can cause issues.
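The steps above can be condensed into a terminal session; the application, environment, and build-script names are placeholders:

```shell
pip install awsebcli            # install the EB CLI
cd my-app
npm run build                   # transpile TS -> JS; you deploy the compiled output
eb init my-app                  # wizard: region, platform, DB, ...
eb create my-env                # create the environment and do the first deploy
eb deploy my-env                # redeploy after changes
eb list                         # show all environments
```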
If your website contains only static pages, you can use Amazon S3 to host it.
Put your build files directly into an S3 bucket and enable static website hosting.
This will let anyone access your website from a URL globally; for this you also have to make your bucket public.
Alternatively, you can use CloudFront to keep your bucket private while still allowing access to it through a CloudFront URL.
You can refer to the links below for hosting a website through S3.
https://docs.aws.amazon.com/AmazonS3/latest/user-guide/static-website-hosting.html
https://aws.amazon.com/premiumsupport/knowledge-center/cloudfront-serve-static-website/
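Assuming the AWS CLI is configured and the bucket name is a placeholder, the S3 route looks roughly like this:

```shell
aws s3 mb s3://my-site-bucket                    # create the bucket
aws s3 sync ./build s3://my-site-bucket          # upload the build files
aws s3 website s3://my-site-bucket \
    --index-document index.html --error-document error.html
```

You still need to allow public reads (a bucket policy) or front the bucket with CloudFront, as described above.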

Application Data Folder in AWS Beanstalk

I am transferring an existing PHP application to Elastic Beanstalk and have a newbie question. My application has a data folder that grows and changes over time and can become quite large; currently it is a subfolder in the root directory of the application. In the traditional development model I just upload the changed PHP files and carry on using the same data folder. How can I do this in Elastic Beanstalk?
I don't want to have to download and upload the data folder every time I deploy a new version of the application. What is the best practice for this in AWS Beanstalk?
TIA
Peter
This is a question of continuous deployment.
Elastic Beanstalk supports CD from AWS CodePipeline: https://aws.amazon.com/getting-started/tutorials/continuous-deployment-pipeline/
To address "grows and changes over time and can grow quite large", you can use CodeCommit to version your code with Git. If you version the data folder with your application, the deployment will include it.
If the data is something you can offload to an object store (S3) or a database (RDS, DynamoDB, etc.), that would be the better practice.
As per the AWS documentation here, Elastic Beanstalk applications run on EC2 instances that have no persistent local storage; as a result, your EB application should be as stateless as possible and should use one of the persistent storage offerings from AWS.
A common strategy for persistent storage is Amazon EFS (Elastic File System). As noted in the documentation for using EFS with Elastic Beanstalk here:
Your application can treat a mounted Amazon EFS volume like local storage, so you don't have to change your application code to scale up to multiple instances.
Your EFS volume is essentially a mounted network drive. Files stored in EFS are accessible from any instance that has the file system mounted, and they persist beyond instance termination and/or scaling events.
You can learn more about EFS here, and using EFS with Elastic Beanstalk here.
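Under the hood, an EFS mount is an NFSv4.1 mount; a sketch with a placeholder file system ID and region:

```shell
sudo mkdir -p /mnt/efs
# fs-12345678 and us-east-1 are placeholders for your file system and region
sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```

In Elastic Beanstalk you would normally let an .ebextensions config perform this mount on every instance rather than running it by hand.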

Where to store public data and how to deploy it from GitHub

I'm working on a project hosted on an AWS EC2 instance. I used CodeDeploy to deploy the app from GitHub to EC2, but I want to store public data such as stylesheets, JS, images, etc. on S3. Is it even possible to deploy the app to EC2 and S3 in one step? Or should I place all the files on the EC2 instance only?
I've been reading the AWS documentation on Elastic Beanstalk, CodeDeploy, CodePipeline, OpsWorks and others for two days, but I'm confused.
It sounds like you want two steps in your deployment: one where you update your static assets in S3, and another where you update your servers and dynamic content on EC2 instances.
Here are some options:
1. Since they are static, just have every EC2 host upload the static assets to your bucket in a BeforeInstall script. You would need to include the static content in the bundle you use with CodeDeploy.
2. Use a leader election algorithm to do (1) from a single host. You could deploy something like ZooKeeper as part of your CodeDeploy deployment.
3. Upload your static assets separately from your CodeDeploy deployment. You might want to look at CodePipeline as a solution for a more complex multistage deployment (it can use CodeDeploy for your server deployment).
In either case, you will want to make sure that you aren't simply overwriting your static assets, or you'll end up in a situation where your old server code is trying to use new static assets. Always make sure you can run both versions of your code side by side during a deployment.
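As a sketch of the first option, the CodeDeploy hook script could look like this; the bucket, paths, and version prefix are assumptions:

```shell
#!/bin/bash
# scripts/upload_static.sh - wired into the BeforeInstall hooks of appspec.yml.
# Sync the bundled static assets to a versioned prefix so old server code
# keeps serving the old assets during the deployment.
aws s3 sync ./static/ s3://my-static-assets/v42/
```

The versioned prefix is one way to satisfy the side-by-side caveat above.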
I won't complicate it: I'll put all the files, including CSS and JS, on EC2 via CodeDeploy from GitHub, because there is no simple, ideal solution for this.

Integrating AWS EC2, RDS and... S3?

I'm following this tutorial, and it works 100%, like a charm:
http://docs.aws.amazon.com/gettingstarted/latest/wah-linux/awsgsg-wah-linux.pdf
But that tutorial uses only Amazon EC2 and RDS. I was wondering: what if my server scales up to multiple EC2 instances and I then need to update my PHP files?
Do I have to distribute them manually across those instances? Because, as far as I know, those instances are not synced with each other.
So I decided to use S3 as a replacement for my /var/www, so the PHP files are now centralised in one place.
That way, whenever EC2 scales up, the files remain in one place and I don't need to upload them to multiple EC2 instances.
Is it best practice to have a centralised file server (S3) for /var/www? I ask because I am still having permission issues when it is mounted using s3fs.
Thank you.
You have to put your /var/www/ in S3, and when your instances scale up, run aws s3 sync from your bucket; you can do that in the user data. You should also select a 'master' instance where you make changes: a sync script uploads the changes to S3, and rsync copies them to your live front ends. Otherwise, if you have three front ends that downloaded /var/www/ from S3 and you want to make a new change, you would have to run an S3 sync on every instance.
You can manage changes on your 'master' instance with inotify. inotify can detect a change in /var/www/ and execute two commands: one could be aws s3 sync, followed by an rsync to the rest of your instances. You can get the list of your instances from the ELB through the AWS API.
The last thing: enable termination protection on your 'master' instance.
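A sketch of the 'master' instance watcher, assuming inotify-tools is installed; the front-end addresses and bucket name are placeholders:

```shell
#!/bin/bash
# Watch /var/www/ and propagate every change to S3 and the other front ends.
FRONTENDS="10.0.1.11 10.0.1.12"   # placeholder FE private IPs (or fetch via the AWS API)
inotifywait -m -r -e modify,create,delete,move /var/www/ |
while read -r dir event file; do
  aws s3 sync /var/www/ s3://my-site-code/www/ --delete
  for fe in $FRONTENDS; do
    rsync -az --delete /var/www/ "ec2-user@$fe:/var/www/"
  done
done
```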
Your architecture should look like here http://www.markomedia.com.au/scaling-wordpress-in-amazon-cloud/
Good luck!!

Autoscaling ec2 instance without loss of old data

Recently my website moved to Amazon.
I created an EC2 instance, installed LAMP, and set up CodeIgniter in the /var/www/html folder.
In the CodeIgniter folder I have a folder named 'UPLOAD', which is used for uploaded images and files.
I made an AMI image from the EC2 instance and have set up Auto Scaling of EC2 instances.
When my old EC2 instance fails, a new instance is created automatically, but all the data in the 'UPLOAD' folder on the old EC2 instance is lost.
I want to separate the 'UPLOAD' folder in CodeIgniter from the EC2 instance, so that whenever a new instance is created it gets the UPLOAD folder and its contents without loss.
How can I do this?
Thanks in advance.
Note: I use MySQL on Amazon RDS.
You can use a shared Elastic Block Store (EBS) mounted directory.
If you manually configure your stack using the AWS console, go to the EC2 service, then Elastic Block Store -> Volumes -> Create Volume; in your launch configuration you can then bind to this storage device.
If you are using the command line tool as-create-launch-config to create your launch config, you need the argument
--block-device-mapping "key1=value1,key2=value2..."
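With the current AWS CLI, the equivalent flag is --block-device-mappings, which takes JSON; the AMI ID, device name, and size below are placeholders:

```shell
aws autoscaling create-launch-configuration \
    --launch-configuration-name my-launch-config \
    --image-id ami-12345678 \
    --instance-type t2.micro \
    --block-device-mappings \
    '[{"DeviceName":"/dev/sdf","Ebs":{"VolumeSize":100,"DeleteOnTermination":false}}]'
```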
If you are using CloudFormation to provision your stack, refer to this template for guidance.
This assumes CodeIgniter can be configured to state where its UPLOAD directory is.
As Mike said, you can use EBS, but you can also use Amazon Simple Storage Service (S3) to store your images.
That way, whenever an instance starts, it can access all the previously uploaded images from S3. Of course, this means you must change your upload code to store the images in S3 via the AWS API rather than on the local filesystem.
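As a sketch of the idea using the AWS CLI (in PHP you would use the AWS SDK's S3 client instead); the bucket name and file path are placeholders:

```shell
# Instead of writing the upload into the local UPLOAD folder, push it to S3:
aws s3 cp /tmp/photo.jpg s3://my-app-uploads/uploads/photo.jpg
# The app then references the object's S3 (or CloudFront) URL.
```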