Autoscaling EC2 instances without loss of old data - amazon-web-services

Recently my website moved to Amazon.
I created an EC2 instance, installed LAMP, and set up CodeIgniter in the /var/www/html folder.
In the CodeIgniter structure I have a folder named 'UPLOAD'; this folder is used for uploaded images and files.
I made an AMI image from the EC2 instance and set up Auto Scaling of EC2 instances.
When my old EC2 instance fails, a new instance is created automatically, but all the data in the 'UPLOAD' folder on the old instance is lost.
I want to separate the 'UPLOAD' folder from the EC2 instance, so that whenever a new instance is created it gets the UPLOAD folder and its contents without any loss.
How can I do this?
Thanks in advance.
Note: I use MySQL on Amazon RDS.

You can use a shared Elastic Block Store (EBS) volume mounted as a directory.
If you manually configure your stack using the AWS Console, go to the EC2 service in the console, then to Elastic Block Store -> Volumes -> Create Volume. Then, in your launch configuration, you can bind to this storage device.
If you are using the command line tool as-create-launch-config to create your launch config, you need the argument
--block-device-mapping "key1=value1,key2=value2..."
If you are using Cloudformation to provision your stack, refer to this template for guidance.
This assumes CodeIgniter can be configured to state where its UPLOAD directory is.
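One caution: a standard EBS volume can only be attached to one instance at a time, so this works best when each Auto Scaling instance gets its own volume, or with a single-instance group. As a rough user-data sketch (the volume ID, device name, and mount point are placeholders, and the volume must be in the same Availability Zone as the instance):

# Hypothetical user-data sketch: attach an existing EBS volume and mount
# it where CodeIgniter's upload directory lives.
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
    --instance-id "$INSTANCE_ID" --device /dev/xvdf
sleep 10                             # crude wait for the device to appear
mkdir -p /var/www/html/upload
mount /dev/xvdf /var/www/html/upload # assumes the volume is already formatted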

As Mike said, you can use EBS, but you can also use Amazon Simple Storage Service (S3) to store your images.
This way, whenever an instance starts, it can access all the previously uploaded images from S3. Of course, this means you must change your upload code to store the images in S3 through the AWS API rather than on the local filesystem.
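As a rough sketch of the migration (the bucket name is a placeholder), the existing uploads can be copied once with the AWS CLI, after which the application writes new uploads to S3 directly through the SDK:

# One-time copy of the existing UPLOAD folder into S3 (hypothetical bucket)
aws s3 sync /var/www/html/upload s3://my-app-uploads/upload/
# Verify the objects landed
aws s3 ls s3://my-app-uploads/upload/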

Related

AWS EC2 Apache file persistence

I'm new to AWS and a little perplexed as to the situation with the /var/www/html folder in an EC2 instance on which Apache has been installed.
After setting up an Elastic Beanstalk service and uploading the files, I see that these are stored in the regular /var/www/html folder of the instance.
From reading the AWS documents, it seems that instances may be deleted and re-provisioned, which is why use of an S3 bucket, EFS, or EBS is recommended.
Why, then, is source code stored in the EC2 instance when using an Apache server? Won't these files potentially be deleted with the instance?
If you manually uploaded some files and data to /var/www/html, then of course they will be wiped out when AWS replaces or deletes the instance, e.g. due to autoscaling.
All files that you use on EB should be part of your deployment package, and all files that, for example, your users upload should be stored outside of EB, e.g. on S3.
Even if the instance is terminated for some reason, since the source code is part of your deployment package on Beanstalk, it can provision another instance to replace it with the exact same application and configuration. Basically you are losing this data, but it doesn't matter.
The data-loss concern is for anything you do that is not part of the automated provisioning/deployment, i.e. any manual configuration changes or any data your application may write to ephemeral storage. This is what you would need a more persistent storage option for.
It seems that, when the app is first deployed, all files are uploaded to an S3 bucket, from which they are copied into the relevant directory of each new instance. When new instances have to be created (such as for auto-scaling) or replaced, they also pull the code from the S3 bucket. This is also how the app is re-deployed: the bucket is updated and each instance makes a new copy of the code.
Apologies if this is stating the obvious to some people, but I had seen a few similar queries.
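You can see this for yourself from the CLI. A sketch, assuming the default naming Elastic Beanstalk uses for its per-region storage bucket (the region, account ID, and application name are placeholders):

# Elastic Beanstalk keeps source bundles in a default bucket named after your region/account
aws s3 ls s3://elasticbeanstalk-us-east-1-123456789012/
# List the application versions Beanstalk tracks
aws elasticbeanstalk describe-application-versions --application-name my-app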

Upload files to S3 during deploy

I want to create a bucket during the deployment process, but when I do this, a problem with assets appears: "must have values". So I decided to create one stack just to upload the files and another stack to deploy an EC2 instance. With this approach, however, the EC2 UserData didn't find the files on S3 to download them. I need these files to configure my instance. I could create the S3 bucket manually before deploying the EC2 instance, but I want to automate this process. How can I do this?
You need to configure S3 access on the machine where you wish to automate the process.
Use the AWS CLI tools and run aws configure on your server to define the credentials.
OR
If it is an EC2 instance, create an IAM role with S3 write permissions and attach it to the EC2 instance.
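For the IAM-role route, a minimal sketch (the role, profile, and instance names are placeholders; in real use, scope the policy down from full S3 access):

# Create a role that EC2 can assume
aws iam create-role --role-name s3-writer \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
# Grant S3 permissions (broad managed policy used here for brevity)
aws iam attach-role-policy --role-name s3-writer \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# Wrap the role in an instance profile and attach it to the running instance
aws iam create-instance-profile --instance-profile-name s3-writer-profile
aws iam add-role-to-instance-profile --instance-profile-name s3-writer-profile --role-name s3-writer
aws ec2 associate-iam-instance-profile --instance-id i-0123456789abcdef0 \
    --iam-instance-profile Name=s3-writer-profile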
You can do the following:
Create 2 separate stacks (we'll refer to them as s3Stack and ec2Stack)
Add ec2Stack.addDependency(s3Stack) where you create these stacks
In the s3Stack, create the bucket and upload the assets using aws-s3-deployment
Give the EC2 instance permissions to get the necessary files from the previously created bucket.
This ensures you can deploy everything with the single command cdk deploy ec2Stack. It will check whether the s3Stack needs to be created or updated first, and only when those updates are done will your ec2Stack be deployed.
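From the command line, the resulting workflow looks roughly like this (the asset bucket name is a placeholder):

# With the dependency declared, one command deploys both stacks in order
cdk deploy ec2Stack      # s3Stack is created/updated first, then ec2Stack
# Sanity-check that the assets were uploaded
aws s3 ls s3://my-config-bucket/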

Can I create multiple instances of the same EC2 image

I have a requirement to create multiple instances of the same EC2 image from Lambda, as the EC2 image does some Windows processing to create PDF files. Can I launch multiple instances of the same EC2 image and pass some parameters to each EC2 instance (say, the name of an S3 bucket, where the names are different)?
Thanks in advance.
An AWS EC2 image (AMI) is essentially a snapshot of how the server should look.
This would include:
Any packages you need installed
Any configuration you need
If you want custom configuration applied on top, you would need to either:
Make use of UserData when you launch the instance to run those additional actions (see the sketch below this list)
Create a custom AMI with the custom configuration included
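As a sketch of the UserData approach (the AMI ID, instance type, and bucket names are placeholders; this assumes a Windows AMI, where EC2 runs the <powershell> block at first boot):

# Launch one instance per bucket, passing the bucket name via user data
for bucket in pdf-bucket-a pdf-bucket-b; do
  aws ec2 run-instances \
    --image-id ami-0123456789abcdef0 \
    --instance-type t3.medium \
    --user-data "<powershell>[Environment]::SetEnvironmentVariable('PDF_BUCKET','$bucket','Machine')</powershell>" \
    --tag-specifications "ResourceType=instance,Tags=[{Key=PdfBucket,Value=$bucket}]"
done

The processing job on each instance then reads the PDF_BUCKET environment variable (or the instance tag) to find its bucket.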

Modify file on EC2 via Cloudformation-Update stack

I have used a CloudFormation template to create my EC2 instance. On this instance there is a file in the home directory that I copy from S3 while creating the stack.
I have that file stored locally as well. Now I modify that file locally and want to copy it to S3, and from S3 to the EC2 instance.
I want to automate this process through CloudFormation, so that whenever I modify this file locally, after doing an update stack, it uploads the modified file to S3 and from S3 to my EC2 instance.
Can anyone please help with how this can be achieved?
One thing that comes to mind (bearing in mind the application-specific nature of what you are trying to do) is using ECS instead of just EC2.
Note: This may be overkill, but it would work. Also if updates were extremely frequent this would be a major pain so just uploading the file to S3 with a script alongside update-stack (if the update-stack is even necessary) and then polling for changes to that S3 file in your EC2 application would be fine.
Anyway, this is a pattern we use when we are doing something like training a model with new data, which we then wish to deploy to AWS, replacing an application with an older version of the model.
You build the Docker image locally and your special file gets included inside the container. You push the Docker image to DockerHub or AWS ECS Registry or wherever. You update the cloudformation template ECS configuration to use the tag of this new Docker image and update the stack. Then ECS pulls this new image and the new Docker container(s) take the place of the old one(s) and it will have your special file inside it.
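A sketch of that workflow (the registry URL, stack name, and parameter name are placeholders):

# Bake the special file into the image via the Dockerfile, then push it
docker build -t my-app:v2 .
aws ecr get-login-password --region us-east-1 \
    | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-app:v2 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:v2
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-app:v2
# Point the stack at the new image tag; ECS replaces the old containers
aws cloudformation update-stack --stack-name my-ecs-stack \
    --use-previous-template \
    --parameters ParameterKey=ImageTag,ParameterValue=v2 \
    --capabilities CAPABILITY_IAM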
I know of at least one way: set up your EC2 in an AutoScaling Group (ASG). Then, on creation, use cfn-init on your UserData and have it fetch the file under sources. Use both Creation and Update policies. On update, have the WillReplace attribute set to true under AutoScalingReplacingUpdate. When you update, CloudFormation will create a new ASG with your fresh new file copy. If you signal success on your update, it will remove the previous instance and ASG, giving you an immutable infrastructure setup. Put everything behind a load balancer, and you got yourself a highly available blue/green deployment too.
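The user data for that pattern typically boils down to something like the following sketch (the stack, resource, and region values are placeholders; the resource names must match your template's logical IDs):

#!/bin/bash
# Fetch the file declared under AWS::CloudFormation::Init (sources/files)
/opt/aws/bin/cfn-init -v --stack my-stack --resource LaunchConfig --region us-east-1
# Signal the result so CloudFormation can complete (or roll back) the replacement
/opt/aws/bin/cfn-signal -e $? --stack my-stack --resource AutoScalingGroup --region us-east-1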

How to setup another instance for load balancing in EC2?

Right now we have one instance. How do we create another instance with the same content and files as the first server? Do we just create an instance?
Also, if we make a change to a file on server one, do we have to make the same changes on server two? Thanks.
The best way to achieve your use case is:
Install the AWS CLI on your instance.
Create an S3 bucket and add all your application files to it.
Add a cron job on your instance that runs an S3 sync command, something like this:
aws s3 sync s3://mybucket /<path to your application root>
Now take an AMI of your instance.
Attach your instance to the load balancer; if you want to add another instance, create it from the same AMI.
Apply any file change you want in the S3 bucket. That way, no matter how many instances you add to your load balancer, they will all be synced with the S3 bucket: if you change a file or add a new file in the bucket, that change will reach all the instances that are running behind the load balancer and are in sync with the bucket. (See the crontab sketch after this answer.)
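For example, the cron job could be a crontab entry along these lines (the path is a placeholder; the bucket matches the command above):

# Pull any changed application files from S3 every 5 minutes
*/5 * * * * aws s3 sync s3://mybucket /var/www/html >> /var/log/s3sync.log 2>&1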
Suppose you have an application on VM1 which you need to load balance. You would then follow these steps:
1. Take a snapshot of VM1, and also of the EBS volume if one is attached.
2. Now create VM2 from this snapshot (this ensures VM2 has exactly the same content as VM1, just a different MAC and IP configuration; the rest of the data remains the same).
3. Add VM1 and VM2 to the load balancer for whichever application you would like to load balance.
4. If you want any changes made to VM1's data to be reflected on VM2 as well, without needing to do it manually, use the rsync (remote sync) utility: it takes a directory and a machine name/IP as input, and changes to the given directory on one machine are automatically updated on the other (see the sketch below).
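A minimal rsync invocation for step 4 might look like this (the host and paths are placeholders; in practice you would run it from cron or a file watcher):

# Mirror the web root from VM1 to VM2 over SSH: -a preserves permissions,
# -z compresses in transit, --delete removes files deleted on the source
rsync -az --delete /var/www/html/ ec2-user@vm2.example.com:/var/www/html/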
The best thing to do will be:
Create an AMI with all the necessary configuration and software installed. Always try to use a golden AMI where possible (explore something like packer.io).
If you can't use a golden AMI, use a custom script as part of the user data when launching the EC2 instance to complete the configuration.
Create an Auto Scaling Group using the baked AMI.
In the console under Auto Scaling choose Auto Scaling Groups
On the Details tab, choose Edit
For Load Balancers, select your load balancer and save.
This way, just changing the number of instances in the Auto Scaling group will add (using the baked AMI) or remove instances. Better still, you can add thresholds to increase or decrease the instance count automatically. As the entire Auto Scaling group is associated with the ELB, any new instances will be automatically registered with the ELB.
Note: Your ELB and the ASG should be in the same region
Please check the Amazon docs link: Attaching a Load Balancer to Your Auto Scaling Group
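If you prefer the CLI to the console steps above, the equivalent is roughly the following (the group, load balancer, and target group names are placeholders):

# Classic Load Balancer
aws autoscaling attach-load-balancers \
    --auto-scaling-group-name my-asg --load-balancer-names my-elb
# Application/Network Load Balancer: attach its target group instead
aws autoscaling attach-load-balancer-target-groups \
    --auto-scaling-group-name my-asg \
    --target-group-arns arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/my-tg/0123456789abcdef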