AWS Central Mounting for Autoscaling

We need a way to mount the /var/www/html/ folder on multiple servers, using EFS or some other service, to set up an autoscaling environment.
We first used EFS and ran into an issue: under sustained high throughput, if the amount of stored data is not growing, the file system exhausts its burst credits (in bursting mode, EFS baseline throughput scales with the size of the stored data).
So could you please suggest an alternative for high throughput when the stored data is not increasing rapidly?

I have tried using EFS with auto scaling, faced the same issue as yours, and ended up with the solution below.
I mount a volume formatted with XFS at "/home/code" and deploy the code into that directory. After every deployment, a script takes a snapshot of the volume (Link).
I use the EC2 bootstrap script to create a high-IOPS volume from this snapshot, which is automatically attached to each new instance as it comes in.
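For illustration, here is a minimal sketch of what that bootstrap (user data) step could look like; the snapshot ID, device name, IOPS value, and mount point are all placeholders, not values from my setup:

    #!/bin/bash
    # Hypothetical bootstrap sketch: clone the latest code snapshot into a
    # high-IOPS volume and attach it to the instance that is booting.
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    AZ=$(curl -s http://169.254.169.254/latest/meta-data/placement/availability-zone)

    # Create an io1 volume from the deployment snapshot (snap-... is a placeholder)
    VOLUME_ID=$(aws ec2 create-volume \
      --snapshot-id snap-0123456789abcdef0 \
      --availability-zone "$AZ" \
      --volume-type io1 --iops 3000 \
      --query 'VolumeId' --output text)

    aws ec2 wait volume-available --volume-ids "$VOLUME_ID"
    aws ec2 attach-volume --volume-id "$VOLUME_ID" \
      --instance-id "$INSTANCE_ID" --device /dev/xvdf

    # Wait for the device to appear, then mount the XFS filesystem
    while [ ! -e /dev/xvdf ]; do sleep 1; done
    mkdir -p /home/code
    mount -t xfs /dev/xvdf /home/code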
Alternatively, you can tar your code at every deployment and have the bootstrap script copy it onto the new instances as they come in.

Related

Automate AWS AMI creation without downtime and data loss

I wanted to know whether it is possible to automate the creation of an AMI in AWS without downtime and data loss, and if so, how we can achieve it.
I have used Systems Manager -> Maintenance Windows, in which I set reboot to true for data integrity, but I need a way to ensure the data is not lost.
Any help will be appreciated.
Thank you.
Answering as per the discussion in the comments; the question is still somewhat vague to me.
You have EBS right now. I'm not sure whether your instances are in the same AZ or not. If they are in the same AZ, you can use the EBS Multi-Attach feature (available for Provisioned IOPS io1/io2 volumes only) to share the same storage among all of them.
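A rough CLI sketch of that idea (volume size, AZ, IDs, and device name are placeholders); note that Multi-Attach alone does not make a regular filesystem like ext4 or XFS safe for concurrent writers, so a cluster-aware filesystem is needed on top:

    # Hypothetical sketch: create an io2 volume with Multi-Attach enabled
    # and attach it to two instances in the same AZ (all IDs are placeholders)
    aws ec2 create-volume --volume-type io2 --iops 1000 --size 100 \
      --availability-zone us-east-1a --multi-attach-enabled

    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
      --instance-id i-0aaaaaaaaaaaaaaaa --device /dev/sdf
    aws ec2 attach-volume --volume-id vol-0123456789abcdef0 \
      --instance-id i-0bbbbbbbbbbbbbbbb --device /dev/sdf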
Regarding backup, you can choose EBS snapshots.
Ideally, my suggestion would be to create a launch template and use EFS, which can be mounted by multiple instances in the same region (create a mount target in each AZ you want to mount from). EFS is natively integrated with AWS Backup.
Whenever a failover happens or your EC2 instance crashes for any reason and the group drops below its target capacity, Auto Scaling automatically provisions a new instance from the launch template, and that instance uses the same EFS.
but i need a way so that the data is not lost.
If you want to achieve this then, according to the docs, you need to ensure that the application or OS is not writing to the EBS volume, which can be managed by either a script or custom logic.
You can take a snapshot of an attached volume that is in use. However, snapshots only capture data that has been written to your Amazon EBS volume at the time the snapshot command is issued. This might exclude any data that has been cached by any applications or the operating system. If you can pause any file writes to the volume long enough to take a snapshot, your snapshot should be complete. However, if you can't pause all file writes to the volume, you should unmount the volume from within the instance, issue the snapshot command, and then remount the volume to ensure a consistent and complete snapshot.
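A minimal sketch of that pause-then-snapshot step, assuming the data volume is an XFS or ext4 filesystem mounted at /data (the path and volume ID are placeholders):

    # Hypothetical sketch: briefly pause filesystem writes so the snapshot
    # is consistent; /data and the volume ID are placeholders.
    sync
    fsfreeze --freeze /data      # flush caches and block new writes
    # create-snapshot captures a point in time as soon as the call is issued,
    # so writes can resume immediately after it returns
    aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 \
      --description "consistent pre-AMI snapshot"
    fsfreeze --unfreeze /data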
If you achieve the above, then you can automate the creation, retention, and deletion of EBS snapshots and EBS-backed AMIs using Data Lifecycle Manager.
I haven't tried this, but I think exporting the VM to S3 and then automating the entire pipeline with EC2 Image Builder should do the trick; you can customize your further images with build components.
Reference: importing and exporting VMs.
Unfortunately, there is no out-of-the-box solution other than compromising data integrity, but you can try the approaches mentioned above, which ensure both data integrity and automation.

ECS with persistent data on EFS or EBS with CloudFormation

I am looking for an AWS expert to help me with this. I've spent almost a week trying to deploy my backend Docker image to AWS without reaching 100% of the desired behaviour.
Firstly I was advised to try the new Fargate service that AWS recently introduced. I managed to deploy everything I needed, but it quickly turned out that I need some kind of data persistence, which from what I've read is unavailable with Fargate for the moment.
I found these templates, which are very helpful because AWS is so big and overwhelming that I would get nowhere without them, and I have currently tried the deploy using EC2 instances: https://github.com/awslabs/aws-cloudformation-templates/tree/master/aws/services/ECS/EC2LaunchType
I have some questions about this kind of deploy:
1: Why does this deploy create 2 EBS volumes for the cluster, one 8 GB with a snapshot and a second 22 GB without a snapshot?
2: Is it possible to reduce the size of those EBS volumes? If so, how?
3: Is it possible to have only one of those EBS volumes?
4: Is it possible to mount a volume from my Docker backend image onto those EBS volumes to persist data? If so, how? I need to mount two volumes for my backend, /root/.local/share/Bisq and /root/.local/share/bisq-api (or ~/.local/share/Bisq and ~/.local/share/bisq-api); I'm not quite sure how this works with AWS. What are the paths compared to the steps in the local environment?
5: Would it be better to use EBS or EFS for data persistence? The problem with EFS is that I can't find any documents on how to connect EFS to this kind of ECS deploy. Everything MUST use CloudFormation templates.
Overall, the requirements that would match 100% of the desired behaviour are:
1: CloudFormation template(s) that deploy as few services as necessary, to keep the infrastructure small and preserve costs, with the ability to just click a button and get an external link to the backend service. (There can't be any manual configuration; everything must be automated.)
2: Ability to stop/start the EC2 instance for the backend container. (The EC2 instance will be running from a few minutes to a few hours per day, a few days a month, depending on how often each user uses the backend.)
3: Ability to preserve the data when the user stops the instance and then starts it again at a future point in time.
I would appreciate any help or suggestions, because I'm starting to lose my head over everything connected with AWS services. Understanding the use cases for AWS is a really big challenge, so I would appreciate the help.
Thank you!
1: Why does this deploy create 2 EBS volumes for the cluster, one 8 GB with a snapshot and a second 22 GB without a snapshot?
As per the docs:
Amazon ECS-optimized AMIs from version 2015.09.d and later launch with an 8-GiB volume for the operating system that is attached at /dev/xvda and mounted as the root of the file system. There is an additional 22-GiB volume that is attached at /dev/xvdcz that Docker uses for image and metadata storage.
Here is the reference: ecs-ami-storage-config
2: Is it possible to reduce the size of those EBS volumes? If so, how?
Also from the same docs:
You can increase these default volume sizes by changing the block device mapping settings for your instances when you launch them; however, you cannot specify a smaller volume size than the default.
3: Is it possible to have only one of those EBS volumes?
For this, you would be better off using a custom AMI and configuring it as needed.
4: Is it possible to mount a volume from my Docker backend image onto those EBS volumes to persist data? If so, how? I need to mount two volumes for my backend, /root/.local/share/Bisq and /root/.local/share/bisq-api (or ~/.local/share/Bisq and ~/.local/share/bisq-api); I'm not quite sure how this works with AWS. What are the paths compared to the steps in the local environment?
This is configured in the task definition:
    TaskDefinition:
      Type: AWS::ECS::TaskDefinition
      Properties:
        Volumes:
          - Volume Definition
        ...
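As a concrete (hypothetical) sketch for the Bisq paths from the question; the volume name, host path, image name, and memory value are assumptions, not values from the linked templates:

    TaskDefinition:
      Type: AWS::ECS::TaskDefinition
      Properties:
        Volumes:
          - Name: bisq-data               # hypothetical volume name
            Host:
              SourcePath: /data/bisq      # path on the EC2 container instance
        ContainerDefinitions:
          - Name: backend
            Image: your-backend-image     # placeholder image
            Memory: 512
            MountPoints:
              - SourceVolume: bisq-data
                ContainerPath: /root/.local/share/Bisq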
5: Would it be better to use EBS or EFS for data persistence? The problem with EFS is that I can't find any documents on how to connect EFS to this kind of ECS deploy. Everything MUST use CloudFormation templates.
It depends on your use case. Each option has advantages and disadvantages depending on your needs, so read the docs for each and choose the best one accordingly. In the templates you found, you can customize the LaunchConfiguration UserData to run the mount commands; all of this can be done in CloudFormation.
Additionally, I will leave you this documentation for mounting an EFS file system automatically: Mounting Your Amazon EFS File System Automatically
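For illustration, the UserData customization mentioned above might look roughly like this (the file system ID is a placeholder):

    LaunchConfiguration:
      Type: AWS::AutoScaling::LaunchConfiguration
      Properties:
        # ... ImageId, InstanceType, SecurityGroups as in the template ...
        UserData:
          Fn::Base64: |
            #!/bin/bash
            # Mount the shared EFS file system at boot (fs-12345678 is a placeholder)
            yum install -y amazon-efs-utils
            mkdir -p /mnt/efs
            mount -t efs fs-12345678:/ /mnt/efs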

AWS ELB Autoscaling group with common filesystem (e.g. EBS)

I am using AWS Elastic Beanstalk with an Auto Scaling group.
I wish to log events into files and be able to finish processing the files before instances terminate during a scale-down.
I have read that lifecycle hooks can answer my requirement.
My question is: is there an alternative, such as a common EBS file system for all the instances in the group that is always kept alive? If that is possible, are there any cons to that approach? Is I/O slower?
An EBS volume cannot be attached to several EC2 instances at the same time (Multi-Attach on Provisioned IOPS volumes is the exception, and it still requires a cluster-aware file system).
But shared storage is possible with EFS (Elastic File System). It's pricey, so EFS is not suitable for large amounts of data, but it is as fast as any NFS share and can be mounted on hundreds of servers at the same time.
The only consideration is how you will mount the EFS volume. Since Elastic Beanstalk doesn't give you direct control over cloud-init, you will have to build an AMI or issue a mount command from your code.
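The mount command itself is a single line; for example (the file system ID and region are placeholders), using the standard NFS options AWS recommends:

    # Hypothetical example: mount an EFS file system over NFSv4.1
    sudo mkdir -p /mnt/efs
    sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
      fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs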

How to load EBS Volume by ID via .ebextensions

I'm trying to mount the same volume for a Beanstalk build but can't figure out how to make it work with the volume ID.
I can attach a new volume, and I can attach one based on a snapshot ID, but neither is what I'm after.
My current .ebextensions config:
    commands:
      01umount:
        command: "umount /dev/sdh"
        ignoreErrors: true
      02mkfs:
        command: "mkfs -t ext3 /dev/sdh"
      03mkdir:
        command: "mkdir -p /media/volume1"
        ignoreErrors: true
      04mount:
        command: "mount /dev/sdh /media/volume1"

    option_settings:
      - namespace: aws:autoscaling:launchconfiguration
        option_name: BlockDeviceMappings
        value: /dev/sdh=:20
Which of course mounts a new volume, rather than attaching an existing one. Perhaps a snapshot is what I want and I just don't understand the terminology here?
I need the same data that was on the volume to be on each EC2 instance that comes up when autoscaling kicks in... but a snapshot would surely just contain the data that existed at the point the snapshot was created?
Any ideas or better approaches?
Elastic Block Store (EBS) allows you to create, snapshot/clone, and destroy virtual hard drives for EC2 instances. These drives ("volumes") can be attached to and detached from EC2 instances, but they are not a "share" or shared volume... so attaching a volume by ID becomes a non-useful idea after the first instance has launched.
EBS volumes are hard drives. The analogy is imprecise (because they're on a SAN) but much the same way as you can't physically install the same hard drive in multiple servers, you can't attach an EBS volume to multiple instances (SAN != NAS).
Designing with a cloud mindset, all of your fixed resources would actually be on the snapshot (disk image) you deploy when you release a new version and then use to spawn each fresh auto-scaled instance... and nothing persistent would be stored there because -- just as important as scaling up, is scaling down. Autoscaled instances go away when not needed.
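In .ebextensions terms, that pattern of baking fixed data into the deployed image is just the snapshot variant of the mapping from the question (the snapshot ID is a placeholder):

    option_settings:
      - namespace: aws:autoscaling:launchconfiguration
        option_name: BlockDeviceMappings
        # Each autoscaled instance gets its own fresh 20 GB volume cloned from
        # the snapshot (snap-... is a placeholder), so the fixed data is present
        # at launch but nothing written to it persists across instances.
        value: /dev/sdh=snap-0123456789abcdef0:20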
AWS has Simple Storage Service (S3) which is commonly used for storing things like documents, avatars, images, videos, and other resources that need to be accessible in a distributed environment. It is not a filesystem, and can't properly be compared to a filesystem, because it's an object store... but is a highly scalable and highly available storage service that is well-suited to distributed applications. s3fs allows an S3 "bucket" to be mounted into your machine's filesystem, but this is no panacea. That mechanism should be reserved for back-end process use, if you use it at all, because it's not appropriate for resources like code or templates, and will not perform as well for serving up content as S3 will perform if used as designed, with clients directly accessing it over https. You can secure the content through more than one mechanism, as documented.
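If you do reserve s3fs for a back-end process, the mount is a single command; this sketch assumes the bucket name and mount point, and that the instance has an IAM role granting access to the bucket:

    # Hypothetical example: mount an S3 bucket with s3fs using the instance's IAM role
    mkdir -p /mnt/s3bucket
    s3fs my-bucket /mnt/s3bucket -o iam_role=auto -o allow_other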
AWS also now has Elastic File System (EFS), which sets up an array of storage that you can mount from all of your machines using NFS. AWS provides the NFS server and the back-end storage. Unlike EBS, you do not need to know how much storage to provision up front, because it scales up and down based on what you've stored, billing you only for what you use. This service is still in "preview" as of this writing, so it should not be used for production data.
Or, you can manually configure your own NFS server and mount it from the autoscaling machines. Making such a setup fail-safe is a bit tricky, though.

AWS: Automatically Attach EBS Volume to EC2 Instances behind Elastic Beanstalk

I am facing an architecture-related problem:
I have created a new environment in Elastic Beanstalk and pushed my app there. All good so far. I have set it to auto scale up and down.
My app depends on filesystem storage (it creates files and then serves them to users). I am using an EBS volume (5 GB large) to create the files, then pushing them to S3 and deleting them from EBS. The reason I'm using EBS is the ephemeral filesystem of EC2 instances.
When AWS scales up, new instances don't have the EBS volume attached, because an EBS volume can be attached to only one instance at a time.
When it scales down, it may shut down the instance that has the EBS volume attached, which totally messes things up.
I have added a line to /etc/fstab that automatically mounts the EBS volume at /data, but that only applies to the instance on which I edited /etc/fstab. I guess the solution would be to create a customized AMI image with that line baked in. But again, EBS can't be attached to more than one instance at a time, so it seems like a dead end.
Where is my thinking wrong? What would be a possible solution, or the proper way of doing this?
For some reason, I believe that using S3 is not the right way of doing it.
S3 is a fine way to do it: your application creates the file, uploads it to S3, removes the file from the local filesystem, and hands the client a URL for accessing the file. Totally reasonable. And why can't you use ephemeral storage for this? Instance store-backed instances have additional storage available, mounted at /mnt by default. Why can't the application create the file there? If the files don't need to persist between instance start/stop/reboot, then there's no great reason to use EBS (unless you want faster boot times for your autoscaled instances, I suppose).
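That upload-and-hand-back-a-URL flow is only a few CLI calls; in this sketch, generate_report, the bucket name, and the paths are placeholders:

    # Hypothetical sketch of the create -> upload -> delete -> share flow
    generate_report > /tmp/report-123.pdf          # app writes the file locally
    aws s3 cp /tmp/report-123.pdf s3://my-app-files/reports/report-123.pdf
    rm /tmp/report-123.pdf                         # local copy no longer needed

    # Hand the client a time-limited URL instead of serving the file yourself
    aws s3 presign s3://my-app-files/reports/report-123.pdf --expires-in 3600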