Is there a way to update the instance metadata options on hundreds of EC2 instances in one account without having to run modify-instance-metadata-options with --instance-id on every single instance?
I've looked around, thinking surely I'm not the only one with this problem, but I can't find much. I was toying with the idea of writing an Ansible playbook, but that might be overthinking things.
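The obvious brute-force version would be a shell loop like this (untested; the IMDSv2 options shown are just an example, and you'd want to add filters), which still feels clunkier than it should be:

```bash
# Fetch every instance ID in the region, then apply the metadata
# options to each one in turn. Sketch only: adjust region/filters/options.
for id in $(aws ec2 describe-instances \
    --query 'Reservations[].Instances[].InstanceId' \
    --output text); do
  aws ec2 modify-instance-metadata-options \
    --instance-id "$id" \
    --http-tokens required \
    --http-endpoint enabled
done
```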
I'm attempting to set up a RethinkDB cluster with 3 servers total, spread evenly across 3 private subnets, each in a different AZ in a single region.
Ideally, I'd like to deploy the DB software via ECS and provision the EC2 instances with auto scaling, but I'm having trouble trying to figure out how to instruct the RethinkDB instances to join a RethinkDB cluster.
To create/join a cluster in RethinkDB, when you start up a new instance of RethinkDB you specify the host:port combination of one of the other machines in the cluster. This is where I'm running into problems. The Auto Scaling service is creating new primary ENIs for my EC2 instances and using a random IP in my subnet's range, so I can't know the IP of the EC2 instance ahead of time. On top of that, I'm using awsvpc task networking, so ECS is creating new secondary ENIs dedicated to each Docker container and attaching them to the instances when it deploys them, and those are also getting new IPs that I don't know ahead of time.
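For reference, joining looks like this (29015 is RethinkDB's default intracluster port):

```bash
# First server: starts on its own and implicitly forms the cluster.
rethinkdb --bind all

# Every later server: point --join at any existing member.
rethinkdb --bind all --join some-existing-host:29015
```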
So far I've worked out one possible solution, which is to not use an Auto Scaling group, but instead to manually deploy 3 EC2 instances across the private subnets, which would let me assign my own predetermined private IPs. As I understand it, this still doesn't help me if I'm using awsvpc task networking, though, because each container running on my instances will get its own dedicated secondary ENI and I won't know the IP of that secondary ENI ahead of time. I think I can switch my task networking to bridge mode to get around this. That way I can use the predetermined IP of the EC2 instances (the primary ENI) in the RethinkDB join command.
So, in conclusion, the only way I can figure out to achieve this is to not use Auto Scaling or awsvpc task networking, both of which would otherwise be very desirable features. Can anyone think of a better way to do this?
As mentioned in the comments, this is more of an issue around the fact you need to start a single RethinkDB instance one time to bootstrap the cluster and then handle discovery of the existing cluster members when joining new members to the cluster.
I would have thought RethinkDB would have published a good pattern for this in their docs, because it's going to be pretty common when setting up clusters, but I couldn't see anything useful there. If someone does know of an official recommendation then you should definitely use that rather than what I'm about to propose, especially as I have no experience with running RethinkDB.
This is more just spit-balling and is completely untested (at least for now), but the principle is: start a single, one-off instance of RethinkDB to bootstrap the cluster, have more cluster members join it, and then ditch the special-case bootstrap member (the one that didn't attempt to join a cluster), leaving the remaining cluster members to carry on.
The bootstrap instance is easy enough to consider. You just need a RethinkDB container image and an ECS task that runs it in stand-alone mode, with the ECS service running only one instance of the task. To let the second set of cluster members easily discover existing members, including this bootstrapping instance, it's probably easiest to use a service discovery mechanism such as the one offered by ECS, which uses Route53 records under the covers. The ECS service should register itself in the RethinkDB namespace.
Then you should create another ECS service that's basically the same as the first, but whose entrypoint script lists the services in the RethinkDB namespace, resolves them, discards the container's own IP address, and then uses a discovered host with --join when starting RethinkDB in the container.
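A rough, untested sketch of that entrypoint, assuming dig is available in the image and rethinkdb.local is the (hypothetical) name the ECS service registered:

```bash
#!/bin/sh
# Hypothetical DNS name maintained by ECS service discovery via Route53.
SERVICE_NAME="rethinkdb.local"
SELF_IP=$(hostname -i)

# Resolve all registered task IPs and drop this container's own address.
PEERS=$(dig +short "$SERVICE_NAME" | grep -v "^${SELF_IP}$")

if [ -z "$PEERS" ]; then
  # Nobody else registered yet: start stand-alone (the bootstrap case).
  exec rethinkdb --bind all
else
  # Join the first member we discovered.
  exec rethinkdb --bind all --join "$(echo "$PEERS" | head -n 1):29015"
fi
```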
I'd then set the non-bootstrap ECS service to just 1 task at first, to let it discover the bootstrap instance, and then you should be able to keep adding tasks to the service one at a time until you're happy with the size of the cluster, leaving you with n + 1 members including the original bootstrap instance.
After that I'd remove the bootstrap ECS service entirely.
If an ECS task in the non-bootstrap ECS service dies for whatever reason, it should be able to rejoin automatically without any issue, as it will just find a running RethinkDB task and join that.
You could probably expand the check for which cluster member to join by verifying that the RethinkDB port is open and responding before using that member as the join target. That way it will handle multiple tasks being started at the same time (with my original suggestion, a task could potentially find another task that is itself still trying to join the cluster and try to join that first, with them all potentially deadlocking if none of them happened to pick an existing cluster member by chance).
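Concretely, that check could replace the blind head -n 1 in the sketch above with something like this (assumes nc is in the image):

```bash
# Only join a peer whose cluster port actually accepts connections.
for ip in $PEERS; do
  if nc -z -w 2 "$ip" 29015; then
    exec rethinkdb --bind all --join "${ip}:29015"
  fi
done
# Nothing answering yet: fall back to stand-alone bootstrap behaviour.
exec rethinkdb --bind all
```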
As mentioned, this answer comes with a big caveat: I haven't got any experience running RethinkDB, and I've only played with the service discovery mechanism that was recently released for ECS, so I might be missing something here, but the general principles should hold fine.
I have an ECS cluster running one task for my backend instance. I would like to be able to stop/start the EC2 instance whenever I want. Is it possible? I was trying to stop the instance directly, but it terminates a few seconds after being stopped, and after that a new instance is created automatically. I tried to change the Auto Scaling group to desired=min=0 capacity, but when I do that the instance gets auto-terminated. I just want to turn off the EC2 instance when it's not needed, but at the same time I want data to persist between turning it off and on. I have been fighting with this for a few days now and haven't been able to achieve my goal.
Also, how do I link an EBS volume with VOLUME /root/.local/share/XYZ from the Docker image so the data in the XYZ folder persists?
I would suggest modifying the Auto Scaling group: when you want to turn the instance off, set the capacity to 0, and when you want to turn it on, set the value back up. You can do that with the AWS CLI, and you can schedule it as well by putting the AWS CLI command in a cron job.
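For example (my-ecs-asg is a placeholder for your group's name):

```bash
# "Turn off": scale the group down to zero instances.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-ecs-asg \
  --min-size 0 --desired-capacity 0

# "Turn on": scale it back up.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name my-ecs-asg \
  --min-size 1 --desired-capacity 1

# Scheduled via cron, e.g. off at 20:00 and on at 08:00:
# 0 20 * * * aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-ecs-asg --min-size 0 --desired-capacity 0
# 0 8 * * * aws autoscaling update-auto-scaling-group --auto-scaling-group-name my-ecs-asg --min-size 1 --desired-capacity 1
```

Bear in mind that scaling to zero terminates the instance rather than stopping it, so anything on its local disk is lost; that's where the EFS suggestion below comes in.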
I would suggest using EFS. Here is an article from AWS on how to persist data from ECS containers using EFS.
Using Amazon EFS to Persist Data from Amazon ECS Containers
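As a sketch of the relevant part, ECS can mount EFS volumes natively in the task definition (this needs a reasonably recent ECS agent; the file system ID, names, and memory value below are placeholders):

```bash
# Register a task definition that mounts an EFS volume at the path the
# image's VOLUME instruction declares, so the data survives instance churn.
aws ecs register-task-definition --cli-input-json '{
  "family": "backend",
  "volumes": [{
    "name": "xyz-data",
    "efsVolumeConfiguration": { "fileSystemId": "fs-12345678" }
  }],
  "containerDefinitions": [{
    "name": "backend",
    "image": "my-backend:latest",
    "memory": 512,
    "mountPoints": [{
      "sourceVolume": "xyz-data",
      "containerPath": "/root/.local/share/XYZ"
    }]
  }]
}'
```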
Start/stop instances and auto-scaling don't really fit together.
Auto-scaling is specifically designed to solve scale-in/scale-out.
One way to address this could be a customized termination policy (but I have never tried this in an ECS setup).
One note though: if your customized termination policy never terminates instances and you keep adding instances, you might end up with a sizeable EC2 bill.
I am trying to wrap my head around EC2 instances, and I am having a bit of an issue. I heard from a friend of mine that Amazon will kill EC2 instances, and then they restart the image (thus losing all state). Unless it uses EBS as a backing store, you get no persistence.
But I have been looking into Xen and it seems like instances should easily migrate instead of being killed/restarted.
So, do Amazon EC2 instances randomly stop/start an image with all state being managed by something external like EBS?
Amazon EC2 instances will not be stopped/started/restarted unless you issue a command to do so.
In some situations (eg hardware maintenance), you might receive a request from Amazon asking you to stop & start your instance (which moves it to a different host). Such requests are typically issued with two weeks' notice.
One AWS customer told me that their instance had been running continuously for over three years.
Yes it is quite possible that an EC2 instance dies and is replaced. Depending upon your data, you may need to use EBS, EFS or S3 to prevent data loss in such cases.
I created an EC2 instance on AWS to use as an ECS instance. I followed these steps here to do that.
I also created a new cluster under ECS, but for some reason I cannot see the instance under the cluster.
Any ideas on what might be the issue here?
I found the missing piece. It was stated here, as part of the 10th item on the list, that:
By default, your container instance launches into your default cluster. If you want to launch into your own cluster instead of the default, choose the Advanced Details list and paste the following script into the User data field, replacing your_cluster_name with the name of your cluster.
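That script is the standard user-data one-liner that points the ECS agent at your cluster:

```bash
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
```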
I would strongly suggest letting ECS Cluster create its own EC2 Instance(s), especially if you are new to this. You can define the type of instance that you want when you create the cluster and everything will magically work.
Doing it the other way around (EC2 instance first, then feeding/attaching it to the cluster) might sound quite natural to you, but it means you have to handle a lot of things by yourself. You have to spend time manually on:
Making sure you pick the right AMI (ecs-optimized),
Making sure the VPC-Subnets are right,
Making sure the architecture is right (e.g. t4g.medium instances don't take x86 containers),
Making sure the cluster name is in /etc/ecs/ecs.config and then restarting Docker & the ecs-agent afterwards (see the sketch after this list),
Making sure ecs-agent is properly set up and connecting,
... and blah blah blah.
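For the ecs.config step on an already-running instance, it boils down to something like this (sketch; assumes an ECS-optimized Amazon Linux 2 AMI, where the agent runs as the ecs systemd service):

```bash
# Point the agent at your cluster, then restart Docker and the agent
# so the instance re-registers (cluster name is a placeholder).
echo "ECS_CLUSTER=your_cluster_name" | sudo tee -a /etc/ecs/ecs.config
sudo systemctl restart docker
sudo systemctl restart ecs
```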
Maybe, and hopefully, in the future they will add an "attach this EC2 Instance to that Cluster" button that does all these chores. But until then, save yourself this list of headaches and try to get the instance created by the cluster.
I already had a cluster configured, so the easiest way to do this for me was to go to EC2 Console > Auto scaling groups and select the autoscaling group for the cluster. Make sure to increase the desired capacity and maximum capacity. This should start a new EC2 instance.
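The CLI equivalent, if you prefer (the group name is a placeholder):

```bash
# Bump the desired capacity; the group's max must already allow it.
aws autoscaling set-desired-capacity \
  --auto-scaling-group-name my-cluster-asg \
  --desired-capacity 1
```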