I have a CloudFormation stack that consists of a VPC, two subnets (public and private), several Ubuntu EC2 instances, and all of the routes, EIP addresses, etc. One of the EC2 instances is in the public subnet. It is bootstrapped as a Chef node on startup.
I'd like to figure out a way to delete the Chef node when the CloudFormation stack is deleted. So far I've tried dropping a cleanup script into the instance's /etc/rc0.d.
This script does what it should when run manually; however, when I just delete the stack, it does not seem to run. Actually, it might very well run, but I'm guessing that by the time the EC2 instance shuts down, all of the routing and EIP addresses might already be gone, so the Chef server might not be reachable from the instance.
I've also tried locking down creation/deletion order with 'DependsOn' attributes, but that didn't work out either: I don't think it's possible to have the EIP and routes depend on the instance that is using said EIP and routes.
Is there some way to set up some sort of monitoring that will make sure the Chef cleanup runs before everything else?
Gist with the template and Chef setup/cleanup script
Yes, most likely your IPs are disassociated/removed before the instance shuts down, making any attempt to reach the Chef server from the instance futile. You can always check your CloudFormation event logs, but disassociating the IP address before shutdown is what makes the most sense.
I think some of the workarounds are:
Build an app on top of your CloudFormation workflow so that every time you delete a stack, it also deletes the node(s) you want from your Chef server. This would be a full-blown application with a database to keep track of the servers/stacks running. It would call the Chef server API or simply shell out to a knife command.
Run your cleanup script from another instance running knife/chef-client. You can have some sort of cron/periodic job checking for stacks/servers that have been deleted on AWS and then run the appropriate knife command to remove the server (see the sketch below). This is in essence very similar to 1., with the difference that you don't necessarily have to build a full-blown application.
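For option 2, a minimal sketch of such a periodic job might look like the following. It assumes the Chef node names match the EC2 instance IDs (adjust the mapping to whatever your bootstrap convention uses) and that the machine running it has both the AWS CLI and knife configured:

```bash
#!/usr/bin/env bash
# Periodic cleanup job (run from cron): remove Chef nodes whose backing
# EC2 instance no longer exists. Assumes node names match EC2 instance
# IDs -- adjust the mapping to whatever your bootstrap convention uses.
set -euo pipefail

# Every instance ID that still exists in any state except "terminated"
live_instances=$(aws ec2 describe-instances \
  --query 'Reservations[].Instances[?State.Name!=`terminated`].InstanceId' \
  --output text)

for node in $(knife node list); do
  if ! grep -qw "$node" <<< "$live_instances"; then
    echo "No live instance for node ${node}; removing it from the Chef server"
    knife node delete "$node" -y
    knife client delete "$node" -y
  fi
done
```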
Hope it helps.
I'm attempting to set up a RethinkDB cluster with 3 servers total, spread evenly across 3 private subnets, each in a different AZ in a single region.
Ideally, I'd like to deploy the DB software via ECS and provision the EC2 instances with auto scaling, but I'm having trouble trying to figure out how to instruct the RethinkDB instances to join a RethinkDB cluster.
To create/join a cluster in RethinkDB, when you start up a new instance of RethinkDB, you specify the host:port combination of one of the other machines in the cluster. This is where I'm running into problems. The Auto Scaling service is creating new primary ENIs for my EC2 instances and using a random IP in my subnet's range, so I can't know the IP of the EC2 instance ahead of time. On top of that, I'm using awsvpc task networking, so ECS is creating new secondary ENIs dedicated to each Docker container and attaching them to the instances when it deploys them, and those are also getting new IPs, which I don't know ahead of time.
So far I've worked out one possible solution, which is to not use an Auto Scaling group, but instead to manually deploy 3 EC2 instances across the private subnets, which would let me assign my own predetermined private IPs. As I understand it, this still doesn't help me if I'm using awsvpc task networking, though, because each container running on my instances will get its own dedicated secondary ENI and I won't know the IP of that secondary ENI ahead of time. I think I can switch my task networking to bridge mode to get around this. That way I can use the predetermined IP of the EC2 instances (the primary ENI) in the RethinkDB join command.
So, in conclusion, the only way to achieve this that I can figure out is to not use Auto Scaling or awsvpc task networking, both of which would otherwise be very desirable features. Can anyone think of a better way to do this?
As mentioned in the comments, this is more of an issue around the fact you need to start a single RethinkDB instance one time to bootstrap the cluster and then handle discovery of the existing cluster members when joining new members to the cluster.
I would have expected RethinkDB to publish a good pattern for this in their docs, because it's going to be pretty common when setting up clusters, but I couldn't see anything useful there. If someone does know of an official recommendation, then you should definitely use that rather than what I'm about to propose, especially as I have no experience with running RethinkDB.
This is more just spit-balling and will be completely untested (at least for now), but the principle is: start a single, one-off instance of RethinkDB to bootstrap the cluster, have more cluster members join it, then ditch the special-case bootstrap member that didn't attempt to join a cluster, and leave the remaining cluster members running.
The bootstrap instance is easy enough: you just need a RethinkDB container image and an ECS task that runs it in stand-alone mode, with the ECS service running only one instance of the task. To let the second set of cluster members easily discover cluster members, including this bootstrap instance, it's probably easiest to use a service discovery mechanism such as the one offered by ECS, which uses Route53 records under the covers. The ECS service should register the service in the RethinkDB namespace.
Then you should create another ECS service that's basically the same as the first, but whose entrypoint script lists the services in the RethinkDB namespace, resolves them, discards the container's own IP address, and then uses a discovered host as the --join target when starting RethinkDB in the container.
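A minimal sketch of such an entrypoint, assuming a service discovery DNS name of rethinkdb.internal (that name is an assumption; use whatever your service registers) and an image that includes dig:

```bash
#!/usr/bin/env bash
# Hypothetical container entrypoint for the "joining" RethinkDB service.
# rethinkdb.internal is an assumed service discovery (Route53) name.
set -euo pipefail

SELF_IP=$(hostname -i | awk '{print $1}')

# Resolve the registered members and pick the first one that isn't us
JOIN_HOST=""
for ip in $(dig +short rethinkdb.internal); do
  if [ "$ip" != "$SELF_IP" ]; then
    JOIN_HOST="$ip"
    break
  fi
done

if [ -n "$JOIN_HOST" ]; then
  # 29015 is RethinkDB's default intracluster port
  exec rethinkdb --bind all --join "${JOIN_HOST}:29015"
else
  # Nothing else registered yet: start stand-alone (the bootstrap case)
  exec rethinkdb --bind all
fi
```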
I'd then set the non-bootstrap ECS service to just 1 task at first, to let it discover the bootstrap instance, and then you should be able to keep adding tasks to the service one at a time until you're happy with the size of the non-bootstrapped cluster, leaving you with n + 1 instances in the cluster including the original bootstrap instance.
After that I'd remove the bootstrap ECS service entirely.
If an ECS task in the non-bootstrap ECS service dies for whatever reason, its replacement should be able to rejoin without any issue, as it will just find a running RethinkDB task and join that.
You could probably expand the check for which cluster member to join by verifying that the RethinkDB port is open before using a host as the join target, so the scheme handles multiple tasks being started at the same time. (With my original suggestion, a task could find another task that is itself still trying to join the cluster and try to join that one first, with them all potentially deadlocking if none of them happened to pick an existing cluster member.)
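For example, the discovery loop in the sketch above could be tightened so that only hosts already listening on the cluster port qualify:

```bash
# Variant of the discovery loop in the sketch above: only accept a member
# whose RethinkDB cluster port (29015) is already accepting connections,
# so a peer that is itself still waiting to join is never selected.
JOIN_HOST=""
for ip in $(dig +short rethinkdb.internal); do
  if [ "$ip" != "$SELF_IP" ] && nc -z -w 2 "$ip" 29015; then
    JOIN_HOST="$ip"
    break
  fi
done
```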
As mentioned, this answer comes with a big caveat: I haven't got any experience running RethinkDB, and I've only played with the service discovery mechanism that was recently released for ECS, so I might be missing something here, but the general principles should hold fine.
The other day, following AWS recommendations, I wanted to put my EC2 instance inside an Auto Scaling group. I had created my EC2 instance from the standard Linux AMI and then installed a full LAMP server.
The next morning I tried accessing my apache and guess what? My LAMP wasn't there anymore! Everything was wiped away.
I guess this is because, for some reason, the Auto Scaling group terminated my instance and recreated it vanilla.
Now I still want to autoscale my instance but, of course, I want to keep my LAMP and the stored data.
So here are my questions:
How to create a customized image starting from my actual instance?
Would it be correct to create the MySQL DB using AWS RDS, so as not to keep it tied to my instance? Is it more or less expensive than dedicated EBS storage?
I also want to keep my /var/www/html data somewhere shared between instances: while it is true that, in production, I won't update those files often, it is also true that I don't want to lose them just because Auto Scaling resets my instance state. I also don't want to re-create an image each time I update said files... What's the best way? Maybe an S3 bucket? Or, again, EBS storage shared between instances?
I would assume that the reason your "LAMP [server] wasn't there anymore" was that the web server failed health checks and was terminated and replaced by Auto Scaling.
Elastic Beanstalk would be a good way to manage some of the complexity here. If that's not an option then you should read up on AutoScaling, ALB, and health checks.
In response to your specific questions:
1. You can create an Amazon Machine Image (AMI) from an instance. When you, or Auto Scaling, launch a new instance from that AMI, you can bring the instance up to date by running a script in user data.
2. Move the DB from the web/app server to RDS, or to a DB server that you manage yourself.
3. Maintain the html/js/css etc. in S3 and sync them to your web server periodically; there are other options, but that's simple (see the sketch below).
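For points 1 and 3 together, here is a minimal user data sketch, assuming an Amazon Linux LAMP AMI and an instance profile that can read the bucket (the bucket name and paths are placeholders):

```bash
#!/bin/bash
# Hypothetical user data for instances launched from the custom LAMP AMI.
# s3://my-site-assets is a placeholder bucket; the instance profile must
# grant read access to it.
yum update -y
# Refresh the web root baked into the AMI with the latest copy from S3
aws s3 sync s3://my-site-assets/html/ /var/www/html/ --delete
# Restart Apache so any updated config takes effect
service httpd restart
```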
I do not know much about how AWS works since the person who set the whole thing up does not work with us anymore, and I do not specialize in Amazon at all.
I need to set up auto-scaling on my EC2 instances. I am currently reading all the available tutorials to learn how, but there is one thing I cannot find at all: auto-scaling automatically starts new EC2 instances, but I cannot find anything about how to do anything inside those instances.
Currently, to start our web services, we need to log into the instance, pull the code from git, and launch the whole thing with PM2. I cannot find anything about how to do all of those things automatically when the instance starts.
I think this is supposed to be basic stuff but, as I said, I know next to nothing about how to start, and I do not have much time to learn (my boss just told me it has to be done by the end of the week!).
So if anyone knows where to learn this, that would be really helpful. Thanks!
You need a Launch Configuration to set up an Auto Scaling Group (ASG). The Launch Configuration is where you define all your instance settings, such as type, disk size, security groups, etc. One of these settings is the AMI ID, which refers to the image to be used when launching a new instance in the ASG. So you basically need to launch a machine, install everything needed on it, create an image out of it, create a launch configuration using that image, and use that launch configuration in your ASG. This way you do not need to go to the newly added servers every time. But if you want them to run the latest version of your application, you should have a scheduled job in your image that is triggered on start. This job is responsible for copying the files (e.g. compiled files) from somewhere (a deployment machine, for instance) to the newly added instance and then starting the application.
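As a rough sketch of how those pieces fit together once the image exists (all names and IDs below are placeholders):

```bash
# Create the launch configuration from your custom AMI; on-start.sh is a
# placeholder for the on-boot job described above.
aws autoscaling create-launch-configuration \
  --launch-configuration-name my-app-lc \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --security-groups sg-0123456789abcdef0 \
  --user-data file://on-start.sh

# Create the ASG that launches instances from that configuration
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name my-app-asg \
  --launch-configuration-name my-app-lc \
  --min-size 1 --max-size 3 --desired-capacity 1 \
  --vpc-zone-identifier "subnet-aaaa1111,subnet-bbbb2222"
```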
The method for configuring an Amazon EC2 instance does not actually require Auto Scaling. The two main options for configuring an instance are:
Launching from a pre-configured AMI that already contains the desired software, or
Running a startup script via User Data, which executes once the instance has launched
You can choose one of the above and then test it by launching an instance via the management console or from a script that calls the AWS Command-Line Interface (CLI).
To incorporate it into Auto Scaling, configure the Auto Scaling Launch Configuration with the same parameters and then each new instance launched by Auto Scaling will automatically be configured.
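To illustrate the User Data option against the workflow in the question (pull from git, launch with PM2), here is a hedged sketch; it assumes git, Node.js, and PM2 are already baked into the AMI, and the repository URL, paths, and entry file are placeholders:

```bash
#!/bin/bash
# Hypothetical user data: repeat the manual deploy steps on every boot.
# Assumes git, Node.js and PM2 are pre-installed on the AMI.
cd /home/ubuntu
if [ -d app ]; then
  (cd app && git pull)           # checkout already baked in: update it
else
  git clone https://github.com/your-org/your-app.git app
fi
cd /home/ubuntu/app
npm install --production
# Start the service under PM2 (app.js is a placeholder entry point)
pm2 start app.js --name webservice
```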
I am currently working with EC2 instances and have made a Java CLI tool to quickly launch/kill instances for our developers. One of the options is to set a lease time, so that the instance terminates after a certain amount of hours. For spot instances this works great, terminates the instance, the request, and deletes all volumes associated with that instance.
However, for on demand instances, the lease time only affects instance termination and does not delete the stack that it was on.
Is there some way to automate this through AWS? I don't want to write a script to manually check or even monitor, it would be great if this could be automated through AWS itself. I have looked and have not found anything. Any advice is appreciated!
I'm just getting started with AWS EC2 and not entirely sure I understand it.
From what I've read, an instance is basically a virtual server, and you should be able to somehow "duplicate" that virtual server from the AWS console. Then use a Load Balancer or Elastic IP to route requests to one or the other.
The problem comes in when I try to "duplicate" my instance. I tried a million things, but the only thing that got me close was creating an AMI of my current instance and then launching an instance from that; when I did, the new instance was basically the default server config. None of my files were there.
What am I doing wrong?
You don't really "duplicate" the instance; it's more that you copy it as a "blueprint". Then when you boot an instance later, you can base that instance off of your snapshot or "blueprint".
The ELB can be configured to point at any instance you want, so when you boot a new server off this snapshot/"blueprint" it can be automatically added to the ELB.
Now that that's cleared up, to answer the question:
1. Make sure to use EBS-backed instances (you can find them all over), not S3-backed ones. If they're EBS-backed, the exact volume with all your configs will be there.
2. Make sure your instance is configured how you like it and has the proper scripts installed for when it boots up. You will want your services started, config files pulled down from repositories, etc. The config files should be there, but I would not rely on that; if you keep them in a repository and have a startup script pull them down and copy them where you want, you will be in much better shape.
3. With the instance running and selected, click on the instance actions drop-down and click "Create AMI". The instance will REBOOT, so be careful.
4. Launch a new instance and pick the AMI/snapshot that step 3 created.
Done. Check https://stackoverflow.com/a/8919031/667608, which could help with the above.
Oh, one other thing: if you have any EBS volumes attached, they will also be copied, but you will need to mount them once the server boots.
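If you'd rather script the console steps above, the equivalent AWS CLI calls look roughly like this (the IDs are placeholders); note --no-reboot if you can't tolerate the reboot mentioned in step 3:

```bash
# Step 3 via the CLI: create an AMI from the running instance.
# Drop --no-reboot to let AWS reboot the instance for a consistent image.
aws ec2 create-image \
  --instance-id i-0123456789abcdef0 \
  --name "web-server-blueprint" \
  --no-reboot

# Step 4: launch a new instance from the resulting AMI
aws ec2 run-instances \
  --image-id ami-0123456789abcdef0 \
  --instance-type t3.micro \
  --count 1
```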
Under Instances, click on the instance you want to duplicate, then go to Instance Actions (it's near the top) and create an AMI.
This creates a snapshot of your image as it is right now. Then, when you need to add more power, you can simply launch instances from that AMI and have the load balancer distribute the traffic between them.
On a side note, unless it's really required, I would not suggest you store data on the AMI if it's changing and you plan to use it on another launched instance. You'll pretty much have to keep taking AMI snapshots to keep it up to date with the new data, so instead find a way to maintain state somewhere else (not sure about your data, but you could consider a database, S3, or another server that these servers can mount to get the same data).
Hope that helps!
Create a web server AMI using an EBS-backed instance. This will serve as your template for running multiple web server instances later.
For the app code, depending on your strategy and the number of files to transfer, you can pull it from S3 or git, or maybe use a centralized filesystem such as NFS.
Configure the ELB and add one or more web server instances to it. Then CNAME www.domain.com to your ELB's public DNS name.