I have an OpsWorks stack where an instance is running. I want to run a similar instance inside a different VPC, so I created a new OpsWorks stack that uses that VPC, baked an AMI from the old instance, and spun up an instance in the new stack. The problem is that setup never completes: the instance stays in 'running_setup' status forever. Since I don't want to configure anything on the new instance (the AMI already has everything I want), the run_list (recipes list) is empty.
I ssh'ed into the server and found that an aws-opsworks agent was already running. I manually killed the agent, but no luck.
I'm running the new instance inside an OpsWorks stack because I might need to run some new recipes in the future.
So, I'm looking for a way to spin up an instance in OpsWorks using an AMI on which the OpsWorks agent is already installed.
Any help would be appreciated.
When you create an AMI from an instance running in OpsWorks, there are certain steps you need to follow before you hit the Create AMI button in AWS.
Check this guide and make sure you followed all of the steps it lists before you created that AMI. Since you mention the OpsWorks agent is already running on the new instance, which should not happen, you are most likely missing one or more of those steps:
http://docs.aws.amazon.com/opsworks/latest/userguide/workinginstances-custom-ami.html#workinginstances-custom-ami-create-opsworks
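For reference, the cleanup that guide describes for a Linux instance looks roughly like the following, run on the source instance just before you create the AMI. This is only a sketch; the exact file paths and package names depend on your OS and agent version, so follow the guide for the authoritative list.

    # Stop monit and the OpsWorks agent so the image isn't captured mid-run
    sudo /etc/init.d/monit stop
    sudo /etc/init.d/opsworks-agent stop

    # Remove the agent's state and identity files so a new instance registers
    # with its own stack instead of reusing the old instance's registration
    sudo rm -rf /etc/aws/opsworks/ /opt/aws/opsworks/ /var/log/aws/opsworks/ \
        /var/lib/aws/opsworks/ /etc/monit.d/opsworks-agent.monitrc \
        /etc/monit/conf.d/opsworks-agent.monitrc /var/lib/cloud/

    # Uninstall the agent package (rpm on Amazon Linux/RHEL, dpkg -r on Ubuntu)
    sudo rpm -e opsworks-agent-ruby

A leftover agent registration like this would be consistent with the stuck running_setup you are seeing, since the baked-in agent still holds the old instance's identity rather than the one OpsWorks assigned at launch.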
I have a custom Chef 12.2 script that runs deployments on my OpsWorks stack. Deployments and recipe runs used to work great on the instances (a custom Windows AMI, Windows Server 2012).
Since migrating the instances to a domain, nothing seems to work.
The OpsWorks agent is running on the instances, and I'm not sure what else to look at to diagnose the issue.
Any suggestions on how I can investigate and solve this?
Note: a reboot issued from OpsWorks does reboot the instance.
Whenever OpsWorks runs for a long time, use
opsworks-agent-cli show_log
to get the current activity log of the OpsWorks agent. If it is stuck in some loop forever, you can get its PID and kill it.
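A minimal example, run on the instance itself (the PID lookup and kill are plain shell, not OpsWorks-specific commands):

    # Show what the agent is currently doing
    sudo opsworks-agent-cli show_log

    # If it is stuck in a loop, find the agent process and kill it
    ps aux | grep opsworks-agent
    sudo kill -9 <pid-from-the-output-above>

After killing a stuck run you can trigger setup again from the OpsWorks console and watch whether it completes cleanly this time.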
We have an Auto Scaling group that runs containers (using ECS). When we add or replace EC2 instances in the ASG, they don't have the Docker images we want on them, so we run a few docker pull commands via cloud-init to fetch the images when they boot up.
However, the ASG thinks that the new instance is ready, and terminates an old instance. But in reality, this new instance isn't ready until all docker images have been pulled.
E.g.
Let's say my ASG's desired count is 5, and I have to pull 5 containers using cloud-init. Now, I want to replace all EC2 instances in my ASG.
As new instances boot up, the ASG terminates old instances. But because of the docker pull lag, there will be a window during the deploy when the number of actually usable instances is below the desired 5, perhaps only 2 or 3.
How can I "mark an instance ready" only when cloud-init is finished?
Note: I think CloudFormation could bridge this communication gap using cfn-bootstrap, but I'm not using CloudFormation.
What you're looking for is Auto Scaling lifecycle hooks. You can keep an instance in the Pending:Wait state until your docker pull has completed, and then move the instance to InService. All of this can be done with the AWS CLI, so it should be achievable with an Auto Scaling command before and after your docker commands.
The lifecycle hooks documentation explains this feature in detail and provides good examples of how to use it.
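A rough sketch of that flow with the AWS CLI; the hook name, group name, and timeout below are placeholders:

    # One-time setup: hold newly launched instances in Pending:Wait for up to
    # 15 minutes (ABANDON terminates them if nothing completes the hook in time)
    aws autoscaling put-lifecycle-hook \
        --lifecycle-hook-name wait-for-docker-pull \
        --auto-scaling-group-name my-ecs-asg \
        --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
        --heartbeat-timeout 900 \
        --default-result ABANDON

    # At the end of the instance's cloud-init, after the docker pull commands:
    INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
    aws autoscaling complete-lifecycle-action \
        --lifecycle-hook-name wait-for-docker-pull \
        --auto-scaling-group-name my-ecs-asg \
        --instance-id "$INSTANCE_ID" \
        --lifecycle-action-result CONTINUE

The instance needs an IAM role that allows autoscaling:CompleteLifecycleAction for the second command to succeed.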
I have created an Auto Scaling group in AWS using a customized AMI. To roll out new code I could update all of the running instances, but then any new instance that comes up won't have the update. So I need a way to update the AMI itself. One way could be creating a new AMI and a new Auto Scaling group.
Thanks in advance.
This is one way to go about it (a rough AWS CLI sketch of the later steps follows the list):
Spin up a stand-alone instance using the AMI
Make changes
Stop instance
Create new AMI from this instance
Create a new Launch Configuration that uses the new AMI
Update the Autoscaling Group to use the new Launch Configuration
Slowly terminate the old instances in the Autoscaling Group, and let them be automatically replaced with instances using the new AMI
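Roughly what the AMI, launch configuration, and replacement steps look like with the AWS CLI; every name and ID below is a placeholder:

    # Create an AMI from the stopped instance
    aws ec2 create-image \
        --instance-id i-0123456789abcdef0 \
        --name "my-app-v2"

    # Create a new launch configuration that uses the new AMI
    aws autoscaling create-launch-configuration \
        --launch-configuration-name my-app-lc-v2 \
        --image-id ami-0123456789abcdef0 \
        --instance-type t2.medium \
        --security-groups sg-0123456789abcdef0

    # Point the Auto Scaling group at the new launch configuration
    aws autoscaling update-auto-scaling-group \
        --auto-scaling-group-name my-app-asg \
        --launch-configuration-name my-app-lc-v2

    # Terminate old instances one at a time; the group replaces each one
    # with an instance built from the new AMI
    aws autoscaling terminate-instance-in-auto-scaling-group \
        --instance-id i-0123456789abcdef0 \
        --no-should-decrement-desired-capacity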
Of course, all of this is a pain to deal with manually every time you need to make a change. Elastic Beanstalk and CloudFormation both provide mechanisms to deal with it in a more automated way.
If you are just changing the code you are deploying to your servers, then there are other ways to handle this, such as using AWS CodeDeploy. You could also update the running servers in some automated or manual fashion, and configure the AMI such that any new instances that are created will go get the latest code on startup.
I have configured an AWS ASG using Ansible to provision new instances and then install the CodeDeploy agent via a "user_data" script, in a similar fashion as suggested in this question:
Can I use AWS code Deploy for pulling application code while autoscaling?
CodeDeploy works fine and I can install my application onto the ASG once it has been created. When new instances are launched in the ASG by one of my scaling rules (e.g. high CPU usage), the CodeDeploy agent is installed correctly. The problem is that CodeDeploy does not install the application on these new instances. I suspect it is trying to run before the user_data script has finished. Has anyone else encountered this problem, or does anyone know how to get CodeDeploy to automatically deploy the application to new instances spawned as part of the ASG?
Auto Scaling tells CodeDeploy to start the deployment before the user data has run. To get around this, CodeDeploy gives the instance up to an hour, instead of the usual 5 minutes, to start polling for commands for the first lifecycle event.
Since you are having problems with automatic deployments but not manual ones, and assuming you didn't make any manual changes to your instances that you forgot about, there is most likely a dependency specific to your deployment that is not yet available at the time the instance launches.
Try listing everything your deployment needs to succeed and make sure each of those things is available before you install the host agent. If you can log onto the instance fast enough (before Auto Scaling terminates it), try to grab the host agent logs and your application's logs to find out where the deployment is failing.
If you think the host agent is failing to install entirely, make sure you have Ruby 2.0 installed. It is there by default on Amazon Linux, but Ubuntu and RHEL need it installed as part of the user data before you can install the host agent. There is an installer log in /tmp that you can check for problems with the initial install (again, you have to be quick to grab it before the instance terminates).
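As an example, a user_data fragment for Ubuntu that installs the Ruby dependency before the host agent could look like this. The S3 bucket name assumes us-east-1 and the Ruby package name varies by Ubuntu release, so check the CodeDeploy install documentation for your region and OS:

    #!/bin/bash
    # Install the agent's dependencies before the agent itself
    apt-get update
    apt-get install -y ruby2.0 wget

    # Download and run the CodeDeploy agent installer for this region
    cd /tmp
    wget https://aws-codedeploy-us-east-1.s3.amazonaws.com/latest/install
    chmod +x ./install
    ./install auto

If the deployment itself needs other packages or configuration, install those in user_data before the agent install as well, so they are already in place when the first deployment's lifecycle events run.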
Can AWS Auto Scaling invoke custom code when scaling instances up or down? In other words, given the pre-existence of some arbitrary launch_instance.sh script that launches and configures one's instances, can that script be integrated into the Auto Scaling workflow?
I believe the answer to this question is "no, you need to bake the things that launch_instance.sh does into an AMI and execute that when the instance launches", but I'd appreciate confirmation in case I missed some documentation.
You can actually set that up in user data within the launch configuration. Some AMIs include cloud-init and will execute user data automatically; otherwise you can bake something into the AMI that checks the instance metadata for user data.
More information about Cloudinit: https://help.ubuntu.com/community/CloudInit
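For instance, assuming launch_instance.sh is stored somewhere the instance can reach (an S3 bucket here, read via an instance profile; bucket and path are placeholders), the user data could be as simple as:

    #!/bin/bash
    # Run by cloud-init on first boot: fetch and execute the launch script
    aws s3 cp s3://my-bucket/launch_instance.sh /tmp/launch_instance.sh
    chmod +x /tmp/launch_instance.sh
    /tmp/launch_instance.sh

You would then attach this script to the launch configuration by passing --user-data file://userdata.sh to aws autoscaling create-launch-configuration.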