I plan to use Compute Engine with containers, but every time I update the container image via gcloud compute instances update-container ..., the instance spends time stopping and restarting, which causes downtime in my application's production environment. What pipeline or cloud strategy would you put in place to mitigate this behaviour?
From the Updating a container on a VM instance docs:
When you update a VM running a container, Compute Engine performs two steps:
1. Updates the container declaration on the instance. Compute Engine stores the updated container declaration in instance metadata under the gce-container-declaration metadata key.
2. Stops and restarts the instance to actuate the updated configuration, if the instance is running. If the instance is stopped, Compute Engine updates the container declaration and keeps the instance stopped. The VM instance downloads the new image and launches the container on VM start.
To avoid application downtime, use a Managed Instance Group (MIG).
Managed instance groups maintain high availability of your applications by proactively keeping your instances available, meaning in the RUNNING state. A MIG automatically recreates an instance that is not RUNNING.
So, deploy the container on a managed instance group, and update the MIG to a new version of the container image, as sketched below.
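A minimal sketch of that rolling update with gcloud, assuming a MIG named my-app-mig in zone us-central1-a and hypothetical template and image names; a new instance template carries the new container image, and the MIG replaces instances gradually so some are always serving:

# Create a new instance template that runs the updated container image
gcloud compute instance-templates create-with-container my-app-template-v2 \
    --container-image gcr.io/my-project/my-app:v2

# Roll the MIG over to the new template; with --max-unavailable 0, new
# instances are added before old ones are removed, avoiding downtime
gcloud compute instance-groups managed rolling-action start-update my-app-mig \
    --version template=my-app-template-v2 \
    --zone us-central1-a \
    --max-unavailable 0 --max-surge 3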
Related
I have created a cluster on AWS ECS to run our test environment, and everything seems to work fine, including zero-downtime deploys. But I realised that when I change instance types in CloudFormation for this cluster, it brings all the instances down, and my ELB starts to fail because there are no instances running to serve the requests.
The cluster is running on spot instances, so my question is: is there any way to update the instance types for spot instances without bringing the whole cluster down?
Do you have an Auto Scaling group? That would allow you to change the launch template or launch configuration to use the new instance type. You would then set the ASG's desired and minimum counts to a higher number, let the new instance type spin up and go into service in the target group, then delete the old instances and set your Auto Scaling settings back to normal, roughly as sketched below.
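A hedged sketch of that sequence with the AWS CLI; the group, launch template, and instance names below are hypothetical placeholders:

# Point the ASG at the launch template version with the new instance type
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-ecs-asg \
    --launch-template LaunchTemplateName=my-ecs-lt,Version='$Latest'

# Temporarily raise capacity so new instances come up alongside the old ones
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name my-ecs-asg --desired-capacity 4

# Once the new instances are in service in the target group, remove an old
# one and let the desired capacity shrink back with it
aws autoscaling terminate-instance-in-auto-scaling-group \
    --instance-id i-0123456789abcdef0 --should-decrement-desired-capacity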
Without an ASG, you could launch a new instance manually and place it in the ECS target group. Confirm that it joins the cluster and is running your service and tasks, then delete the old instance.
You might want to break this activity into smaller chunks and do it one by one. You can also write a small CloudFormation template, because by default, updating the instance type restarts your instances; to avoid downtime, you may have to do them one at a time.
However, there are two other ways that I can think of here, but both will cost you money.
ASG: Create a new Auto Scaling group, or use the existing one and change the launch configuration.
Blue/Green deployment: Create the exact same set of resources, but this time with the updated instance type, and use Route 53's weighted routing policy to shift the traffic (sketched below).
It depends entirely on your requirements: if you can spend the money, go with the two approaches above; otherwise stick with small, incremental deployments.
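For the blue/green option, a hedged sketch of the weighted-routing piece with the AWS CLI; the hosted zone ID, record name, and load balancer DNS names are hypothetical placeholders. Shifting the weights gradually moves traffic from the blue stack to the green one:

aws route53 change-resource-record-sets --hosted-zone-id Z0EXAMPLE --change-batch '{
  "Changes": [
    {"Action": "UPSERT", "ResourceRecordSet": {
      "Name": "app.example.com", "Type": "CNAME", "SetIdentifier": "blue",
      "Weight": 90, "TTL": 60,
      "ResourceRecords": [{"Value": "blue-elb.example.com"}]}},
    {"Action": "UPSERT", "ResourceRecordSet": {
      "Name": "app.example.com", "Type": "CNAME", "SetIdentifier": "green",
      "Weight": 10, "TTL": 60,
      "ResourceRecords": [{"Value": "green-elb.example.com"}]}}
  ]
}'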
I'm looking to see if I can create an instance and deploy applications to this instance dynamically via the API. I only want these instances to be created when my application needs them, or when I request them to be created.
I have two applications that I need to deploy to each created instance which require some set up and installation of dependencies prior to their launch. When I am finished with this application, I want to terminate the instance.
Am I able to do this? If so, could someone please point me to the right section of the documentation? I have searched the documentation and found some information about creating images, but I am unsure what exactly I will need to achieve this task.
Yes. Using an Auto Scaling group, you can create a launch configuration that will launch your instances. Using CodeDeploy, you would link your deployment group to the Auto Scaling group.
See Integrating AWS CodeDeploy with Auto Scaling
AWS CodeDeploy supports Auto Scaling, an AWS service that can launch Amazon EC2 instances automatically according to conditions you define. These conditions can include limits exceeded in a specified time interval for CPU utilization, disk reads or writes, or inbound or outbound network traffic. Auto Scaling terminates the instances when they are no longer needed. For more information, see What Is Auto Scaling?
Assuming you set your desired/minimum instance counts to 0, the default state of the ASG will be to have no instances.
When your application needs an instance spun up, it simply changes the desired capacity to 1. When your application is done with the instance, it sets the desired capacity back to 0, thereby terminating that instance.
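A minimal sketch with the AWS CLI (the group name is a hypothetical placeholder; the same call exists in every AWS SDK):

# Spin up one instance when work arrives
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name my-worker-asg --desired-capacity 1

# Scale back to zero when the work is done; the instance is terminated
aws autoscaling set-desired-capacity \
    --auto-scaling-group-name my-worker-asg --desired-capacity 0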
To develop this setup, start by running your instance normally (manually) and get the application deployment working. When that works, then create your auto scaling group. Finally, update your deployment group to refer to the ASG so that your code is deployed when you have scaling events.
We have a terraform deployment that creates an auto-scaling group for EC2 instances that we use as docker hosts in an ECS cluster. On the cluster there are tasks running. Replacing the tasks (e.g. with a newer version) works fine (by creating a new task definition revision and updating the service -- AWS will perform a rolling update). However, how can I easily replace the EC2 host instances with newer ones without any downtime?
I'd like to do this so that a change to the ASG launch configuration takes effect, for example when switching to a different EC2 instance type.
I've tried a few things, here's what I think gets closest to what I want:
1. Drain one instance. The tasks will be distributed to the remaining instances.
2. Once no tasks are running on that instance anymore, terminate it.
3. Wait for the ASG to spin up a new instance.
Repeat steps 1 to 3 until all instances are new (a scripted sketch of steps 1 and 2 follows below).
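A hedged sketch of steps 1 and 2 with the AWS CLI; the cluster name, container instance ARN, and instance ID are hypothetical placeholders:

# Step 1: drain the container instance so ECS moves its tasks elsewhere
aws ecs update-container-instances-state --cluster my-cluster \
    --container-instances <container-instance-arn> --status DRAINING

# Step 2: when the instance reports zero running tasks, terminate it; keeping
# the desired capacity unchanged makes the ASG launch a replacement (step 3)
aws autoscaling terminate-instance-in-auto-scaling-group \
    --instance-id i-0123456789abcdef0 --no-should-decrement-desired-capacity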
This almost works. The problems are:
It's manual and therefore error-prone.
After this process, one of the instances (the last one that was spun up) is running 0 (zero) tasks.
Is there a better, automated way of doing this? Also, is there a way to re-distribute the tasks in an ECS cluster (without creating a new task revision)?
Prior to making changes, make sure the ASG spans multiple Availability Zones, and that the containers do as well. This ensures high availability when instances go down in one zone.
You can configure the Auto Scaling group's update policy with AutoScalingRollingUpdate, where you can set MinInstancesInService and MinSuccessfulInstancesPercent to higher values to maintain a slow and safe rolling upgrade.
You may go through this documentation to find further tweaks. To automate the process, you can use terraform to update the ASG launch configuration; this updates the ASG with a new version of the launch configuration and triggers a rolling upgrade.
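Outside of CloudFormation, a hedged sketch of the same rolling replacement using the Auto Scaling instance refresh API (the group name is a hypothetical placeholder); it replaces instances in batches while keeping a minimum percentage in service:

aws autoscaling start-instance-refresh \
    --auto-scaling-group-name my-ecs-asg \
    --preferences '{"MinHealthyPercentage": 90, "InstanceWarmup": 300}'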
I am looking at using AWS auto-scaling to scale my infrastructure up and down based on various performance metrics (CPU, etc.). I understand how to set this up; however, I don't like that instances are terminated rather than stopped when it is scaled down. This means that when I scale back up, I have to start from scratch with a new instance and re-install my software, etc. I'd rather just start/stop my instances as needed rather than create/terminate. Is there a way to do this?
No, it is not possible to Stop an instance under Auto Scaling. When a Scaling Policy triggers the removal of an instance, Auto Scaling will always Terminate the instance.
However, here are some ideas for coping with termination...
Option 1: Use pre-configured AMIs
You can configure an Amazon EC2 instance with your desired software, data and settings. Then, select the EC2 instance in the Management Console and choose the Create Image action. This will create a new Amazon Machine Image (AMI). You can then configure Auto Scaling to use this AMI when launching a new instance. Each new instance will contain exactly the same disk contents.
It's worth mentioning that EBS-backed instances boot very quickly from an AMI. Instead of copying the whole AMI to the boot disk up front, blocks are copied across on "first access". This means the new instance can start up immediately rather than waiting for the whole disk to be copied.
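A hedged sketch of baking such an AMI and wiring it into Auto Scaling with the AWS CLI; the instance ID, AMI ID, and names are hypothetical placeholders:

# Bake an AMI from an instance you have already configured
aws ec2 create-image --instance-id i-0123456789abcdef0 \
    --name my-app-ami-v1 --description "Pre-configured app image"

# Use the resulting AMI in the launch configuration the ASG launches from
aws autoscaling create-launch-configuration \
    --launch-configuration-name my-app-lc-v1 \
    --image-id ami-0123456789abcdef0 --instance-type t3.micro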
Option 2: Use a startup (User Data) script
Each Amazon EC2 instance has a User Data field, which is accessible from the instance. A script can be passed through the User Data field, which is then executed when the instance starts. The script could be used to install software, download data and configure the instance.
The script could do something very simple, like downloading a configuration script from a source code repository and then executing it. This means that machine configuration can be centrally managed and version-controlled. Want to update your app? Just launch a new instance with the updated script and throw away the old instance (which is much easier than "updating" an app).
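A minimal User Data sketch along those lines; the bucket and script names are hypothetical:

#!/bin/bash
# Runs at first boot: fetch the version-controlled setup script and execute it
aws s3 cp s3://my-config-bucket/bootstrap.sh /tmp/bootstrap.sh
bash /tmp/bootstrap.sh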
Option 3: Add/Remove instances to an Auto Scaling group
Rather than using Scaling Policies to Launch/Terminate instances for an Auto Scaling group, it is possible to attach/detach specific instances. Thus, you could 'simulate' auto scaling:
When you want to scale-down, detach an instance from the Auto Scaling group, then stop it.
When you want to add an instance, start the instance then attach it to the Auto Scaling group.
This would require your own code, but it is very simple (basically two API calls, sketched below). You would be responsible for keeping track of which instance to attach/detach.
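A hedged sketch of those calls; the group name and instance ID are hypothetical placeholders:

# Scale down: detach the instance from the group, then stop (not terminate) it
aws autoscaling detach-instances --auto-scaling-group-name my-asg \
    --instance-ids i-0123456789abcdef0 --should-decrement-desired-capacity
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Scale up: start the instance again, then re-attach it to the group
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws autoscaling attach-instances --auto-scaling-group-name my-asg \
    --instance-ids i-0123456789abcdef0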
You can suspend scaling processes; see the documentation here:
https://docs.aws.amazon.com/autoscaling/ec2/userguide/as-suspend-resume-processes.html#as-suspend-resume
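A hedged one-liner for that (the group name is a hypothetical placeholder); suspending the HealthCheck and ReplaceUnhealthy processes keeps Auto Scaling from replacing an instance you stop:

aws autoscaling suspend-processes --auto-scaling-group-name my-asg \
    --scaling-processes HealthCheck ReplaceUnhealthy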
Enable scale-in protection on that instance and then stop it; Auto Scaling will not terminate an instance that has scale-in protection.
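A hedged sketch of that (the group name and instance ID are hypothetical placeholders):

# Protect the instance from scale-in, then stop it
aws autoscaling set-instance-protection --auto-scaling-group-name my-asg \
    --instance-ids i-0123456789abcdef0 --protected-from-scale-in
aws ec2 stop-instances --instance-ids i-0123456789abcdef0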
Actually, you have three official AWS options to reboot or even stop an instance that belongs to an Auto Scaling group:
Put the instance into the Standby state (sketched after the reference below)
Detach the instance from the group
Suspend the health check process
Ref.: https://aws.amazon.com/premiumsupport/knowledge-center/reboot-autoscaling-group-instance/
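For the Standby route, a hedged sketch with the AWS CLI (the group name and instance ID are hypothetical placeholders); an instance in Standby stays in the group but receives no traffic or health checks, so it can be stopped safely:

# Move the instance to Standby and stop it
aws autoscaling enter-standby --auto-scaling-group-name my-asg \
    --instance-ids i-0123456789abcdef0 --should-decrement-desired-capacity
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Later: start it again and return it to service
aws ec2 start-instances --instance-ids i-0123456789abcdef0
aws autoscaling exit-standby --auto-scaling-group-name my-asg \
    --instance-ids i-0123456789abcdef0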
As of April 2021:
Option 4: Use Warm Pools and an Instance Reuse Policy
By default, Amazon EC2 Auto Scaling terminates your instances when your Auto Scaling group scales in. Then, it launches new instances into the warm pool to replace the instances that were terminated.
If you want to return instances to the warm pool instead, you can specify an instance reuse policy. This lets you reuse instances that are already configured to serve application traffic.
This mostly automates option 3 from John's answer.
Release announcement: https://aws.amazon.com/blogs/compute/scaling-your-applications-faster-with-ec2-auto-scaling-warm-pools/
Documentation: https://docs.aws.amazon.com/autoscaling/ec2/userguide/ec2-auto-scaling-warm-pools.html
This is to expand a little on #mwalling's answer, because that is the right direction, but needs a little extra work to prevent instance termination.
There is now a way to stop or hibernate scaled in instances!
By default, AWS Auto Scaling's scale-in behaviour is to terminate an instance, even if you have a warm pool configured: Auto Scaling will create a fresh instance to put into the warm pool, presumably to make sure you start with a fresh machine every time. However, with an instance reuse policy you can make AWS Auto Scaling either stop or hibernate a running instance and store that instance in the warm pool.
Advantages include:
Local caches stay populated (use hibernate for in-memory caches).
Burstable EC2 instances (the T* instance types) keep their built-up burst credits, instead of being replaced by newly created instances that have limited or no credits.
Practical example:
We use a burstable EC2 instance for CI/CD work that we scale to 0 instances outside working hours. With a reuse policy, our local image repository stays populated with the most important Docker images. We also keep the burst credit built up the previous day, which speeds up the automatic jobs we run first thing every morning.
How to implement:
There's currently no way of doing this entirely via the management console, so you will need to use the AWS CLI or an SDK.
First create a warm pool as described in the AWS Documentation
Then execute this command to add a reuse policy:
aws autoscaling put-warm-pool --auto-scaling-group-name <Name-of-autoscaling-group> --instance-reuse-policy ReuseOnScaleIn=true
Reference docs for the command: AWS CLI Autoscaling put-warm-pool documentation
Flow diagram of possible EC2 instance life cycles (image from the AWS documentation: Lifecycle state transitions for instances in a warm pool).
I have an Amazon EC2 Micro instance running using EBS storage. This more than meets my needs 99.9% of the time, but I need to perform a very intensive database operation as a one-off, which kills the Micro instance.
Is there a simple way to restart the exact same instance with a lot more power for a temporary period, and then revert to the Micro instance when I'm done? I thought this seemed more than possible under Amazon's cloud-based model, but it doesn't appear to be simply a matter of shutting down and restarting with more power, as I first thought it might be.
If you are running the database operation manually, you can create an image of the server, launch a small or high-CPU instance from that image, run the database operation, then create an image again and launch it as a Micro instance. You can also automate this process with scripts that use the AWS APIs.
If you're using an EBS-backed AMI, you don't have to create a new image and relaunch. Just stop the machine and issue a simple EC2 API command to change the instance type:
ec2-modify-instance-attribute --instance-type <instance_type> <instance_id>
Keep in mind that not all instance types work for every AMI. The applicable instance types depend on the machine itself and the kernel. You can find a list of available instance types here: http://aws.amazon.com/ec2/instance-types/
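With the current AWS CLI, a hedged equivalent of the same stop/modify/start cycle (the instance ID and target type are hypothetical placeholders); an EBS-backed instance keeps its boot disk across the resize:

# Stop, change the instance type, then start again
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 \
    --instance-type "{\"Value\": \"m5.large\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0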