The best way to add post-configuration to an ECS instance - amazon-web-services

I wonder what the best way is to add a post-configuration step after instance creation, when the instances are created automatically by an ECS cluster.
It seems there is no way to add user-data to an ECS instance?
Note: the instances are created automatically by the ECS cluster itself.
EDIT:
When using ECS, you configure a cluster. While configuring the cluster you select the instance type and other settings (SSH key, ...), but there is nowhere to provide user-data for the instances that ECS will create. So the question is how to do some post-configuration on instances automatically created by ECS.

When using the management console, it's more of a wizard that creates everything needed for you, including the instances (using the Amazon Linux ECS-optimized AMI), and it doesn't give you a whole lot of control beyond that.
To get more fine-grained control, you would have to use another method of creating your cluster, such as the AWS CLI or CloudFormation. These methods allow you (or rather, require you) to create each piece one at a time.
Example:
$ aws ecs create-cluster --cluster-name MyEcsCluster
The above command creates a cluster, and only a cluster. You would still have to create an ECS task definition and an ECS service (although you could still use the management console for those) and, here's the real answer to your question, the EC2 instances which you want to attach to the cluster (either individually or through an Auto Scaling group). You could create the instances from the Amazon Linux ECS-optimized AMI, but also add user-data at that time to further configure them. In this scenario you would probably use the user-data to create the /etc/ecs/ecs.config file so that each instance attaches to the ECS cluster you've created, e.g. echo "ECS_CLUSTER=MyEcsCluster" > /etc/ecs/ecs.config.
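For illustration, a minimal sketch of that last step using the AWS CLI; the AMI ID, key name, and security group below are placeholders you would replace with your own:
$ aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --count 1 \
    --instance-type t2.micro \
    --key-name my-key \
    --security-group-ids sg-xxxxxxxx \
    --iam-instance-profile Name=ecsInstanceRole \
    --user-data '#!/bin/bash
echo "ECS_CLUSTER=MyEcsCluster" > /etc/ecs/ecs.config'
The CLI base64-encodes the user-data for you; it runs on first boot, before the ECS agent starts, so the agent picks up the cluster name when it registers.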
The short answer is, it's more work to gain that sort of flexibility, but it is doable.
Edit: Thinking about it further, you could likely use the management console wizards to create everything once, then manually terminate the instances it created for the cluster (or, rather, delete the Auto Scaling group that creates them) and add your own. This would save you some work.

Related

AWS ECS SDK: Register new container instance (EC2) for ECS Cluster using SDK

I've run into a problem while using the AWS SDK. Currently I am using the SDK for Go, but solutions from other languages are welcome too!
I have an ECS cluster created via the SDK.
Now I need to add EC2 container instances to this cluster. My problem is that I can't use the Amazon ECS agent to specify the cluster name via its config:
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
or anything like that; I can only use the SDK.
I found a method called RegisterContainerInstance.
But it has a note:
This action is only used by the Amazon ECS agent, and it is not
intended for use outside of the agent.
It doesn't look like a working solution.
I need to understand how (if it's possible) to create a working ECS cluster using the SDK only.
UPDATE:
My main goal is to start a specified number of servers from my Docker image.
While investigating this task, I've found that I need to:
create an ECS cluster
assign the needed number of EC2 instances to it
create a task with my Docker image
run it on the cluster manually or as a service
So I:
Created a new cluster via the CreateCluster method, with the name "test-cluster".
Created a new task via RegisterTaskDefinition.
Created a new EC2 instance with the ecsInstanceRole role and the ECS-optimized AMI that is correct for my region.
And this is where the problems started.
Actual result: all new EC2 instances attached themselves to the "default" cluster (AWS created it and attached the instances to it).
If I were using the ECS agent I could specify the cluster name with the ECS_CLUSTER config environment variable. But I am developing a tool that uses only the SDK (without any ability to use the ECS agent).
RegisterTaskDefinition gives me no way to specify a cluster, so my question is: how can I assign a new EC2 instance to a specific cluster?
When I tried to just start my task via the RunTask method (hoping that AWS would somehow create the instances for me, or something like that), I received an error:
InvalidParameterException: No Container Instances were found in your cluster.
I actually can't sort out which question you are asking. Do you need to add containers to the cluster, or add instances to the cluster? Those are very different.
Add instances to the cluster
This is not done with the ECS API; it is done with the EC2 API, by creating EC2 instances with the correct ecsInstanceRole. See the Launching an Amazon ECS Container Instance documentation for more information.
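A sketch of that call with the AWS CLI (the same RunInstances parameters exist in the Go SDK's EC2 client; the IDs here are placeholders). The key point is that user-data is just an EC2 launch parameter, so your tool never has to talk to the ECS agent: the agent baked into the ECS-optimized AMI reads /etc/ecs/ecs.config on boot and registers with the named cluster on its own.
# IDs are placeholders. When calling the API directly rather than
# through the CLI, the UserData parameter must be base64-encoded.
aws ec2 run-instances \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --iam-instance-profile Name=ecsInstanceRole \
    --user-data '#!/bin/bash
echo ECS_CLUSTER=test-cluster >> /etc/ecs/ecs.config'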
Add containers to the cluster
This is done by defining a task definition, then running those tasks manually or as services. See the Amazon ECS Task Definitions documentation for more information.
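A minimal sketch of that flow with the AWS CLI (the family, service, and image names are placeholders):
# Register a task definition wrapping your Docker image.
aws ecs register-task-definition \
    --family my-task \
    --container-definitions '[{"name":"app","image":"my/image:latest","memory":256,"essential":true}]'
# Run it once on the cluster...
aws ecs run-task --cluster test-cluster --task-definition my-task --count 1
# ...or keep a fixed number of copies running as a service.
aws ecs create-service --cluster test-cluster --service-name my-service \
    --task-definition my-task --desired-count 2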

Create AWS Batch Managed Compute Environment passing UserData to Container Instances

I would like to create a Managed Compute Environment for AWS Batch, but use EC2 User Data to configure the instances as they are brought into the ECS fleet that Batch is scheduling jobs onto.
It shouldn't matter, but the purpose of the User Data script is to pull down large data files onto an InstanceStore that the Docker containers will reference.
This is possible in ECS, but I have found no way to pass User Data to a Managed Batch Compute Environment.
At most, I can specify the AMI. But since we're going with Managed, we must use the Amazon ECS-optimized AMI.
I'd prefer to use EC2 User Data as the solution, as it gives an entry point for any other bootstrapping we wish to perform. But I'm open to other hacks or solutions, so long as they are applicable to a Managed Compute Environment.
You can create an AMI based on the AWS provided AMI, and customize it. It will still be managed since the Batch and/or ECS daemon is running on it.
As a side note, I'm trying to do the same thing but no luck so far. I may end up creating a custom AMI and including the configuration script in the AMI itself in /etc/rc.local. Not ideal, but I don't think Batch can pass a user-data script other than what it needs. I am still looking into this.
You can create a launch template containing your user-data, then assign this launch template to your compute environment. Keep in mind that you might have to clean the cloud-init directory in your AMI, since it probably was already spun up once (at AMI creation).
Launch template user guide
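A rough sketch of that approach with the AWS CLI (all names are placeholders; note that user-data passed to Batch through a launch template must be in MIME multi-part archive format, not a bare shell script):
# Create a launch template whose user-data comes from a local
# MIME multi-part file (base64-encoded, as the EC2 API requires).
aws ec2 create-launch-template \
    --launch-template-name batch-bootstrap \
    --launch-template-data "{\"UserData\":\"$(base64 -w0 userdata.mime)\"}"
# Reference the launch template from the managed compute environment.
aws batch create-compute-environment \
    --compute-environment-name my-managed-env \
    --type MANAGED \
    --service-role AWSBatchServiceRole \
    --compute-resources 'type=EC2,minvCpus=0,maxvCpus=16,instanceTypes=optimal,subnets=subnet-xxxxxxxx,securityGroupIds=sg-xxxxxxxx,instanceRole=ecsInstanceRole,launchTemplate={launchTemplateName=batch-bootstrap}'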

How to add EC2 instance to ECS cluster using AWS Node SDK

I am trying to programmatically create an ECS cluster with an EC2 instance in it. As far as I understand, I should first create an ECS cluster, then an EC2 instance, and then register the instance using this method:
http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/ECS.html#registerContainerInstance-property
Is this how I should do it? Which arguments are mandatory? How do I get instanceIdentityDocument and instanceIdentityDocumentSignature?
thanks
I would use the User Data of the EC2 instance to launch the instance directly into the ECS cluster. This is the User Data you'll want to use:
#!/bin/bash
echo ECS_CLUSTER=your_cluster_name >> /etc/ecs/ecs.config
The details of this are described in the AWS docs. You can also use this user data in an Auto Scaling group launch configuration.
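For example, a minimal launch configuration that bakes in that user data might look like this (the AMI ID and names are placeholders):
# userdata.sh contains the two lines shown above; the CLI
# base64-encodes it for you.
aws autoscaling create-launch-configuration \
    --launch-configuration-name ecs-cluster-lc \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --iam-instance-profile ecsInstanceRole \
    --user-data file://userdata.sh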
Apart from that, it might be worth looking into tools that were made to provision infrastructure, like Terraform (which also supports AWS) or CloudFormation (which is specific to AWS).

Updating user-data for EC2 instances under an autoscaling group

I want to modify/update the user-data for an EC2 instance that is attached to an Auto Scaling group.
I understand that the instance needs to be stopped before the user-data can be updated. The problem I am facing is that when I stop the instance to update the user-data, the Auto Scaling group automatically brings up a new instance.
Is there a way to update user-data without removing the EC2 instance from the Auto Scaling group?
For instances in an autoscaling group, the user data is generally updated by creating a new launch configuration with your new user data.
Your Auto Scaling group should already be associated with a launch configuration. There is an easy option to copy launch configurations from the AWS web console that will replicate all of your existing options. Simply find this launch configuration, copy it, and replace the old user data before you save the new configuration.
Once the new launch configuration is created, apply it to your Auto Scaling group. You can begin using it immediately by increasing the desired size of the group to launch a new instance with the new configuration, then detaching the old instance once you're satisfied that the new instance (and any hosted applications) is operational.
You can likewise use this method to change any property of a launch configuration without causing an interruption to your application.
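A sketch of the same flow with the AWS CLI (all names and IDs are placeholders):
# Create the replacement launch configuration with the new user data...
aws autoscaling create-launch-configuration \
    --launch-configuration-name my-lc-v2 \
    --image-id ami-xxxxxxxx \
    --instance-type t2.micro \
    --user-data file://new-userdata.sh
# ...point the Auto Scaling group at it...
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name my-asg \
    --launch-configuration-name my-lc-v2
# ...then scale up, and detach the old instance once the new one is healthy.
aws autoscaling set-desired-capacity --auto-scaling-group-name my-asg --desired-capacity 2
aws autoscaling detach-instances --auto-scaling-group-name my-asg \
    --instance-ids i-xxxxxxxx --should-decrement-desired-capacity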
Further Resources:
AWS Documentation - Creating a Launch Configuration
The only other way you can achieve this is by temporarily disabling autoscaling, using a programmatic invocation of the AWS SDK.
You can restart the servers while autoscaling is disabled.
(Node API: http://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/AutoScaling.html#suspendProcesses-property)
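The CLI equivalents (the group name is a placeholder):
# Suspend scaling so stopping the instance doesn't trigger a replacement...
aws autoscaling suspend-processes --auto-scaling-group-name my-asg
# ...stop the instance, update its user data, start it again, then:
aws autoscaling resume-processes --auto-scaling-group-name my-asg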

For AWS ASG, how to set up custom readiness check for new instances?

We have an Auto Scaling group that runs containers (using ECS). When we add or replace EC2 instances in the ASG, they don't have the Docker images we want on them, so we run a few docker pull commands using cloud-init to fetch the images when they boot up.
However, the ASG thinks that the new instance is ready and terminates an old instance. But in reality, the new instance isn't ready until all the Docker images have been pulled.
E.g.
Let's say my ASG's desired count is 5, and I have to pull 5 images using cloud-init. Now I want to replace all the EC2 instances in my ASG.
As new instances start to boot up, the ASG will terminate old instances. But due to the docker pull lag, there will be a window during the deploy when the number of actually ready instances drops as low as 3 or 2.
How can I "mark an instance ready" only when cloud-init is finished?
Note: I think CloudFormation can bridge this communication gap using CFN-Bootstrap. But I'm not using CloudFormation.
What you're looking for is Auto Scaling lifecycle hooks. You can keep an instance in the Pending:Wait state until your docker pull has completed, and then move the instance to InService. All of this can be done with the AWS CLI, so it should be achievable with an Auto Scaling command before and after your docker commands.
The link to the documentation I have provided explains this feature in detail and provides great examples on how to use it.
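As a rough sketch with the AWS CLI (names are placeholders): register a launch hook on the group, then have each instance signal readiness at the end of its own cloud-init script.
# Hold new instances in Pending:Wait for up to 15 minutes.
aws autoscaling put-lifecycle-hook \
    --lifecycle-hook-name wait-for-docker-pull \
    --auto-scaling-group-name my-asg \
    --lifecycle-transition autoscaling:EC2_INSTANCE_LAUNCHING \
    --heartbeat-timeout 900 \
    --default-result ABANDON
# At the end of the instance's cloud-init / user-data script (the
# instance role needs autoscaling:CompleteLifecycleAction permission):
docker pull my/image:latest   # ...and the rest of your images
aws autoscaling complete-lifecycle-action \
    --lifecycle-hook-name wait-for-docker-pull \
    --auto-scaling-group-name my-asg \
    --lifecycle-action-result CONTINUE \
    --instance-id "$(curl -s http://169.254.169.254/latest/meta-data/instance-id)"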