From a brief search, there does not seem to be a way to set dynamic hostnames for members of an autoscaling group. The functionality exists in OpenStack Heat using %index%, but I cannot find anything on doing the same with AWS autoscaling groups.
For example, using OpenStack Heat, nodes are automatically given a hostname based on their index within the autoscaling group:
instance_group:
  type: OS::Heat::ResourceGroup
  properties:
    count: { get_param: instance_number }
    resource_def:
      type: OS::Nova::Server
      properties:
        name: instance%index%
This would give me the following if I had 3 instances in the autoscaling group:
instance0
instance1
instance2
Is there a similar method I can use with an AWS autoscaling group's launch configuration and/or cloud-init?
I've found a solution that works pretty well, if you stick to some not-unreasonable conventions.
For every kind of EC2 instance I launch, whether it's one of N servers in an autoscaling group or a stand-alone instance, I create an Instance Profile. In my experience this is a good idea anyway: even if the instance doesn't need to access any AWS services, a role/profile with empty permissions doesn't hurt, and it makes it that much easier to grant access to an S3 bucket or anything else in the future if you need to.
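For reference, creating such an empty role and profile is only a few CLI calls. A minimal sketch (the name "webserver" is a placeholder):

# Role the instance can assume; no permission policies attached yet
aws iam create-role --role-name webserver \
  --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }]
  }'
# The instance profile is what actually gets attached to the EC2 instance
aws iam create-instance-profile --instance-profile-name webserver
aws iam add-role-to-instance-profile \
  --instance-profile-name webserver --role-name webserver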
Then, at server launch, the user_data script (or your configuration management tool, if you're using something like Puppet or Ansible) queries the instance profile name from the metadata service, appends something unique to each server such as the private IP, and sets that as the hostname.
You'll end up with hostnames like webserver-10-0-12-58 which is both human readable and unique to each server.
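A minimal sketch of such a user_data snippet (assumes IMDSv2 and that jq is installed; the naming scheme is just the one described above):

#!/bin/bash
# Get an IMDSv2 session token
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
# The instance profile name is the last path segment of the profile ARN
PROFILE=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/iam/info \
  | jq -r '.InstanceProfileArn' | cut -d/ -f2)
PRIVATE_IP=$(curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/local-ipv4)
# e.g. webserver-10-0-12-58
hostnamectl set-hostname "${PROFILE}-${PRIVATE_IP//./-}"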
(The downside of this vs incrementing integers is that these aren't predictable, and can't be used to set up unique behavior for a single server. For example if you had webserver-{0-8} and needed to run some process on exactly one server, you could use logic like if hostname == webserver-0 then run_thing.)
Related
I'm trying to create an autoscaling group that manages EKS worker node provisioning. According to AWS's docs, under the "Nodes fail to join cluster" section, in order for instances to join a cluster, the new instances must carry the tag kubernetes.io/cluster/my-cluster, where my-cluster is the name of the cluster, and the value of the tag must be "owned". However, when the autoscaling group tries to provision new instances, I see the following error in the activity section:
Launching a new EC2 instance. Status Reason: Could not launch Spot
Instances. InvalidParameterValue -
'kubernetes.io/cluster/my-cluster' is not a valid tag
key. Tag keys must match pattern ([0-9a-zA-Z\-_+=,.#:]{1,255}), and
must not be a reserved name ('.', '..', '_index'). Launching EC2
instance failed.
Does anyone know why this is happening and how I can address this?
I worked with AWS Support and discovered the issue comes from a new feature called instance tags in the EC2 instance metadata service.
This feature provides an alternative to making API calls via the AWS CLI by letting an instance query its own tags through the metadata service. This is useful for reducing the number of API calls if you are running into AWS request-rate limits.
However, this feature conflicts with the autoscaling group here, because the required tag key (kubernetes.io/cluster/my-cluster) contains characters, such as '/', that are not supported in metadata tag keys.
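For example, with instance tags in metadata enabled, a tag can be read like this (a sketch; "Name" is an example tag key, and IMDSv2 is assumed):

TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 300")
# Reads the value of the instance's "Name" tag without any EC2 API call
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/latest/meta-data/tags/instance/Name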
The solution to the problem is to set 'Metadata accessible' to 'Don't include in launch template' or 'Disabled' when creating your launch template.
You can find this option when creating or modifying a launch template under: Advanced details section > Metadata accessible
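The same can be set via the AWS CLI. A sketch (template name, AMI, and instance type are placeholders); disabling only the instance-metadata-tags option, as shown here, should be enough, since the tag-key restriction only applies when instance tags are exposed through metadata:

aws ec2 create-launch-template \
  --launch-template-name eks-workers \
  --launch-template-data '{
    "ImageId": "ami-0123456789abcdef0",
    "InstanceType": "m5.large",
    "MetadataOptions": {
      "InstanceMetadataTags": "disabled"
    }
  }'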
I would like to set up an AWS Launch Template, or just a persistent Spot request, and I need my specific volume attached automatically.
The main idea: the Spot instance will process data and store it in a separate volume. When the Spot instance dies, another one should be requested automatically (built from an image with predefined software), and data processing should continue automatically (again, storing to my second volume).
But I can't set this up in the AWS console, so it looks like it is not possible. Am I wrong? Is it possible some other way?
The same goes for the IP address: I would like to have the same IP address for any "version" of the Spot instance (after recreation, for example).
It is possible to attach and detach an Elastic IP using the AWS CLI to achieve what you want.
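A sketch of what that looks like in a boot script on the replacement Spot instance (the allocation ID is a placeholder; IMDSv1-style metadata calls are shown for brevity):

# Discover this instance's ID from the metadata service
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# Claim the shared Elastic IP, taking it over from the dead instance if needed
aws ec2 associate-address \
  --instance-id "$INSTANCE_ID" \
  --allocation-id eipalloc-0123456789abcdef0 \
  --allow-reassociation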
However, there are other possible workarounds in case you do not want to script the AWS CLI:
Using Route53 you can define an A record with a TTL of 60 seconds, if you can accept a few minutes of transition. That way you get to use a domain to access your underlying instance instead.
Set up an ALB and forward requests to the EC2 fleet.
I need to create a dynamic number of subdomains depending on how many instances I want to create. My goal is to create something like:
customer-code-100.example.com
customer-code-101.example.com
customer-code-102.example.com
customer-code-103.example.com
I've researched, but there doesn't seem to be a solution. I need to be able to run Puppet on multiple hosts, but they each need a different domain.
Ideally, I want to be able to use autoscaling or some sort of dynamic way to accomplish this, but I haven't been able to find any answers.
One option is to create each record in CloudFormation, for example:
MyRecordSet:
  Type: AWS::Route53::RecordSet
  Properties:
    HostedZoneName: example.com.
    Name: !Join [".", [!Ref Alias, "example.com"]]
    Type: A
    TTL: "60"
    ResourceRecords:
      - !GetAtt MyInstance.PublicIp  # assumes an EC2 resource named MyInstance
The easier method is to have the instances register themselves with Amazon Route 53. This can be done with a startup script that uses the AWS CLI to register a DNS name.
Admittedly, it can be hard to decide which number to assign an instance, especially if Auto Scaling is being used. For example:
If Instance 1 and Instance 2 exist, obviously the next is Instance 3
But if Instance 2 is terminated by Auto Scaling and only Instance 1 and Instance 3 exist, should the next instance be Instance 2 or Instance 4?
Or, use part of the Instance ID to generate the name.
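A sketch of such a startup script, using part of the instance ID for the name (the hosted zone ID and domain are placeholders; the instance profile must allow route53:ChangeResourceRecordSets):

INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
PRIVATE_IP=$(curl -s http://169.254.169.254/latest/meta-data/local-ipv4)
# e.g. customer-code-0a1b2c3d.example.com
NAME="customer-code-${INSTANCE_ID#i-}.example.com"
# UPSERT creates the record, or overwrites it if it already exists
aws route53 change-resource-record-sets \
  --hosted-zone-id Z0123456789ABCDEFGHIJ \
  --change-batch "{\"Changes\":[{\"Action\":\"UPSERT\",\"ResourceRecordSet\":{
    \"Name\":\"$NAME\",\"Type\":\"A\",\"TTL\":60,
    \"ResourceRecords\":[{\"Value\":\"$PRIVATE_IP\"}]}}]}"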
Then there is the problem of "unregistering" a subdomain when the instance is terminated.
Actually, there should normally be no need to assign a subdomain to an Auto Scaling instance. This is because traffic is normally routed through a Load Balancer, or the instances are pulling work from a queue. There should be no need to uniquely address a specific instance.
Is it possible to do AutoScaling with static IPs in AWS? The newly created instances should either have a pre-defined IP or pick from a pool of pre-defined IPs.
We are trying to set up ZooKeeper in production with 5 ZooKeeper instances. Each one should have a static IP, which is hard-coded in the Kafka AMI/data bag that we use. It should also support AutoScaling, so that if one of the ZooKeeper nodes goes down, a new one is spawned with the same IP or with one from a pool of IPs. For this we have decided to go with 1 ZooKeeper instance per AutoScaling group, but the problem is with the IP.
If this is the wrong way, please suggest the right way. Thanks in advance!
One method would be to maintain a user data script on each instance, and have each instance assign itself an Elastic IP from a set of EIPs reserved for this purpose. This user data script would be referenced in the ASG's Launch Configuration, and would run on launch.
Say the script is called "/scripts/assignEIP.sh". Using the AWS CLI, you would have it consult the pool to see which EIPs are available and which are already in use, then assign itself one of the available EIPs (a sketch follows below).
For ease of IP management, you could keep the pool of IPs in a simple text properties file on S3, and have the instance download and consult that list when the instance starts.
Keep in mind that each instance will need to be assigned an IAM instance profile that allows it to consult the pool and assign EIPs to itself.
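A sketch of what /scripts/assignEIP.sh could look like, assuming the pool file holds one allocation ID per line (the bucket name and file are placeholders):

#!/bin/bash
set -euo pipefail
INSTANCE_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
# Fetch the pool of candidate EIPs maintained in S3
aws s3 cp s3://my-config-bucket/eip-pool.txt /tmp/eip-pool.txt
while read -r ALLOC_ID; do
  # An EIP is free if DescribeAddresses reports no association for it
  IN_USE=$(aws ec2 describe-addresses --allocation-ids "$ALLOC_ID" \
    --query 'Addresses[0].AssociationId' --output text)
  if [ "$IN_USE" = "None" ]; then
    aws ec2 associate-address --instance-id "$INSTANCE_ID" \
      --allocation-id "$ALLOC_ID"
    break
  fi
done < /tmp/eip-pool.txt

Note that two instances launching at the same time could race for the same EIP, so in practice you may want to retry with the next candidate if the association fails.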
I'm trying to write a script to stop several instances in our test environment on Friday and have them start back up on Monday, to save a little cost.
Is there a way to stop instances by IP address (and not by instance ID), or some other way I don't know about? (The reason being that instance IDs may change if an instance has to be deleted and recreated.)
This is a zero code solution:
Put your instances into autoscale groups and add a shutdown and startup schedule on the autoscale group. This can be done in the AWS console.
This can also be automated using the AWS CLI.
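For example, scheduled actions can scale the group to zero on Friday evening and back up on Monday morning (group name, sizes, and cron times are placeholders; times are UTC):

# Scale to zero at 18:00 every Friday
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name test-env-asg \
  --scheduled-action-name stop-for-weekend \
  --recurrence "0 18 * * 5" \
  --min-size 0 --max-size 0 --desired-capacity 0
# Scale back up at 06:00 every Monday
aws autoscaling put-scheduled-update-group-action \
  --auto-scaling-group-name test-env-asg \
  --scheduled-action-name start-for-week \
  --recurrence "0 6 * * 1" \
  --min-size 2 --max-size 2 --desired-capacity 2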
Use EC2 tags to give your instances key/value tag pairs, then write a script using Boto which searches for instances with the right tags and stops them.
You could also use Boto to list instances matching a specific IP address, and stop them that way.
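The AWS CLI equivalent of that tag-based approach, as a sketch (the tag key and value are placeholders):

# Find running instances tagged Environment=test...
IDS=$(aws ec2 describe-instances \
  --filters "Name=tag:Environment,Values=test" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].InstanceId' --output text)
# ...and stop them, if any were found
[ -n "$IDS" ] && aws ec2 stop-instances --instance-ids $IDS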
But... IP addresses are dynamically assigned (unless you are using Elastic IPs). So why not make a note of the instance IDs when launching the instances, instead of the IP address?