Dynamically distribute EC2s in available Subnets via Terraform [duplicate] - amazon-web-services

The requirement is to create EC2 instances from the dynamically supplied list instance_names and distribute them evenly across the available subnets of the VPC.
I have tried looping and conditional statements with little luck.
Use Case 01 - (In a VPC with two subnets) If we are creating 2 servers, one EC2 should be in subnet 'a' and the other in subnet 'b'.
Use Case 02 - (In a VPC with two subnets) If we are creating 3 servers, two EC2s should be in subnet 'a' and the other EC2 in subnet 'b'.
Control Code
module "servers" {
source = "modules/aws-ec2"
instance_type = "t2.micro"
instance_names = ["server01", "server02", "server03"]
subnet_ids = module.prod.private_subnets
}
Module
resource "aws_instance" "instance" {
count = length(var.instance_names)
subnet_id = var.subnet_ids[count.index]
tags = {
Name = var.instance_names[count.index]
}
}

You can use element to wrap around the subnet_ids list and get the correct id for each aws_instance.
In the docs you can see that element will give you the desired effect because:
If the given index is greater than the length of the list then the index is "wrapped around" by taking the index modulo the length of the list
Use Case 1
-> server01 | subnet 'a' <-> element(subnet_ids, 0)
-> server02 | subnet 'b' <-> element(subnet_ids, 1)
Use Case 2
-> server01 | subnet 'a' <-> element(subnet_ids, 0)
-> server02 | subnet 'b' <-> element(subnet_ids, 1)
# wrap around to the start of the subnet_ids list again
-> server03 | subnet 'a' <-> element(subnet_ids, 2)
-> server04 | subnet 'b' <-> element(subnet_ids, 3)
-> etc.
So the following update to the code should work:
resource "aws_instance" "instance" {
count = length(var.instance_names)
subnet_id = element(var.subnet_ids, count.index)
tags = {
Name = var.instance_names[count.index]
}
}

I found an interesting answer from Tom Lime to a similar question and derived an answer for this scenario from it. In the module, use the following logic for subnet_id:
subnet_id = var.subnet_ids[count.index % length(var.subnet_ids)]

Related

How to configure aws auto scaling group for scaling up/down using terraform

I have an ECS cluster (launch type: EC2) backed by an auto-scaling group. Say the group has a maximum of 10 instances, a desired count of 6 running instances, and each instance hosts 2 deployed services. I want to configure the auto-scaling group to scale up/down dynamically based on the service counts, meaning:
If the deployed services currently fill the 6 desired instances and I increase the replica count of a service, the cluster must scale up by launching more instances for the new replicas, up to the maximum of 10. If I decrease the number of replicas, the cluster must terminate the unused instances.
Why do I need this?
Because I don't want any instance sitting in an Active status that I am not using; I believe I still pay for an unused instance while it is active. If you have a better idea, or my thinking is wrong, please tell me.
Here is my configuration:
resource "aws_autoscaling_policy" "asg_policy" {
name = "asg-policy"
scaling_adjustment = 1
policy_type = "SimpleScaling"
adjustment_type = "ChangeInCapacity"
cooldown = 100
autoscaling_group_name = aws_autoscaling_group.ecs_asg.name
}
resource "aws_autoscaling_group" "ecs_asg" {
name = "ecs-asg"
vpc_zone_identifier = ["${aws_subnet.public_1.id}", "${aws_subnet.public_2.id}", "${aws_subnet.public_3.id}"]
launch_configuration = aws_launch_configuration.ecs_launch_config.name
desired_capacity = 6
min_size = 0
max_size = 10
health_check_grace_period = 100
health_check_type = "ELB"
force_delete = true
target_group_arns = [aws_lb_target_group.asg_tg.arn]
termination_policies = ["OldestInstance"]
}
I tried to configure the asg_policy but it doesn't seem to work as expected. I also tried setting the max/min numbers, but that doesn't work either. Can anyone help? Thanks.
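One common way to get this behaviour (not mentioned in the original post; the cluster name and resource labels below are assumptions) is to let ECS itself resize the ASG through a capacity provider with managed scaling, roughly:
# Sketch only: lets ECS scale the ASG to fit the scheduled tasks/replicas,
# instead of a manually triggered SimpleScaling policy.
resource "aws_ecs_capacity_provider" "asg_cp" {
  name = "ecs-asg-capacity-provider"

  auto_scaling_group_provider {
    auto_scaling_group_arn = aws_autoscaling_group.ecs_asg.arn

    managed_scaling {
      status          = "ENABLED"
      target_capacity = 100   # keep the ASG just big enough for the running tasks
    }
  }
}

resource "aws_ecs_cluster_capacity_providers" "this" {
  cluster_name       = "my-ecs-cluster"   # assumed cluster name
  capacity_providers = [aws_ecs_capacity_provider.asg_cp.name]

  default_capacity_provider_strategy {
    capacity_provider = aws_ecs_capacity_provider.asg_cp.name
    weight            = 1
  }
}
With managed scaling, increasing a service's replica count makes ECS raise the ASG's desired capacity (up to max_size), and reducing replicas lets ECS scale the group back in so idle Active instances are terminated.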

Telegraf: How to extract from field using regex processor?

I would like to extract the values for connections, upstream and downstream from this input using the Telegraf regex processor plugin:
2022/11/16 22:38:48 In the last 1h0m0s, there were 10 connections. Traffic Relayed ↑ 60 MB, ↓ 4 MB.
With the configuration below, the result key "upstream" ends up being a copy of the initial message with only the matched part replaced, rather than just the extracted value.
[[processors.regex]]
tagpass = ["snowflake-proxy"]
[[processors.regex.fields]]
## Field to change
key = "message"
## All the power of the Go regular expressions available here
## For example, named subgroups
pattern = 'Relayed.{3}(?P<UPSTREAM>\d{1,4}\W.B),'
replacement = "${UPSTREAM}"
## If result_key is present, a new field will be created
## instead of changing existing field
result_key = "upstream"
Current output:
2022/11/17 10:38:48 In the last 1h0m0s, there were 1 connections. Traffic 3 MB ↓ 5 MB.
How do I get the decimals?
I'm quite confused about how to use the regex here, because several examples on the web suggest it should work like this. See for example: http://wiki.webperfect.ch/index.php?title=Telegraf:_Processor_Plugins
The replacement config option specifies what each match is replaced with.
I think you want something closer to this:
[[processors.regex.fields]]
key = "message"
pattern = '.*Relayed.{3}(?P<UPSTREAM>\d{1,4}\W.B),.*$'
replacement = "${1}"
result_key = "upstream"
to get:
upstream="60 MB"

AWS Schedule AutoScaling via terraform (timezone issue)

I have common TF code for different regions in my AWS account, and similarly a common autoscaling block in TF for all my regions.
Now I want to do time-based scheduling for my ASG, but the timezones differ between regions. I am not able to do this with TF, because I do not see any option to specify a timezone in the aws_autoscaling_schedule resource.
Following is my code :
resource "aws_autoscaling_schedule" "schedule1" {
scheduled_action_name = "scale_down"
min_size = var.min_size
max_size = var.max_size
recurrence = "0 18 * * 5"
desired_capacity = var.min_size
autoscaling_group_name = aws_autoscaling_group.example_asg.name
}
If you look at the recurrence, I want to scale down at 6 PM in each region's local time, but I cannot achieve this because the default timezone is UTC.
PS: I have a different vars file for every region, and I get max_size and min_size separately for every region.
I tried manipulating the recurrence by passing a variable from every region's vars file, but it doesn't work; I get an error because variables cannot be passed into the recurrence string like this:
resource "aws_autoscaling_schedule" "schedule1" {
scheduled_action_name = "scale_down"
min_size = var.min_size
max_size = var.max_size
recurrence = "0 var.temp_var * * 5"
desired_capacity = var.min_size
autoscaling_group_name = aws_autoscaling_group.example_asg.name
}
Any idea how I can achieve this? Any help is appreciated.
Thanks!
To overcome the error and correctly pass var.temp_var into recurrence, you should do:
recurrence = "0 ${var.temp_var} * * 5"

AWS EC2 spot instance availability

I am using the API call request_spot_instances to create spot instances without specifying an availability zone. Normally a random AZ is picked by the API. The spot request sometimes returns a no-capacity status, even though I can successfully request a spot instance in another AZ through the AWS console. What is the proper way to check the availability of a specific instance type as a spot instance before calling request_spot_instance?
There is no public API to check Spot Instance availability. Having said that, you can still achieve what you want by following the steps below:
Use request_spot_fleet instead, and configure it to launch a single instance.
Be flexible with the instance types you use; pick as many as you can and include them in the request. To help you pick the instances, check the Spot Instance advisor for instance interruption and saving rates.
In the Spot Fleet request, set AllocationStrategy to capacityOptimized. This allows the fleet to allocate capacity from the most-available Spot pools in your instance list and reduces the likelihood of Spot interruptions.
Don't set a max price (SpotPrice); the default Spot instance price will be used. The pricing model for Spot has changed and is no longer based on bidding, so Spot prices are more stable and don't fluctuate. A minimal request along these lines is sketched right after this list.
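A boto3 sketch of such a single-instance, type-flexible Spot Fleet request (the AMI ID, subnet ID, fleet role ARN, and instance type list are placeholders, not from the original answer):
import boto3

ec2 = boto3.client('ec2', region_name='us-east-1')

# Single-instance Spot Fleet request, flexible on instance type,
# letting capacityOptimized pick the pool with the most spare capacity.
response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        'AllocationStrategy': 'capacityOptimized',
        'TargetCapacity': 1,
        'Type': 'request',
        'IamFleetRole': 'arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role',  # placeholder
        'LaunchSpecifications': [
            {
                'ImageId': 'ami-0123456789abcdef0',      # placeholder AMI
                'InstanceType': instance_type,
                'SubnetId': 'subnet-0123456789abcdef0',  # placeholder subnet
            }
            for instance_type in ['t3.micro', 't3a.micro', 't2.micro']
        ],
        # No SpotPrice set: the default maximum (on-demand price) is used.
    }
)
print(response['SpotFleetRequestId'])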
This may be a bit overkill for what you are looking for, but with parts of the code below you can find the spot price history for the last hour (this can be changed). It gives you the instance type, AZ, and additional information. From there you can loop through the instance types by AZ; if a spot instance doesn't come up in, say, 30 seconds, try the next AZ.
And to Ahmed's point in his answer, this information can be used in the spot_fleet_request instead of looping through the AZs. If you pass the wrong AZ or subnet in the spot fleet request, it may pass the dry-run API call but still fail the real call. Just a heads up on that if you are using the dryrun parameter.
Here's the output of the code that follows:
In [740]: df_spot_instance_options
Out[740]:
AvailabilityZone InstanceType SpotPrice MemSize vCPUs CurrentGeneration Processor
0 us-east-1d t3.nano 0.002 512 2 True [x86_64]
1 us-east-1b t3.nano 0.002 512 2 True [x86_64]
2 us-east-1a t3.nano 0.002 512 2 True [x86_64]
3 us-east-1c t3.nano 0.002 512 2 True [x86_64]
4 us-east-1d t3a.nano 0.002 512 2 True [x86_64]
.. ... ... ... ... ... ... ...
995 us-east-1a p2.16xlarge 4.320 749568 64 True [x86_64]
996 us-east-1b p2.16xlarge 4.320 749568 64 True [x86_64]
997 us-east-1c p2.16xlarge 4.320 749568 64 True [x86_64]
998 us-east-1d p2.16xlarge 14.400 749568 64 True [x86_64]
999 us-east-1c p3dn.24xlarge 9.540 786432 96 True [x86_64]
[1000 rows x 7 columns]
And here's the code:
import boto3
import pandas as pd
from datetime import datetime, timedelta

ec2c = boto3.client('ec2')
ec2r = boto3.resource('ec2')
#### The rest of this code maps the instance details to spot price in case you are looking for certain memory or cpu
paginator = ec2c.get_paginator('describe_instance_types')
response_iterator = paginator.paginate( )
df_hold_list = []
for page in response_iterator:
    df_hold_list.append(pd.DataFrame(page['InstanceTypes']))
df_instance_specs = pd.concat(df_hold_list, axis=0).reset_index(drop=True)
df_instance_specs['Spot'] = df_instance_specs['SupportedUsageClasses'].apply(lambda x: 1 if 'spot' in x else 0)
df_instance_spot_specs = df_instance_specs.loc[df_instance_specs['Spot']==1].reset_index(drop=True)
# unpack memory and cpu dictionaries
df_instance_spot_specs['MemSize'] = df_instance_spot_specs['MemoryInfo'].apply(lambda x: x.get('SizeInMiB'))
df_instance_spot_specs['vCPUs'] = df_instance_spot_specs['VCpuInfo'].apply(lambda x: x.get('DefaultVCpus'))
df_instance_spot_specs['Processor'] = df_instance_spot_specs['ProcessorInfo'].apply(lambda x: x.get('SupportedArchitectures'))
# list every spot-capable instance type
instance_list = df_instance_spot_specs['InstanceType'].unique().tolist()
#---------------------------------------------------------------------------------------------------------------------
# You can use this section by itself to get the instance type and availability zone and loop through the instances you want;
# just modify instance_list with the one instance you want information for.
# Look only in us-east-1:
client = boto3.client('ec2', region_name='us-east-1')
prices = client.describe_spot_price_history(
    InstanceTypes=instance_list,
    ProductDescriptions=['Linux/UNIX', 'Linux/UNIX (Amazon VPC)'],
    StartTime=(datetime.now() - timedelta(hours=1)).isoformat(),
    # AvailabilityZone='us-east-1a'
    MaxResults=1000)
df_spot_prices = pd.DataFrame(prices['SpotPriceHistory'])
df_spot_prices['SpotPrice'] = df_spot_prices['SpotPrice'].astype('float')
df_spot_prices.sort_values('SpotPrice', inplace=True)
#---------------------------------------------------------------------------------------------------------------------
# merge memory size and cpu information into this dataframe
df_spot_instance_options = df_spot_prices[['AvailabilityZone', 'InstanceType', 'SpotPrice']].merge(
    df_instance_spot_specs[['InstanceType', 'MemSize', 'vCPUs', 'CurrentGeneration', 'Processor']],
    left_on='InstanceType', right_on='InstanceType')
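From there, a small illustrative filter (not part of the original answer) gives the candidate AZs for one instance type, cheapest first, which you can then try in order:
# Hypothetical usage of the dataframe built above: candidate AZs for t3.nano,
# sorted by current spot price, to try one after another.
candidates = (df_spot_instance_options
              .loc[df_spot_instance_options['InstanceType'] == 't3.nano']
              .sort_values('SpotPrice'))
for az in candidates['AvailabilityZone']:
    print(f"try requesting a t3.nano spot instance in {az}")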

Remove first and last IP from netaddr result

I am writing a script to print all IPs in CIDR notation, but I do not want to print the first and last IPs as they are not usable.
from netaddr import IPNetwork

ipc = raw_input('Enter The IP Range ')
n = 0
for ip in IPNetwork(ipc):
    n = n + 1
    print '%s' % ip
print 'Total No of IPs are ' + str(n)
This means that if I give 12.110.34.224/27 I should get 30 IPs as a result, removing the first and last IPs since a /27 contains 32 IPs.
Slicing off the first and last entries should do it:
for ip in list(IPNetwork(ipc))[1:-1]:
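For completeness, a sketch of the adjusted script under the same assumptions (Python 3 syntax; netaddr's iter_hosts() helper, which skips the network and broadcast addresses for you, is an alternative):
from netaddr import IPNetwork

ipc = input('Enter The IP Range ')          # e.g. 12.110.34.224/27
hosts = list(IPNetwork(ipc))[1:-1]          # drop network and broadcast addresses
for ip in hosts:
    print(ip)
print('Total No of IPs are ' + str(len(hosts)))

# Equivalent, using netaddr's built-in helper:
# for ip in IPNetwork(ipc).iter_hosts():
#     print(ip)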