GCP: create a metric from health check logs - google-cloud-platform

I'm trying to create a metric on health checks for a backend service (load balancer). I need this metric to trigger alerts on failed health checks.
From: https://cloud.google.com/monitoring/api/v3/kinds-and-types
Emulating string-valued custom metrics
String values in custom metrics are not supported, but you can replicate string-valued metric functionality in the following ways:
- Create a GAUGE metric using an INT64 value as an enum that maps to a string value. Externally translate the enum to a string value when you query the metric.
- Create a GAUGE metric with a BOOL value and a label whose value is one of the strings you want to monitor. Use the boolean to indicate if the value is the active value.
For example, suppose you want to create a string-valued metric called "status" with possible options OK, OFFLINE, or PENDING. You could make a GAUGE metric with a label called status_value. Each update would write three time series, one for each status_value (OK, OFFLINE, or PENDING), with a value of 1 for "true" or 0 for "false".
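To make that concrete, here is a rough sketch of that BOOL-plus-label pattern using the google-cloud-monitoring Python client. The metric type custom.googleapis.com/status, the status_value label, and the statuses come from the docs' example above; the project name is a placeholder, not anything from this question's setup:

import time
from google.cloud import monitoring_v3

# Placeholder project; metric type follows the docs' "status" example.
project = "projects/my-project"
client = monitoring_v3.MetricServiceClient()

interval = monitoring_v3.TimeInterval({"end_time": {"seconds": int(time.time())}})

# One BOOL time series per candidate status; only the active one is True.
current_status = "OK"
series_list = []
for status in ("OK", "OFFLINE", "PENDING"):
    series = monitoring_v3.TimeSeries()
    series.metric.type = "custom.googleapis.com/status"
    series.metric.labels["status_value"] = status
    series.resource.type = "global"
    series.resource.labels["project_id"] = "my-project"
    series.points = [monitoring_v3.Point({
        "interval": interval,
        "value": {"bool_value": status == current_status},
    })]
    series_list.append(series)

client.create_time_series(name=project, time_series=series_list)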
Using Terraform, I tried this, but I'm not sure it's really converting the values "UNHEALTHY" and "HEALTHY" to 0/1. I tried switching metric_kind to GAUGE instead of DELTA, but the error from Terraform said I needed to use DELTA, and that DISTRIBUTION is required for the value_type. Has anybody tried what the docs above describe, a GAUGE metric with a BOOL value? Don't we need some kind of map of strings to booleans?
Here is my Terraform:
resource "google_logging_metric" "logging_metric" {
name = var.name
filter = "logName=projects/[project_id]/logs/compute.googleapis.com%2Fhealthchecks"
metric_descriptor {
metric_kind = "DELTA"
value_type = "DISTRIBUTION"
labels {
key = "status"
value_type = "STRING"
description = "status of health check"
}
display_name = var.display_name
}
value_extractor = "EXTRACT(jsonPayload.request)"
label_extractors = {
"status" = "EXTRACT(jsonPayload.healthCheckProbeResult.healthState)"
}
bucket_options {
linear_buckets {
num_finite_buckets = 3
width = 1
offset = 1
}
}
}

Related

Setting a dynamic limit_amount in AWS Budget Billing Module

I am setting up alerting in AWS, using AWS Budgets to trigger an alert if an account's cost exceeds x% or x amount of the cost by day x of the month, to identify when price spikes occur.
resource "aws_budgets_budget" "all-cost-budget" {
name = "all-cost-budget"
budget_type = "COST"
limit_amount = "10"
limit_unit = "USD"
time_unit = "DAILY"
notification {
comparison_operator = "GREATER_THAN"
threshold = "100"
threshold_type = "PERCENTAGE"
notification_type = "ACTUAL"
subscriber_email_addresses = ["email address"]
}
}
We currently do not have a specific limit amount, and would like to set it based on the previous month's spending.
Is there a way to do this dynamically within AWS and Terraform?
You can set up a Lambda function which would automatically execute at the start of every month and update the budget value. The AWS Quick Start for Landing Zone has a CloudFormation template which does something similar to what you have described, setting the budget as the rolling average of the last three months (Template, Documentation). You will need to convert the CloudFormation template to Terraform and tweak the criteria to match your requirements. You might also want to consider using FORECASTED instead of ACTUAL.
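As a hedged sketch of the shape such a Lambda could take in Python with boto3: the budget name matches the Terraform above, the account ID is a placeholder, and the monthly trigger (e.g. an EventBridge schedule rule) is assumed to be set up separately. Here the daily limit is derived from last month's total spend:

import datetime
import boto3

def lambda_handler(event, context):
    account_id = "123456789012"  # placeholder account ID

    ce = boto3.client("ce")
    budgets = boto3.client("budgets")

    # Bounds of the previous calendar month.
    first_of_this_month = datetime.date.today().replace(day=1)
    first_of_last_month = (first_of_this_month - datetime.timedelta(days=1)).replace(day=1)

    # Total unblended cost for the previous month, via Cost Explorer.
    resp = ce.get_cost_and_usage(
        TimePeriod={
            "Start": first_of_last_month.isoformat(),
            "End": first_of_this_month.isoformat(),
        },
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
    )
    monthly_cost = float(resp["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"])

    # Spread last month's spend over its days to get a daily limit.
    days = (first_of_this_month - first_of_last_month).days
    daily_limit = monthly_cost / days

    # Overwrite the existing budget's limit in place.
    budgets.update_budget(
        AccountId=account_id,
        NewBudget={
            "BudgetName": "all-cost-budget",
            "BudgetLimit": {"Amount": f"{daily_limit:.2f}", "Unit": "USD"},
            "TimeUnit": "DAILY",
            "BudgetType": "COST",
        },
    )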

Terraform nested for loop

I am struggling a bit here and am wondering if it would even be possible. I have a variable declared as shown below:
variable "subnets" {
type = list(object({
name = string
cidr_block = string
}))
default = [
{
name = private
cidr_block = 10.0.0.1/24
},
{
name = public
cidr_block = 10.0.0.2/24
}
]
}
and then I use a data source to query zones in the current region
data "aws_availability_zones" "available" {}
Now what I'm trying to do is create the above subnets in each AZ, and I don't seem to be able to combine the zones with the above variable.
What I am trying is:
resource "aws_subnet" "subnet" {
  for_each = { for idx, az in data.aws_availability_zones.available.names : idx => az }

  vpc_id            = var.vpc_id
  availability_zone = data.aws_availability_zones.available.names[each.key]
  cidr_block        = "" # this is where I want to query my var.subnets, but I don't seem to be able to do another for here
}
What I am hoping to end up with is 6 subnets, 3 private and 3 public, with one of each in each of the zones. Would appreciate any help here. Thanks
I think your intent here is to dynamically select two of the available availability zones and declare a subnet in each.
This is possible to do and I will show a configuration example below, but first I want to caution that this is a potentially risky design, because the set of availability zones can vary over time, and so you might find that, without any direct changes to your configuration, a later Terraform plan proposes to recreate one or both of your subnets in different availability zones.
For that reason, I'd typically suggest making the assignment of subnets to availability zones something you intentionally choose and encode statically in your configuration, rather than selecting them dynamically, to ensure that your configuration's effect remains stable over time unless you intentionally change it.
With that caveat out of the way, I do still want to answer the general question here, because this general idea of "zipping together" two collections of different lengths can arise in other situations, and so knowing a pattern for it might still prove useful, including if you ultimately decide to make the list of availability zones a variable rather than a data source lookup.
variable "subnets" {
type = list(object({
name = string
cidr_block = string
}))
}
data "aws_availability_zones" "available" {
}
locals {
# The availability zones are returned as an unordered
# set, so we'll sort them to be explicit that we're
# depending on one particular ordering.
zone_names = sort(data.aws_availabililty_zones.available.names)
subnets = tolist([
for i, sn in var.subnets : {
name = sn.name
cidr_block = sn.cidr_block
zone = element(local.zone_names, i)
}
])
}
The zone = element(local.zone_names, i) expression relies on the element function, which is similar to indexing like local.zone_names[i], but instead of returning an error when i is too large it wraps around and re-selects items from the zone list again; for example, with two zones, element(local.zone_names, 3) returns the second zone.

How can I query on TTL in DynamoDB?

I have set up a TTL attribute in my DynamoDB table. When I push records, I get the current date (using the JS SDK in Node) and add a value to it (like 5000). It is my understanding that when that date is reached AWS will purge the record, but only within 48 hours; during that time the record could be returned as the result of a query.
I want to filter out the expired items so that if they are expired but not deleted they won't be returned as part of the query.
Here is what I am using to try to do that:
var epoch = Math.floor(Date.now() / 1000);
console.log("ttl epoch is ", epoch);

var queryTTLParams = {
  TableName: table,
  KeyConditionExpression: "id = :idval",
  ExpressionAttributeNames: {
    "#theTTL": "TTL"
  },
  FilterExpression: "#theTTL < :ttl",
  ExpressionAttributeValues: {
    ":idval": { S: "1234" },
    ":ttl": { S: epoch.toString() }
  }
};
I do not get any results. I believe the issue has to do with the TTL attribute being a string and me trying to do a < on it. But I didn't get to decide on the data type for the TTL field - AWS did that for me.
How can I remedy this?
According to the Enabling Time to Live AWS documentation, the TTL should be set to a Number attribute:
TTL is a mechanism to set a specific timestamp for expiring items from your table. The timestamp should be expressed as an attribute on the items in the table. The attribute should be a Number data type containing time in epoch format. Once the timestamp expires, the corresponding item is deleted from the table in the background.
You probably just need to create a new Number attribute holding the epoch timestamp and point the table's TTL setting at that attribute.
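As a hedged sketch of the corrected query, here is a Python/boto3 version (table and key names taken from the question; the table name is a placeholder). Note that to keep only unexpired items the comparison also needs to flip to "TTL greater than now", and the value must be passed as a Number:

import time
import boto3

dynamodb = boto3.client("dynamodb")
now = int(time.time())

response = dynamodb.query(
    TableName="my-table",  # placeholder; the question uses a `table` variable
    KeyConditionExpression="id = :idval",
    ExpressionAttributeNames={"#theTTL": "TTL"},
    # Keep only items whose expiry is still in the future.
    FilterExpression="#theTTL > :now",
    ExpressionAttributeValues={
        ":idval": {"S": "1234"},
        ":now": {"N": str(now)},  # compare as a Number, not a String
    },
)
items = response["Items"]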

AWS DynamoDB - How to achieve in 1 call: Add value to set, if set exists - or else instantiate set with value?

I have a users table with an attribute called friends, which will be a set of the IDs of all the user's friends.
Initially I tried instantiating the friends attribute to an empty set when the user is created, but I get an error that you can't have an empty attribute.
So the only solution I could find, if someone has no friends yet, is to read the attribute on the user: if it does not exist, SET the attribute to a new set containing the friend they are adding; if it does exist, just perform an update with an ADD, which adds the new friend to the set.
I don't want to have to make two calls to AWS for this.
Is there a way to create the set if it doesn't exist, and if it does, add to it - all in just 1 call?
For SET data type (from DynamoDB API Reference):
ADD - If the attribute does not already exist, then the attribute and its values are added to the item. If the attribute does exist, then the behavior of ADD depends on the data type of the attribute:
If the existing data type is a set, and if the Value is also a set, then the Value is added to the existing set. (This is a set operation, not mathematical addition.) For example, if the attribute value was the set [1,2], and the ADD action specified [3], then the final attribute value would be [1,2,3]. An error occurs if an Add action is specified for a set attribute and the attribute type specified does not match the existing set type. Both sets must have the same primitive data type. For example, if the existing data type is a set of strings, the Value must also be a set of strings. The same holds true for number sets and binary sets.
Example:
First update:
The country attribute is not present in the table. The updateItem call created the new attribute country with the values (IN, UK) provided.
var params = {
  TableName: "Movies",
  Key: {
    "yearkey": 2014,
    "title": "The Big New Movie 2"
  },
  UpdateExpression: "ADD country :countries",
  ExpressionAttributeValues: {
    ":countries": docClient.createSet(["IN", "UK"])
  },
  ReturnValues: "UPDATED_NEW"
};
Second update:
This time updateItem added the new value "US" and ignored the existing value "IN".
var params = {
  TableName: "Movies",
  Key: {
    "yearkey": 2014,
    "title": "The Big New Movie 2"
  },
  UpdateExpression: "ADD country :countries",
  ExpressionAttributeValues: {
    ":countries": docClient.createSet(["IN", "US"])
  },
  ReturnValues: "UPDATED_NEW"
};
Here is an example using the AWS command line:
aws dynamodb update-item --table-name users \
--key '{"userId": {"S": "my-user-id"}}' \
--update-expression "ADD friends :friends" \
--expression-attribute-values '{":friends": {"SS": ["friend1-id", "friend2-id"]}}'
SS indicates a set of strings (no duplicates allowed).
There is no need to use SET in the update expression for the non-existing case; ADD will handle it.
See aws docs here.

AWS boto tags returning 0 instances when filtering by tag verified to exist

I'm using boto to return instances with a cluster_id tag, which is a string UUID that uniquely identifies a cluster.
I'm trying to use boto to return the instances with that tag to ensure the cluster has been provisioned and is ready. Thus, when the number of individual instances with the cluster_id tag matches the expected number, the cluster is ready and my program can begin the next step of automation.
These instances are in an autoscaling group, but I'm not sure why boto returns 0. I have verified the cluster_id is the same in the program and in AWS for each instance. Reservations just returns 0.
Python code:
ec2_conn = boto.connect_ec2(aws_access_key_id=aws_access_key_id,
                            aws_secret_access_key=aws_secret_access_key)
reservations = ec2_conn.get_all_instances(filters={"tag:cluster_id": str(cluster_id_tag)})
instances = [i for r in reservations for i in r.instances]
number_of_instances = len(instances)
cluster_id var in the program = 50a5fab0-e166-11e5-9ee9-a45e60e4b9b1
ASG tags:
ElasticClientNode = no
Name = elasticsearch-loading-master-nodes-cluster
a_or_b = a
cluster_id = 50a5fab0-e166-11e5-9ee9-a45e60e4b9b1
version = 1.0
Instance tags:
ElasticClientNode = no
Name = elasticsearch-loading-master-nodes-cluster
a_or_b = a
aws:autoscaling:groupName = elasticsearch
cluster_id = 50a5fab0-e166-11e5-9ee9-a45e60e4b9b1
version = 1.0
The answer was to use connect_to_region instead of connect_ec2:
import boto.ec2

ec2_conn = boto.ec2.connect_to_region("us-west-2",
                                      aws_access_key_id=aws_access_key_id,
                                      aws_secret_access_key=aws_secret_access_key)
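For reference, a rough sketch of the same region-scoped lookup with the newer boto3 library (boto3 swapped in here for legacy boto; same region and tag filter as above):

import boto3

# The region must be supplied explicitly (or via configuration), which was
# the root cause of the empty result above.
ec2 = boto3.client("ec2", region_name="us-west-2")

response = ec2.describe_instances(
    Filters=[{"Name": "tag:cluster_id", "Values": [str(cluster_id_tag)]}]
)
instances = [i for r in response["Reservations"] for i in r["Instances"]]
number_of_instances = len(instances)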