Get the total count of a list in Terraform

I am wondering if it's possible to get the total count of a list in Terraform. I've looked at the Terraform website and don't see anything for the total count, just the use of count.index.
An example list:
variable "testList" {
  type = "list"
  default = [
    {
      type = "test1"
    },
    {
      type = "test2"
    }
  ]
}
So here I want to get 2 as the total count of testList.
Thanks

You can use the Terraform built-in function length() to get the count:
count = "${length(var.testList)}"
For details, please go through the Terraform documentation on the length function.
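On Terraform 0.12 and later, length can also be written as a first-class expression, without the interpolation quotes. A minimal sketch (the aws_instance resource here is just a hypothetical placeholder):

```hcl
# length() works on lists, sets, and maps alike
output "list_size" {
  value = length(var.testList) # 2 for the example list above
}

# A common use: creating one resource per list element
resource "aws_instance" "example" {
  count = length(var.testList)
  # ...
}
```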

How can I extract a log that contains a variable with REGEXP_EXTRACT in google_logging_metric?

I am really new to GCP and to creating metrics. We use Grafana to display the count of event logs with the help of a "google_logging_metric" resource.
My use case: let's say we have a log
The Number is {variable}
The possible values for the variable are 5-digit numbers, and there will be multiple occurrences of the log, one per variable value.
I am creating the metric through Terraform as follows:
resource "google_logging_metric" "" {
  name    = ""
  project = var
  filter  = "resource.type=\"k8s_container\" resource.labels.container_name=\"\" jsonPayload.message=~\"(The Number is {something should be added here?})\""

  metric_descriptor {
    metric_kind  = "DELTA"
    value_type   = "INT64"
    display_name = ""

    labels {
      key         = "event"
      value_type  = "STRING"
      description = ""
    }
  }

  label_extractors = {
    event = "REGEXP_EXTRACT(jsonPayload.message, \"(The Number is {something should be added here?})\")"
  }
}
What I'd like to do is group the log occurrences like "The Number is XXXXX", "The Number is YYYYY", "The Number is ZZZZZ" in Grafana. Can anyone suggest how I can achieve this? Do I have to modify the metric, or something on the Grafana dashboard?
Cloud Logging supports regular expressions, so if the log entries you need to filter vary between the numbers 1 and 3, you can try something similar to the below:
jsonPayload.message =~ "The Number is\s*[1-3]"
Here is the official documentation about Cloud Logging regex: https://cloud.google.com/blog/products/management-tools/cloud-logging-gets-regular-expression-support
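If the goal is to capture the 5-digit number itself as a label, one option (a sketch, not tested against a real project; note that REGEXP_EXTRACT requires the regex to contain exactly one capture group) is to match the digits explicitly:

```hcl
filter = "resource.type=\"k8s_container\" jsonPayload.message=~\"The Number is \\d{5}\""

label_extractors = {
  # The single capture group (\d{5}) becomes the value of the "event" label
  event = "REGEXP_EXTRACT(jsonPayload.message, \"The Number is (\\d{5})\")"
}
```

Each distinct extracted value then shows up as its own label value on the metric, which Grafana can group by.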

REST API pagination loop... Power Query M language

I am wondering if anyone can help me with API pagination. I am trying to get all records from an external API, but it restricts me to a maximum of 10 records per request. There are around 40k records.
The API also does not show the number of pages (response below), hence I can't get my head around a solution.
There is no "skip", "count", or "top" parameter supported either. I am stuck, and I don't know how to create a loop in M language that runs until all records are fetched. Can someone help me write the code, or show what it could look like?
Below is my code:
let
    Source = Json.Document(
        Web.Contents(
            "https://api.somedummy.com/api/v2/Account",
            [
                RelativePath = "Search",
                Headers = [
                    ApiKey = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXx",
                    Authorization = "XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
                    #"Content-Type" = "application/json"
                ],
                Content = Json.FromValue(
                    [key = "status", operator = "EqualTo", value = "Active", resultType = "Full"]
                )
            ]
        )
    )
in
    Source
and below is the output:
"data": {
    "totalCount": 6705,
    "page": 1,
    "pageSize": 10,
    "list": [
This might help you along your way. While I was looking into something similar for working with Jira, I found some helpful info from two individuals on the Atlassian Community site. Below is what I think is the relevant snippet from a query I developed with the assistance of their posts. (To be clear, this snippet is their code, which I used in my query.) While I'm providing more of the query below (that segment is also comprised of their code), I think the key part that relates to your particular issue is this:
yourJiraInstance = "https://site.atlassian.net/rest/api/2/search",
Source = Json.Document(Web.Contents(yourJiraInstance, [Query=[maxResults="100",startAt="0"]])),
totalIssuesCount = Source[total],
// Now it is time to build a list of startAt values, starting on 0, incrementing 100 per item
startAtList = List.Generate(()=>0, each _ < totalIssuesCount, each _ +100),
urlList = List.Transform(startAtList, each Json.Document(Web.Contents(yourJiraInstance, [Query=[maxResults="100",startAt=Text.From(_)]]))),
// ===== Consolidate records into a single list ======
// so we have all the records in data, but it is in a bunch of lists each 100 records
// long. The issues will be more useful to us if they're consolidated into one long list
I'm thinking you could try substituting pageSize for maxResults and totalCount for totalIssuesCount. I don't know about startAt; there must be something similar available to you, and it could actually be startAt. I believe your pageSize would be 10, so you would increment your startAt by 10 instead of 100.
This is from Nick's and Tiago's posts on this thread. I think the only real difference is that I buffered a table. (It's been a while, and I did not dig into their thread to compare it for this answer.)
let
    // I must credit the first part of this code -- the part between the ********** lines -- as being from Nick Cerneaz (and Tiago Machado) from their posts on this thread:
    // https://community.atlassian.com/t5/Marketplace-Apps-Integrations/All-data-not-displayed-in-Power-BI-from-Jira/qaq-p/723117.
    // **********
    yourJiraInstance = "https://site.atlassian.net/rest/api/2/search",
    Source = Json.Document(Web.Contents(yourJiraInstance, [Query=[maxResults="100", startAt="0"]])),
    totalIssuesCount = Source[total],
    // Now it is time to build a list of startAt values, starting at 0, incrementing 100 per item
    startAtList = List.Generate(() => 0, each _ < totalIssuesCount, each _ + 100),
    urlList = List.Transform(startAtList, each Json.Document(Web.Contents(yourJiraInstance, [Query=[maxResults="100", startAt=Text.From(_)]]))),
    // ===== Consolidate records into a single list ======
    // So we have all the records in data, but they're in a bunch of lists, each 100 records
    // long. The issues will be more useful to us if they're consolidated into one long list.
    //
    // In essence we need to extract the separate lists of issues in each data{i}[issues] for 0 <= i < #"total"
    // and concatenate those into a single list of issues, which we can then analyse.
    //
    // To figure this out I found this post particularly helpful (thanks Vitaly!):
    // https://potyarkin.ml/posts/2017/loops-in-power-query-m-language/
    //
    // So first create a single list that has as its members each sub-list of the issues,
    // 100 in each except for the last one, which will have just the residual list.
    // So iLL is a List of Lists (of issues):
    iLL = List.Generate(
        () => [i = -1, iL = {}],
        each [i] < List.Count(urlList),
        each [
            i = [i] + 1,
            iL = urlList{i}[issues]
        ],
        each [iL]
    ),
    // And finally, collapse that list of lists into just a single list (of issues)
    issues = List.Combine(iLL),
    // Convert the list of issues records into a table
    #"Converted to table" = Table.Buffer(Table.FromList(issues, Splitter.SplitByNothing(), null, null, ExtraValues.Error)),
// **********
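For the original question, the same approach should carry over: read totalCount and pageSize from the first response, build the list of offsets, fetch each page, and combine. A minimal Python sketch of that loop (fetch_page is a hypothetical stub standing in for the real Web.Contents call, with a fake total of 25 records):

```python
def fetch_page(start_at, page_size=10, total=25):
    # Stub: pretend the API returns records numbered start_at .. start_at+page_size-1
    records = list(range(start_at, min(start_at + page_size, total)))
    return {"totalCount": total, "page": start_at // page_size + 1,
            "pageSize": page_size, "list": records}

def fetch_all(page_size=10):
    first = fetch_page(0, page_size)
    total = first["totalCount"]                          # like totalIssuesCount
    offsets = range(0, total, page_size)                 # like startAtList
    pages = [fetch_page(o, page_size) for o in offsets]  # like urlList
    # Flatten the per-page lists into one list, like List.Combine
    return [record for page in pages for record in page["list"]]

print(len(fetch_all()))  # prints 25 with the stub's fake total
```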

Get an element from a list in Terraform

I have a small question about Terraform, but I can't find a solution online. Imagine the following scenario:
data "aws_vpc" "vpc_name" {
  id = var.vpc_id
}
data "aws_subnet_ids" "vpc_subnets" {
  vpc_id = data.aws_vpc.vpc_name.id
}
And imagine I create an EC2 instance like this:
resource "aws_instance" "ec2_test" {
  ...
  ami           = ...
  instance_type = ...
  subnet_id     = element(tolist(data.aws_subnet_ids.vpc_subnets.ids), 1)
  ...
For the subnet, I have selected an element using the element function, after converting the set of subnet IDs to a list.
Now, I have two questions:
Is it possible to select the first subnet without doing the conversion to a list (tolist)? I wonder why I can't do something like data.aws_subnet_ids.vpc_subnets.ids[1] or similar.
How can I retrieve a random id from the list of subnets?
Regards,
You need the tolist conversion because aws_subnet_ids.ids is a set, and sets cannot be indexed directly. You could clean it up a little and say tolist(data.aws_subnet_ids.vpc_subnets.ids)[0].
To get a random one, use the random provider.
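One way to do that, sketched with the hashicorp/random provider's random_shuffle resource (note that the shuffled result is stored in state, so it stays stable across applies unless you force a new shuffle with keepers):

```hcl
resource "random_shuffle" "subnet" {
  input        = tolist(data.aws_subnet_ids.vpc_subnets.ids)
  result_count = 1
}

resource "aws_instance" "ec2_test" {
  # ...
  subnet_id = random_shuffle.subnet.result[0]
}
```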
Okay, but what if you have the following going on:
output "ids" {
  value = values(tomap({
    for k, public in aws_subnet.public : k => public.id
  }))
}
I am trying to get any value from this list, which I generated after maybe 5 hours of my life. :) I fully get that this might not be seen at all, but imagine you have 3 subnets for public, 3 for private, and 3 for something else, and you are trying to get the 3 public ones. The point is the randomness: each time, a random value from these 3 should get picked. Why, you might ask? Trying to outsmart yourself.

In terraform, how do I output an aws_route53_record alias value?

Terraform v0.12.x
I'm creating an AWS Route53 record like this, and it's created with no issues:
data "aws_route53_zone" "zone" {
  name         = "my.domain.com."
  private_zone = true
}
resource "aws_route53_record" "record" {
  zone_id = data.aws_route53_zone.zone.zone_id
  name    = "${var.record_name}.${data.aws_route53_zone.zone.name}"
  type    = "A"
  alias {
    name                   = var.alias_record
    zone_id                = var.alias_zone_id
    evaluate_target_health = false
  }
}
Now I want to output the value of the alias's name, and I tried
output "alias_name" {
  value = aws_route53_record.record.alias.name
}
or
output "alias_name" {
  value = aws_route53_record.record.alias["name"]
}
but get the error
Block type "alias" is represented by a set of objects, and set elements do not
have addressable keys. To find elements matching specific criteria, use a
"for" expression with an "if" clause.
What's the correct syntax?
The alias is a set of objects, and sets are:
a collection of unique values that do not have any secondary identifiers or ordering.
So you can't index them. Thus, in your case, to output the alias values you can use:
output "alias_name" {
  value = aws_route53_record.record.alias.*.name
}
# or
output "alias_name2" {
  value = aws_route53_record.record.alias[*].name
}
The error message Terraform returned here is a generic one it returns for all block types that are represented as a set of objects, and so it's giving you some general advice for selecting a set element based on its value or nested values. For example:
output "example" {
  # Select the first element from the
  # set whose name is "foo".
  value = [
    for a in aws_route53_record.record.alias : a
    if a.name == "foo"
  ][0]
}
That advice is not actually suitable for the specific problem you have here, because you are not "finding elements matching specific criteria", but rather just trying to select the one element you know you wrote, because you've statically defined only a single alias block anyway so you know that it will always be there and always be the right one.
To do that, one option is to explicitly convert the set to a list using the tolist function. The documentation warns that the resulting elements will not be in any particular order, but when there's only one element anyway there can therefore only be one ordering of that element, and so this is safe:
output "example" {
  # Convert the set to a list and take
  # the name of its single element.
  value = tolist(aws_route53_record.record.alias)[0].name
}
Providers typically use a set representation for a particular block type if the underlying API considers the objects to be unordered and to have no specific unique identifier. The result is unfortunately then less convenient to use, but using this accurate modelling of the underlying behavior avoids problems where a configuration might work at first but then misbehave later if the remote API starts returning items in a different (arbitrary) order.
The choice of sets for this alias block is a little odd since it's defined to only allow zero or one blocks anyway; I suspect this was a historical design quirk that the provider is now preserving for backward compatibility.
Combining both Marcin's and Martin's answers (thanks to both), here's what I ended up using:
output "alias_name" {
  value = tolist(aws_route53_record.record.alias.*.name)[0]
}
Marcin's answer gave
alias_name = [
"my-record.domain.com",
]
While Martin's answer gave
alias_name = {
"evaluate_target_health" = false
"name" = "my-record.domain.com"
"zone_id" = "Z1234ABCDE"
}
But combining the two answers like I have above gives
alias_name = my-record.domain.com

MongoDB MapReduce update in place how to

Basically, I'm trying to order objects by their score over the last hour.
I'm trying to generate an hourly votes sum for objects in my database. Votes are embedded into each object. The object schema looks like this:
{
  _id: ObjectId
  score: int
  hourly-score: int                                  <- need to update this value so I can order by it
  recently-voted: boolean
  votes: {
    "4e4634821dff6f103c040000": {                    <- key is __toString of voter ObjectId
      "_id": ObjectId("4e4634821dff6f103c040000"),   <- voter ObjectId
      "a": 1,                                        <- vote amount
      "ca": ISODate("2011-08-16T00:01:34.975Z"),     <- created-at MongoDate
      "ts": 1313452894                               <- created-at timestamp
    },
    ... repeat ...
  }
}
This question is actually related to a question I asked a couple of days ago: Best way to model a voting system in MongoDB.
How would I (can I?) run a MapReduce command to do the following:
Only run on objects with recently-voted = true OR hourly-score > 0.
Calculate the sum of the votes created in the last hour.
Update hourly-score = the sum calculated above, and recently-voted = false.
I also read here that I can perform a MapReduce on the slave DB by running db.getMongo().setSlaveOk() before the M/R command. Could I run the reduce on a slave and update the master DB?
Are in-place updates even possible with Mongo MapReduce?
You can definitely do this. I'll address your questions one at a time:
1.
You can specify a query along with your map-reduce, which filters the set of objects which will be passed into the map phase. In the mongo shell, this would look like (assuming m and r are the names of your mapper and reducer functions, respectively):
> db.coll.mapReduce(m, r, {query: {$or: [{"recently-voted": true}, {"hourly-score": {$gt: 0}}]}})
2.
Step #1 will let you use your mapper on all documents with at least one vote in the last hour (or with recently-voted set to true), but not all the votes will have been in the last hour. So you'll need to filter the list in your mapper, and only emit those votes you wish to count:
function m() {
  // "ts" in the schema is a Unix timestamp in seconds, so compute the
  // cutoff in seconds as well
  var hour_ago = Math.floor(new Date().getTime() / 1000) - 3600;
  // "votes" is an object keyed by voter id, not an array, so iterate its keys
  for (var voter in this.votes) {
    var vote = this.votes[voter];
    if (vote.ts > hour_ago) {
      emit(/* your key */, vote.a);
    }
  }
}
And to reduce:
function r(key, values) {
  var sum = 0;
  values.forEach(function (value) { sum += value; });
  return sum;
}
3.
To update the hourly scores, you can use the reduce output mode of map-reduce (out: {reduce: ...}), which will call your reducer with both the newly emitted values and the previously saved value in the output collection (if any). The result of that pass will be saved into the output collection. This looks like:
> db.coll.mapReduce(m, r, {query: ..., out: {reduce: "output_coll"}})
In addition to re-reducing output, you can use merge, which will overwrite documents in the output collection with newly created ones (but leave behind any documents whose _id differs from the _ids created by your map-reduce job); replace, which is effectively a drop-and-create operation and is the default; or {inline: 1}, which will return the results directly to the shell or to your driver. Note that when using {inline: 1}, your results must fit in the size allowed for a single document (16MB in recent MongoDB releases).
(4.)
You can run map-reduce jobs on secondaries ("slaves"), but since secondaries cannot accept writes (that's what makes them secondaries), you can only do this when using inline output.