Could anyone please help with my issue? I've created a couple of Deneb visuals which seem to work fine in both Power BI Desktop and the service; however, the one I'm sharing shows as blank in the service.
Do you know by chance what the problem might be?
Here is the JSON I'm using:
{
"data": {"name": "dataset"},
"transform": [
{
"joinaggregate": [
{
"op": "sum",
"field": "NrOfSfhifts",
"as": "TotalOrigin"
}
]
},
{
"joinaggregate": [
{
"op": "sum",
"field": "NrOfSfhifts",
"as": "TotalOriginGrouped"
}
],
"groupby": ["NrOfSfhifts"]
},
{
"calculate": "round(datum.TotalOriginGrouped/datum.TotalOrigin * 100)",
"as": "PercentOfTotal"
},
{
"aggregate": [
{
"op": "average",
"field": "PercentOfTotal",
"as": "Percento"
}
],
"groupby": ["Dispatcher"]
},
{
"calculate": "sequence(1,datum.Percento+1)",
"as": "S"
},
{"flatten": ["S"]},
{
"window": [
{"op": "row_number", "as": "id"}
],
"sort": [
{
"op": "sum",
"field": "TotalOriginGrouped",
"order": "ascending"
}
]
},
{
"calculate": "ceil (datum.id / 10)",
"as": "row"
},
{
"calculate": "datum.id - datum.row * 10",
"as": "col"
}
],
"mark": {
"type": "circle",
"filled": true,
"tooltip": true,
"stroke": "black",
"strokeWidth": 2
},
"encoding": {
"x": {
"field": "col",
"type": "ordinal",
"axis": null,
"sort": "x"
},
"y": {
"field": "row",
"type": "ordinal",
"axis": null,
"sort": "y"
},
"color": {
"field": "Dispatcher",
"type": "nominal",
"sort": [
{
"op": "sum",
"field": "TotalOriginGrouped",
"order": "descending"
}
],
"scale": {
"range": [
"#FFD300",
"#ed3419",
"lightgray",
"white",
"black",
"olive",
"lightblue"
]
},
"legend": {
"orient": "right",
"offset": 10,
"labelOffset": 3,
"titlePadding": 5,
"titleFontSize": 10
}
},
"size": {"value": 330},
"tooltip": [
{
"field": "Dispatcher",
"type": "nominal"
},
{
"field": "Percento",
"type": "quantitative",
"format": "0",
"formatType": "pbiFormat"
}
]
}
}
Thank you!
That is my code :).
If this works in Desktop but not in the service, then your admin has probably disabled non-native visuals. You should ask them to enable at least certified visuals, as there is no danger from those.
I am trying to create a Gantt chart, and I want to color a single task with two colors based on its percentage complete: say, make the complete part green and the remaining part orange.
How can I achieve this?
Below is sample code, also available in the editor here.
{
"data": {
"values": [
{"Description": "Task 1", "Start": "2023-01-05", "End": "2023-01-10", "Percentage complete": 0},
{"Description": "Task 2", "Start": "2023-01-01", "End": "2023-01-15", "Percentage complete": 75},
{"Description": "Task 3", "Start": "2023-01-01", "End": "2023-01-03", "Percentage complete": 100}
]
},
"layer": [
{
"mark": "bar",
"encoding": {
"y": {
"field": "Description",
"type": "ordinal",
"stack": null
},
"x": {
"field": "Start",
"type": "temporal"
},
"x2": {
"field": "End",
"type": "temporal"
}
}
}
]
}
The expected result should look like this.
I tried looking at folds, transforms, and scales, but as I am new to Vega-Lite, to no avail.
You have two options:
1. Reshape your data upstream so that each partially coloured bar is rendered as two bars stacked end to end, one for the complete portion and one for the incomplete portion (see the sketch below).
2. Use Reactive Geometry as described here. This may need Vega rather than Vega-Lite.
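For the first option, here is a minimal Vega-Lite sketch of the two-bar idea. Rather than reshaping upstream, it derives a Split date inside the spec (the pct field holding the fraction complete is an assumption, as is the date arithmetic) and draws the complete and remaining portions as separate layers:
{
  "data": {
    "values": [
      {"Description": "Task 2", "Start": "2023-01-01", "End": "2023-01-15", "pct": 0.75}
    ]
  },
  "transform": [
    {
      "calculate": "toDate(datum.Start) + (toDate(datum.End) - toDate(datum.Start)) * datum.pct",
      "as": "Split"
    }
  ],
  "encoding": {
    "y": {"field": "Description", "type": "ordinal"}
  },
  "layer": [
    {
      "mark": {"type": "bar", "color": "green"},
      "encoding": {
        "x": {"field": "Start", "type": "temporal"},
        "x2": {"field": "Split"}
      }
    },
    {
      "mark": {"type": "bar", "color": "orange"},
      "encoding": {
        "x": {"field": "Split", "type": "temporal"},
        "x2": {"field": "End"}
      }
    }
  ]
}
The same two layers work just as well if you precompute Split (or two rows per task) upstream instead.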
Here it is using reactive geometry.
{
"$schema": "https://vega.github.io/schema/vega/v5.json",
"background": "white",
"padding": 5,
"width": 200,
"style": "cell",
"data": [
{
"name": "source_0",
"values": [
{
"Description": "Task 1",
"Start": "2023-01-05",
"End": "2023-01-10",
"Percentatecomplete": 0
},
{
"Description": "Task 2",
"Start": "2023-01-01",
"End": "2023-01-15",
"Percentatecomplete": 0.75
},
{
"Description": "Task 3",
"Start": "2023-01-01",
"End": "2023-01-03",
"Percentatecomplete": 1
}
]
},
{
"name": "data_0",
"source": "source_0",
"transform": [
{"type": "formula", "expr": "toDate(datum[\"Start\"])", "as": "Start"},
{"type": "formula", "expr": "toDate(datum[\"End\"])", "as": "End"},
{
"type": "filter",
"expr": "(isDate(datum[\"Start\"]) || (isValid(datum[\"Start\"]) && isFinite(+datum[\"Start\"])))"
}
]
}
],
"signals": [
{"name": "y_step", "value": 20},
{
"name": "height",
"update": "bandspace(domain('y').length, 0.1, 0.05) * y_step"
}
],
"marks": [
{
"name": "layer_0_marks",
"type": "rect",
"style": ["bar"],
"from": {"data": "data_0"},
"encode": {
"update": {
"fill": {"value": "#4c78a8"},
"x": {"scale": "x", "field": "Start"},
"x2": {"scale": "x", "field": "End"},
"y": {"scale": "y", "field": "Description"},
"height": {"signal": "max(0.25, bandwidth('y'))"}
}
}
},
{
"type": "rect",
"from": {"data": "layer_0_marks"},
"encode": {
"update": {
"x": {"field": "x"},
"y": {"field": "y"},
"fill": {"value": "red"},
"width": {"signal": "(datum.x2 - datum.x) * datum.datum.Percentatecomplete"},
"height": {"signal": "max(0.25, bandwidth('y'))"}
}
}
}
],
"scales": [
{
"name": "x",
"type": "time",
"domain": {"data": "data_0", "fields": ["Start", "End"]},
"range": [0, {"signal": "width"}]
},
{
"name": "y",
"type": "band",
"domain": {"data": "data_0", "field": "Description", "sort": true},
"range": {"step": {"signal": "y_step"}},
"paddingInner": 0.1,
"paddingOuter": 0.05
}
],
"axes": [
{
"scale": "x",
"orient": "bottom",
"gridScale": "y",
"grid": true,
"tickCount": {"signal": "ceil(width/40)"},
"domain": false,
"labels": false,
"aria": false,
"maxExtent": 0,
"minExtent": 0,
"ticks": false,
"zindex": 0
},
{
"scale": "x",
"orient": "bottom",
"grid": false,
"title": "Start, End",
"labelFlush": true,
"labelOverlap": true,
"tickCount": {"signal": "ceil(width/40)"},
"zindex": 0
},
{
"scale": "y",
"orient": "left",
"grid": false,
"title": "Description",
"zindex": 0
}
]
}
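The second rect mark is the "reactive geometry" part: its from: {"data": "layer_0_marks"} points at the rendered items of the first mark rather than at a data set, so each input datum already carries the computed x, x2, and y pixel positions of a bar. The original row is still reachable as datum.datum, which is how the width signal scales each overlay bar by Percentatecomplete.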
I am new to CloudWatch math expressions. I'm trying to plot the percentage of 5xx errors. The following is the widget I constructed in Ruby with a math expression:
{
  :title => "5xx Errors",
  :view => "timeSeries",
  :stacked => false,
  :start => "-P7D",
  :period => 300,
  :yAxis => {"left" => {:min => 0, :max => 100}},
  :annotations => {
    :horizontal => [
      {:color => "#ff7f00", :label => "10", :value => 10},
      {:color => "#ff0000", :label => "50", :value => 50}
    ]
  },
  :metrics => [
    [{"id" => "percent_5xx_error", "expression" => "100*(5xx/(2xx+3xx+4xx+5xx))", "label" => "IAD", "accountId" => "967992492170", "region" => "us-east-1"}],
    ["TangerineBox", "StatusCode2xx", "ConsoleName", "fsx-console", {"accountId" => "967992492170", "region" => "us-east-1", "label" => "IAD", "id" => "2xx", "stat" => "Sum", "visible" => false}],
    ["TangerineBox", "StatusCode3xx", "ConsoleName", "fsx-console", {"accountId" => "967992492170", "region" => "us-east-1", "label" => "IAD", "id" => "3xx", "stat" => "Sum", "visible" => false}],
    ["TangerineBox", "StatusCode4xx", "ConsoleName", "fsx-console", {"accountId" => "967992492170", "region" => "us-east-1", "label" => "IAD", "id" => "4xx", "stat" => "Sum", "visible" => false}],
    ["TangerineBox", "StatusCode5xx", "ConsoleName", "fsx-console", {"accountId" => "967992492170", "region" => "us-east-1", "label" => "IAD", "id" => "5xx", "stat" => "Sum", "visible" => false}]
  ]
}
But it's giving me the error "MetricWidget/metrics/1 should not have more than 1 item" when I try to embed the graph in a wiki.
I opened the graph in the AWS CloudWatch console too, and this is what I get:
{
"title": "5xx Errors",
"view": "timeSeries",
"stacked": false,
"period": 300,
"yAxis": {
"left": {
"min": 0,
"max": 100
}
},
"annotations": {
"horizontal": [
{
"color": "#ff7f00",
"label": "10",
"value": 10
},
{
"color": "#ff0000",
"label": "50",
"value": 50
}
]
},
"metrics": [
[ { "id": "percent_5xx_error", "expression": "100*(5xx/(2xx+3xx+4xx+5xx))", "label": "IAD", "accountId": "967992492170", "region": "us-east-1" } ],
[ "TangerineBox", "StatusCode2xx", "ConsoleName", "fsx-console", { "accountId": "967992492170", "label": "IAD", "id": "2xx", "stat": "Sum", "visible": false } ],
[ ".", "StatusCode3xx", ".", ".", { "accountId": "967992492170", "label": "IAD", "id": "3xx", "stat": "Sum", "visible": false } ],
[ ".", "StatusCode4xx", ".", ".", { "accountId": "967992492170", "label": "IAD", "id": "4xx", "stat": "Sum", "visible": false } ],
[ ".", "StatusCode5xx", ".", ".", { "accountId": "967992492170", "label": "IAD", "id": "5xx", "stat": "Sum", "visible": false } ]
],
"width": 1401,
"height": 754,
"region": "us-east-1"
}
Can someone please help me debug this issue?
The first thing to look at is the IDs: metric math IDs must start with a lowercase letter. Try changing 2xx, 3xx, ... to something like m2xx, m3xx, ..., and update the expression to match, as sketched below.
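A sketch of the corrected metrics array; only the id values and the expression change, everything else stays as in your console JSON:
"metrics": [
    [ { "id": "percent_5xx_error", "expression": "100*(m5xx/(m2xx+m3xx+m4xx+m5xx))", "label": "IAD", "accountId": "967992492170", "region": "us-east-1" } ],
    [ "TangerineBox", "StatusCode2xx", "ConsoleName", "fsx-console", { "accountId": "967992492170", "label": "IAD", "id": "m2xx", "stat": "Sum", "visible": false } ],
    [ ".", "StatusCode3xx", ".", ".", { "accountId": "967992492170", "label": "IAD", "id": "m3xx", "stat": "Sum", "visible": false } ],
    [ ".", "StatusCode4xx", ".", ".", { "accountId": "967992492170", "label": "IAD", "id": "m4xx", "stat": "Sum", "visible": false } ],
    [ ".", "StatusCode5xx", ".", ".", { "accountId": "967992492170", "label": "IAD", "id": "m5xx", "stat": "Sum", "visible": false } ]
]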
I have data with multiple dimensions stored in a Druid cluster: for example, movies and the revenue each earned from every country where it was screened.
I'm trying to build a query that returns a table of all the movies, the total revenue of each, and the revenue per country.
I succeeded in doing this in Turnilo; it generated the following Druid query:
[
[
{
"queryType": "timeseries",
"dataSource": "movies_source",
"intervals": "2021-11-18T00:01Z/2021-11-21T00:01Z",
"granularity": "all",
"aggregations": [
{
"name": "__VALUE__",
"type": "doubleSum",
"fieldName": "revenue"
}
]
},
{
"queryType": "topN",
"dataSource": "movies_source",
"intervals": "2021-11-18T00:01Z/2021-11-21T00:01Z",
"granularity": "all",
"dimension": {
"type": "default",
"dimension": "movie_id",
"outputName": "movie_id"
},
"aggregations": [
{
"name": "revenue",
"type": "doubleSum",
"fieldName": "revenue"
}
],
"metric": "revenue",
"threshold": 50
}
],
[
{
"queryType": "topN",
"dataSource": "movies_source",
"intervals": "2021-11-18T00:01Z/2021-11-21T00:01Z",
"granularity": "all",
"filter": {
"type": "selector",
"dimension": "movie_id",
"value": "some_movie_id"
},
"dimension": {
"type": "default",
"dimension": "country",
"outputName": "country"
},
"aggregations": [
{
"name": "revenue",
"type": "doubleSum",
"fieldName": "revenue"
}
],
"metric": "revenue",
"threshold": 5
}
]
]
But it doesn't work when I try to use it as the body of a Postman request; I get:
{
"error": "Unknown exception",
"errorMessage": "Unexpected token (START_ARRAY), expected VALUE_STRING: need JSON String that contains type id (for subtype of org.apache.druid.query.Query)\n at [Source: (org.eclipse.jetty.server.HttpInputOverHTTP); line: 2, column: 3]",
"errorClass": "com.fasterxml.jackson.databind.exc.MismatchedInputException",
"host": null
}
How should I build the corresponding query so that it works with Postman?
I am not familiar with Turnilo, but note what the error is telling you: the native /druid/v2 endpoint expects a single query object per request, while Turnilo has batched several queries into nested arrays, hence the START_ARRAY complaint. Have you tried using the Druid console to write SQL and convert it to a native request with the "Explain SQL query" option under the "Run/..." menu?
Your native queries seem to be doing a Top N instead of listing all movies, so I think the SQL might be something like:
SELECT movie_id, country_id, SUM(revenue) total_revenue
FROM movies_source
WHERE __time BETWEEN '2021-11-18 00:01:00' AND '2021-11-21 00:01:00'
GROUP BY movie_id, country_id
ORDER BY total_revenue DESC
LIMIT 50
I don't have the data source to test with, but I tested a similar query structure against the sample wikipedia data:
SELECT namespace, cityName, sum(sum_added) total
FROM "wikipedia" r
WHERE cityName IS NOT NULL
AND __time BETWEEN '2015-09-12 00:00:00' AND '2015-09-15 00:00:00'
GROUP BY namespace, cityName
ORDER BY total DESC
limit 50
which results in the following Native query:
{
"queryType": "groupBy",
"dataSource": {
"type": "table",
"name": "wikipedia"
},
"intervals": {
"type": "intervals",
"intervals": [
"2015-09-12T00:00:00.000Z/2015-09-15T00:00:00.001Z"
]
},
"virtualColumns": [],
"filter": {
"type": "not",
"field": {
"type": "selector",
"dimension": "cityName",
"value": null,
"extractionFn": null
}
},
"granularity": {
"type": "all"
},
"dimensions": [
{
"type": "default",
"dimension": "namespace",
"outputName": "d0",
"outputType": "STRING"
},
{
"type": "default",
"dimension": "cityName",
"outputName": "d1",
"outputType": "STRING"
}
],
"aggregations": [
{
"type": "longSum",
"name": "a0",
"fieldName": "sum_added",
"expression": null
}
],
"postAggregations": [],
"having": null,
"limitSpec": {
"type": "default",
"columns": [
{
"dimension": "a0",
"direction": "descending",
"dimensionOrder": {
"type": "numeric"
}
}
],
"limit": 50
},
"context": {
"populateCache": false,
"sqlOuterLimit": 101,
"sqlQueryId": "cd5aabed-5e08-49b7-af63-fe82c125d3ee",
"useApproximateCountDistinct": false,
"useApproximateTopN": false,
"useCache": false
},
"descending": false
}
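From Postman, you can then POST that single JSON object (rather than the Turnilo-style array of queries) to the router's native query endpoint, typically http://<router-host>:8888/druid/v2, with a Content-Type: application/json header.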
I have the following body for my aws_cloudwatch_dashboard resource in Terraform:
dashboard_body = jsonencode(
{
"widgets": [
{
"type": "metric",
"x": 0,
"y": 0,
"width": 9,
"height": 6,
"properties": {
"view": "bar",
"stacked": false,
"metrics": [
[ "AWS/AutoScaling", "GroupDesiredCapacity", "AutoScalingGroupName", "Momo-Test-ASG1" ],
[ ".", "GroupMaxSize", ".", "." ],
[ ".", "GroupTotalCapacity", ".", "." ],
[ ".", "GroupTotalInstances", ".", "." ],
[ ".", "GroupInServiceInstances", ".", "." ]
],
"region": "eu-central-1",
"title": "ASG1 statistics"
}
},
{
"type": "metric",
"x": 9,
"y": 0,
"width": 9,
"height": 6,
"properties": {
"view": "bar",
"stacked": false,
"metrics": [
[ "AWS/AutoScaling", "GroupDesiredCapacity", "AutoScalingGroupName", "Momo-Test-ASG2" ],
[ ".", "GroupMaxSize", ".", "." ],
[ ".", "GroupTotalCapacity", ".", "." ],
[ ".", "GroupTotalInstances", ".", "." ],
[ ".", "GroupInServiceInstances", ".", "." ]
],
"region": "eu-central-1",
"period": 300,
"title": "ASG2 statistics"
}
},
{
"type": "explorer",
"x": 0,
"y": 6,
"width": 24,
"height": 15,
"properties": {
"metrics": [
{
"metricName": "CPUUtilization",
"resourceType": "AWS::EC2::Instance",
"stat": "Average"
},
{
"metricName": "NetworkIn",
"resourceType": "AWS::EC2::Instance",
"stat": "Average"
},
{
"metricName": "DiskReadOps",
"resourceType": "AWS::EC2::Instance",
"stat": "Average"
},
{
"metricName": "DiskWriteOps",
"resourceType": "AWS::EC2::Instance",
"stat": "Average"
},
{
"metricName": "NetworkOut",
"resourceType": "AWS::EC2::Instance",
"stat": "Average"
}
],
"aggregateBy": {
"key": "*",
"func": "AVG"
},
"labels": [
{
"key": "aws:autoscaling:groupName",
"value": "Momo-Test-ASG1"
},
{
"key": "aws:autoscaling:groupName",
"value": "Momo-Test-ASG2"
}
],
"widgetOptions": {
"legend": {
"position": "bottom"
},
"view": "timeSeries",
"stacked": false,
"rowsPerPage": 40,
"widgetsPerRow": 3
},
"period": 300,
"splitBy": "",
"title": "Average ASG1 and ASG2"
}
},
{
"type": "metric",
"x": 0,
"y": 21,
"width": 6,
"height": 6,
"properties": {
"metrics": [
[ { "expression": "AVG(METRICS())", "label": "Average", "id": "e1" } ],
[ "CWAgent", "mem_used_percent", "InstanceId", "i-0f67225a5c04aebf9", "AutoScalingGroupName", "Momo-Test-ASG2", "ImageId", "ami-0502e817a62226e03", "InstanceType", "t2.micro", { "yAxis": "left", "id": "m1" } ],
[ "...", "i-00198c860886391f4", ".", "Momo-Test-ASG1", ".", ".", ".", ".", { "id": "m2" } ]
],
"view": "timeSeries",
"stacked": false,
"region": "eu-central-1",
"period": 300,
"stat": "Average",
"title": "mem_used_percent"
}
}
]
}
)
}
As you can see, I have the same widget for Momo-Test-ASG1 (first Auto Scaling group) and Momo-Test-ASG2 (second Auto Scaling group).
If I had many ASGs, it would be problematic to hardcode the same thing for every group.
Is there any way to make Terraform read the ASGs from a list instead of having to reproduce the same parts?
See HashiCorp's Terraform 0.12 Preview: For and For-Each.
Using for:
locals {
asg_names = [
"Momo-Test-ASG1",
"Momo-Test-ASG2",
]
}
locals {
body = [for asg_name in local.asg_names :
{
type: "metric",
x: 0,
y: 0,
width: 9,
height: 6,
properties: {
view: "bar",
stacked: false,
metrics: [
[ "AWS/AutoScaling", "GroupDesiredCapacity", "AutoScalingGroupName", asg_name ],
[ ".", "GroupMaxSize", ".", "." ],
[ ".", "GroupTotalCapacity", ".", "." ],
[ ".", "GroupTotalInstances", ".", "." ],
[ ".", "GroupInServiceInstances", ".", "." ]
],
region: "eu-central-1",
title: asg_name
}
}
]
}
resource "aws_cloudwatch_dashboard" "main" {
dashboard_name = "my-dashboard"
dashboard_body = jsonencode({
widgets: concat(local.body, [{
type: "text",
x: 0,
y: 7,
width: 3,
height: 3,
properties: {
markdown: "Hello world"
}
}])
})
}
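As a side note, concat() is what lets you append hand-written widgets (the markdown text widget here) to the generated list, and jsonencode() turns the whole HCL value into the dashboard JSON string.
If you would rather not maintain asg_names by hand, the AWS provider also offers an aws_autoscaling_groups data source that can read the names from your account. A sketch (add a filter block if you only want a subset of groups):
data "aws_autoscaling_groups" "all" {}

locals {
  # Use in place of the hand-written asg_names list above.
  asg_names = data.aws_autoscaling_groups.all.names
}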