I am using Cube.js to compare how data changes over time by plotting it as a line graph.
Step 1:
After generating the Cube.js schema successfully, the data looks like this:
Step 2:
Now, when I try to check the line graph, it shows up as below. No line is drawn. Unfortunately, it's not working for the bar graph either.
Moreover, in SQL the data types involved are float(10,10) (for the value) and timestamp (for the time column).
Apart from that, the Cube.js console has no error trace; rather, it's working fine:
Performing query: scheduler-0070c129-f83a-45db-ae09-aac6f9858200
Executing SQL: scheduler-0070c129-f83a-45db-ae09-aac6f9858200
--
SELECT FLOOR((UNIX_TIMESTAMP()) / 10) as refresh_key
Moreover, I tried as below [all time, w/o grouping, and the pivot settings I need], yet no luck.
However, if I add the count measure, the count is plotted as the line instead of the y-axis data I configured in the pivot settings.
My question is: what's going wrong?
My goal was to generate a line graph for the change of a numerical value over time:
x-axis: date/time.
y-axis: my numerical value.
Cube.js generated the following schema for my data.
The problem with this schema was that the string type was assigned to the age dimension (it clearly should be a number). Moreover, there is no measure for the age field, which is what I am trying to plot.
cube(`ConceptDrifts`, {
  sql: `SELECT * FROM cube.concept_drifts`,

  preAggregations: {},

  joins: {},

  measures: {
    count: {
      type: `count`,
      drillMembers: [date]
    },

    testCount: {
      sql: `test_count`,
      type: `sum`
    }
  },

  dimensions: {
    age: {
      sql: `age`,
      type: `string`
    },

    maxAge: {
      sql: `max_age`,
      type: `string`
    },

    sex: {
      sql: `sex`,
      type: `string`
    },

    sexSd: {
      sql: `sex_sd`,
      type: `string`
    },

    date: {
      sql: `date`,
      type: `time`
    }
  },

  dataSource: `default`
});
Therefore, I manually changed the schema at /cube/conf/schema#.
I added a new measure:
ag: {
  type: `number`,
  sql: `age`,
  drillMembers: [age]
}
And I changed the type to number in the dimensions:
dimensions: {
  age: {
    sql: `age`,
    type: `number`
  },

  maxAge: {
    sql: `max_age`,
    type: `number`
  },

  sex: {
    sql: `sex`,
    type: `number`
  },

  sexSd: {
    sql: `sex_sd`,
    type: `number`
  },

  date: {
    sql: `date`,
    type: `time`
  }
},

dataSource: `default`
});
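With the measure in place, a query along these lines should return the time series for the chart (a sketch; the granularity and date range values are illustrative):
// Query against the updated cube: the ag measure over the date time dimension
const query = {
  measures: ['ConceptDrifts.ag'],
  timeDimensions: [
    {
      dimension: 'ConceptDrifts.date',
      granularity: 'day',       // illustrative granularity
      dateRange: 'Last 30 days' // illustrative date range
    }
  ],
  order: { 'ConceptDrifts.date': 'asc' }
};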
As a result, the graph looks like below:
More references:
Data Schema Concepts
Drilldowns
Imagine a simple line graph plotting a person count (y-axis) against a custom time value (x-axis), as such:
Suppose you have another dimension, say specific groupings of people. How do you draw a separate line on this graph for each group?
You have to use the PivotConfig here. Below is an example I used in Angular.
(EDIT) Here is the Query
Query = {
measures: ['Admissions.count'],
timeDimensions: [
{
dimension: 'Admissions.createdDate',
granularity: 'week',
dateRange: 'This quarter',
},
],
dimensions: ['Admissions.status'],
order: {
'Admissions.createdDate': 'asc',
},
}
(END EDIT)
PivotConfig = {
x: ['Admissions.createdDate.day'],
y: ['Admissions.status', 'measures'],
fillMissingDates: true,
joinDateRange: false,
}
Code to extract data from the result set:
let chartData = resultSet.series(this.PivotConfig).map(item => {
return {
label: item.title.split(',')[0], //title contains "ADMIS, COUNT"
data: item.series.map(({ value }) => value),
}
})
Result Object (not the one in the chart):
[{
"label": "ADMIS",
"data": [2,1,0,0,0,0,0]
},{
"label": "SORTIE",
"data": [2,1,0,0,0,0,0]
}]
Here is what the output looks like!
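If you happen to render with Chart.js (an assumption; adapt this to whatever charting library your project uses), chartData maps straight onto datasets:
import Chart from 'chart.js/auto';

// Labels come from the same pivot config; one dataset per series
const labels = resultSet.chartPivot(this.PivotConfig).map(row => row.x);

const chart = new Chart('myChart', { // 'myChart' is the id of a <canvas> element (illustrative)
  type: 'line',
  data: {
    labels: labels,
    datasets: chartData.map(series => ({
      label: series.label,
      data: series.data,
      fill: false
    }))
  }
});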
The chart renderer in the Developer Playground is meant to be quite simplistic; I'd recommend creating a dashboard app or using one of our frontend integrations in an existing project to gain complete control over chart rendering.
I need to execute a query like this:
select * from table where sampling_date like "2020-05-%"
To do this, I'm calling:
db.query({
TableName: "Tubes",
Select: "ALL_ATTRIBUTES",
IndexName: "sampling_date_idx",
KeyConditionExpression: " sampling_date > :sampling_date ",
ExpressionAttributeValues:{ ':sampling_date': {'S': '2020-05-'}}
}, function(error: AWSError, data: QueryOutput){
console.log(error, data);
})
And I get this error message:
{"errorType":"Error","errorMessage":"{\"message\":\"Query key condition not supported\",\"code\":\"ValidationException\",
My table:
this.tubes = new dynamodb.Table(this, "tubes", {
tableName: "Tubes",
billingMode: dynamodb.BillingMode.PAY_PER_REQUEST,
partitionKey: {
name: "id",
type: dynamodb.AttributeType.STRING
},
pointInTimeRecovery: true,
removalPolicy: cdk.RemovalPolicy.RETAIN
});
this.tubes.addGlobalSecondaryIndex({
indexName: "sampling_date_idx",
sortKey: {
name: 'sampling_date_srt',
type: AttributeType.STRING
},
partitionKey: {
name: "sampling_date",
type: AttributeType.STRING,
},
})
I think there are two issues in your current code -
In KeyConditionExpression, there must be an equality condition on a single partition key value. In your case, it must include "sampling_date = :sampling_date".
Please read "KeyConditionExpression" section in -
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Query.html
In short, you only can perform equality test against partition key.
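Put differently, with the index as it stands the key condition would have to pin one exact value, for example (a sketch; "2020-05-01" is just an illustrative full date, not a prefix):
db.query({
  TableName: "Tubes",
  IndexName: "sampling_date_idx",
  // The partition key of the GSI must be matched with equality
  KeyConditionExpression: "sampling_date = :sampling_date",
  ExpressionAttributeValues: { ":sampling_date": { S: "2020-05-01" } }
}, function (error, data) {
  console.log(error, data);
});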
I am not sure which language you use. I suspect your syntax for ExpressionAttributeValues is not correct.
The syntax given in AWS doc is -
"ExpressionAttributeValues": {
"string" : {
"B": blob,
"BOOL": boolean,
"BS": [ blob ],
"L": [
"AttributeValue"
],
"M": {
"string" : "AttributeValue"
},
"N": "string",
"NS": [ "string" ],
"NULL": boolean,
"S": "string",
"SS": [ "string" ]
}
}
In your case, it may be something like -
"ExpressionAttributeValues": {
":sampling_date": {"S": "2020-05-01"}
}
My experience is in C#; there it may be something like -
ExpressionAttributeValues = new Dictionary<string, AttributeValue>()
{
{ ":sampling_date", new AttributeValue{S = "2005-05-01"} }
}
To solve your problem, you may need to use another attribute as the index's partition key. sampling_date can only be used as a sort key.
sampling_date is the partition key for your GSI sampling_date_idx.
DynamoDB documentation says that in key condition expressions:
You must specify the partition key name and value as an equality condition.
Source: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Query.html#Query.KeyConditionExpressions
So sampling_date can only be used with the "equal to" comparison operator. None of the other operators like less than, greater than, between, contains, begins with, etc. can be used with sampling_date.
However, these operators can be used with a sort key!
So if you can redesign your table and/or indexes such that sampling_date becomes a sort key of some index, you can use begins_with on it.
Here's a suggestion:
Create a GSI with partition key = sampling_year & sort key = sampling_date.
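In CDK terms, mirroring the table definition in the question, that GSI could look something like this sketch (the index name matches the default-style name assumed in the query below):
// A sketch: GSI with sampling_year as partition key and sampling_date as sort key
this.tubes.addGlobalSecondaryIndex({
  indexName: "sampling_year-sampling_date-index",
  partitionKey: {
    name: "sampling_year",
    type: dynamodb.AttributeType.NUMBER
  },
  sortKey: {
    name: "sampling_date",
    type: dynamodb.AttributeType.STRING
  }
});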
Then if your table has the following items:
{
"id": "id1",
"sampling_year": 2020,
"sampling_date": "2020-04-01"
}
{
"id": "id2",
"sampling_year": 2020,
"sampling_date": "2020-05-01"
}
{
"id": "id3",
"sampling_year": 2020,
"sampling_date": "2020-06-01"
}
And you use the following Node.js code:
let AWS = require("aws-sdk")
let dc = new AWS.DynamoDB.DocumentClient()
dc.query({
TableName: "Tubes",
IndexName: "sampling_year-sampling_date-index",
KeyConditions: {
"sampling_year": {
ComparisonOperator: "EQ",
AttributeValueList: [2020]
},
"sampling_date": {
ComparisonOperator: "BEGINS_WITH",
AttributeValueList: ["2020-05-"]
}
}
}, (err, data) => console.log(err, data))
You'll get your desired output:
{
"id": "id2",
"sampling_year": 2020,
"sampling_date": "2020-05-01"
}
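The same query can also be written with the newer KeyConditionExpression syntax; a sketch against the same assumed index:
let AWS = require("aws-sdk")
let dc = new AWS.DynamoDB.DocumentClient()
dc.query({
  TableName: "Tubes",
  IndexName: "sampling_year-sampling_date-index",
  // Equality on the partition key, begins_with on the sort key
  KeyConditionExpression: "sampling_year = :year AND begins_with(sampling_date, :prefix)",
  ExpressionAttributeValues: {
    ":year": 2020,
    ":prefix": "2020-05-"
  }
}, (err, data) => console.log(err, data))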
Try
KeyConditionExpression: `begins_with(sampling_date, :sampling_date)`
See available condition expressions here...
https://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_Condition.html
I have the Power BI client filter code below:
const basicFilter: pbi.models.IBasicFilter = {
$schema: "http://powerbi.com/product/schema#basic",
target: {
table: "Store",
column: "Count"
},
operator: "In",
values: [1,2,3,4],
filterType: pbi.models.FilterType.BasicFilter
}
In my scenario a table can have multiple columns, so how can I filter by multiple columns of the table? In the code above only one column, Count, is being filtered; how do I configure this for multiple columns?
You must define a filter for each of your conditions and pass an array with all your filters in the ReportConfiguration.filters property:
var embedConfig = {
...
filters: [basicFilter1, basicFilter2, filter3]
};
or to report.setFilters method:
report.setFilters([basicFilter1, basicFilter2, filter3])
.catch(errors => {
// Handle error
});
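For example, to filter the Store table on two columns at once, define a second filter and pass both of them; the Region column and its values below are purely illustrative:
const regionFilter: pbi.models.IBasicFilter = {
  $schema: "http://powerbi.com/product/schema#basic",
  target: {
    table: "Store",
    column: "Region" // illustrative column name
  },
  operator: "In",
  values: ["East", "West"], // illustrative values
  filterType: pbi.models.FilterType.BasicFilter
};

// Apply both column filters together
report.setFilters([basicFilter, regionFilter])
  .catch(errors => {
    // Handle error
  });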
I have a question about creating a custom visualization in Power BI.
I want to implement the "total row" functionality that is available in the built-in matrix visualization. The main idea is to automatically sum up every value and group it by rows. This is how it looks in the matrix visualization:
But, to be honest, I don't know how to achieve this. I have tried different things, but I can't get these grouped values in the dataViews.
I tried to analyze the built-in matrix.ts code, but it's quite different from the custom visualization code. I found the customizeQuery method, which sets the subtotalType property on the rows and columns. I tried to add this to my code, but I don't see any difference in the dataViews (I can't find the grouped values).
Currently my capabilities.dataViewMappings is set like this:
dataViewMappings: [
{
conditions: [
{ 'Rows': { max: 3 } }
],
matrix: {
rows: {
for: { in: 'Rows' },
},
values: {
for: { in: 'Values' }
},
},
}
]
Does anyone know how we could achieve this "total row" functionality?
UPDATE 1
I have already found a solution: when we implement the customizeQuery method (in the same way as the customizeQuery method in the matrix.ts code) and then add a reference to it in powerbi.visuals.plugins.[visualisationName+visualisationAddDateEpoch].customizeQuery, it works as expected (in dataViews[0].matrix.rows.root I receive children elements that hold the total values for the rows).
The only problem now is that I don't know exactly how to add this reference to the customizeQuery method correctly. For example, the [visualisationName+visualisationAddDateEpoch] is Custom1451458639997, and I don't know what that number will be (I only know the name). I wrote the code in my visualisation constructor as below (and it's working):
constructor() {
var targetCustomizeQuery = this.constructor.customizeQuery;
var name = this.constructor.name;
for (var pluginName in powerbi.visuals.plugins) {
var patt = new RegExp(name + "[0-9]{13}");
if(patt.test(pluginName)) {
powerbi.visuals.plugins[pluginName].customizeQuery = targetCustomizeQuery;
break;
}
}
}
But in my opinion this code is very dirty and inelegant. I want to improve it: what is the correct way to tell Power BI that we implement a custom customizeQuery method and that it should use it?
UPDATE 2
The code from update 1 works only with Power BI in the web browser (web based). In Power BI Desktop the customizeQuery method isn't invoked. What is the correct way to tell Power BI to use our custom customizeQuery method? In the code from the PowerBI-visuals repository, using PowerBIVisualPlayground, we could declare it in the plugin.ts file (in the same way as the matrix visual does):
export let matrix: IVisualPlugin = {
name: 'matrix',
watermarkKey: 'matrix',
capabilities: capabilities.matrix,
create: () => new Matrix(),
customizeQuery: Matrix.customizeQuery,
getSortableRoles: (visualSortableOptions?: VisualSortableOptions) => Matrix.getSortableRoles(),
};
But, in my opinion, from the Power BI Dev Tools we don't have access to add anything to this part of the code. Any ideas?
It seems you're missing the columns mapping in your capabilities. Take a look at the matrix capabilities (also copied for reference below) and adopt that structure as a first step. The matrix calculates the intersection of rows and columns, so without the columns in capabilities it's doubtful you'll get what you want.
Secondly, in the matrix dataview passed to update you'll get a 'DataViewMatrixNode' with isSubtotal: true. Take a look at the unit tests for the matrix to see the structure.
dataViewMappings: [{
conditions: [
{ 'Rows': { max: 0 }, 'Columns': { max: 0 }, 'Values': { min: 1 } },
{ 'Rows': { min: 1 }, 'Columns': { min: 0 }, 'Values': { min: 0 } },
{ 'Rows': { min: 0 }, 'Columns': { min: 1 }, 'Values': { min: 0 } }
],
matrix: {
rows: {
for: { in: 'Rows' },
/* Explicitly override the server data reduction to make it appropriate for matrix. */
dataReductionAlgorithm: { window: { count: 500 } }
},
columns: {
for: { in: 'Columns' },
/* Explicitly override the server data reduction to make it appropriate for matrix. */
dataReductionAlgorithm: { top: { count: 100 } }
},
values: {
for: { in: 'Values' }
}
}
}],
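Once the columns mapping is in place, the subtotal rows can be picked out of the matrix data view inside update(). A rough sketch (the traversal and variable names are illustrative, not the exact matrix.ts code):
// Walk the row tree of the matrix data view and collect subtotal nodes
public update(options: VisualUpdateOptions): void {
    let matrix = options.dataViews[0].matrix;
    let subtotals: DataViewMatrixNode[] = [];

    let visit = (node: DataViewMatrixNode) => {
        if (node.isSubtotal) {
            subtotals.push(node); // node.values holds the total for each value role
        }
        if (node.children) {
            node.children.forEach(visit);
        }
    };

    visit(matrix.rows.root);
    // ...render the subtotals alongside the regular rows
}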
I'm trying to figure out how to add a filter onto a crossfilter group that is not related to a dimensional filter. Let's look at an example:
var livingThings = crossfilter([
  // Fact data.
  { name: "Rusty", type: "human", legs: 2 },
  { name: "Alex", type: "human", legs: 2 },
  { name: "Lassie", type: "dog", legs: 4 },
  { name: "Spot", type: "dog", legs: 4 },
  { name: "Polly", type: "bird", legs: 2 },
  { name: "Fiona", type: "plant", legs: 0 }
]); // taken from http://blog.rusty.io/2012/09/17/crossfilter-tutorial/
If we were to make a dimension on type and a group of that dimension:
var typeDim = livingThings.dimension(function(d){return d.type});
var typeGroup = typeDim.group();
we would expect typeGroup.top(Infinity) to output
[{ key: "human", value: 2 },
 { key: "dog", value: 2 },
 { key: "bird", value: 1 },
 { key: "plant", value: 1 }]
My question is: how can we filter the data so that only 4-legged creatures are included in this grouping? I also don't want to use dimension.filter... because I don't want this filter to be global, just for this one grouping. In other words,
var filterDim = livingThings.dimension(function(d){return d.legs}).filterExact(4);
is not allowed.
I'm thinking of something similar to what I did to post-filter dimensions as in https://stackoverflow.com/a/30467216/4624663
Basically, I want to go into the internals of the typeDim dimension and filter the data before it is passed into the groups. Creating a fake group that calls typeDim.group().top() will most likely not work, as the individual livingThings records are already grouped by that point. I know this is tricky; thanks for any help.
V
Probably best to use the reduceSum functionality to create a pseudo-count group that only counts records with exactly 4 legs:
var livingThings = crossfilter([
  // Fact data.
  { name: "Rusty", type: "human", legs: 2 },
  { name: "Alex", type: "human", legs: 2 },
  { name: "Lassie", type: "dog", legs: 4 },
  { name: "Spot", type: "dog", legs: 4 },
  { name: "Polly", type: "bird", legs: 2 },
  { name: "Fiona", type: "plant", legs: 0 }
]); // taken from http://blog.rusty.io/2012/09/17/crossfilter-tutorial/
var typeDim = livingThings.dimension(function(d){return d.type});
var typeGroup = typeDim.group().reduceSum(function(d) {
return d.legs === 4 ? 1 : 0;
});
That will sum across a calculated value that will be 1 for records with 4 legs and 0 for records with ... not 4 legs. In other words, it should just count 4-legged creatures.
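With the sample data above, the group should then come out roughly like this (the ordering of the zero-valued groups may vary):
typeGroup.top(Infinity);
// => [ { key: "dog",   value: 2 },
//      { key: "human", value: 0 },
//      { key: "bird",  value: 0 },
//      { key: "plant", value: 0 } ]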
I think this is what you are looking for. Comment back if I'm wrong.
var dimByLegs = livingThings.dimension(function(d){return d.legs});
dimByLegs.filterExact(4);
var dogs = dimByLegs.group();
dimByLegs.top(Infinity).forEach(function(d){console.log(d.type, d.legs);});
dimByLegs.dispose();