There is a "__time" field (in milliseconds) in the raw druid schema, while now I need add several "derived" fields to day|month|year of "__time" to be grouped by, how to add them in superset "List Druid Column" tab?
As suggested in the description below "Dimension Spec JSON", you have to define a valid Dimension Spec as specified in the Druid docs. Have a look at the section "Time Format Extraction Function"; there is an example that suits your case.
Make sure to add the extraction JSON as a Druid Column, not a Druid Metric, and that the name in the JSON matches the name of the column!
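As a rough sketch (the output name and format string are illustrative; see the Druid docs for the exact options), a dimension spec that derives the day from "__time" could look like this:

{
  "type": "extraction",
  "dimension": "__time",
  "outputName": "day",
  "extractionFn": {
    "type": "timeFormat",
    "format": "yyyy-MM-dd",
    "timeZone": "UTC",
    "locale": "en"
  }
}

Analogous specs with formats such as "yyyy-MM" or "yyyy" would give you month and year columns; in each case the column name in Superset must equal the outputName.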
My REST data source looks like this:
(screenshot: REST data source)
But APEX can't recognize:
"categories": {
"names": ["XYZ", "ABC"]
}
The Data Profile looks like this:
(screenshots: DATA PROFILE, DATA PROFILE 2)
Has anyone had a problem with the parser?
Thank you in advance
APEX REST Data Sources cannot deal with nested arrays. All APEX components work on flat, table-like data, so REST Data Sources treat REST response data the same way.
In your case, the top-level information (as your screenshots indicate) is a single row with multiple attributes (which then map to columns in APEX). Your "categories" attribute would then be a "nested table", as it contains two values for the single row.
The same situation applies if the JSON contains an array at the top level; APEX then treats each array member as a "row" and the attributes of each member as "columns". However, if one of these attributes is, again, an array, we have the nested table again.
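For illustration (the attribute names are made up), a top-level array like this would become two rows with an ID column, while each "categories" value would again be a nested table:

[
  { "id": 1, "categories": ["XYZ", "ABC"] },
  { "id": 2, "categories": ["DEF"] }
]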
What you can do is manually add the categories column to the Data Profile and choose the "JSON Document" type. Navigate to your REST Data Source and its Data Profile, edit the data profile, and add a new column:
Column Type: Data
Column Name: {as you wish}
Selector: categories
Data Type: JSON Document
When using the REST Source, e.g. in a report, the CATEGORIES column will contain ["XYZ","ABC"].
I hope this helps
I would like to know if I understand the following correctly:
In order to customize charts, we can use CSS templates.
With JSON, I can only work with data and connections (so if I want to customize something, it's only with CSS).
If I want custom charts, I should dive into ECharts/Preset/custom viz via npm, yo, etc.
Also: can I make a column name dynamic, based on a parameter (as in Tableau, for example), so that if I choose "%" the column name becomes "%_of_money", and "number" => "number_of_money"?
For dynamic columns, I think you could create the columns as metrics. Refer to the tutorial on this blog:
https://preset.io/blog/understanding-superset-semantic-layer/
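As a rough sketch (the dataset columns amount and total, and the metric names, are made up), you could define one metric per variant in the dataset's semantic layer and pick the one to display; the metric label then becomes the column name in the chart:

-- metric labeled "number_of_money"
SUM(amount)

-- metric labeled "pct_of_money"
100.0 * SUM(amount) / SUM(total)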
Within the Toad Database Schema Browser, I am trying to copy data from one table to another via Data -> Copy to another schema, but under the Tables tab I don't know how to configure the WHERE clause to specify which destination table I want for the data copy.
Toad Version 14.1
As far as I can tell, the WHERE clause is used to copy only those rows which satisfy the condition; you don't select tables here.
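For example (the column name is made up), a condition like this would restrict the copy to rows created since 2024:

WHERE created_date >= DATE '2024-01-01'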
The answer to your problem is:
the destination schema should already have a target table (whose name and description are exactly the same as the source table's), or
simply check the "Create destination tables if needed" checkbox in the "Before copy" section and Toad will create it for you.
When a user imports the schema from XSD, is it possible to change the value of any property such as regular expression, minimum occurrence, or maximum occurrence for the list of items in HCL OneTest Data?
Yes, you can modify the schema in HCL OneTest Data after importing it.
I have a Datastore entity which has a column named timestamp. It was supposed to be of a timestamp type, but it is a string type as of now. This column has values in two formats: YYYY-MM-DDTHH:MM:SSZ and YYYY-MM-DDTHH:MM:SS-offset_hours.
In our code we sort on timestamp, which essentially sorts the "string". Now the question is: how can I convert this "string" column into a "Timestamp"?
Do I have to do any conversion for the existing values, which are in different formats? How can I do it in Terraform?
Google Datastore has no notion of schema migrations; you're going to have to write a task queue job to do it.
The proper way would be to create a new column called timestamp_2 and backfill it. Here is an article GCP wrote:
https://cloud.google.com/appengine/articles/update_schema
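As a minimal sketch of such a backfill job (assuming the google-cloud-datastore Python client; the kind name "Event" and the new property name "timestamp_2" are assumptions), the core could look like this:

from datetime import datetime, timezone
from google.cloud import datastore

client = datastore.Client()

def parse_ts(value):
    # Normalize the trailing "Z" so fromisoformat() accepts both of your
    # formats; explicit offsets such as "-07:00" are parsed as-is
    # (Python < 3.11 requires the colon in the offset).
    if value.endswith("Z"):
        value = value[:-1] + "+00:00"
    return datetime.fromisoformat(value).astimezone(timezone.utc)

for entity in client.query(kind="Event").fetch():  # hypothetical kind name
    raw = entity.get("timestamp")
    if isinstance(raw, str):
        entity["timestamp_2"] = parse_ts(raw)
        client.put(entity)  # use put_multi() in batches for large kinds

Note that Terraform only manages infrastructure, not entity data, so the backfill itself has to run as application code like this.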