Can columns in google chart be assigned multiple roles? - google-visualization

Currently I have columns in a Google Chart with the "tooltip" role already assigned, but I also need to assign the "certainty" role to the same columns. Is that possible?

Related

Can I add a text field to an AWS CloudWatch dashboard to substitute a value into dashboard widgets?

I have an AWS CloudWatch dashboard which monitors aspects of my datacenter infrastructure on AWS. I would like to insert a text field into this dashboard to parameterize the queries/titles of its widgets, so that I can easily run slightly different queries by entering different values into the text field.
Can this be done? If so, how?
See image below for more context:

Why are some metrics missing in the CloudWatch metrics view?

I am using the CloudWatch metrics view to look at DynamoDB metrics. When I search for ReadThrottleEvents, only a few tables or indexes are shown in the list. I wonder why the metric is not visible for all tables. Is there anything I need to configure in order to view them?
Below is a screenshot of searching for this metric. I expect every table and index to be shown in the list, but I only get 2 results.
If there is no data, they don't show:
Metrics that have not had any new data points in the past two weeks do not appear in the console. They also do not appear when you type their metric name or dimension names in the search box in the All metrics tab in the console, and they are not returned in the results of a list-metrics command. The best way to retrieve these metrics is with the get-metric-data or get-metric-statistics commands in the AWS CLI.
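A minimal sketch of such a request, assuming a hypothetical DynamoDB table name (`my-table`; substitute your own): it builds the `MetricDataQueries` payload that the GetMetricData API expects, which retrieves the metric even when it has dropped out of the console search.

```python
import json

# Hypothetical table name; substitute your own DynamoDB table.
table_name = "my-table"

# Payload for the CloudWatch GetMetricData API. This fetches
# ReadThrottleEvents even when the metric has had no data points in the
# past two weeks and therefore no longer appears in the console.
query = {
    "MetricDataQueries": [
        {
            "Id": "throttles",
            "MetricStat": {
                "Metric": {
                    "Namespace": "AWS/DynamoDB",
                    "MetricName": "ReadThrottleEvents",
                    "Dimensions": [{"Name": "TableName", "Value": table_name}],
                },
                "Period": 300,  # 5-minute buckets
                "Stat": "Sum",
            },
        }
    ],
    "StartTime": "2024-01-01T00:00:00Z",
    "EndTime": "2024-01-02T00:00:00Z",
}

# With boto3 (not executed here; requires AWS credentials):
#   import boto3
#   result = boto3.client("cloudwatch").get_metric_data(**query)

print(json.dumps(query["MetricDataQueries"][0]["MetricStat"]["Metric"], indent=2))
```

The same payload can be saved to a file and passed to `aws cloudwatch get-metric-data --cli-input-json file://query.json`.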

Is it possible to make a custom metric about a BigQuery SQL query?

Because there is no GCP Monitoring metric that can count the total number of rows in a BigQuery table I want to create a custom metric for that. My goal is to use that metric for visualization in a dashboard.
According to the documentation (https://cloud.google.com/monitoring/custom-metrics/creating-metrics#monitoring_create_metric-java) you can write a Java program that uses the Monitoring API to create a custom metric.
In addition to that, I was wondering if I can use the BigQuery API to run a simple SQL query that counts the number of rows in a BigQuery table and use the result as data points for the custom metric.
Is this possible?
Thanks for any answer in advance.
Yes, of course: you can add a side job that creates these custom metrics. In addition, on BigQuery, when you perform a SELECT COUNT(*) FROM table, you pay nothing, because this data is retrieved from the table metadata.
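A minimal sketch of such a side job, assuming hypothetical project/dataset/table names: it builds the COUNT(*) query and the shape of a single data point for a custom metric, as expected by the Cloud Monitoring v3 `projects.timeSeries.create` method. The actual client calls are left as comments since they require credentials.

```python
import time

# Hypothetical names; substitute your own project, dataset and table.
project, dataset, table = "my-project", "my_dataset", "my_table"

# COUNT(*) is answered from table metadata in BigQuery, so the query
# itself scans no data.
sql = f"SELECT COUNT(*) AS row_count FROM `{project}.{dataset}.{table}`"

def make_time_series(row_count: int) -> dict:
    """One data point for a custom metric (Monitoring v3 API shape)."""
    return {
        "metric": {
            # Hypothetical custom metric type; choose your own name
            # under the custom.googleapis.com/ prefix.
            "type": "custom.googleapis.com/bigquery/row_count",
            "labels": {"table": table},
        },
        "resource": {"type": "global", "labels": {"project_id": project}},
        "points": [
            {
                "interval": {"endTime": {"seconds": int(time.time())}},
                "value": {"int64Value": str(row_count)},
            }
        ],
    }

# With the client libraries (not executed here; requires credentials):
#   from google.cloud import bigquery, monitoring_v3
#   row_count = next(iter(bigquery.Client().query(sql).result())).row_count
#   monitoring_v3.MetricServiceClient().create_time_series(...)

ts = make_time_series(42)
```

Run on a schedule (Cloud Scheduler, cron, or similar), this gives the dashboard a regularly refreshed row-count series.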

How to assign column-level restriction on BigQuery table in asia-east1 location

I want to restrict access to certain PII columns of my BigQuery tables. My tables are present in location: asia-east1. The BigQuery 'Policy Tag' feature can create policy tags for enforcing column restrictions only in 'US' and 'EU' regions. When I try to assign these policy tags to my asia-east1 tables, it fails with error:
BigQuery error in update operation: Policy tag reference projects/project-id/locations/us/taxonomies/taxonomy-id/policyTags/policytag-id
should contain a location that is in the same region as the dataset.
Any idea on how I can implement this column level restriction for my asia-east1 BigQuery tables?
Summarising our discussion from the comment section.
According to the documentation, BigQuery provides fine-grained access to sensitive data based on the type or classification of the data. To achieve this, you can use Data Catalog to create a taxonomy and policy tags for your data.
Regarding the location of the policy tags (asia-east1): currently, this feature is in Beta. This is a launch stage where the product is available for broader testing and use, and new features/updates might still be taking place. For this reason, Data Catalog locations are limited to the ones listed here. As shown in the link, the asia-east1 endpoint has Taiwan as its region.
As additional information, here is a how-to guide to implement policy tags in BigQuery.
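For a region where taxonomies are supported, attaching a policy tag comes down to referencing it from the table schema, and — as the error message says — the taxonomy's location must match the dataset's. A sketch with hypothetical IDs (project, taxonomy and policy tag are placeholders):

```python
import json

# Hypothetical IDs; substitute your own project, taxonomy and policy tag.
project = "my-project"
location = "asia-east1"  # must match the dataset's region
taxonomy_id = "1234567890"
policy_tag_id = "9876543210"

# Fully qualified policy tag resource name.
policy_tag = (
    f"projects/{project}/locations/{location}"
    f"/taxonomies/{taxonomy_id}/policyTags/{policy_tag_id}"
)

# Table schema fragment: the PII column carries a policyTags reference.
# Apply with e.g. `bq update --schema schema.json project:dataset.table`.
schema = [
    {"name": "user_id", "type": "STRING", "mode": "REQUIRED"},
    {
        "name": "email",  # the PII column to restrict
        "type": "STRING",
        "mode": "NULLABLE",
        "policyTags": {"names": [policy_tag]},
    },
]

print(json.dumps(schema, indent=2))
```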

How do I count the "role" instances in a cluster using RapidMiner

I have a RapidMiner flow that takes a dataset and clusters it. In the output I can see my role, but I can't figure out a way to count the role per cluster. How can I count the number of roles per cluster? I've looked at the Aggregate node, but my role isn't an available attribute.
Essentially, I'm trying to figure out whether the clusters say anything about the role. I also use Weka, which calls this "Classes to clusters evaluation". It basically shows how the class (or role) breaks down per cluster.
My current flow:
Only two attributes are available. My role isn't one of them.
There are 34 total attributes. I want to aggregate by ret_zpc
RapidMiner has the concept of roles. An attribute can be one of regular, id, cluster or label (and some others). There's even an operator, Set Role that allows the role to be changed. Outside RapidMiner, role, label and class get used interchangeably.
For your question, the Aggregate operator is what you need. Assuming you have an attribute in your example set with role Cluster and another with role Label, select these attributes as the ones to group by. For the aggregation attribute, choose another attribute and select count as the aggregation function.
In your case, the attributes you want are not being populated in the drop downs but they can still be used. You just have to type them in manually and explicitly add them to the selection criteria. This absence of attributes can sometimes happen if RapidMiner cannot see any metadata for the attributes. If you change the Read CSV operator so that it has an explicit mapping you should find that the attributes appear for selection.