Setting PowerBi filter on Report Load and maintaining dataColor order - powerbi

I'm trying to set a PowerBi report filter on load but also maintain my dataColors array position. I've created a video to illustrate my issue - I hope that's allowed...
https://www.loom.com/share/40f0040311ee4487a46a0ad23c6ea1c9
When I apply a filter programmatically, the behaviour differs from selecting the filter in the UI. I guess it's because there is only one strand of data on load, so it takes the first data colour from the array, but I'd like to maintain the original order.
Any help appreciated - cheers!
Rob

Currently, it is not possible to maintain the dataColors array position with respect to filters or data. The dataColors array in the theme file is applied sequentially: the colour at position one is applied to the very first data strand present in this scenario.
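For what it's worth, a filter can be applied at load time by placing it in the embed configuration's filters array (powerbi-client), so it is in effect before the first render; the sequential colour assignment described above still applies either way. A minimal sketch of the basic-filter shape, with invented table and column names ("Sales", "Region"):

```javascript
// Sketch: a filter in the IBasicFilter shape that powerbi-client expects.
// The table and column names here are made up for illustration.
const loadFilter = {
  $schema: "http://powerbi.com/product/schema#basic",
  target: { table: "Sales", column: "Region" },
  operator: "In",
  values: ["West"],
  filterType: 1 // models.FilterType.Basic
};

// Passing it in the embed configuration applies it on report load,
// before the first render (id, embedUrl, accessToken omitted here):
const embedConfig = {
  type: "report",
  filters: [loadFilter]
};
```

The same filter object can also be passed to report.setFilters() after the report has loaded, but that replaces filters post-render rather than at load time.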

Related

MarkLogic Optic JavaScript Geospatial Difference

I want to reduce the selected items by their distance from a point using MarkLogic Optic.
I have a table with data and lat/long columns:
const geoData = op.fromView("namespace", "coordinate");
geoData.where(op.le(distance(-28.13, 153.4, geoData.col("lat"), geoData.col("long")), 100))
The distance function I have already written; it uses geo.distance(cts.point(lat, long), cts.point(lat, long)). The problem is that geoData.col("lat") passes an object describing the fully qualified name of the column, not its value:
op.schemaCol('namespace', 'coordinate', 'long')
I suspect I need to do a map/reduce function, but the MarkLogic documentation gives only the usual simplistic examples, which are next to useless.
I would appreciate some help.
FURTHER INFORMATION
I have mostly solved this problem, except that some columns have null values. The data is sparse and not all rows have a lat/long.
So when cts.point() runs in the where statement and two null values are passed, it raises an exception.
How do I coalesce, or prevent execution of cts.point() when the columns are null? I don't want to reduce the data set: the null-value records still need to be returned; they will just have a null distance.
Where possible, it's best to do filtering by passing a constraining cts.query() to where().
A constraining query matches the indexed documents and filters the set of rows to the rows that TDE projected from those documents before retrieving the filtered rows from the indexes.
If the lat and long columns are each distinct JSON properties or XML elements in the indexed documents, it may be possible to express the distance constraint using techniques similar to those summarized here:
http://docs.marklogic.com/guide/search-dev/geospatial#id_42017
In general, it's best to use map/reduce SJS functions for postprocessing on the filtered result set, because the rows have to be retrieved to the e-node to be processed in SJS.
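As a sketch of that postprocessing step, assuming the rows have already been retrieved from the Optic query: geo.distance() and cts.point() are MarkLogic built-ins, so a plain haversine stand-in is used here just to make the null guard visible.

```javascript
// Stand-in for geo.distance(cts.point(...), cts.point(...)), in km.
function haversineKm(lat1, lon1, lat2, lon2) {
  const toRad = d => d * Math.PI / 180;
  const dLat = toRad(lat2 - lat1), dLon = toRad(lon2 - lon1);
  const a = Math.sin(dLat / 2) ** 2 +
            Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 6371 * 2 * Math.asin(Math.sqrt(a));
}

// Keep the sparse rows: rows missing lat/long get a null distance
// instead of raising an exception inside cts.point().
function withDistance(rows, fromLat, fromLon) {
  return rows.map(row => ({
    ...row,
    distance: (row.lat == null || row.long == null)
      ? null
      : haversineKm(fromLat, fromLon, row.lat, row.long)
  }));
}
```

The same guard-before-construct pattern applies when the real geo.distance()/cts.point() calls run on the e-node.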
Hoping that helps,

Filled Maps in Power BI not working (despite normal maps working!!)

I would like to thank you in advance.
I'm new to Power BI, but I have found that Filled Maps does not work, despite normal Maps working on the same data.
I am trying to produce a filled map using US states (full state names used).
Could this be a bug in Power BI (it seems counterintuitive for this not to work) or am I missing something?
Screenshots below:
1. Screenshot shows that Maps is working and is picking up the state names.
2. Screenshot shows that when the chart type is switched to Filled Maps, the state data is not represented as a filled map.
Hi Simon, were you ever able to get the filled maps working? If so, do you care to share what you did? I have attempted it and am not able to get it to work.
Take a look at this post.
Things to try:
Provide some measure or value to the Color saturation field of your filled map visualisation.
Set the data category of your State property to State/Province (Modelling tab)
Change your state names to use geo location terms, e.g. Washington->Washington, DC.
Include country name: Southampton->Southampton, England.
Specify latitude/longitude as well as above.

How to Query Large SharePoint 2013 Lists in InfoPath 2010?

I'm designing an InfoPath form to help guide people in a data creation process. The form needs to draw from a SharePoint list that contains around 19,000 rows, each with six columns containing attributes (Column 1 = Attribute A, Column 2 = Attribute B, etc.). I've reduced the first three columns to their own lists, which contain at most a few hundred unique entries each. When I get to Column 4, there are 8,000 unique entries, which makes querying the list outright impossible.
In an attempt to get around the item limitation, I've created an InfoPath form with a data connection to the list (which does not automatically query when the form loads). Additionally, I've added drop-downs that set values for the queryFields of the secondary data source (one for Column 1, another for Column 2, and another for Column 3). On the last drop-down, I set an action to query the database, but I still get the error about list limitations and that rules cannot be applied.
Is there any way to "pre-filter" the data connection so that I can bypass the limitation by only drawing the data I need? Am I going about this the right way?
Any guidance would be greatly appreciated.
Are you able to add indexes to the list columns you intend to query on? I've found that I can get around the list-limit error if I go to the list and add an index for each column that I will be setting as a query field, before running my query data connection.

What does the attribute selection filter in the Preprocess tab do in Weka?

I can't seem to find out what the attribute selection filter in the Preprocess tab does. Could someone please explain it in simple language, as I'm new to Weka?
When I apply it to my dataset it seems to remove a couple of attributes, but I'm unsure why.
A real data set may contain many attributes. Applying any data-mining process to this data set (e.g. finding clusters, generating a classification model, ...) may take a very long time.
Instead, we can select a subset of attributes (dimensions), called the most discriminative attributes. These attributes describe the data set almost as well with a lower number of attributes, which speeds up any processing done on the data.
The attribute selection tab contains many different methods for selecting these attributes. One of them is CFS Subset Evaluation: this filter gives you the attributes that have the highest correlation with the class label, which is what makes them discriminative.
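To illustrate the "correlated with the class" idea, here is a toy sketch (not Weka code, which is Java) that ranks numeric attributes by absolute Pearson correlation with the label. Real CFS additionally penalises redundancy between the selected attributes; this only shows the half of the idea described above.

```javascript
// Pearson correlation of two equal-length numeric arrays.
function pearson(xs, ys) {
  const n = xs.length;
  const mx = xs.reduce((a, b) => a + b, 0) / n;
  const my = ys.reduce((a, b) => a + b, 0) / n;
  let num = 0, dx = 0, dy = 0;
  for (let i = 0; i < n; i++) {
    num += (xs[i] - mx) * (ys[i] - my);
    dx += (xs[i] - mx) ** 2;
    dy += (ys[i] - my) ** 2;
  }
  return num / Math.sqrt(dx * dy);
}

// Score each attribute by |correlation with the class label| and sort:
// the attributes at the top are the most "discriminative" in this toy sense.
function rankAttributes(columns, label) {
  return Object.entries(columns)
    .map(([name, values]) => ({ name, score: Math.abs(pearson(values, label)) }))
    .sort((a, b) => b.score - a.score);
}
```

An attribute identical to the label ranks first with score 1; an attribute unrelated to the label scores near 0 and would be the one "removed" by selection.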

Google Analytics exclude empty custom variable in a custom report

I have a custom variable set for all visitors; for our registered users it's some value, for unregistered users, it's empty.
I can find unregistered users in an advanced segment using the settings Exclude Custom Variable (Value 02) Matching Regexp .+ -- works brilliantly.
But I need a report of unregistered visitors for a dashboard, and tried to do the same thing with a filter. I have a metric of Visits and a dimension that all visitors will have (e.g. Browser). My filter is identical to the one in the advanced segment, but ... not brilliant: I get no visits. I have tried to Include with a regex ^$ but no love there, either.
Any ideas what I am doing wrong?
To understand your problem and the solution, let me illustrate how data recording works in any collection process (Google Analytics is one of the tools used for data collection and analysis):
To record and analyse data, you first decide what you want to record, and then how. Maybe this how is where Google Analytics comes in for you. So, the data that you want to see is the metric, it can have a name and a (usually numeric) value, and each dimension is how you want to separate or drill down into the various views of the data. As an example, if you want to know how many visitors visited your site everyday, and you want to be able to see through which source they came, Daily Visitor Count is your metric and Source is your dimension.
The important thing to understand here is that dimensions and metrics are not bound together. Just because you decided that Daily Visitor Count should be viewable by Source doesn't attach a source to every update of the Daily Visitor Count metric. In order to view the metric by the dimension, you need to record a value for the dimension every time you record the metric.
If you don't record a dimension value when you record a metric, you cannot later reach that metric through a filter on the dimension. A dimension filter only gives you access to the values recorded for that dimension; dimensions don't contain metric values, metrics can only optionally carry values for dimensions.
So when you query "dimension matches regex .+", it works with both include and exclude, but you cannot reach metrics recorded with an empty dimension through a dimension filter. The best approach is to record a standard default value for the dimension, something like (not set) or unknown, every time you record the metric, so that you can separate the two cases.
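A small sketch of that behaviour, with invented hit records: a dimension filter never matches hits that carry no value for the dimension at all, which is why recording a default such as (not set) makes them reachable.

```javascript
// Two hits: one with the custom variable recorded, one without (unregistered).
const hits = [
  { visits: 1, cv02: "registered" },
  { visits: 1 }                      // no custom variable value recorded
];

// An "exclude matching regex" dimension filter: it can only consider rows
// that actually carry a value for the dimension.
const excludeMatching = (rows, dim, re) =>
  rows.filter(r => dim in r && !re.test(r[dim]));

// Excluding cv02 matching .+ also drops the unregistered hit, because it has
// no cv02 value for the filter to examine: the result is empty.
const filtered = excludeMatching(hits, "cv02", /.+/);

// Recording a default value on every hit makes them filterable; we can now
// exclude the registered pattern and keep the rest.
const hitsWithDefault = hits.map(r => ({ cv02: "(not set)", ...r }));
const unregistered = excludeMatching(hitsWithDefault, "cv02", /registered/);
```

The first query returns nothing, mirroring the "no visits" result in the question; the second isolates the unregistered hit once a default value exists.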
Hope that helps. :)
I just hope you can see that what you were trying to do is conceptually wrong, though it could still have been made technically feasible.