Aqua Data Studio, compare results, column names case-sensitive vs. case-insensitive

Do any Aqua Data Studio users out here know how to turn off case sensitivity when comparing results?
For example, in one query a column is called "test" and in the other it's called "TEST"; Aqua Data Studio then does not match these columns when comparing results. How can I turn this off?
I can ignore upper/lower case in the result set, but not in the column names.
Renaming every column manually each time is a pain. Does anybody know?

For Results Compare, can't you change your SQL query so the column names use the same case, either with UPPER or with an alias for the column name? I used e.g. UPPER("category") AS CATEGORY and this solved the problem you are having.
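A minimal sketch of that idea, assuming two hypothetical queries (the table and column names below are placeholders):

-- Query 1: alias the lower-case column so both result sets expose the same name
SELECT UPPER("category") AS CATEGORY
FROM table_a;

-- Query 2: the column here already comes back as CATEGORY
SELECT "CATEGORY"
FROM table_b;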
For Schema Compare, do the following:
Under File -> Options -> Compare, enable the option to Ignore Case.
When you perform a Schema Compare, under Object Alignment you can also select Ignore Case.

Custom Column in Power BI

Can someone please help me with custom column code in the Power BI Query Editor?
I wish to get the value "Latest" in a custom column against "CRTG-0006", as 0006 is the highest number in the column. CRTG-0006 is the latest version of the program CRTG, CRTG-0001 being its first version. In the future, say I add a "CRTG-0007" file to the root folder; the custom column should then return "Latest" for "CRTG-0007" and null for "CRTG-0006" and all previous versions of the program.
Given your layout:
Assume your program-and-versions column header is Program.
Assume your preceding step is #"Changed Type".
Then, in the Add Custom Column dialog you can use this formula:
if [Program] = List.Max(#"Changed Type"[Program]) then "Latest" else null
This works because your format is such that simple alphabetical sorting will work: the program name/abbreviation is always the same, and the version number is padded with zeros.
By the way, if you have multiple programs in the Program column, I would group by the program base name, and apply the same algorithm in a custom aggregation.
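A rough sketch of that grouping approach in M, reusing the Program column and the #"Changed Type" step from above (the other step names and the new column names are assumptions):

let
    Source = #"Changed Type",
    // Extract the base program name, e.g. "CRTG" from "CRTG-0006"
    AddBase = Table.AddColumn(Source, "ProgramBase", each Text.BeforeDelimiter([Program], "-"), type text),
    // Highest version per program base
    Grouped = Table.Group(AddBase, {"ProgramBase"}, {{"LatestProgram", each List.Max([Program]), type text}}),
    // Bring that per-group maximum back onto every row
    Merged = Table.NestedJoin(AddBase, {"ProgramBase"}, Grouped, {"ProgramBase"}, "Latest", JoinKind.LeftOuter),
    Expanded = Table.ExpandTableColumn(Merged, "Latest", {"LatestProgram"}),
    // Flag only the highest version of each program
    AddStatus = Table.AddColumn(Expanded, "Status", each if [Program] = [LatestProgram] then "Latest" else null)
in
    AddStatus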
It's a bit hard to tell what you want, but you can right-click and split the column on "-" to get the number. You can then convert the type to numeric and sort, or take List.Max of that column.
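A minimal sketch of that in M, again assuming the column is named Program and the preceding step is #"Changed Type":

let
    Source = #"Changed Type",
    // Take the part after "-" (e.g. "0006") and convert it to a number
    AddVersion = Table.AddColumn(Source, "Version", each Number.From(Text.AfterDelimiter([Program], "-")), type number),
    // Flag the row holding the highest version number
    AddStatus = Table.AddColumn(AddVersion, "Status", each if [Version] = List.Max(AddVersion[Version]) then "Latest" else null)
in
    AddStatus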

How to make Power Query case-insensitive for the purpose of duplicate removal?

In Power BI, I open Transform Data and import a table.
The table has a name column with data like the following:
Product 1
product 1
When I remove duplicates, Power Query keeps both of the above, treating them as unique values because the comparison is case-sensitive.
How can I make Power Query case-insensitive for the purpose of duplicate removal?
Since you posted no code, I am assuming you did this from the UI. So:
Go into the Advanced Editor.
Locate the line that starts with Table.Distinct.
Change the equation criteria to something like: Table.Distinct(previousStep, {"ColumnName", Comparer.OrdinalIgnoreCase}) (a complete sketch follows these steps).
Be sure to add this in the correct location.
Check MS Help for the command.
If you can't figure it out, post the relevant M code.
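For example, a self-contained query after the change might look like this (the sample rows and the column name "Name" are stand-ins for your data):

let
    // Hypothetical sample data standing in for your imported table
    Source = Table.FromRecords({[Name = "Product 1"], [Name = "product 1"]}),
    // Case-insensitive de-duplication on the Name column
    #"Removed Duplicates" = Table.Distinct(Source, {"Name", Comparer.OrdinalIgnoreCase})
in
    #"Removed Duplicates"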

Negative filtering by filter_box or some other mechanism

Let's say I have a column named Column1. There are more than 10k different values for this column, but my goal is to display on a dashboard all data except a few of them. Is it possible to achieve this in Superset? As far as I understand, the only option to filter a dashboard is a filter_box, and I have to choose values explicitly in the filter box, so there is no way to use a negative filter. Is that true, or is there some hidden mechanism?
You can use the "limit selector values" option to filter out the values you don't need, by specifying the column name and the list of values you would like to ignore with the appropriate condition such as equals, not equals, etc.

Merge cells with similar but different data, different spelling

I am trying Tableau with data extracted from Salesforce. The input includes a "Country" field where rows have different spellings for the same thing.
Example: Cananda, CANADA, CAnada, etc.
Is there a way to fix this in Tableau?
The easiest solution is to create a group field based on your Country field.
Select Country in the Data pane on the left sidebar, right-click, and choose Create Group. Select the elements that you want to group together and put them into a single group, say Canada, that contains all variations of the spelling.
This new group field initially has the name Country (group). You may want to rename it Country_Corrected. (Or, even better, rename the first field Country_Original and call the group field simply Country. Then you can hide Country_Original.)
Groups are implemented using SQL CASE statements. They have many uses, but one application is to easily tolerate some inconsistent spellings in your data source without having to change your data. In general, you can specify several transformations like this that take effect at query and visualization time. For very large data sets, or for very complicated transformations, you may eventually want to push some of them upstream in your data pipeline for better performance. But make those optimizations later, once you've proven the necessity.
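For intuition, a group on Country behaves roughly like a CASE expression of the following shape; the exact SQL Tableau generates depends on the data source, and the values here are just the examples from the question:

CASE
    WHEN "Country" IN ('Cananda', 'CANADA', 'CAnada') THEN 'Canada'
    ELSE "Country"
END AS "Country (group)"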
If the differences are just in case (upper vs. lower), you can right-click the Country dimension and create a calculated field called something like "New Country", using the following formula to make the case consistent:
upper([Country])
Use this new "New Country" calculated dimension instead of your "Country" dimension, and it will group them all without case sensitivity and display them as uppercase. Or you can use "lower" instead of "upper" if preferred.

Select Statement vs Find in AX

While writing code, we can fetch records with either a select statement, a select with a field list, or the find() method on the table.
I wonder which of these gives better performance.
It really depends on what you actually need.
find() methods must return the whole table buffer; that means all of the columns are projected into the buffer returned by it, so you have the complete record selected. But sometimes you only need a single column, or just a few. In such cases it can be a waste to select the whole record, since you won't use most of the selected columns anyway.
So if you're dealing with a table that has lots of columns and you only need a few of them, consider writing a specific select statement for that, listing the columns you need.
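A rough X++ sketch of that advice; the table (InventTable), the fields, and the item id '1000' are only examples:

static void fieldListSelectExample(Args _args)
{
    InventTable inventTable;

    // Field-list select: only ItemId and NameAlias are fetched from the database
    select firstOnly ItemId, NameAlias from inventTable
        where inventTable.ItemId == '1000';

    info(strFmt("%1 - %2", inventTable.ItemId, inventTable.NameAlias));

    // find(), by contrast, fills every field of the buffer:
    // inventTable = InventTable::find('1000');
}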
Also, keep in mind that select statements that only project a few columns should not be made public. That means that you should NOT extract such statements into a method, because imagine the surprise of someone consuming that method and trying to figure out why column X was empty...
You can look at the find() method on the table and see the same select statement there.
It can be the same select statement as your own, and then the performance will be the same in this case.
Or it can be a different select statement than your own, and the performance will then depend on the indexes on the table, the select statement itself, collected statistics, and so on.
But there is no magic here. All of them are just select statements, no matter which method you use.