When I define a report region's SQL as SELECT * FROM some_table, all is fine until new columns are added to some_table -- then it breaks with an "ORA-xxx no data found" error. It is easy to remediate, as it's enough to Apply Changes on the region again, even without making any changes. However, it does not make for a robust application.
Is there some combination of parameters that would allow SELECT * that does not break with new columns? It would be enough to apply any default formatting or heading to the new columns.
I'm aware I could construct the column list from the data dictionary and then concatenate everything into the SELECT statement to evaluate, but this seems rather inelegant.
Using SELECT * queries is generally not recommended because:
It returns all of the columns, so the optimizer has less room to work with.
It makes applications less robust, because adding new columns changes the result of the query and gives unexpected results. If you instead list exactly the columns you need, adding new columns does not matter to the application.
Anyway, remember that when you create a view as SELECT *, Oracle creates the view with the * replaced by the full list of columns; maybe APEX is doing the same thing.
Currently your region source is (I presume) set to "Use Query-Specific Column Names and Validate Query". This means that a report column is defined explicitly for each column in the query, and the SQL is expected to be static.
If you change the region source to "Use Generic Column Names (parse query at runtime only)", then it will still work after a new column is added, with the column title defaulting to the column name.
There is another property "Maximum number of generic report columns" that defaults to 60 and must be set to a value big enough to accommodate any future columns added to the table.
I'm not quite sure whether my description of the column I'm working on is proper, so bear with me.
I've got an Interactive Grid that I'm adapting, and it's supposed to work like this:
It should select a name in the first column (an autocomplete field), followed by 3 columns, each with a checkbox. I need to order the data in the grid by the name in the first column. The problem is, I can't use an "order by" in the select statement, so I need to use APEX's "Column sorting".
The column for the name, however, isn't shown in the list to select the order by value. I only get the 3 checkboxes as an option to order it.
I tried having a copy of the name column, but this time, hidden (and not an autocomplete field), but it doesn't work either. Is there a workaround for this?
The IG will only allow you to sort by columns that have been enabled for sorting. By default, only columns of certain data types and maximum lengths are enabled for sorting, but you can override this for each column in your IG definition.
So, I've got 3 xlsx files full of data that has already been processed, so I pretty much just have to display the data using graphs. The problem seems to be that Power BI aggregates all numeric data (using count, sum, etc.). In their community they suggest creating new measures; the thing is, in that case I would have to create a lot of measures. Also, I tried converting the data to text and even so, Power BI counts it!
Any help, please?
There are several ways to tackle this:
When you pull a field into the field well for a visualisation, you can click the drop down in the field well and select "Don't summarize".
In the data model, select the column and on the ribbon select "Don't summarize" as the summarization option in the Properties group.
The screenshot shows the field well option on the left and the data model options on the right, one for a numeric and one for a text field.
And, yes, you never want to use the implicit measures, i.e. the automatic calculations that Power BI creates. If you want to keep on top of what is being calculated, create your own measures, and yes, there will be many.
Edit: If by "aggregating" you are referring to the fact that text values will be grouped in a table (you don't see any duplicates), then you need to add a column with unique values to the table so all the duplicates of the text values show up. This can be done in the data source by adding an Index column, then using that Index column in the table and setting it to a very narrow width to make it invisible.
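For example, a minimal sketch of that index step in Power Query M (the Source step here is a placeholder standing in for the real xlsx load):
let
    // placeholder source; in practice this is the step that loads the xlsx data
    Source = #table({"Comment"}, {{"n/a"}, {"n/a"}, {"ok"}}),
    // add a 1-based index so otherwise-identical text rows stay distinct in the table visual
    AddedIndex = Table.AddIndexColumn(Source, "Index", 1, 1)
in
    AddedIndex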
In Power BI, I've got some query tables generated from imported data. All the data comes in as type 'Any', and I'm trying to automatically detect the type of the data in each column.
Some of the queries generate tables with columns based on the incoming data - I don't know what the columns are going to be until the query runs and sets up the table (the data comes from an Azure blob). As I will have quite a few tables to maintain, whose columns can change (possibly with new columns being added) with any data refresh, it would be unmanageable to go through all of them each time and press 'Detect Data Type' on the columns.
So I'm trying to figure out how I can do a 'Detect Data Type' in the query formula language to attach to the end of the query that generates the table columns. I've tried grabbing the first entry in a column and doing Value.Type(column{0}), however this seems to come out as 'Text' for a column which has integers in it. Pressing 'Detect Data Type' does, however, correctly identify the type as 'Whole Number'.
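For example (a minimal sketch with a placeholder table; the real data comes from the blob import):
let
    // placeholder: a column whose values arrived as text, which would explain the behaviour
    Source = #table({"Amount"}, {{"1"}, {"2"}, {"3"}}),
    FirstValue = Source[Amount]{0},
    // reports type text, because the value itself is stored as text
    DetectedType = Value.Type(FirstValue)
in
    DetectedType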
Does anyone know how to detect a column's entry types?
P.S. I'm not too worried about a column possibly holding values of different data types
You seem to have multiple issues here, and your solution will be fragile; there's a better way. But let's first deal with column type detection. Power Query uses the 'any' data type as its go-to data type. You can write a function that samples the rows of a column in a table, does a best-match data type detection, and then explicitly sets the data type of the column. This is probably messy and tricky since you need to do it once per column.
This might be workable for a fixed schema, but for a dynamic schema you'll run into a couple of things very quickly. First, you'll need to write some crazy PQ code to list all the columns and run your function on each. This will work the first time, but might break in subsequent refreshes because data model changes are not allowed during refresh. If you're using a tool like Power BI Desktop, you'll be able to fix things up. If you publish your report to the Power BI service, you'll just see refresh errors.
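As a rough sketch of such a function (the sampling here is naive -- it looks only at the first value of each column -- and every name is a placeholder, not your real query):
let
    // guess a primitive type from a single sample value; falls back to text
    GuessType = (sample) =>
        if sample = null then type text
        else if sample is number then type number
        else if sample is date then type date
        else if sample is datetime then type datetime
        else if sample is logical then type logical
        else if (try Number.FromText(Text.From(sample)))[HasError] = false then type number
        else type text,

    // sample the first value of every column and apply the guessed types
    DetectColumnTypes = (tbl as table) as table =>
        let
            names = Table.ColumnNames(tbl),
            transforms = List.Transform(names, (n) => {n, GuessType(Table.Column(tbl, n){0})}),
            typed = Table.TransformColumnTypes(tbl, transforms)
        in
            typed,

    // placeholder input; in practice this would be the table built from the blob data
    Sample = #table({"A", "B"}, {{"1", "x"}, {"2", "y"}}),
    Result = DetectColumnTypes(Sample)
in
    Result
Even with something like this attached to the end of each query, the data-model-change restriction mentioned above still applies when the types change on refresh in the service.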
Dynamic Schemas will suffer the same data model change issue I mentioned above.
The alternative solution that avoids these problems is to use a DirectQuery data source instead of Power Query. If you load your data into Azure SQL or a Tabular Model, the reporting layer will get the updated fields automatically, so you don't have to try to work around it using PQ.
The default behaviour when importing data from a database table (such as SQL Server) is to bring in all columns and then select which columns you would like to remove.
Is there a way to do the reverse, i.e. select which columns you want from a table? Preferably without using a native SQL solution.
M:
let
    db = Sql.Databases("sqlserver.database.url"){[Name="DatabaseName"]}[Data],
    Sales_vDimCustomer = db{[Schema="Sales",Item="vDimCustomer"]}[Data],
    remove_columns = Table.RemoveColumns(Sales_vDimCustomer,{"Key", "Code","Column1","Column2","Column3","Column4","Column5","Column6","Column7","Column8","Column9","Column10"})
in
    remove_columns
The snippet above shows the connection and subsequent removal.
Compared to the native SQL way:
= Sql.Database("sqlserver.database.url", "DatabaseName", [Query="
SELECT Name,
Representative,
Status,
DateLastModified,
UserLastModified,
ExtractionDate
FROM Sales.vDimCustomer
"])
I can't see much documentation on the }[Data] value in that step, so I was hoping that maybe I could hijack that field to specify which fields to take from that data.
Any ideas would be great! :)
My first concern is that when this gets compiled down to SQL, it gets sent as two queries (as observed in ExpressProfiler).
The first query removes the selected columns and the second selects all columns.
My second concern is that if a column is added to or removed from the database, it could crash my report (additional columns in Excel tables shift your structured table reference formulas to the wrong column). This is not a problem using native SQL, as it just won't select the new column, and it would actually crash if the column was removed, which is something I would want to know about.
Ouch that was actually easy after I had another think and a look at the docs.
let
    db = Sql.Databases("sqlserver.database.url"){[Name="DatabaseName"]}[Data],
    Sales_vDimCustomer = Table.SelectColumns(
        db{[Schema="Sales",Item="vDimCustomer"]}[Data],
        {
            "Name",
            "Representative",
            "Status",
            "DateLastModified",
            "UserLastModified",
            "ExtractionDate"
        }
    )
in
    Sales_vDimCustomer
This also loaded much faster than the other way and only generated one SQL request instead of two.
I am using Visual Studio BIDS to modify an existing OLAP cube.
In SSMS: There is an underlying fact table (FactTableMain) with a very fine grain that contains 10 different measures tracking the status of an application (they act almost like flags). The measures either hold the individual's ID value or are NULL.
In SSAS Visual Studio OLAP:
There are 10 measure groups. Each measure group is based on a DSV named query that selects 1 of the FactTableMain measures where MeasureName IS NOT NULL.
A drill action for each measure group with only the PersonName and PersonID columns being returned.
The drills for each measure group:
show duplicates (as not all fact table columns are return columns for the drill)
do not return the expected number of rows that the measure count displays
I have tried:
multiple MDX conditions using filter and distinct on the drill through action, but they either make no difference or the action disappears entirely
creating a junk drill dimension that selects the distinct IDs from FactTableMain and setting that as the only return column for the drill through action (this made no difference to the rows the drill through returns)
creating a New (Standard) Action as a rowset and as a dataset, using MDX action expressions
I think I need a New (Standard) Action with an MDX Action expression with these properties:
Target type = Cells
Target object = All cells
Actions Content Type = Rowset
My current MDX query does return results, but only for the first measure's overall total and it is not formatted correctly at all. It does not work if I select a different measure in the client application, rerun the query, and drill again. I have searched and searched, but I am out of ideas and sitting in a black pit of doom. :(
My current MDX query is:
WITH
SET [person] AS
NonEmpty([person].[person].[person])
MEMBER CurrentMeasure AS
[Measures].CurrentMember
SELECT
NonEmpty
(
Filter
(
[Quarter].[Quarter].[Quarter].MEMBERS
,[Quarter].[Quarter].CurrentMember
)
) ON COLUMNS
,(
[person]
,NonEmpty([person].[person ID].[ID])
) ON ROWS
FROM [Applications];
Goal:
I would ultimately like the drill action to be dynamic enough to know the current measure the user is selecting and filtered by the user's dimension selection for rows/columns.
Questions:
Is there a way to filter distinct or non-empty rows using a condition on the original drill through action? I know there are drill limitations, but is there something that would work around the drill's limitations?
How can I create a Standard Rowset action that responds dynamically to the user's selections (my goal)?
Any ideas?
A URL action type is not an option for our business needs.
EDIT: I removed everything unnecessary from the DSV and am selecting only distinct rows. Each ID can have more than 1 application, and an application can have more than 1 area of interest. Now the drills return 1 row per ID, application, and area of interest. We only want the drill to return the distinct IDs, no matter the number of applications or areas of interest. I am not sure where to go from here. Can I filter out the application number and/or areas of interest dimensions in the drill?
I believe that you are going too fast.
The DSV should show the data without duplication in the browser. If it doesn't, go back to the DSV and check what it returns. Maybe create a view (an indexed view) on top of the fact table, so you can make sure that you query only the data that you want. Also: are you sure that your dimensions are linked correctly? Sometimes duplication appears because dimensions are not set up correctly and are linked on the wrong keys.
In MDX:
If you create a Calculation in the Calculations tab, you can drill in it. Otherwise, you'll have to write the correct MDX query each and every time.
HTH.
See the very last example at:
http://asstoredprocedures.codeplex.com/wikipage?title=Drillthrough&referringTitle=Home
You have to deploy that ASSP assembly to SSAS. It is used to pick up the current context on all attributes during execution of the action. But it will return totals by employee for whatever measure the user launched the action from.
"select {[Measures].CurrentMember} on 0, NON EMPTY [person].[person].[person].Members on 1 from (select (" + ASSP.CurrentCellAttributes([Measures].CurrentMember) + ") on 0 from [Application])"