Up front: this isn't about PowerBI tiles or bringing visualizations into PowerApps. There is a PowerBI data connector that provides a method called ExecuteDatasetQuery that allows for passing in a DAX query for, ostensibly, returning the data from a published dataset. It takes three parameters: workspaceGuid, datasetGuid, and queryText (with an optional object for serializer settings).
There is no query I can send this thing that doesn't return a giant empty table, and I have no idea what I'm doing wrong. My queries, which work fine in other systems that do the same thing (JavaScript API calls, PowerAutomate, PowerBI Desktop), all produce a table with the expected number of rows but with no columns and no values. The result, viewed in PowerApps, looks like this:
And, just for fun, I've converted the return to a JSON string and can confirm that the return is...
just empty. I can find no documentation of merit for the PowerBI connector or this method, so no luck there. Just wondered if anyone's had any experience with this thing and can maybe point me in the right direction. For reference, the query I'm trying to pass in (that works everywhere else) is:
DEFINE
    VAR _reqs = SELECTCOLUMNS(
        MyTable,
        "ReqNum", [Title],
        "BusinessArea", [BusinessArea],
        "Serial1", [Serial1],
        "Serial2", [Serial2],
        "Department", [Department],
        "OM", [OM],
        "Requestor", [Requestor],
        "StrategicObjective", [ITStrategicObjective],
        "Area", [Area],
        "ProductLine", [ProductLine],
        "ProjectManager", [ProjectManager],
        "BusinessLiaison", [BusinessLiaison],
        "Customer", [Customer],
        "SolutionArchitect", [SolutionArchitect],
        "VicePresident", [VicePresident],
        "Created", DATEVALUE([Created])
    )
EVALUATE
    _reqs
ORDER BY
    [Created] DESC
But the PowerApps method returns the same empty table even with something as simple as EVALUATE(MyTable).
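For reference, the Power Apps side of the call looks roughly like this (a sketch: the GUIDs are placeholders and colResults is just a collection name I chose):

ClearCollect(
    colResults,
    PowerBI.ExecuteDatasetQuery(
        "00000000-0000-0000-0000-000000000000", // workspaceGuid (placeholder)
        "00000000-0000-0000-0000-000000000000", // datasetGuid (placeholder)
        "EVALUATE MyTable" // queryText
    )
)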
Please excuse my lack of knowledge in explaining my problem, as I have only just started learning Power BI.
I am attempting to return data by using a dynamic variable within my source URL.
I have successfully returned the data I needed from multiple queries.
However, I am trying to run a final query in which a job ID needs to be specified.
Source = Json.Document(Web.Contents("https://api.****.com/jobs/{ID}/invoices", [Headers=[Authorization="Bearer "&GetToken()]]))
With {ID} being the variable.
I have successfully returned values by hard-coding the variable.
However, I would like to make it dynamic so that it returns the values for all the job IDs within the "jobs" table.
I don't know if what I'm asking is possible, or if my explanation is good enough, but any help would be greatly appreciated!
What you are looking for is a custom function.
Make a function out of your query above by adding (ID) => as the first line and splitting the URL string so the ID can be concatenated in.
(ID) =>
let
    // Text.From guards against numeric IDs; the braces around {ID} in the
    // question's URL were just a placeholder marker, so they are not kept here
    Source = Json.Document(Web.Contents("https://api.****.com/jobs/" & Text.From(ID) & "/invoices", [Headers=[Authorization="Bearer " & GetToken()]]))
in
    Source
Of course you can add all your other transformation steps too.
Now take your JobIDs table and add a column by invoking a custom function: select the function above and take the ID parameter from your ID column.
For every row you'll get a separate table, and all that's left is simply expanding these tables into your query.
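For illustration, the invoke step generates something like this (a sketch: GetInvoices is whatever you named the function above, Jobs is your jobs table, and the expanded column names are assumptions):

let
    Source = Jobs,
    // call the custom function once per row, passing in the ID column
    #"Invoked Custom Function" = Table.AddColumn(Source, "Invoices", each GetInvoices([ID])),
    // expand the nested tables back into plain columns
    #"Expanded Invoices" = Table.ExpandTableColumn(#"Invoked Custom Function", "Invoices", {"InvoiceNumber", "Amount"})
in
    #"Expanded Invoices"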
This will solve your problem.
SOLVED USING A DIFFERENT APPROACH (see the end of this post)
I am trying to combine some queries into one by using the Table.Combine() function.
If I explicitly write the name of each query (e.g., Table.Combine({#"Name of query 1", #"Name of query 2"})) and then apply the changes, everything works fine.
However, since I want to make it dynamic, instead of writing a list of names, I pass the function a list of tables generated in a previous step:
So after I get this table, the next step is: = Table.Combine(PreviousStep[Value]). Note that Value is the name of the column that contains the tables. Doing this converts the column of tables into a list of tables. It works fine (I can preview the result set) until I hit the Apply changes button, at which point this message pops up: "We cannot convert the value null to type Table".
I had a look at these threads: https://community.powerbi.com/t5/Desktop/We-cannot-convert-the-value-null-to-type-Table/td-p/391064, https://community.powerbi.com/t5/Desktop/We-cannot-convert-the-value-null-to-type-table/m-p/346056, but they didn't help. I've tried other approaches as well.
Further information:
Power BI Desktop version: 2.106.582.0 64-bit (June 2022)
Data source: combining existing queries that come from a single Excel file.
Steps followed to get the list of tables that I pass to the Table.Combine() function:
let
    // grab every query in the file as a record of name -> value
    Origen = #sections[Section1],
    #"Convertido en tabla" = Record.ToTable(Origen),
    // drop entries whose value errors out
    #"Errores quitados" = Table.RemoveRowsWithErrors(#"Convertido en tabla", {"Value"}),
    // keep only the queries whose name starts with "COMPRAS Y GASTOS"
    Personalizado1 = Table.SelectRows(#"Errores quitados", each Text.StartsWith([Name], "COMPRAS Y GASTOS")),
    // combine the remaining tables into one
    Personalizado2 = Table.Combine(Personalizado1[Value])
in
    Personalizado2
I access all the queries I have (with the #sections keyword), convert the record to a table, remove possible errors, filter to get the queries I want (the ones starting with "COMPRAS Y GASTOS"), and then try to combine the queries.
A DIFFERENT APPROACH
What I wanted to do was merge tables that came from an Excel file, each of them referring to a year (2019, 2020, 2021, 2022). But I also wanted the combined table to update when new sheets were added in Excel (2023, 2024...).
I've tried many different approaches, like generating a dynamic list (from 2019 until the current year)... but for some reason none of them worked, even though the code apparently is correct.
So my new approach has been to create enough Excel sheets for the coming years (they are empty for now, but the information will be filled in when each new year arrives), create the queries referring to those sheets (they return empty tables), and merge those existing (but empty) tables with the ones from 2019-2022. This way, when data for 2023 is filled in on its sheet, the query picks it up and everything works.
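In code, the combine step then stays static over the pre-created queries, something like this (query names assumed from the naming pattern above):

= Table.Combine({
    #"COMPRAS Y GASTOS 2019",
    #"COMPRAS Y GASTOS 2020",
    #"COMPRAS Y GASTOS 2021",
    #"COMPRAS Y GASTOS 2022",
    #"COMPRAS Y GASTOS 2023",
    #"COMPRAS Y GASTOS 2024"
})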
It's a shame I couldn't actually solve the original problem I had, but this approach works.
I have a slight issue with my tables in Power BI. In short, I have a missing link in one of my relationships. As a result, instead of returning NOTHING, which is logical and actually what I would like, it returns EVERYTHING.
In a bit more detail: I have multiple tables with relationships between them. The problem is that I have a few task_groups pointing toward shipments that do not exist. In my visualization, I am trying to access data linked to a shipment (a count of the number of packages linked to that shipment). The logical behavior for me would be: "If there is no shipment matching the number given in the shipment table, then you cannot count the number of packages linked to that shipment."
But Power BI begs to differ. Its idea is: "If I cannot find a shipment to link to a package, I'm going to take every single package regardless of shipment." As a result, a group of tasks that does not have any packages ends up showing as having all the packages instead. How can I tell Power BI to return nothing when it doesn't find anything, instead of returning everything?
Image of my relationships
I think Power BI behaves slightly unintuitively when there are nulls on one side of a join.
Have you tried filtering to only include rows where shipment_id is not blank?
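In Power Query that would be something like this (a sketch; the table and column names are assumed):

= Table.SelectRows(Shipments, each [shipment_id] <> null)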
If the problem is you having NULLs on one side of the relationship, the best way to tackle this would be to replace the NULLs with something else. You can do it in two ways:
Edit the shipment-number NULLs to something else in Power Query while importing (some number that is unlikely to be an actual shipment, maybe 0)
Create a calculated column in DAX replacing the blanks/NULLs and use that in the relationship instead
But I think you may have NULLs on both sides of the relationship. That is the only explanation I can think of for why Power BI is behaving this way. Either way, the above solutions should fix it.
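For option 1, a minimal Power Query sketch (table and column names assumed):

= Table.ReplaceValue(Shipments, null, 0, Replacer.ReplaceValue, {"shipment_id"})

For option 2, a DAX calculated column along the lines of ShipmentKey = COALESCE(Shipment[shipment_id], 0) would do the same on the model side.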
I'm using Pentaho PDI 7.1. I'm trying to convert data from MySQL to MySQL, changing the structure of the data.
I'm reading the source table (customers), and for each row I have to run another query to calculate the balance.
I was trying to use a Database lookup step to accomplish this, but maybe it is not the best way.
I have to run a query like this to get the balance:
SELECT SUM(CASE WHEN direzione = 'ENTRATA' THEN -importo ELSE +importo END)
FROM Movimento
WHERE contoFidelizzato_id = ?
I need to set the parameter, taking its value from the previous step. Any advice?
The Database lookup step may be a good idea, especially if you are used to database reasoning, but it may result in many queries, which may not be the most efficient approach.
A more PDI-ish style would be to write the query like:
SELECT contoFidelizzato_id
     , SUM(CASE WHEN direzione = 'ENTRATA' THEN -importo ELSE +importo END) AS balance
FROM Movimento
GROUP BY contoFidelizzato_id
and use it as the info source of a Stream lookup step.
An even more PDI-ish style would be to split the source table (customer) into two flows: one that keeps the source rows, and one that you group by contoFidelizzato_id. Of course, you need a Formula step, some JavaScript, or a formula in the SQL of the Table input to change the sign when needed (see the sketch below).
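For that last variant, the Table input SQL could pre-sign the amount so a plain Group by step can sum it (importo_signed is a name I made up):

SELECT contoFidelizzato_id,
       CASE WHEN direzione = 'ENTRATA' THEN -importo ELSE +importo END AS importo_signed
FROM Movimento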
Test to find out which strategy is better in your case. You'll soon discover that PDI is very good at handling large data volumes.
What I have is a BigQuery table (>5 million rows).
I need to fetch this data in batches and process it inside App Engine, in Python.
The only way I know to fetch from a table is to run a SELECT query on it and then iterate over the result using the page tokens fetch_data returns.
It looks like this (client is a google-cloud-bigquery Client):
import uuid

query = u"""\
SELECT url FROM %s
""" % (query_table)

# run the query asynchronously; wait_for_job is a local helper that polls until the job finishes
query_job = client.run_async_query(str(uuid.uuid4()), query)
query_job.begin()
wait_for_job(query_job, 1)

query_results = query_job.results()
rows, total_rows, next_token = query_results.fetch_data(max_results=per_page, page_token=page_token)
This works on smaller tables, but on larger ones like mine it asks me to allow large results and specify a destination table. That makes no sense to me. To simply fetch data from a table, I have to copy it to another table?
What you are running into is described in this documentation. In summary, apart from the limit on how much data can be fetched at a time, there is a point where your results become "large results," namely when they are more than 128MB compressed, as described here. When your results are classified as large, you can only store the result of a query in a table in BigQuery.
Unfortunately, I'm not sure there's a nice way to do what you want without reducing how many rows you are retrieving at once. What you'll likely need to do is explore the exporting data documentation for BigQuery.
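If you do go the destination-table route, a rough sketch with the same pre-1.0 google-cloud-bigquery client as in your snippet (the dataset/table names are placeholders, and I haven't verified this against your setup):

import uuid

# configure the job to write its (large) result to a destination table
query_job = client.run_async_query(str(uuid.uuid4()), query)
query_job.destination = client.dataset('my_dataset').table('query_result')
query_job.allow_large_results = True
query_job.write_disposition = 'WRITE_TRUNCATE'
query_job.begin()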
You should use the tabledata.list API for fetching data from a table.
Using the startIndex or pageToken parameters together with maxResults, you can control the size of the page you fetch.
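A rough sketch using the plain REST client (google-api-python-client), since tabledata.list is a straight REST method; the project/dataset/table IDs are placeholders, auth setup is omitted, and process() stands in for your own handling:

from googleapiclient.discovery import build

# build a client for the BigQuery v2 REST API (credentials setup omitted)
service = build('bigquery', 'v2')

page_token = None
while True:
    # fetch one page of rows straight from the table, no query involved
    resp = service.tabledata().list(
        projectId='my-project',
        datasetId='my_dataset',
        tableId='my_table',
        maxResults=500,
        pageToken=page_token,
    ).execute()
    for row in resp.get('rows', []):
        process(row)  # each row looks like {'f': [{'v': value}, ...]}
    page_token = resp.get('pageToken')
    if not page_token:
        break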
I think this link is exactly what you need. As far as I understood, you can't get a large query result directly, but you can get the entire table data to your app no matter how big it is. That's why you need to put the large result in a table and then fetch that table's data to your app and do whatever you want with it.
Good luck :)