"sequence contains no matching element" on Group By operations in Power Query - powerbi

Power BI newbie question here.
Whenever I add a Group By step with a Text.Combine() or a Max() aggregate, applying changes or refreshing data results in the aforementioned exception.
My data source is a D365 Dataverse connection, and all queries run just fine until I add a step to group and aggregate. As an example, starting with a very simple query with 2 columns (demandId, kor_subcontractorbillnumber), I want to concatenate into a CSV column all bill numbers related to a given demandId:
= Table.Group(#"Table Buffer", {"demandId"}, {{"BillNumbers", each Text.Combine([kor_subcontractorbillnumber],", "), type nullable text}})
As seen in the attached screenshot, the preview on screen seems correct: the expected result is displayed in the BillNumbers column, and no error is reported in the column quality indicators. All is fine... until I click Apply, which raises the exception.
I tried to clean the columns as much as possible before grouping (removing empty values, errors, duplicates, etc.), and I also added an extra step to store the results in a table buffer before grouping, but with no luck.
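For reference, the buffering attempt looks roughly like this in M (the #"Removed Duplicates" step name is only a guess at whatever cleanup step comes before it):
// Buffer the cleaned table in memory, then group on the buffered copy
#"Table Buffer" = Table.Buffer(#"Removed Duplicates"),
#"Grouped Rows" = Table.Group(#"Table Buffer", {"demandId"}, {{"BillNumbers", each Text.Combine([kor_subcontractorbillnumber], ", "), type nullable text}})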
Browsing through SO, I found that similar issues could be related to:
Wrong relationship cardinalities: this does not apply here, I guess, since everything is correct in the buffered table until I group.
Power BI Desktop updates: some users have reported in the past that an update broke something and produced this same exception. In my case the issue started occurring after upgrading to the July 2022 version, and unfortunately it seems I can't downgrade to a previous version. I only started using Power BI in June and don't have enough experience to tell whether the July update actually broke something, though some reports stopped working shortly after the update.
Even stranger: if I remove the last step (Group By) and create a new query referencing this one... I can add a Group By step and apply my changes... until I refresh my report, at which point all the embedded queries fail with the same exception, even those absolutely unrelated to my changes.
Could anyone explain what I'm doing wrong, or say whether you have experienced the same behavior with the latest version of Power BI Desktop (2.107.841.0 64-bit)? Anything that could point me in the right direction?
Thanks for your help!

After many tries, I eventually stumbled upon a workaround: instead of adding the Group By step, I clicked on the very last step of my query and selected 'Extract Previous'. This created a new query (the result of all previous steps), and I was able to perform my Group By on this new query without any errors.
I have no idea how this is different from adding the Group By at the end of the first query... but the exception is gone. It's kind of a code smell anyway... I'm marking my own question as answered in case it helps someone, but I'd be more than happy if someone could shed some light on the underlying cause of this issue.
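For anyone wanting to reproduce the workaround: after 'Extract Previous' you end up with two queries, roughly as sketched below (the name DemandBills Base is illustrative; it stands for the extracted query that ends at the buffered table). The second query references the first and only does the grouping:
let
    // Reference the extracted base query (hypothetical name)
    Source = #"DemandBills Base",
    #"Grouped Rows" = Table.Group(Source, {"demandId"}, {{"BillNumbers", each Text.Combine([kor_subcontractorbillnumber], ", "), type nullable text}})
in
    #"Grouped Rows"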

Related

POWER QUERY [Expression.Error] Cannot convert the value null to type Table

SOLVED USING A DIFFERENT APPROACH (see the end)
I am trying to combine some queries into one by using the Table.Combine() function.
If I explicitly write the name of each query (e.g., Table.Combine({#"Name of query 1", #"Name of query 2"})) and then apply the changes, everything works fine.
However, since I want to make it dynamic, instead of writing a list of names I pass the function a list of tables generated in a previous step:
So after I get this table, the next step is: = Table.Combine(PreviousStep[Value]). Note that Value is the name of the column that contains the tables. Apparently, doing so converts this column of tables into a list of tables. This works fine (I can preview the result set) until I hit the Apply changes button. When I do, the error message pops up:
I had a look at these threads: https://community.powerbi.com/t5/Desktop/We-cannot-convert-the-value-null-to-type-Table/td-p/391064 and https://community.powerbi.com/t5/Desktop/We-cannot-convert-the-value-null-to-type-table/m-p/346056, but they didn't help. I've tried other approaches as well.
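Given that the error complains about a null value, one thing that may be worth trying (a sketch, not something verified against this exact data) is dropping nulls from the list before combining; List.RemoveNulls is a standard M function:
// Remove any null entries so Table.Combine only sees tables
= Table.Combine(List.RemoveNulls(PreviousStep[Value]))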
Further information:
Power BI Desktop version: 2.106.582.0 64-bit (June 2022)
Data source: combining existing queries that come from a single Excel file.
Steps followed to get the list of tables that I pass to the Table.Combine() function:
let
    // Get a record of every query in the file and turn it into a table
    Origen = #sections[Section1],
    #"Convertido en tabla" = Record.ToTable(Origen),
    // Drop rows whose Value column contains an error
    #"Errores quitados" = Table.RemoveRowsWithErrors(#"Convertido en tabla", {"Value"}),
    // Keep only the queries whose name starts with "COMPRAS Y GASTOS"
    Personalizado1 = Table.SelectRows(#"Errores quitados", each Text.StartsWith([Name], "COMPRAS Y GASTOS")),
    // Combine the filtered tables into one
    Personalizado2 = Table.Combine(Personalizado1[Value])
in
    Personalizado2
I access all the queries I have (with the #sections keyword), convert the record to a table, remove possible errors, filter to get the queries I want (the ones starting with "COMPRAS Y GASTOS") and then try to combine them.
A DIFFERENT APPROACH
What I wanted to do was merge tables that came from an Excel file, each of them referring to a year (2019, 2020, 2021, 2022). But I also wanted the combined table to update when new sheets were added in Excel (2023, 2024...).
I've tried many different approaches, like generating a dynamic list (from 2019 until the current year, roughly as sketched below)... but for some reason none of them worked, even though the code is apparently correct.
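For context, the dynamic-list idea looked something like this (the per-year query naming "COMPRAS Y GASTOS <year>" is illustrative, not my exact code):
let
    // Years from 2019 up to the current year
    CurrentYear = Date.Year(DateTime.LocalNow()),
    Years = List.Numbers(2019, CurrentYear - 2019 + 1),
    // One query per year, named "COMPRAS Y GASTOS <year>" (assumed naming)
    Names = List.Transform(Years, each "COMPRAS Y GASTOS " & Number.ToText(_)),
    // Look each query up in the workbook's sections record and combine them
    Tables = List.Transform(Names, each Record.Field(#sections[Section1], _)),
    Combined = Table.Combine(Tables)
in
    Combined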
So my new approach has been to create a sufficient number of Excel sheets for the coming years (they are empty now, but the information will be filled in when each new year arrives), to create the queries referring to those sheets (they return empty tables), and to merge those existing (but empty) tables with the ones from 2019-2022. This way, when data for 2023 is filled in on its sheet, the query picks it up and it works.
It's a shame I couldn't actually solve the original problem I had, but this approach works.

POWERBI: takes whole table when there is no available relation

I have a slight issue with my tables in Power BI. In short, I have a missing link in one of my relations. As a result, instead of returning NOTHING, which would be logical and is actually what I would like, it returns EVERYTHING.
In a bit more detail: I have multiple tables with relations between them. The problem is that I have a few task_groups pointing toward shipments that do not exist. In my visualization, I am trying to access data linked to a shipment (a count of the number of packages linked to it). The logical behavior for me would be: if there is no shipment matching the number given in the shipment table, then you cannot count the number of packages linked to that shipment.
But Power BI begs to differ. Its idea is: "If I cannot find a shipment to link to a package, I'm going to take every single package regardless of shipment." As a result, a task group that does not have any packages ends up showing as having all the packages instead. How can I tell Power BI to return nothing when it doesn't find anything, instead of returning everything?
Image of my relationships
I think Power BI behaves slightly unintuitively where there are nulls on one side of a join.
Have you tried filtering to only include rows where shipment_id is not blank?
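In Power Query that filter would look something like this (a sketch; shipment_id is the column name assumed from the question, so adjust it to your model):
// Keep only rows that actually have a shipment number
= Table.SelectRows(Source, each [shipment_id] <> null)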
If the problem is that you have NULLs on one side of the relationship, the best way to tackle this would be to replace the NULLs with something else. You can do it in two ways:
Edit the shipment-number NULLs to something else in Power Query while importing (some number that is unlikely to be an actual shipment, maybe 0); see the sketch below
Create a calculated column in DAX that replaces the blanks/NULLs, and use that in the relationship instead
But I think you may have NULLs on both sides of the relationship. That is the only explanation I can think of for why Power BI is behaving this way. Either way, the above solutions should fix it.
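For the first option, a minimal Power Query sketch (again assuming the column is called shipment_id):
// Replace NULL shipment numbers with a dummy value that no real shipment uses
= Table.ReplaceValue(Source, null, 0, Replacer.ReplaceValue, {"shipment_id"})
Every NULL shipment number becomes 0, so the relationship has a concrete dummy value to match on, and the 0 bucket can then be filtered out of the visuals.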

Kibana: can I store "Time" as a variable and run a consecutive search?

I want to automate a few searches in one; here are the steps:
Search in Kibana for this ID: "b2c729b5-6440-4829-8562-abd81991e2a0", which will return a bunch of logs. Of these logs I need to take the first and the last timestamp:
I would now like to store these two values, FROM: September 3rd 2019, 21:28:22.155 and TO: September 3rd 2019, 21:28:23.524, in 2 variables
Run a second search in Kibana for the word "fail" between these two time variables
How can I automate the whole process without needing to copy/paste and run a second query?
EDIT:
SHORT STORY LONG: I work at a company that produces software for autonomous vehicles.
SCENARIO: A booking is rejected and we need to understand why.
WHERE IS THE PROBLEM: I need to monitor just a few seconds of logs on 3 different machines. Each log is completely separate; there is no relation between the logs, so I cannot write one query in Discover. I need to run 3 separate queries.
EXAMPLE:
A booking was rejected, so I open Chrome and search on "elk-prod.myhost.com" for the BookingID "b2c729b5-6440-4829-8562-abd81991e2a0", and I get a dozen logs returned over a range of 2 seconds (FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524).
Now I need to know what was happening on the car, so I open a new Chrome tab and search on "elk-prod.myhost.com" for the CarID "Tesla-45-OU" over the time range FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524.
Now I need to know why the server that calculates the matching rejected the booking, so I open a new Chrome tab and search for the word CalculationMatrix, again over the time range FROM: September 3rd 2019, 21:28:22.155, TO: September 3rd 2019, 21:28:23.524.
CONCLUSION: I want to stop opening Chrome tabs by hand and automate the whole thing. I have no idea around what time the booking was made, so I first need to search for the BookingID "b2c729b5-6440-4829-8562-abd81991e2a0", then store the timestamps of the first and last log, and run a second and third query based on those timestamps.
There is no relation between the 3 logs I search for, so there is no way to filter from Discover; I need to automate 3 different queries.
Here is how I would do it. First of all, from what I understand, you have three different indexes:
one for "bookings"
one for "cars"
one for "matchings"
First, in Discover, I would create three saved searches, one per index pattern. Then, in Visualize, I would create a vertical bar chart on the bookings saved search (bucket the X-axis with a date_histogram on the timestamp field; leave the rest as is). You'll get a nice histogram of all your booking events, bucketed by time.
Finally, I would create a dashboard and add the vertical bar chart plus those three saved searches to it.
When that's done, the way I would search, following the process you've described above, is as follows:
Search for the booking ID b2c729b5-6440-4829-8562-abd81991e2a0 in the top filter bar. In the bar chart histogram (bookings), you will see all documents related to the selected booking. On that chart, you can select the exact period from when the very first booking document happened to the very last. This will adapt the main time picker at the top, and the start/end times will be "remembered" by Kibana.
Remove the booking ID from the top filter (since we now know the time range and Kibana stores it). Search for Tesla-45-OU in the top filter bar. The bar histogram, the bookings saved search, and the matchings saved search will be empty, but you'll have data inside the second list, the one for cars. Find whatever you need to find in there and go to the next step.
Remove the car ID from the top filter and search for CalculationMatrix. Now the third saved search is going to show you whatever documents you need to see within that time range.
I'm lacking realistic data to try this out, but I definitely think this is possible as laid out above, probably with some adaptations.
Kibana does work like this (any order is OK):
Select the time filter: https://www.elastic.co/guide/en/kibana/current/set-time-filter.html
Add additional search criteria, for example: field s is b2c729b5-6440-4829-8562-abd81991e2a0.
Add additional search criteria, for example: field x is Fail.
Additionally, you can view surrounding documents: https://www.elastic.co/guide/en/kibana/current/document-context.html#document-context
This is how Kibana works.
You can prepare some filters beforehand, save them, and then use them if you want to somehow automate the discovery process.
You can do that in the Discover tab in Kibana using the New/Save/Open options.
Edit:
I do not think you can achieve what you need in Kibana. As I mentioned earlier, one option is to change the data that is coming into Elasticsearch so you can search for it via Discover in Kibana. Another option could be building, for example, a Java application that uses Elasticsearch; then you can write an algorithm that returns the data you want. But I think that's a big overhead, and I recommend checking the data first.
Edit: To clarify, you can create an external Java (say, Spring Boot) application that uses Elasticsearch, where all the data you need is available.
But with this option you will not use Kibana at all.
You can export the result to CSV, or to whatever you want, in the code.
The Spring Boot application can ask Elasticsearch for whatever it needs, and then it is easy to store these time variables inside the Java code.
EDIT: After the OP edited the question to change it dramatically:
@FrancescoMantovani Well, the edited version is very different from what you first posted here: "How to automate the whole process without need of copy/paste and running a second query?", with a search for the word fail in a single shot. In the accepted answer you are still using three filters, one at a time, so it is not one search but three.
What's more, if you used one index and sent data from multiple hosts via Filebeat, you wouldn't even have to create this dashboard to do that. You could select the exact period from when the very first document happened to the very last for a filter, then remove it and add the next filter you need; it's as simple as that. Before, you were writing about one query,
How to automate the whole process without need of copy/paste and
running a second query?
not three. And you don't need to open a new Chrome tab each time you want to change a filter; just organize the data, for example by using Filebeat as mentioned before.
There is no relation between the 3 logs
From what you wrote, the relation does exist, and it is time.
If the data is in, for example, three different indices (because the documents don't share much similar data), you can do it like this:
Go to Discover, select index 1, search, and select the time range that you need. When you change the index, the time range is still the one you selected; you only need to change the filter, and you will get what you need.

Missing values in PowerBI slicer

I have a problem in my cube with a dimension value used as a slicer or a filter. The dimension has a number (no) and a name, and I want to slice on name, but some of the values are missing. It is the same problem if I use no. If I use no as a filter and use an advanced filter to search for the specific no, the data works, but the slicer for name is still empty. The strange thing is that I can see the no and name fine in the data area, just not in the filter or in a slicer.
I have the same problem in Power BI Desktop and in the online version.
Edited:
I have several facts using the same dimensions. I have found that disabling a specific relationship from a fact to one of the dimensions makes the problem disappear. The number of values in the slicer is now a bit too large, but at least it no longer excludes some of the values I can see. The only problem is that I need the relationship. I have checked whether there could be a problem with the values used in the relationship, such as missing or null values; all the values are there.
My colleague found the solution for my dashboard: when you go to the options of the X axis, there are Start and End boxes. In my End box there was a number, which caused the last value to disappear. When I removed this number (and Auto appeared), my graph was complete again.
Removing and re-adding the problematic fact table solved the problem.
It was quite hard to find out what was causing the problem to begin with. I went about it by first creating a minimal cube to make sure that the core data was correct. After verifying that, I opened the problematic cube and started removing facts and dimensions until the problem disappeared.

Sitecore Fast Query Get Descendants Query Syntax Inconsistent

I came across an old project originally written for Sitecore 6.4 and since updated to Sitecore 7.2.
There is a fast query that does not return results:
1. fast:/sitecore/content/Home/About Us/News//*[@@templatename='Newsletter']
I tried to tweak the query, and these two work fine:
2. fast:/sitecore/content/Home/About Us/News/descendant::*[@@templatename='Newsletter']
3. fast:/sitecore/content/Home/About Us/News/Newsletters//*[@@templatename='Newsletter']
The Newsletter items are not direct children of the Newsletters item either; there is another layer in between.
So why does query 1 return nothing while queries 2 and 3 return exactly what I need?
Check your web.config setting for FastQueryDescendantsDisabled.
Rebuild your Descendants table via Control Panel > Databases > Clean Up Databases.
Reference: http://sdn.sitecore.net/upload/sdn5/developer/using%20sitecore%20fast%20query/using%20sitecore%20fast%20query001.pdf
Check to see if the number of descendants of "News" is under the maximum returned items size set in the web.config (<setting name="Query.MaxItems" value="100" />).
Even though you aren't returning all of those items when looking at the descendants, it is possible that the older version of Sitecore only considers the max number of items (which would be a bug). The fact that query 3 works and query 1 doesn't says to me that this is likely the issue. Query 2 is trying to do the same thing, and queries 1 and 3 happen to use the same syntax. Since all three are meant to do exactly the same thing (the only difference being that query 3 looks from the deeper root item /News/Newsletters), I would expect that this is a bug.
You can test this by setting Query.MaxItems to a ridiculously high number, like 5000 (note that you should change the value back after testing, as a high value can greatly degrade performance). If query 1 now returns items, then this is your issue. Otherwise, try setting the value higher. If it still doesn't return results after that, then this is not your issue.
Let me know if you have any questions. Good luck and happy coding! :)