I have a problem where I can't create a relationship between two columns of two tables. It says one of the columns must have unique values. It should: I made sure I queried distinct values, I trimmed the data, and I changed everything to lowercase. I even used Remove Duplicates in Power Query (both with all the columns selected and on the single column alone).
The problem still exists and I can't create the relationship.
Does anyone have any tips?
Ok, so I'm ashamed to say this, but I spent 6 hours on this today. And then I realised the message might just be wrong and something else was going on. There was a row where the value was blank, an empty string "". That caused the problem, and it wasn't even duplicated. Removing the blanks in Power Query didn't work either.
So I removed the "" directly in the query and now everything works as it should.
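For anyone hitting the same thing and wanting to keep the fix inside Power Query rather than the source query, the cleanup boils down to a step like this (the query and column names below are placeholders, not my real ones):

let
    // Placeholder for the dimension query whose key column feeds the relationship.
    Source = DimTable,
    // Keep only rows whose key is neither null nor the empty string "",
    // so the "one" side of the relationship contains real, unique keys.
    RemovedBlankKeys = Table.SelectRows(Source, each [KeyColumn] <> null and [KeyColumn] <> ""),
    // Deduplicate on the key column afterwards, as before.
    RemovedDuplicates = Table.Distinct(RemovedBlankKeys, {"KeyColumn"})
in
    RemovedDuplicates

No guarantee this behaves exactly like the generic blank-removal options I had tried (which didn't work for me), but an explicit filter on the empty string is the closest Power Query equivalent of what I ended up doing in the query itself.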
Power BI newbie question here.
Whenever I add a Group By step with a Text.Combine() or a Max() aggregate, applying changes or refreshing data results in the aforementioned exception.
My data source is a D365 Dataverse connection, and all queries run just fine until I add a step to group and aggregate. As an example, starting from a very simple query with 2 columns (demandId, kor_subcontractorbillnumber), I want to concatenate all the bill numbers related to a given demandId into a CSV column:
= Table.Group(#"Table Buffer", {"demandId"}, {{"BillNumbers", each Text.Combine([kor_subcontractorbillnumber],", "), type nullable text}})
As seen in the attached screenshot, the preview on screen looks correct: the expected result is displayed in the BillNumbers column, and no error is reported in the column quality indicators. All is fine... until I click Apply, which raises the exception.
I tried to clean the columns as much as possible before grouping (removing empty values, errors, duplicates, etc.), and I also added an extra step to buffer the results in a table before grouping, but with no luck.
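For completeness, the buffering step referenced in the Table.Group call above is nothing more than the following, where #"Previous Step" is a placeholder for whatever step precedes it:

#"Table Buffer" = Table.Buffer(#"Previous Step")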
Browsing through SO, I found that similar issues could be related to:
Wrong relationship cardinalities: this doesn't seem to apply here, since everything is correct in the buffered table until I group.
Power BI Desktop update: some users have reported in the past that an update broke something and produced the same exception. In my case, the issue started occurring after upgrading to the July 2022 version, and unfortunately it seems I can't downgrade to a previous version. I started using Power BI in June and don't have enough experience to tell whether the July update actually broke something, though some reports did stop working shortly after the update.
Even stranger: if I remove the last step (Group By) and create a new query referencing this one... I can add a Group By step and apply my changes, until I refresh my report: at that point all the queries fail with the same exception, even those completely unrelated to my changes.
Could anyone explain what I'm doing wrong, or tell me whether you have experienced the same behavior with the latest version of Power BI Desktop (2.107.841.0 64-bit)? Anything pointing me in the right direction would help.
Thanks for your help!
After many tries, I eventually stumbled upon a workaround: instead of the Group By step, I clicked on the very last step of my query and selected 'Extract Previous'. This created a new query (result of all previous steps), and I was able to perform my Group By on this new query without any errors.
I have no idea how this is different from adding the Group By at the end of the first query... but the exception is gone. It feels like a code smell anyway. I'm marking my own question as answered in case it helps someone, but I'd be more than happy if someone could shed some light on the underlying reason for this issue.
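For anyone wanting to reproduce the workaround by hand rather than through 'Extract Previous': all the earlier steps end up in a base query, and the grouping lives in a query that merely references it. Roughly (the base query name below is illustrative, the columns are the ones from my example):

let
    // Reference to the base query produced by 'Extract Previous'; it contains
    // every step up to, but not including, the Group By.
    Source = #"Demands base",
    // The grouping now happens here, in the referencing query.
    Grouped = Table.Group(Source, {"demandId"},
        {{"BillNumbers", each Text.Combine([kor_subcontractorbillnumber], ", "), type nullable text}})
in
    Grouped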
I'm using Django 2.2 and my question is: does transaction.atomic roll back increments to a pk sequence?
Below is the write-up of the background bug that led me to this question.
I'm facing a really weird issue that I can't figure out and I'm hoping someone has faced a similar issue.
An insert using the django ORM .create() function is returning django.db.utils.IntegrityError: duplicate key value violates unique constraint "my_table_pkey" DETAIL: Key (id)=(5795) already exists.
Fine. But then I look at the table and no record with id=5795 exists!
SELECT * from my_table where id=5795;
shows (0 rows)
A look at the sequence my_table_id_seq shows that it has nonetheless incremented to last_value = 5795, as if the above record had been inserted. Moreover, the issue does not always occur: a subsequent successful insert with different data goes in at id=5796. (I tried resetting the pk sequence, but that didn't do anything, since it doesn't seem to be the problem anyway.)
I'm quite stumped by this, and it has caused us a lot of issues on one specific table. Finally I realized the call is wrapped in transaction.atomic and that a particular scenario may be causing a double insert with the same pk.
So my theory is: transaction.atomic is not rolling back the increment of the sequence.
Postgres sequences do not roll back. Every time they are touched by a statement, they advance, whether the statement succeeds or not. For more information, see the Notes section of CREATE SEQUENCE in the Postgres documentation.
I have a slight issue with my tables in Power BI. In short, I have a missing link in one of my relationships. As a result, instead of returning NOTHING, which would be logical and is actually what I want, it returns EVERYTHING.
In a bit more detail: I have multiple tables with relationships between them. The problem is that I have a few task_group rows pointing to shipments that do not exist. In my visualization, I am trying to show data linked to a shipment (a count of the number of packages linked to that shipment). The logical behavior for me would be: "If there is no shipment matching the number given in the shipment table, then you cannot count the number of packages linked to that shipment."
But Power BI begs to differ. Its idea is: "If I cannot find a shipment to link to the package, I'm going to take every single package regardless of shipment." As a result, a task group that doesn't have any packages ends up showing as having all the packages instead. How can I tell Power BI to return nothing when it doesn't find anything, instead of returning everything?
Image of my relationships
I think Power BI behaves slightly unintuitively where there are nulls on one side of a join.
Have you tried filtering to only include where shipment_id is not blank?
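If you would rather do that filtering at load time than in the report, something along these lines in Power Query should do it (task_group and shipment_id are guesses at your table and column names):

let
    // Guess at the table that carries the shipment reference.
    Source = task_group,
    // Keep only rows that actually point at a shipment, so the blank keys
    // never reach the relationship.
    Filtered = Table.SelectRows(Source, each [shipment_id] <> null)
in
    Filtered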
If the problem is that you have NULLs on one side of the relationship, the best way to tackle this is to replace the NULLs with something else. You can do it in two ways:
Replace the NULL shipment numbers with something else in Power Query while importing (some number that is unlikely to be an actual shipment, maybe 0); see the sketch at the end of this answer.
Create a calculated column in DAX that replaces the blanks/NULLs, and use that column in the relationship instead.
But I think you may have NULLs on both sides of the relationship; that is the only explanation I can think of for why Power BI is behaving this way. Either way, the solutions above should fix it.
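A rough sketch of the first option in Power Query (Shipment and shipment_id are placeholder names, and 0 stands in for "not a real shipment"):

let
    // Placeholder for the query whose key column contains the NULLs.
    Source = Shipment,
    // Replace null shipment numbers with a sentinel value that will never
    // collide with a real shipment, so the relationship key is never blank.
    ReplacedNulls = Table.ReplaceValue(Source, null, 0, Replacer.ReplaceValue, {"shipment_id"})
in
    ReplacedNulls

The DAX route works the same way conceptually: build a column that substitutes a sentinel for the blanks, and base the relationship on that column instead.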
I have a problem in my cube with a dimension used as a slicer or a filter. The dimension has a no (number) and a name, and I want to slice on name, but some of the values are missing. It is the same problem if I use no. If I use no as a filter and use advanced filtering to search for the specific no, the data shows up, but the slicer for name is still empty. The strange thing is that I can see the no and name just fine in the data area, only not in the filter or in a slicer.
I have the same problem in Power BI Desktop and in the online version.
Edited:
I have several fact tables using the same dimensions. I have found that disabling a specific relationship between one fact table and one of the dimensions makes the problem disappear. The number of values in the slicer is now a bit too high, but at least it no longer excludes values I can see. The only problem is that I need that relationship. I have checked whether there is a problem with the values used in the relationship, such as missing or null values, but all the values are there.
My colleague found the solution for my dashboard: when you go to the X-axis options, there are Start and End boxes. In my End box there was a number, which caused the last value to disappear. When I removed that number (and Auto appeared), my graph was complete again.
Removing the problematic fact table and adding it back again solved the problem.
It was quite hard to find out what was causing the problem in the first place. I went about it by first creating a minimal cube to make sure that the core data was correct. After verifying that, I opened the problematic cube and started removing facts and dimensions until the problem disappeared.
I am trying to retrieve a single row from a table. This row contains fields that hold foreign keys into another table, which in turn is related to yet another table. I want just one row returned, yet the problem is that it returns not only the row but ALL the objects that are related to that table as well. As I have to deal with a fairly large amount of data, the returned object is very cumbersome, since it contains all the related data too. In some cases my script simply times out because there is just far too much data to grab.
My question is: is there a way to retrieve just a single record without all the associated fluff? I am basically accessing the table via the entityManager from the repository, then trying to get my record using the ->find($id) method.
I am sure this is something stupidly simple but I can't seem to figure this out. Thanks in advance for any help, it is much appreciated.
Doctrine 2 uses "lazy loading", which means that the associated objects are not actually retrieved from the database until you try to access them.
So find($id) is just fine.