Convert Excel formula to DAX / M - Power BI

I am trying to work out if this is possible or not in DAX or M.
Basically I want to replicate this:
=IF(T9>0,T9-1,$Q$6)
This is the formula in cell T10: it counts down by one while the value above it is greater than 0; otherwise it inserts the reset value from $Q$6 and the countdown starts again.
Here is some data and expected outcome:
When the stock on hand drops below 5000, it triggers the lead time countdown. When that hits 0, it adds stock to the SOH balance, 4000 in this case. Since the stock is below its reorder point, it starts the countdown again.

We would need the data in order to answer your question properly. If you can't share the dataset, start by adding an Index column using
Transform data > Add Column > Index Column
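
For what it's worth, below is a rough M sketch of how that countdown could be rebuilt once the Index column exists. Because each row depends on the previous row's result, a row-by-row List.Generate is one option; the source query name StockData, the lead-time constant, and the Countdown column name are assumptions, not anything taken from your workbook.

let
    // Hypothetical query holding the demand / stock-on-hand rows
    Source = StockData,
    AddedIndex = Table.AddIndexColumn(Source, "Index", 0, 1),
    LeadTime = 3,    // stands in for $Q$6
    RowCount = Table.RowCount(AddedIndex),
    // Replicates IF(T9>0, T9-1, $Q$6): count down while the previous value is above 0,
    // otherwise reset to the lead time
    Countdown = List.Generate(
        () => [i = 0, value = LeadTime],
        each [i] < RowCount,
        each [i = [i] + 1, value = if [value] > 0 then [value] - 1 else LeadTime],
        each [value]
    ),
    WithCountdown = Table.FromColumns(
        Table.ToColumns(AddedIndex) & {Countdown},
        Table.ColumnNames(AddedIndex) & {"Countdown"}
    )
in
    WithCountdown

Extending this to the full reorder logic (only starting the countdown when stock on hand drops below 5000, and adding 4000 back to the balance when it reaches 0) would mean carrying the running stock balance in the same List.Generate state record.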

Related

Summing totals from same date ranges, and getting the range value for those totals

I currently have the following data in Power BI.
These are the totals for multiple entities that are under the same location (ID 1).
What I would like to produce is the following
This is the summation of the 3 totals over time, with the start and end dates over which they applied.
I will eventually try to use this to show a trending chart over time for how the totals changed.
Is something like this even possible in Power BI and/or DAX, first producing those results and then reporting a trend line like that? The trend line in this example would have just the 3 data points.
The only thing I can think of right now is to extrapolate each range out to one row per day (per the rows in the original screenshots) and make the granularity of the chart daily instead of ranges like this. The summation then becomes a lot simpler, as it's just by ID and Date. My only concern is the data volume produced by extrapolating it out by days like that.
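
That daily-granularity fallback is quite doable in Power Query: each start/end range can be expanded to one row per day with List.Dates and then regrouped. Below is a minimal sketch, assuming a query called Ranges with columns ID, StartDate, EndDate and Total (all of these names are assumptions):

let
    // Hypothetical query holding one row per entity per date range
    Source = Ranges,
    // Expand each range into the list of individual days it covers (inclusive)
    AddedDates = Table.AddColumn(
        Source,
        "Date",
        each List.Dates(
            [StartDate],
            Duration.Days([EndDate] - [StartDate]) + 1,
            #duration(1, 0, 0, 0)
        ),
        type list
    ),
    Expanded = Table.ExpandListColumn(AddedDates, "Date"),
    // Daily totals across the entities: one row per ID per day
    DailyTotals = Table.Group(
        Expanded,
        {"ID", "Date"},
        {{"Total", each List.Sum([Total]), type number}}
    )
in
    DailyTotals

Grouping straight back to one row per ID per day keeps the expanded table from ballooning too badly, and the result can feed a daily line chart directly.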

If statement based on presence of duplicates

If the data in column A is found in multiple rows, look at the data in column C for those duplicate rows. Whichever is highest value in C, return the value from the respective row but column B. In my picture, I'm trying to populate the stuff in yellow automatically, ideally with formulas in excel. Any help is greatly appreciated.
My first attempt was this (a formula that you may copy into cell D2):
=INDEX($A$2:$C$9,MATCH(MAX(IF($A$2:$A$9=A2,$C$2:$C$9)),$C$2:$C$9,0),2)
This is what it does: the INDEX-MATCH combination does what VLOOKUP does, but more efficiently. Basically it tells Excel to navigate the $A$2:$C$9 range and then find the following match:
Find the row with the MAX price for the same Item (this part: MAX(IF($A$2:$A$9=A2,$C$2:$C$9)));
Then return whatever value is on column B, at that row.
Although this formula seemed to work, I tried something out: what if, by some unfortunate coincidence, the MAX price for two items was the same?
This is what happens when CDE888 sells for 217
Thus, one can tell the formula above is wrong and needs a fix. This is the new formula:
=INDEX($A$2:$C$9,MATCH(A2&MAX(IF($A$2:$A$9=A2,$C$2:$C$9)),$A$2:$A$9&$C$2:$C$9,0),2)
This time, the formula looks for a value that is composed of the Item code AND its highest price.
The rest works exactly as the first formula.
One last word: I wrote this formula on cell D2, then dragged the formula down.

How to make Google Sheets search a range and return -all- rows that are "partial duplicates"

I'm trying to return "duplicates" from a range. In this case, a duplicate is any row that shares the same data in the first and last columns with at least one other row (the data in the middle columns needs to be returned, but is irrelevant to the matching itself).
For a small example data set and desired output see this sheet.
My current incomplete solution path is as follows:
I use
=QUERY({SourceData!A2:E,ARRAYFORMULA(IF(LEN(SourceData!A2:A),COUNTIFS(SourceData!A2:A&SourceData!E2:E,SourceData!A2:A&SourceData!E2:E,ROW(SourceData!A2:A),"<="&ROW(SourceData!A2:A)),))},"select Col1, Col2, Col3, Col4, Col5 where Col6 > 1")
where the ARRAYFORMULA appends a rolling count column to the end of the range, and the QUERY then returns the rows of the original range where the rolling count is above 1.
However, this only gives me the subsequent rows and not the first of the duplicates. (In the example it only gives me the second row of the matching pair and not the first.)
I'm tempted to limit the QUERY output to just column 1 and then wrap that output in a JOIN to build the conditions of another QUERY. But given the size of the actual data set, and the sheer number of IMPORTRANGEs and QUERYs I've already got going, I'm starting to worry about efficiency. (I've got 12 Google Sheets documents all importing from a 13th document; the 13th then pulls and combines data from the other 12 and spits subsets of the combined data set back to each of them.) The whole thing won't be usable if a user has to wait several minutes while all the functions resolve. Plus I'm sure someone out there has a more elegant way of getting this done that would be enlightening to an amateur such as me.
Advice is appreciated! Thank you for your time.
try:
={SourceData!A1:E1;
ARRAYFORMULA(FILTER(SourceData!A2:E, REGEXMATCH(SourceData!A2:A&SourceData!E2:E,
TEXTJOIN("|", 1, FILTER(SourceData!A2:A&SourceData!E2:E,
COUNTIFS(SourceData!A2:A&SourceData!E2:E, SourceData!A2:A&SourceData!E2:E,
ROW(SourceData!A2:A), "<="&ROW(SourceData!A2:A))>=2)))))}

Calculated field subtotal in pivot table is not displaying the correct value

I am working on QuickSight in AWS and am trying to achieve a weighted average value in a pivot table.
I am using SPICE data to create this analysis.
I have created a calculated field (WAM) in the analysis with the formula "percentOfTotal(sum(upb),[{pool_num}]) * sum({remaining_terms})".
This gives me the desired value at each row level, but the subtotal of a particular column does not reflect the total of the values in the calculated field; instead it displays the sum of the original values in the "remaining_terms" field.
Please see the image below. Can someone please throw some light on this?
Thanks in advance for your help.
Please note that I have tried the same in an Excel pivot table and it works perfectly.
Try to remove the 2nd argument from the percentOfTotal function. For example, just do:
percentOfTotal(sum(upb))
I am not 100% sure this will work, but one thought is that the subtotal would match the remaining_terms value when percentOfTotal is 1 (i.e. 100%), and you may not need to provide a partition argument in a pivot table, since pivot tables implicitly provide partitions.
I have solved the problem in a different way; here is what I have done.
WAM = percentOfTotal(sum(upb),[{pool_num}]) * sum({remaining_terms}).
It looks like QuickSight treats the subtotal as a row, and the above function is applied to the subtotal as well, so it evaluates to
(1186272.5 / 1186272.5) * 31 = 31.
I have tried to produce the desired result by introducing another custom field with formula
SUM_WAM = sumOver({WAM},[{pool_num}]).
This gives me the output I need, but in a column. See the screenshot attached.

Remove Rows With Similar Values in Power BI / Power Query

I am working with a data set that has some duplicate rows. The rows are not straight duplicates, but have a time stamp less than a second apart. I'd like to remove these duplicates, but the question is how.
My current plan is to add two new columns that are copies of the time stamp column, one with a second added and one with a second subtracted. I can then add steps to remove rows which have all other values the same but whose time stamp matches another row's time stamp plus or minus one second. Doing one after the other should eliminate the duplicates without removing truly unique rows.
How can I accomplish this in Power Query?
I think your "current plan" approach is good - I would apply that in a separate Query, started "By Reference" to the original - I'd call it something like Non-duplicated time stamps.
I would duplicate the original time stamp column and then add the new +/- 1 second columns. I would use Unpivot Only Selected Columns on the 3 added time stamp columns to convert them from columns to rows. Then I would select the generated Value column and apply Keep Duplicates. That will keep just the first row of any duplicates found amongst the 3 time stamps.
Then back in the original query, I would add a Merge Queries step to connect it to the Non-duplicated time stamps query. I would match on the original time stamp column, possibly on other columns if required. The Join Kind would be Left Anti (rows only in first). That should remove your duplicates.
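
For reference, here is a rough M sketch of the helper query described above (the one created by reference to the original). The query name OriginalQuery, the Timestamp column name and the added column names are placeholders, not anything taken from the actual data.

let
    // Reference to the original query
    Source = OriginalQuery,
    // Three candidate time stamps per row: a copy of the original, plus one second, minus one second
    Duplicated = Table.DuplicateColumn(Source, "Timestamp", "TS"),
    PlusOne = Table.AddColumn(Duplicated, "TS plus 1", each [Timestamp] + #duration(0, 0, 0, 1), type datetime),
    MinusOne = Table.AddColumn(PlusOne, "TS minus 1", each [Timestamp] - #duration(0, 0, 0, 1), type datetime),
    // Unpivot Only Selected Columns on the three added time stamp columns
    Unpivoted = Table.Unpivot(MinusOne, {"TS", "TS plus 1", "TS minus 1"}, "Attribute", "Value"),
    // Keep Duplicates on the Value column: keep the rows whose candidate time stamp occurs more than once
    Counts = Table.Group(Unpivoted, {"Value"}, {{"Count", Table.RowCount, Int64.Type}}),
    DuplicateValues = List.Buffer(Table.SelectRows(Counts, each [Count] > 1)[Value]),
    KeptDuplicates = Table.SelectRows(Unpivoted, each List.Contains(DuplicateValues, [Value]))
in
    KeptDuplicates

The final step then lives back in the original query: the Merge Queries against this helper on the original time stamp column (plus any other columns required), with the Join Kind set to Left Anti (rows only in first).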