Hi, I have an RTF file which takes around 3 hrs 15 mins to generate the PDF report. I want to reduce this time; the template is built using BI Publisher in MS Word.
I expect it to take much less time than 3 hrs 15 mins.
In this RTF I have created a repeating group within a repeating group and display the data with certain manipulations.
Sample XML:
Tested on a Linux machine.
Machine details:
Number of cores: 16
RAM: 65.80 GB
I have a Power BI heat map that measures the uptime of equipment for the current hour and each of the 8 hours prior:
The problem lies in the fact that a given hour may not always have records; in the above example, the machine was running from 4 AM to 6 AM non-stop and did not have any records for the 5 AM hour to indicate that it had been up and running that entire hour. The -3 hour should be showing 100% but it is incorrectly showing 0%.
This visualization has a separate measure for each hour, calculating the uptime for each hour. For example, Hour -3 has a calculation as follows:
-3 = IF(ISBLANK(CALCULATE(COUNTROWS(UptimeCombined), FILTER(UptimeCombined, UptimeCombined[Hours_Offset] = 3))) = TRUE,
(CALCULATE('UptimeCombined'[Uptime %], FILTER(UptimeCombined, UptimeCombined[Hours_Offset] > 3 && UptimeCombined[ShiftDateTime] = MAX(UptimeCombined[ShiftDateTime])))),
(CALCULATE('UptimeCombined'[Uptime %], FILTER('UptimeCombined', 'UptimeCombined'[Hours_Offset] = 3))))
If the row count comes back BLANK, then I need to look backward in time (the latest record where Hours_Offset > 3), find the status of that last record (either "RN" or "DN"), and display the corresponding uptime value for that hour (100% or 0%, respectively). That is the part of the measure I can't get to work properly; the -3 hour should show 100% because the latest record before then (in the -4 hour) had a status of "RN". Here is what the data looks like:
What is the correct DAX I need in Line 2 of this measure to set the uptime to either 100% or 0%, based on the latest record from before that hour?
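In other words, the fallback I'm trying to express looks roughly like the sketch below, written with variables just to make the steps explicit. UptimeCombined[Status] is a stand-in name for the column holding "RN"/"DN"; everything else follows the measure above.

-3 =
VAR RowsInHour =
    CALCULATE (
        COUNTROWS ( UptimeCombined ),
        FILTER ( UptimeCombined, UptimeCombined[Hours_Offset] = 3 )
    )
VAR LastTimeBefore =
    -- latest timestamp among records older than this hour
    CALCULATE (
        MAX ( UptimeCombined[ShiftDateTime] ),
        FILTER ( UptimeCombined, UptimeCombined[Hours_Offset] > 3 )
    )
VAR LastStatus =
    -- status of that latest record ("RN" or "DN"); column name is assumed
    CALCULATE (
        MAX ( UptimeCombined[Status] ),
        FILTER ( UptimeCombined, UptimeCombined[ShiftDateTime] = LastTimeBefore )
    )
RETURN
    IF (
        ISBLANK ( RowsInHour ),
        IF ( LastStatus = "RN", 1, 0 ),    -- no rows this hour: carry the last known status forward
        CALCULATE (
            [Uptime %],
            FILTER ( UptimeCombined, UptimeCombined[Hours_Offset] = 3 )
        )
    )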
I ended up filling in the data gaps in the source view on the SQL Server side before the data ever hits Power BI... much simpler that way than trying to mess with complex calculations and measures.
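A rough sketch of that kind of gap-filling view on the SQL Server side is below; dbo.UptimeCombined, ShiftDateTime, and Status are stand-ins for the real table and column names.

-- build the 9 hourly buckets (current hour plus the 8 before it), then for each
-- bucket take the most recent record at or before the end of that hour, so an
-- empty hour inherits the last known status
WITH HourBuckets AS (
    SELECT DATEADD(HOUR, -v.n, DATEADD(HOUR, DATEDIFF(HOUR, 0, GETDATE()), 0)) AS HourStart
    FROM (VALUES (0), (1), (2), (3), (4), (5), (6), (7), (8)) AS v(n)
)
SELECT
    hb.HourStart,
    last_rec.Status,
    CASE WHEN last_rec.Status = 'RN' THEN 1.0 ELSE 0.0 END AS UptimePct
FROM HourBuckets AS hb
OUTER APPLY (
    SELECT TOP (1) u.Status
    FROM dbo.UptimeCombined AS u
    WHERE u.ShiftDateTime < DATEADD(HOUR, 1, hb.HourStart)
    ORDER BY u.ShiftDateTime DESC
) AS last_rec;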
I am working with a set of data called Response Time. It is currently in the format hh:mm:ss, but the field is a text field in the Power BI Power Query Editor. I am trying to convert it to an actual time value so I can get an average from it, but I keep getting errors because many of the times are higher than 24 hours. Results like 37:50:00 and 40:00:00 error out, while times like 22:24:00 work fine but get converted to the actual time on a clock.
How do I keep the design of the data but get it into a Time data type so that I can pull an average?
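One way this is often handled is to use a Duration rather than a Time, since Duration has no 24-hour ceiling. A sketch of an added custom column, assuming the text column is named Response time and the previous step is Source:

// split "37:50:00" into its parts and build a duration; #duration normalizes
// hour values above 24, so 37:50:00 becomes 1 day, 13:50:00
= Table.AddColumn(
    Source,
    "Response Duration",
    each
        let
            parts = Text.Split([#"Response time"], ":")
        in
            #duration(
                0,                           // days
                Number.FromText(parts{0}),   // hours, may exceed 24
                Number.FromText(parts{1}),   // minutes
                Number.FromText(parts{2})    // seconds
            ),
    type duration
)

List.Average over the new column then works, because durations add and divide without wrapping at 24 hours.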
I have an Explore like the following:
Timestamp   Rate    Count
July 1      $2.00   15
July 2      $2.00   12
July 3      $3.00   20
July 4      $3.00   25
July 5      $2.00   10
I want to get the results below:
Rate    Number of days   Count
$2.00   3                37
$3.00   2                45
How can I calculate the Number of days column in the table calculation? I don't want the timestamp to be included in the final table.
First of all: is Rate a dimension? If so, and you have LookML access, you could create a "Count Days" measure that's just a simple count, and then return Rate, Count Days, and Count. That would be really simple; see the sketch below.
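A sketch of what that measure might look like; the field name here is an assumption, not your actual model:

# assumes the Explore's view has a date dimension for the Timestamp field,
# here called timestamp_date
measure: count_days {
  type: count_distinct
  sql: ${timestamp_date} ;;
}

Using count_distinct on the date means a rate that appears on several rows for the same day still counts that day once.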
If you can't do that, this is hard to do with just a table calculation, since what you're asking for is a change to the grouping of the data. Generally, that's something that's only possible in SQL or LookML, where you can actually alter the grouping and aggregation of the data.
With table calculations, you can perform operations on the data that's been returned by the query, but you can't change its grouping or aggregation. So the issue becomes that it's quite difficult to take 3 rows and then use a table calculation to represent them as 1 row.
I'd recommend taking this to LookML or SQL if you have developer access or can ask someone who does. If you can't, then I'd suggest you look at this thread: https://discourse.looker.com/t/creating-a-window-function-inside-a-table-calculation-custom-measure/16973 which explains how to do these kinds of functions in table calculations. It's a bit complex, though.
Once you've done the calculation, you'd want to use the Hide No's from Visualization feature to remove the rows you aren't interested in.
I have entries that are uniquely identified by a variety of fields and that I pull in from Excel. Each entry records the daily amount of work done by people working on a specific area of a plant: a work done field (a measurement of the work done on that area) and a manpower count. Productivity per area is calculated as work done divided by manpower.
Date         Area      Work Done   Manpower   Productivity
2017/02/01   Pipe      50          25         2
2017/02/01   Valve     22          2          11
2017/02/01   Machine   54          2          22
I want to display the work done and manpower as bars in Power BI, and the average productivity per day as a line. The problem is that the real productivity for the day (total work done divided by total manpower) is not the average of the individual productivities per area. Thus, I want to create a line that totals work done and manpower per day, divides them to get the productivity, and then displays only the productivity.
How can I do this in Power BI?
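A measure along these lines would do it; a minimal sketch, assuming the table is named Entries (DIVIDE also guards against days with zero manpower):

Daily Productivity =
DIVIDE (
    SUM ( Entries[Work Done] ),
    SUM ( Entries[Manpower] )
)

On a line-and-clustered-column chart with Date on the axis, the measure is re-evaluated per day, so it divides the day's totals instead of averaging the per-area ratios.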
I am currently developing a sentiment index using Google search frequencies taken from Google Trends.
I am using Stata 12 on Windows.
My approach is as follows:
I downloaded approximately 150 business-related search queries from Google Trends, covering Jan 2004 to Dec 2013.
I now want to construct an index using the 30 queries that are most relevant, at each point in time, to the market I observe.
To achieve that, I want to use monthly expanding backward rolling regressions of each query on the market.
Thus I need to regress 150 items one by one on the market 120 times (12 months x 10 years), using different time windows, and then extract the 30 queries with the most negative t-statistics.
To exemplify the procedure: if I wanted to construct the sentiment for January 2010, I would regress the query terms on the market over the period from Jan 2004 to Dec 2009 and then extract the 30 queries with the most negative t-statistics.
Now I am looking for a way to make this as automated as possible. I guess I should be able to run the 150 items at once, and I can specify the time window using the time stamps. Using Excel commands and creating a do-file with all the regression commands in it (which would be quite large), I could probably create the regressions relatively efficiently (although it depends on how much Stata can handle; does anyone have experience with that?).
What I would need to make the data extraction much easier is a command I can use to rank the results of the regressions according to their t-statistics. Does anyone have an efficient approach to this, or general advice?
If you are using Stata: once you run a ttest, you can type return list and you will see the scalars that Stata stores. Once you run a loop, you can store these values in a number of different ways; check out the post command.
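A sketch of that loop-plus-post pattern for one window; the variable names q1-q150 and market are assumptions, and tin() assumes the data are tsset with a monthly time variable:

* collect the t-statistic on market from each query regression
tempname handle
postfile `handle' str32 query float tstat using tstats, replace
forvalues i = 1/150 {
    quietly regress q`i' market if tin(2004m1, 2009m12)
    post `handle' ("q`i'") (_b[market] / _se[market])
}
postclose `handle'

* rank the stored results and list the 30 most negative t-statistics
use tstats, clear
sort tstat
list query tstat in 1/30

Wrapping this in an outer loop over the 120 month-end dates (extending the tin() window each pass) would automate the whole expanding-window procedure.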