I am just beginning with MicroStrategy and have created a few reports. I need some help here. I have created 10 reports, say A, B, C, D, etc., and I have 5 end users. I want to know how many end users have accessed these reports. There are also some Intelligent Cubes, and I want to know how many end users have created reports of their own.
I want to build a report with this data. Where do I find this information?
Any guidance would be useful. I am using MicroStrategy 9.4.1.
The statistics you are looking for are collected by the MicroStrategy Enterprise Manager.
Enterprise Manager consists of two components:
An ETL process that loads usage data into its own data warehouse.
A MicroStrategy project with several prebuilt reports for analyzing the collected data.
I'm trying to find the best approach to delivering a BI solution to 400+ customers, each of which has its own database.
I've got Power BI Embedded working using service principal licensing, and I have the Power BI service connected to my data through the on-premises data gateway.
I've built my first report pointing to one of the customer databases, which works lovely.
What I want to do next, when embedding the report, is to tell Power BI, for this session, to get the data from a different database.
I'm struggling to find somewhere where this is explained, or to understand if this is even possible.
I'm trying to avoid creating 400+ workspaces or 400+ datasets.
If someone could point me in the right direction, it would be appreciated.
You can configure the report to use parameters, and those parameters can drive the data source for your dataset:
https://www.phdata.io/blog/how-to-parameterize-data-sources-power-bi/
These parameters can be set by the app hosting the embedded report:
https://learn.microsoft.com/en-us/rest/api/power-bi/datasets/update-parameters-in-group
Because the app is setting the parameter, each user will only see their own data. Since this will be a live connection, you would need to think about how the underlying server can support the workload.
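As a rough sketch (not from the original answer), the parameter update is a single REST call from the hosting app. The workspace id, dataset id, parameter name, and token below are placeholders; the token would come from Azure AD for the service principal (e.g. via the msal library):

```python
import requests

# Placeholders: workspace (group) id, dataset id, and an Azure AD access
# token acquired for the service principal.
GROUP_ID = "<workspace-id>"
DATASET_ID = "<dataset-id>"
ACCESS_TOKEN = "<service-principal-token>"

# Datasets - UpdateParametersInGroup endpoint from the linked docs.
url = (
    "https://api.powerbi.com/v1.0/myorg/groups/"
    f"{GROUP_ID}/datasets/{DATASET_ID}/Default.UpdateParameters"
)

# "CustomerDb" is an assumed parameter name defined in the report; its value
# selects which customer database the dataset points at.
payload = {"updateDetails": [{"name": "CustomerDb", "newValue": "CustomerDb_042"}]}

resp = requests.post(
    url, json=payload, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}
)
resp.raise_for_status()
```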
An alternative solution would be to consolidate the customer databases into a single database (just the relevant tables) and use row-level security to restrict access for each customer. The advantage of this design is that you take the burden off the underlying SQL instance and push it into a Power BI dataset that is built to handle huge datasets with sub-second response times.
More on that here: https://learn.microsoft.com/en-us/power-bi/enterprise/service-admin-rls
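For the embedded piece, the per-customer identity is typically supplied when generating the embed token, which is how RLS gets enforced per session. A minimal sketch, assuming a role named CustomerRole is defined in the report (ids and names are placeholders):

```python
import requests

GROUP_ID = "<workspace-id>"
REPORT_ID = "<report-id>"
DATASET_ID = "<dataset-id>"
ACCESS_TOKEN = "<service-principal-token>"

# Reports - GenerateTokenInGroup: the effective identity carries the RLS role
# and the username value that the role's DAX filter compares against.
url = (
    "https://api.powerbi.com/v1.0/myorg/groups/"
    f"{GROUP_ID}/reports/{REPORT_ID}/GenerateToken"
)
payload = {
    "accessLevel": "View",
    "identities": [
        {
            "username": "customer-042",   # value your RLS rule filters on
            "roles": ["CustomerRole"],    # assumed role name
            "datasets": [DATASET_ID],
        }
    ],
}

resp = requests.post(
    url, json=payload, headers={"Authorization": f"Bearer {ACCESS_TOKEN}"}
)
resp.raise_for_status()
embed_token = resp.json()["token"]
```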
I am working on a Power BI project and I need some advice on the best way to approach it. I am tasked with creating a dashboard for employee metrics pulled from an on-site SQL Server database. The managers here are going to have access to the Power BI cloud service, so I will end up uploading this to the cloud. There are 10 or so metrics that need to be shown on the dashboard, and we have 5000+ employees. My first thought was to create a table, dump all the metrics into it, and set the Power BI report to import the data, but that seems excessive and a waste of space: not every manager needs access to every employee, and they may only want to see 1 or 2 employees' metrics on the dashboard.
My second thought (if this is even possible) is to create a stored procedure that takes an employee id and outputs a dataset for Power BI to build a visual from. On the dashboard, there would be a list of employees, and when a manager selects one, Power BI would call the stored procedure with that employee id and turn the returned dataset into a visual based on my measures. I guess I would set the Power BI report's connection type to DirectQuery?
Here are my questions:
Is this possible? Is what I'm describing in my second plan feasible? Is this how DirectQuery works?
If so, how does DirectQuery work with the Power BI cloud service?
What is the setup like? Do I just install and configure the Power BI data gateway, like I would for imported data, and Power BI does the rest?
A couple of queries:
What is the frequency of data updates?
If it is a batch job, it is preferable to import the data from the source into the Power BI model and report on the imported data, because:
a) performance would be quicker;
b) there would be no back-and-forth of data between the on-prem database and the cloud;
c) the source would not be constantly impacted.
So is the ask to have RLS wherein managers should see only the employees under them? If so, it is much easier to implement RLS in an imported model than with DirectQuery.
Also, you won't be able to pass parameters to stored procedures, and you can't execute them in DirectQuery mode. You can, however, create table-valued functions, which give you the ability to use table variables and perform other, more complex operations in DirectQuery mode.
You can refer to this for additional details:
https://community.powerbi.com/t5/Desktop/Can-i-call-Stored-Procedure-with-Direct-Query/m-p/267141#:~:text=%40Pallavi%20you%20won't%20be,nature%20in%20Direct%20Query%20mode.
Here is a specific example:
I work for company A. I build a Power BI data model using a SQL query, and it rests on the service.
I want data from company B. Assume they already have a data model on the service; they are a different tenant.
Can I connect to company B's data model as a dataflow, source, etc., and use it in my data model, assuming all necessary permissions are granted?
If not, how do I accomplish this? What's the minimal architecture needed, please?
Thanks for clarifying. In my experience, most sources on the net don't answer this specific example, and I'm going around in circles with support trying to get this answered, so any help would be appreciated. Cheers.
In Power BI reports you can only connect to one SSAS model data source at a time. You will need to use some kind of ETL process to pull the data from Company B's SQL Server, stage it on Company A's SQL Server, and then make use of it in the data model within Company A. I recommend using SSIS and creating a package with a data flow task to move the data you need.
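If SSIS isn't an option, the same staging step can be sketched in Python instead; the server names, credentials, and table names below are placeholders, and the staging table is assumed to already exist on Company A's server:

```python
import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection strings for Company B (source) and Company A (target).
src = create_engine(
    "mssql+pyodbc://user:pass@companyB-server/SourceDb"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)
dst = create_engine(
    "mssql+pyodbc://user:pass@companyA-server/StagingDb"
    "?driver=ODBC+Driver+17+for+SQL+Server"
)

# Copy in chunks so large tables don't need to fit in memory at once;
# dbo.Orders / dbo.Orders_Staging are hypothetical table names.
for chunk in pd.read_sql("SELECT * FROM dbo.Orders", src, chunksize=50_000):
    chunk.to_sql(
        "Orders_Staging", dst, schema="dbo", if_exists="append", index=False
    )
```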
I'm trying to get data from one of the reports available in the Google Play Console, specifically the user_acquisition report. I set up the Data Transfer Service within Google Cloud Platform in order to use the BigQuery API.
When querying that specific report, the results are partial. Some columns match the results I get when downloading the report manually, but other columns just have the value null, although the downloaded report shows there should be numerical values there.
Another peculiar thing: when specifying a date range for the query (the month of May, for example), the result shows only about a third of the dates in that month, but there should be a row for each day of the month.
When looking at the transfer run history, some of the runs have completed successfully, and some have failed with the error message: Error code 5 : No files found for any reports. Please make sure you selected the correct Google Cloud Storage bucket and Google Play reports exist. But if no files are found, then how am I getting any results at all?
The users of both the GCP and Google Play Console are the owners of the project, so there shouldn't be any issue with the permissions to access the bucket where the reports are stored.
I tried creating another data transfer service to see if it can even find the reports. It did find some of the files but not the one I'm interested in. The transfer run history shows the same error as mentioned above.
Has anyone had some similar problem before and perhaps can offer some sort of solution? Or maybe just has some insights into why this problem is occurring?
I think the issue could be related to the availability of the desired report, since I've found that only some reports are supported by this service:
Detailed reports (Reviews, Financial reports)
Aggregated reports (Statistics, User acquisition)
Could it be that the specific report you want to export is not supported?
If that's not the case, I think you should file a support case sharing the "Resource name" from the transfer details of the failed exports (and of successful ones, for reference). As an alternative to the support ticket, you can also report a defect against the transfer service on the Public Issue Tracker. The support team can help you review the error message further.
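If it helps, the run names (the "Resource name" the ticket needs) and error details can also be pulled programmatically with the Data Transfer client library; the project, location, and transfer-config id below are placeholders:

```python
from google.cloud import bigquery_datatransfer_v1

client = bigquery_datatransfer_v1.DataTransferServiceClient()

# Placeholder path: projects/<project>/locations/<location>/transferConfigs/<id>
parent = "projects/my-project/locations/us/transferConfigs/my-config-id"

for run in client.list_transfer_runs(parent=parent):
    print(run.name)   # the "Resource name" to share in the support case
    print(run.state)  # e.g. SUCCEEDED or FAILED
    if run.error_status.message:
        print(run.error_status.message)
```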
For context, we would like to visualize our data in Google Data Studio; this dataset receives more entries each week. I have tried hosting our datasets in Google Drive, but it seems they're too large, and this slows down Google Data Studio (the file is only 50 MB, am I doing something wrong?).
I have loaded our data into Google Cloud Storage and then into Google BigQuery, and connected Google Data Studio to my BigQuery table. This has made the Data Studio dashboard much quicker!
I'm not sure of the best way to update our data weekly in Google Cloud/BigQuery. I have found a slow way: uploading the new weekly data to Google Cloud Storage, then manually appending it to my table in BigQuery. I'm wondering if there's a better way to do this (or at least a more automated one)?
I'm open to any suggestions, and if you think BigQuery/Google Cloud Storage is not the answer for me, please let me know!
If I understand your question correctly, you want to automate the query that populates your table, which is connected to Data Studio.
If this is the case, then you can use scheduled queries in BigQuery. A scheduled query lets you define a query whose results are inserted into a destination table. In particular, you can specify repetition rules (as often as every 15 minutes) and destination writing options (destination table; write mode: append or truncate).
In order to use scheduled queries, your account must have the right permissions. You can have a look at the following documentation to better understand how to use them [1].
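As a minimal sketch, a scheduled query can also be created through the Data Transfer client library rather than the console; the project id, dataset, table names, and query below are placeholders:

```python
from google.cloud import bigquery_datatransfer

client = bigquery_datatransfer.DataTransferServiceClient()
parent = client.common_project_path("my-project")  # placeholder project id

transfer_config = bigquery_datatransfer.TransferConfig(
    destination_dataset_id="reporting",            # placeholder dataset
    display_name="Weekly append for Data Studio",
    data_source_id="scheduled_query",
    params={
        # Placeholder query: whatever feeds the table behind the dashboard.
        "query": "SELECT * FROM `my-project.staging.weekly_upload`",
        "destination_table_name_template": "dashboard_table",
        "write_disposition": "WRITE_APPEND",       # append on each run
    },
    schedule="every monday 09:00",
)

config = client.create_transfer_config(
    parent=parent, transfer_config=transfer_config
)
print(f"Created scheduled query: {config.name}")
```

WRITE_APPEND matches the weekly-append workflow described in the question; use WRITE_TRUNCATE instead if each run should rebuild the table from scratch.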
Also, please note that on the front end, the updated data in the BigQuery table will only show in Data Studio after a refresh (click the refresh button in Data Studio). To refresh the front-end visualization automatically, you can use the following plugin [2] or automate the click on the refresh button through browser console commands.
[1] https://cloud.google.com/bigquery/docs/scheduling-queries
[2] https://chrome.google.com/webstore/detail/data-studio-auto-refresh/inkgahcdacjcejipadnndepfllmbgoag?hl=en