What is the best way to create and run machine-learning prediction code (with a training set and predictions) so that you can see the output (predictions) in Power BI?
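One common pattern is to write the training-and-prediction logic as an ordinary Python script and load it through Power BI's "Get Data > Python script" option, which imports any top-level pandas DataFrame as a table. A minimal sketch, assuming pandas and scikit-learn are installed in the Python environment Power BI points at; the column names and figures here are hypothetical:

```python
import pandas as pd
from sklearn.linear_model import LinearRegression

# Hypothetical training data -- in practice this could come from a CSV or a database
train = pd.DataFrame({
    "hours_used": [100, 200, 300, 400, 500],
    "remaining_life": [900, 800, 700, 600, 500],
})
new_tools = pd.DataFrame({"hours_used": [150, 350]})

# Fit a simple model on the training set, then predict for the unseen tools
model = LinearRegression().fit(train[["hours_used"]], train["remaining_life"])
new_tools["predicted_remaining_life"] = model.predict(new_tools[["hours_used"]])
```

Power BI would pick up `new_tools` (now holding a `predicted_remaining_life` column) as a table that visuals can use directly.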
Related
I have simple datasets from multiple tools; the problem is to predict the remaining useful life of each tool. The Excel sheet I received contains the data from multiple tools. I'm new to Weka, and when I try to use support vector regression I cannot run the classification.
I'm looking for advice and a comparison between Power BI, Spotfire and Tableau, three major data-viz tools, regarding scalability and re-use of code. I've run into a problem in Power BI (at least, for me, it is a concern) and I'd like to know whether transitioning to Spotfire or Tableau would solve it.
I work in Power BI Desktop and publish reports to Power BI Service. I don't know Spotfire or Tableau at all.
I have a fairly comprehensive report (pbix file) with the following characteristics:
20 pages
20 tables, each with 10 calculated columns
60 measures
connections to 10 SQL tables on a server
that report reflects progress/users/tasks/budget based on data from a specific project ("Project A")
The objective is to have the same report for another project ("Project B"). Actually, it's about 10 similar projects. Project B uses a very similar database, and about 95% of the data structure is shared between Project A and Project B. I can't mix users either; they really need to be separate projects (user access, data confidentiality).
Right now, to the best of my knowledge, I would copy/paste Report A.pbix to make Report B.pbix, and I would then have 2 different reports. Maintenance-wise, it is problematic to have to maintain 10 reports that are similar (but not identical).
So far I haven't found a way to build a "code library" that I could easily re-use across reports, which would make it easier to update a specific formula or measure.
Now, my questions to the community:
Q1: Am I missing something in Power BI about code libraries or code sharing?
Q2: Does Tableau have a feature that would help me create "similar dashboards" for 10 similar projects?
Q3: Does Spotfire have a feature that would help me create "similar dashboards" for 10 similar projects?
Thanks a lot for reading, and looking forward to your answers!
I have used Spotfire extensively. Since it has a published API, it can be automated using IronPython.
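For example, a minimal sketch of that kind of automation, assuming it runs inside a Spotfire script control (where `Document` is injected by Spotfire's scripting environment, so this will not run as a standalone script; the project name is a hypothetical parameter):

```python
# IronPython inside Spotfire -- `Document` is provided by Spotfire
project = "Project B"  # hypothetical parameter

# Retitle every page of the analysis for the given project
for page in Document.Pages:
    page.Title = project + " - " + page.Title
```

The same API exposes data tables, filters and visual properties, which is what makes scripted, repeatable changes across similar analyses possible.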
I am a SAS newbie and use EG because I'm not a SAS programmer. There are options in the query properties under Code Submission labelled 'Use grid if able' and 'Allow parallel processing on same server'.
I have to pull data from large datasets and would love to speed things up on the grid, but every time I check those boxes I get an error. Can someone tell me what I can do to set this up?
Thanks
Jeff
Hi, my dataset contains only quantitative (numerical) data. It doesn't have any class attribute. The dataset contains sales from different years, and I need to analyze the data in different ways. Can I use Weka for this analysis? I tried the Weka tool, but it seems I cannot proceed with Weka unless the dataset has a class variable. Please kindly give me a hint.
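Weka only requires a class attribute for supervised tasks (classification/regression); unsupervised methods such as clustering (Weka's Cluster tab, e.g. SimpleKMeans) work on purely numeric data with no class variable. As a rough illustration of the same idea, here is a minimal sketch in Python with scikit-learn (the sales figures are made up):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical yearly sales figures -- no class/label column anywhere
sales = np.array([[120.0], [130.0], [125.0], [900.0], [950.0], [910.0]])

# Group the observations into 2 clusters purely from the numbers
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(sales)
```

The Weka equivalent is loading the numeric data and running SimpleKMeans from the Cluster tab, where no class variable is asked for.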
This question relates to the SAS Enterprise Miner product specifically.
I have an Enterprise Miner diagram that seems to work perfectly on training data: I run both an Impute and a Neural Network node and get back a perfectly reasonable model. When I inspect the output I can see the model's individual predictions.
I also have data to score, created using the same queries but with the target variable null. I attempted to generate predictions using a Score node, but for some reason every EM_PREDICTION from the Score node is identical, even when I open the dataset and visually check that the input variables are different.
I'm at a loss as to what is causing this or how to debug it; has anyone seen this before?
Diagram
Scored view