Export SAS model

I built a predictive model on SAS using their Model Studio. (My first one!)
Everything worked, but I can't publish it, see the model, test any other data source against it, or export it. What do I need to do next?
I built this using a trial of their graphical software, so command prompts probably won't help me much.
Thanks for any input!

From the error message, it looks like your Publish Server is not set up or configured. I assume that's because you are using a trial version.


Rendering quarto book to an alternative directory than _book

I am using quarto and enjoying it very much. Getting some good feedback on the reports from customers.
I would like to use the same qmd files to generate reports for 10 different customers and be able to publish them. I am using RStudio and have a parameter which defines the customer. Currently the finished reports are stored in the directory _book. I would like to send the reports to folders _bookCustomer1, _bookCustomer2, ..., and then publish them as separate 'books' from RStudio. Does anyone know how to do that? Or could somebody recommend an alternative way to publish the finished books (reports), one for each customer?
Thank you,
Phil,
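One approach that might work (an untested sketch, assuming your Quarto version supports the --output-dir project flag and -P key:value parameter passing, and that customer is the name of your existing parameter; check quarto render --help first):

```shell
# Render the same book once per customer, writing each to its own folder.
for customer in Customer1 Customer2 Customer3; do
  quarto render --output-dir "_book${customer}" -P "customer:${customer}"
done
```

Each _bookCustomerN folder could then be published as a separate book (for example with quarto publish run against that output folder).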

How to use ML.Net PredictionEnginePool with ONNX model?

I am trying to use ML.Net with ONNX models for prediction behind an API. There is documentation on how to use ML.Net with ONNX models in a console app here; however, as described in that article, it wouldn't scale well. Since that article, they added PredictionEnginePool, which solves the scaling problem, but I cannot make it work with ONNX models. When I try to load the model, it throws two exceptions:
InvalidOperationException: Repository doesn't contain entry DataLoaderModel\Model.key
Microsoft.ML.RepositoryReader.OpenEntry(string dir, string name)
InvalidOperationException: Could not load legacy format model
Microsoft.ML.ModelOperationsCatalog.Load(Stream stream, out DataViewSchema inputSchema)
The legacy format exception is interesting because I tried two different models, one from Azure Machine Learning Service with AutoML and one trained locally with scikit-learn, so I'm not sure which part is "legacy".
The missing Model.key might be the hint, though, because the zip model file used in the MS API documentation doesn't contain a single .onnx file; instead it has folders with binary files, and some of the files are actually named Model.key.
My question is:
Has anybody ever used PredictionEnginePool with ONNX models? Is it possible, or is it not implemented yet? (Not sure if it matters, but both are classification models, one SVM and one LightGBM.)
UPDATE:
I found a way to do this. It looks like the engine pool only supports models in ML.Net format; however, you can open the model as described in the console app example, save it in ML.Net format, and then use it with the engine pool.
There is a similar example for this here.
The OnnxModelConfigurator class opens the ONNX model and saves it in ML.Net format. Then, in the constructor of Startup.cs, you call the configurator to save the model in the right format, and in the ConfigureServices() function you can actually create the pool with the ONNX model.
This works; however, with this approach the conversion between the formats becomes part of the API's source code, so you would need to at least restart the app when you want to use a new model. That might not be a big deal if a bit of downtime is OK, and even if it isn't, you can avoid it with deployment slots, for example. You could also run the conversion as a separate service and then just drop the model file into the API so the pool can detect the new model and use it.
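To make the conversion step concrete, here is a rough sketch based on the linked example (class names, column handling, and paths are hypothetical placeholders, and you should verify the ApplyOnnxModel overload against your ML.NET version):

```csharp
using System;
using Microsoft.ML;

public class OnnxModelConfigurator
{
    public void SaveAsMlNetModel(string onnxPath, string mlnetPath)
    {
        var mlContext = new MLContext();

        // Pipeline that scores inputs with the ONNX model.
        var pipeline = mlContext.Transforms.ApplyOnnxModel(modelFile: onnxPath);

        // Fitting on an empty data view materializes the transformer.
        var emptyData = mlContext.Data.LoadFromEnumerable(Array.Empty<ModelInput>());
        var model = pipeline.Fit(emptyData);

        // Save in ML.NET's native zip format, which PredictionEnginePool accepts.
        mlContext.Model.Save(model, emptyData.Schema, mlnetPath);
    }
}
```

In ConfigureServices() you could then register the pool with something like services.AddPredictionEnginePool&lt;ModelInput, ModelOutput&gt;().FromFile(filePath: mlnetPath), where ModelInput/ModelOutput are your own input and prediction classes.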
Anyway, thanks for the answers guys!
I have run into your error before, but not using the Pool. If you look at this specific comment and the comments that follow, we resolved the issue by doing a full clean of his project. In that case, he had upgraded to a new version of ML.NET and didn't clean the project so it was causing issues. I am not sure if this will resolve your issue, but I am one of the engineers who works on ML.NET so if it doesn't please feel free to create an issue and we can help you resolve it.
You can also take a look at this guide.
In this case, a model trained using Azure Custom Vision is consumed within an ASP.NET application using PredictionEnginePool.

getgauge: How to get all steps using getAllStepsList?

I want to get all step information for the scenarios in a BeforeScenario method, so I tried the getAllStepsList() method, but it returns 0 every time. Could you please help me figure out how to do this?
Api.GetAllStepsResponse.getDefaultInstance().getAllStepsList()
Regards,
The API package is meant to be consumed by IDEs, which run Gauge as a daemon and communicate with it. What you see is the default value of an empty instance.
The information about steps in a scenario is available with gauge when it parses the spec file. This information is not relayed to the runner or the user code currently. Please feel free to log an issue if you'd like to see such meta information here: https://github.com/getgauge/gauge/issues/new.
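For what it's worth, here is a sketch of what a BeforeScenario hook can see in gauge-java (assuming the ExecutionContext parameter and its getCurrentScenario() accessor; verify against your gauge-java version — and, as noted, the step list itself is not exposed):

```java
import com.thoughtworks.gauge.BeforeScenario;
import com.thoughtworks.gauge.ExecutionContext;

public class Hooks {
    // Gauge injects an ExecutionContext when the hook declares it.
    // Scenario metadata (name, tags) is available; the list of steps is not.
    @BeforeScenario
    public void beforeScenario(ExecutionContext context) {
        String scenarioName = context.getCurrentScenario().getName();
        System.out.println("Running scenario: " + scenarioName);
    }
}
```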

Return current path of PowerBI Workbook

Is there a way in a PowerBI Query to return the path of the workbook?
I am looking to create an environment variable that detects whether I am working on the file locally or if it has been deployed to the powerbi.com website.
I'm not aware of a direct way to do this. You might try using the Folder source and see what the difference in result is when you run a refresh in the service (it could be disabled, btw) vs. when you run it on the desktop.
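To make the Folder-source idea concrete, here is a rough Power Query M sketch (the folder path is a hypothetical placeholder, and I haven't verified exactly how the service behaves with it):

```
let
    // `try` captures the failure instead of breaking the refresh.
    Probe = try Folder.Files("C:\LocalOnlyFolder"),
    // In the service this source should fail (or be disabled),
    // so HasError doubles as an "am I running locally?" flag.
    IsLocal = not Probe[HasError]
in
    IsLocal
```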
Could you also tell us why you'd like to know this? I'm sure there's a specific problem or scenario you're trying to solve for and maybe there's another way to solve it if not using the technique you're asking about in this question.

Neon toolkit and Gate Web Service

I am trying to run any of the services from GATE Web Service in NeOn 2.3.
Even ANNIE, which runs so well in GATE, doesn't run; or rather, it keeps processing for an indefinite time, for a task that should take no more than a couple of seconds. I run the wizard, set the input directory, leave the file pattern as the default, and set a folder and name for the output ontology. Shouldn't that be enough? Shouldn't I get something, even an error?
I think it's the location that's giving me problems.
http://safekeeper1.dcs.shef.ac.uk/neon/services/sardine
http://safekeeper1.dcs.shef.ac.uk/neon/services/sprat
http://safekeeper1.dcs.shef.ac.uk/neon/services/annie
http://safekeeper1.dcs.shef.ac.uk/neon/services/termraider
How can I confirm this? Can I run it offline?
Can anyone give me a hand?
Also, I've seen SPRAT running in GATE, in "SPRAT: a tool for automatic semantic pattern-based ontology population".
Can anyone show me how, and with what versions?
Thx,
Celso Costa