How to keep a TensorFlow session running in memory with Django

I have an object detection model built with TensorFlow and integrated with a Django project. What currently happens is that whenever a request comes to the Django API, a TF session is created and then closed after the detection is done. Is it possible to start a TensorFlow session with the required inference graphs when the Django server starts, so that object detection time is reduced?

A solution would be to abstract the inference logic into a dedicated module. In this module, the session and the graph would be defined once as global variables and accessed transparently by your views (or whatever else) through an interface such as a run_inference function.
If you need finer control over the lifecycle of the graph and/or session, you could add functions like reload_graph, or implement that within the module, for example with a class dedicated to managing the lifecycle of the TensorFlow objects and running inference.
This looks to me like the best solution. This way you will also have a more robust workflow and more control, for example if you later use multithreading and want more safety in how the inference code is run.
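A minimal sketch of that module pattern (the names here are hypothetical, and the expensive TensorFlow setup is left as comments; in a real app the placeholder loader would build the graph and tf.Session once, and every Django view would call run_inference):

```python
# inference.py -- hypothetical module: the expensive objects are created once
# per process and reused across requests instead of per request.
import threading

_lock = threading.Lock()
_model = None  # would hold the graph/session pair in a real app


def _load_model():
    # Placeholder for the expensive setup, e.g. with the TF 1.x API:
    #   graph = tf.Graph()
    #   with graph.as_default():
    #       tf.import_graph_def(loaded_graph_def)
    #   session = tf.Session(graph=graph)
    #   return {"graph": graph, "session": session}
    return {"loaded": True}


def get_model():
    """Lazily load the model once; double-checked locking for thread safety."""
    global _model
    if _model is None:
        with _lock:
            if _model is None:
                _model = _load_model()
    return _model


def run_inference(image):
    model = get_model()
    # a real implementation would call session.run(output_tensor, feed_dict=...)
    return {"model": model, "input": image}
```

A Django view then just calls `run_inference(image)`; because module-level state lives for the lifetime of the worker process, the session is created only on the first request (or at import time, if you prefer eager loading).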

As I commented on the previous answer, the graph/session approach stopped being the optimal solution a long time ago.
I recommend adopting MLOps practices and preloading as much as you can, so that when the API is called, Django only has to generate the prediction.
I'll leave you my project here; I have several that combine ML, Django, and TFLite.
If you have any doubts, don't hesitate to ask me.
Link to my project: https://github.com/Nouvellie/django-mlops-docker/blob/main/src/main/apps/mlops/utils/model_loader.py

Related

How to use ML.Net PredictionEnginePool with ONNX model?

I am trying to use ML.NET to serve ONNX models for prediction behind an API. There is documentation on using ML.NET with ONNX models in a console app here; however, as described in this article, that approach wouldn't scale well. Since that article, they added PredictionEnginePool, which solves the scaling problem, but I cannot make it work with ONNX models. When I try to load the model, it throws two exceptions:
InvalidOperationException: Repository doesn't contain entry DataLoaderModel\Model.key
Microsoft.ML.RepositoryReader.OpenEntry(string dir, string name)
InvalidOperationException: Could not load legacy format model
Microsoft.ML.ModelOperationsCatalog.Load(Stream stream, out DataViewSchema inputSchema)
The legacy format exception is interesting because I tried two different models, one from Azure Machine Learning Service with AutoML and one trained locally with scikit-learn, so I'm not sure which part is "legacy".
The missing Model.key might be the hint, though: the zip model file used in the MS API documentation doesn't contain a single .onnx file; instead it has folders with binary files, and some of those files are actually named Model.key.
My question is:
Has anybody ever used PredictionEnginePool with ONNX models? Is it possible, or is it not implemented yet? (Not sure if it matters, but both are classification models, one SVM and one LightGBM.)
UPDATE:
Found a way to do this. It looks like the engine pool only supports models in ML.NET format; however, you can open the model as described in the console app example, save it in ML.NET format, and then use it with the engine pool.
There is a similar example for this here.
The OnnxModelConfigurator class opens the ONNX model and saves it in ML.NET format; in the constructor of Startup.cs you call the configurator to save the model in the right format, and in ConfigureServices() you can then create the pool with the converted model.
This works, but with this approach the conversion between formats is part of the API's source code, so you need to at least restart the app when you want to use a new model. That might not be a big deal if a bit of downtime is acceptable, and even if not, you can avoid it with deployment slots, for example. You could also run the conversion as a separate service and just drop the converted model file where the API can pick it up, so the pool can detect and use the new model.
Anyway, thanks for the answers guys!
I have run into your error before, but not using the pool. If you look at this specific comment and the comments that follow, we resolved the issue with a full clean of the project: he had upgraded to a new version of ML.NET without cleaning the project, and that was causing issues. I am not sure whether this will resolve your issue, but I am one of the engineers working on ML.NET, so if it doesn't, please feel free to open an issue and we can help you resolve it.
You can also take a look at this guide.
In this case, a model trained using Azure Custom Vision is consumed within an ASP.NET application using PredictionEnginePool.

Sharing code between Blueprints?

How do you share code between Blueprints in Flask?
What structure do you use?
Do you create a separate class? If so, how do you pass the app or db instances?
What is the best practice?
I think it depends on your code. If it is some kind of helper functions or classes, you can put it into a package alongside your app. If the shared code depends on the application context, perhaps you need to review your project structure. As miso noted in the previous answer:
The documentation of Flask is pretty good. So take a look.
Especially in the section on Blueprints.
But if you are sure that your project structure is good and you still have a lot of shared code, then it may be useful to create a standalone library or a Flask extension.
Anyway, it all depends on your code.
To access the application instance from any module, Flask provides the global object current_app, which holds a reference to the application instance in the current context. This is useful if you want to run several instances of your application together with different configurations. To use it:
from flask import current_app
From the Flask docs
The application context is created and destroyed as necessary. It never moves between threads and it will not be shared between requests. As such it is the perfect place to store database connection information and other things.
The documentation of Flask is pretty good. So take a look.
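To make this concrete, here is a minimal sketch of a blueprint reading shared state through current_app instead of importing the app object directly (the blueprint name, route, and config key are hypothetical):

```python
# Hypothetical sketch: the blueprint never imports the app object;
# current_app resolves to whichever application handles the request.
from flask import Flask, Blueprint, current_app

bp = Blueprint("reports", __name__)


@bp.route("/limit")
def show_limit():
    # shared configuration is reached through current_app
    return str(current_app.config["ITEM_LIMIT"])


def create_app():
    app = Flask(__name__)
    app.config["ITEM_LIMIT"] = 25  # hypothetical config value
    app.register_blueprint(bp)
    return app


app = create_app()
with app.test_client() as client:
    body = client.get("/limit").get_data(as_text=True)
```

Because the blueprint only depends on current_app, the same blueprint can be registered on several apps with different configurations, which is exactly the decoupling the docs describe.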

test output of clojure functions visually in a browser

I'm developing a Clojure application, which downloads images from the web and analyzes them for certain criteria.
Whatever that might involve, the important part is that there will be some quite expensive functions in the app, which take a while to run.
In the end, there will be an API that exposes the app's functionality to a web frontend. This is meant to be a second step though.
Since the app has a lot to do with graphics, it makes sense to visualize the outputs of the functions I'm writing during the development process.
Basically I'm looking for an easy way / environment to achieve this.
More precisely: whenever I create a new function, I want to test its functionality in a browser, e.g. plot the output, draw some intermediate steps, maybe create some small interactive scripts that help me verify that the algorithms are doing what I intend. Note: I don't want to transform the functions to ClojureScript and run them in the browser; the browser should just be a "display".
Some approaches that came to my mind:
Writing a little backend server that exposes all the functions of a namespace. The frontend could access these functions simply by sending an AJAX request to the server that includes the function and its parameters as a string, or maybe better in edn format. The backend receives the request, calls the requested function, and sends back the result whenever the calculation is done. Is there maybe already a project that does exactly this?
Using a project like Gorilla REPL. This would be a good option, and maybe I'm going to use it. However, I could not yet figure out whether its mechanism allows interactively influencing the rendered outputs; it rather works as a worksheet with static renderings.
How would you guys do this? Any suggestions are appreciated.
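For reference, the first approach could be sketched roughly like a plain Ring handler dispatching on edn (the analysis functions here are placeholders, not real ones):

```clojure
(ns viz.server
  (:require [clojure.edn :as edn]))

;; placeholders standing in for real, expensive analysis functions
(defn histogram [xs] (frequencies xs))
(defn edges [img] img)

;; explicit whitelist: only these functions are callable from the frontend
(def exposed {'histogram histogram
              'edges     edges})

(defn handler [request]
  ;; expects an edn body such as "{:fn histogram :args [[1 1 2]]}"
  (let [{:keys [fn args]} (edn/read-string (slurp (:body request)))
        f (get exposed fn)]
    (if f
      {:status  200
       :headers {"Content-Type" "application/edn"}
       :body    (pr-str (apply f args))}
      {:status 404 :body (pr-str {:error :unknown-fn})})))
```

The handler is an ordinary Ring-style function (request map in, response map out), so it can be mounted on any Clojure HTTP server; the browser frontend stays a pure "display" that posts edn and renders whatever comes back.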
I have been working on similar problems, and I think you could use the combination of Light Table + Clojure + nerdy-painter (plugin).
nerdy-painter allows you to display images in Light Table (IDE). Very useful for data exploration or anything that has to do with graphics/plots.
Disclaimer: I am the author of nerdy-painter; still, I think the fastest/most elegant solution is the one I described above. All other solutions add (IMHO) too much overhead to the development cycle.
A possibility is to use a Jupyter Clojure kernel to interact with Clojure. Jupyter runs in a browser, and you can add custom bindings to simplify access to the DOM.
Here's a clojure kernel: https://github.com/roryk/clojupyter

Sync framework: How to use/sync single data (file) across multiple instances of same application

Currently we have an application (a diagram editor) that can save and load (serialize) its state in an XML file.
Now we want this application to behave like Microsoft OneNote, where multiple users can access the same file.
Later we may also need other enhancements, like (1) seeing what changed and who changed it, and (2) an option to resolve conflicts, if any.
I came across Sync Framework as a way to address this; so far I have not tried it.
All I want is:
Virtually, a single file should be edited by multiple instances of the same application.
We need a DLL (Sync Framework) that does the following:
It takes complete responsibility for file handling.
Using this DLL, each instance of the application notifies its own changes.
Each instance of the application can detect the changes that were recently made (when, who, and what changed).
My question:
Will Sync Framework be suitable for this requirement?
If so, is there a demo application that demonstrates it?
No, Sync Framework cannot handle this. There is a file sync provider, but it is not smart enough to determine what has changed in the actual contents of a file.

How to save data on a persistent store with rubymotion?

I tried to find the best way to store data in a local persistent store, but I did not find many resources about this.
I found only:
Motion model
But what is the best gem/way to make an offline app? I mean, I sync with the remote once, and after that my application uses local storage (Core Data, SQLite...) to read data.
Thank you
I use MotionModel (heck, I wrote MotionModel) but I'm biased. It's supposed to be for use-cases where you don't want to set up the Core Data stack. That said, InfiniteRed has done a great job with RMQ so it's likely they did a great job with CDQ, which wraps Core Data.
I suggest you build a toy app with each and decide for yourself.
I prefer the way HipByte suggests in the Locations sample.
Check the LocationsStore class, and how they use CoreData in a very simple way.
You could also use Couchbase Lite and leverage its sync capabilities to make the data available offline. I created a CouchbaseLite RubyMotion example, which is a port of the TodoLite-iOS version of the app. I'm currently working on making the integration nicer and more Ruby-like, but it works as is.