Tensorboard logging non-tensor (numpy) information (AUC) - python-2.7

I would like to record in tensorboard some per-run information calculated by some python-blackbox function.
Specifically, I'm envisioning using sklearn.metrics.auc after having run sess.run().
If "auc" was actually a tensor node, life would be simple. However, the setup is more like:
stuff=sess.run()
auc=auc(stuff)
If there is a more tensorflow-onic way of doing this I am interested in that. My current setup involves creating separate train&test graphs.
If there is a way to complete the task as stated above, I am interested in that as well.

You can make a custom summary with your own data using this code:
tf.Summary(value=[tf.Summary.Value(tag="auc", simple_value=auc)])
Then you can add that summary to the summary writer yourself. (Don't forget to pass a global step.)
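For completeness, here is a minimal sketch of the full flow; the names labels, scores, logdir, and step are placeholders for your own data and writer setup:

import tensorflow as tf
from sklearn.metrics import roc_curve, auc

# labels and scores come out of your sess.run() call (placeholders here)
fpr, tpr, _ = roc_curve(labels, scores)
auc_value = auc(fpr, tpr)

# wrap the plain Python float in a summary and write it with an explicit step
summary = tf.Summary(value=[tf.Summary.Value(tag="auc", simple_value=auc_value)])
writer = tf.summary.FileWriter(logdir)  # or reuse the writer you already have
writer.add_summary(summary, global_step=step)
writer.flush()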

Hangfire Job Description and Name Customization

I would like, if possible, to have some control over the job description and name. I tried adding the JobDisplayName attribute to the controller that enqueues the job, and also to the method that is called to run in the background, but no luck.
Also, the job details page is cluttered with unnecessary information that I would like to remove or format into something readable.
In case A, I would like to remove this or format it in a more readable way.
In case B, what can I do to output a human-readable object?
Issue A: quite a few issues have been opened about this, and most of them conclude that there is no way to customize the information displayed there; instead, you should follow the Hangfire best practices and keep the method and its arguments small.
Fix B
Change the return statement of the method that is enqueued to fix the data displayed.

How to log images in middle of `train`

It seems like you can only log data via return values from train. In many workflows, it might make more sense to save images directly in the middle of a train function (e.g., save images sampled by a generative model or from a vision-based MDP).
Is there a simple way to do this? One idea would be to try to find the log-directory and write to it directly, but would this have issues?
I'm guessing you're asking about logging images in Trainable._train().
If you're not in local mode, then within the trainable you can access the self.logdir attribute and write images there. This should automatically be synced back to your head node (if you're running remotely).
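For illustration, a minimal sketch of that approach inside a Trainable subclass; the image array is a stand-in for whatever you actually generate, and imageio is just one convenient way to write the file:

import os
import numpy as np
import imageio
from ray import tune

class MyTrainable(tune.Trainable):
    def _setup(self, config):
        self._step = 0

    def _train(self):
        self._step += 1
        # stand-in for an image produced during this training iteration
        image = np.random.randint(0, 255, size=(64, 64, 3), dtype=np.uint8)
        # self.logdir is this trial's log directory; files written here get synced to the head node
        imageio.imwrite(os.path.join(self.logdir, "sample_%d.png" % self._step), image)
        return {"images_saved": self._step}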

How to save and restore a tf.estimator.Estimator model with export_savedmodel?

I started using TensorFlow recently and I am trying to get used to tf.estimator.Estimator objects. I would like to do something that seems quite natural a priori: after having trained my classifier, i.e. an instance of tf.estimator.Estimator (with the train method), I would like to save it to a file (whatever the extension) and then reload it later to predict the labels for some new data. Since the official documentation recommends using the Estimator APIs, I guess something as important as that should be implemented and documented.
I saw on some other page that the method to do that is export_savedmodel (see the official documentation), but I simply don't understand the documentation. There is no explanation of how to use this method. What is the serving_input_fn argument? I never encountered it in the Creating Custom Estimators tutorial or in any of the other tutorials that I read. By doing some googling, I discovered that around a year ago estimators were defined using another class (tf.contrib.learn.Estimator), and it looks like tf.estimator.Estimator is reusing some of the previous APIs. But I don't find clear explanations about it in the documentation.
Could someone please give me a toy example? Or explain to me how to define/find this serving_input_fn?
And then how would I load the trained classifier again?
Thank you for your help!
Edit: I discovered that one doesn't necessarily need to use export_savedmodel to save the model. It is actually done automatically. Then if we later define a new estimator with the same model_dir argument, it will also automatically restore the previous estimator, as explained here.
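For example (a sketch; my_model_fn, the input functions, and the model_dir path are placeholders for your own):

import tensorflow as tf

# training run: checkpoints are written to model_dir automatically
classifier = tf.estimator.Estimator(model_fn=my_model_fn, model_dir='/tmp/my_model')
classifier.train(input_fn=train_input_fn, steps=1000)

# later, in a new process: the same model_fn and model_dir restore the latest checkpoint
classifier = tf.estimator.Estimator(model_fn=my_model_fn, model_dir='/tmp/my_model')
predictions = classifier.predict(input_fn=predict_input_fn)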
As you figured out, the estimator automatically saves and restores the model for you during training. export_savedmodel might be useful if you want to deploy your model to the field (for example, serving the best model with TensorFlow Serving).
Here is a simple example:
def serving_input_fn():
    # at serving time, the dataset pipeline is replaced by this placeholder
    inputs = {'features': tf.placeholder(tf.float32, [None, 128, 128, 3])}
    return tf.estimator.export.ServingInputReceiver(inputs, inputs)

est.export_savedmodel(export_dir_base=FLAGS.export_dir,
                      serving_input_receiver_fn=serving_input_fn)
Basically, serving_input_fn is responsible for replacing the dataset pipeline with a placeholder. At deployment time you can feed data to this placeholder as the input to your model for inference or prediction.
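To load the exported SavedModel again for prediction outside of the Estimator API, one option in TensorFlow 1.x is tf.contrib.predictor; a minimal sketch, where export_dir and the input batch are placeholders:

from tensorflow.contrib import predictor

# export_dir is the timestamped subdirectory created by export_savedmodel
predict_fn = predictor.from_saved_model(export_dir)
# the 'features' key matches the placeholder defined in serving_input_fn
predictions = predict_fn({'features': images_batch})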

Sharing data across Sitecore pipelines

I'm trying to perform some actions in the "httpRequestBegin" pipeline only when necessary.
My processor is executed after Sitecore resolves the user (processor type="Sitecore.Pipelines.HttpRequest.UserResolver, Sitecore.Kernel"), as I'm resolving the user too if Sitecore is not able to resolve it first.
Later, I want to add a rendering in the "insertRenderings" pipeline, but only if the actions in the previous pipeline were executed (if I resolved the user, show a message), so I'm trying to save some "flag" in the first step to check in the second.
My question is, where can I store that flag? I'm trying to find some kind of "per request" cache...
So far, I've tried:
The session: wrong, it's too early; the session doesn't exist yet.
Items (HttpContext.Current.Items): it doesn't work either; my item is not there in the second step.
So far I'm using the application cache (HttpContext.Current.Cache) with a unique key, but I don't like this solution.
Does anybody know a better approach to share this "flag"?
You could add a flag to the request headers and then check for its existence in the later pipelines, e.g.
// in HttpRequest pipeline
HttpContext.Current.Request.Headers.Add("CustomUserResolve", "true");
// in InsertRenderings pipeline
var customUserResolve = HttpContext.Current.Request.Headers["CustomUserResolve"];
if (Sitecore.MainUtil.GetBool(customUserResolve, false))
{
// custom logic goes here
}
This feels a little dirty; I think adding to Request.QueryString or Request.Params would have been nicer, but those are read-only. However, if you only need this as a one-time deal (i.e. only the first time the user is resolved), then it will work, since on the next request the headers are back to default without your custom header added.
HttpContext.Current.Cache or HttpRuntime.Cache could be the fastest solution here. Though this approach would not preserve data when the AppPool gets recycled.
If you add only a few keys to the cache and then maintain them, this solution might work for you. If each request puts an entry into the cache, it may eventually overflow the memory used by the worker process in the long run.
As alternative to this you may try to use Sitecore.Context.ClientData property. It uses ClientDataStore that employs a database (look for clientDataStore section in the web.config file) to store data. These entries can survive the AppPool recycle.
Though if you use it a lot, it may become a bottleneck under load when you need to write to and/or read from the entries.
If you do know that there could be a lot of entries created for sharing purposes, I'd create a scheduled task to clean up the data store from obsolete entries.
I know this is a very old question, but I just want to post the solution I worked out.
The code below holds data on a per-HTTP-request basis:
HttpContext.Current.Items["ModuleInfo"] = "Custom Module Info";
We can store data in HttpContext.Current.Items in one Sitecore pipeline and retrieve it in another.
https://www.codeproject.com/Articles/146455/When-Can-We-Use-HttpContext-Current-Items-to-Store

Can a Custom DataProvider class expose Custom Templates?

I am currently in the process of writing a custom DataProvider, using the Integrate External Data documentation.
I've managed to show the external data in the Sitecore back end. However, whenever I try to view the data in the items I created, I get an error:
Null ids are not allowed. Parameter name: displayName
There seems to be precious little on the subject of how to create a custom DataProvider on the Sitecore Developer Network.
The example on their website only seems to show how to import a SINGLE item into a static database, whereas I am trying to merge some items into the existing hierarchy, and I can't find any useful documentation.
It seems that one of your methods that should return an ID doesn't. It might be GetChildIds and/or GetParentId.
Nick Wesselman wrote a good article about it that gathers all the information, including an example on the Marketplace. I think that is your best start. You can read it here.
It turns out I needed to include, at the very least, the Fields -> Section -> Template hierarchy in the GetParentID method. To be on the safe side, I included the Fields/Sections/Templates in my implementations of:
GetChildIDs
GetItemDefinition
GetParentID
It wasn't obvious that this was the case, since I had in fact implemented the GetTemplates method correctly and had expected that to be enough.