RRDtool and left horizontal scale formatting

I'm learning RRDtool. I created a graph:
#!/bin/bash
rrdtool graph /home/pi/rrd/test.png \
--end now --start now-6000s --width 500 --height 400 \
DEF:ds0a=/home/pi/rrd/temperature.rrd:temperature:AVERAGE \
AREA:ds0a#0000FF:"Temperature ('C)\l"
It looks like this (graph image omitted): the left Y-axis labels show the same integer repeated.
How can I format the scale to show a fractional part?
I want 25.2, 25.4, 25.6, etc. instead of 25 repeated several times.
I have tried an option from the RRDtool documentation online,
--left-axis-format
but my RRDtool has no such option.
There is no problem with
--right-axis-format
which works as I want, but I want the correct format on the left side, not the right.
I'm using 1.4.7 on a Raspberry Pi. I originally asked about this on unix.stackexchange.com, but there are more questions about RRDtool here, so I moved my question here.

Later versions of RRDtool handle the axis labelling a bit better than earlier ones, so an upgrade might be all that is needed to fix it.
The first thing to try is the --alt-y-grid option, which changes the default way the Y-axis labels are placed. This might solve your issue.
You can override the automatic Y-axis calculations using something like --y-grid 0.2:5, which will put a tick every 0.2 but only label every 5 ticks, i.e. at 25, 26, 27 and so on. This gives you a sane but sparsely populated Y-axis.
However, maybe you want a label at every line, including the decimals. In this case, you can specify the formatting of the Y-axis labels to include a decimal place: --left-axis-format "%.1lf". You say that your version does not support this, so you might like to consider upgrading; an example command is sketched below.
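For concreteness, here is the question's own command with both suggestions applied. This is only a sketch: --left-axis-format requires 1.4.9 or later, and the --y-grid values (tick every 0.2, label every tick) are just examples.
rrdtool graph /home/pi/rrd/test.png \
--end now --start now-6000s --width 500 --height 400 \
--y-grid 0.2:1 \
--left-axis-format "%.1lf" \
DEF:ds0a=/home/pi/rrd/temperature.rrd:temperature:AVERAGE \
AREA:ds0a#0000FF:"Temperature ('C)\l"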

I've installed rrdtool 1.4.8 on Raspbian using the testing branch. Unfortunately, the --left-axis-format option isn't available in 1.4.8 either. I could see in Git where the code for --left-axis-format was added, but my Git-fu isn't strong enough to figure out which version it was merged into.
Update: --left-axis-format wasn't added until 1.4.9 according to the changelog:
RRDtool 1.4.9 - 2014-09-29
New Features
allows rrdrestore to read input from stdin
add documentation for RRDs::xport
RPN operators MINNAN and MAXNAN
--left-axis-format option to rrd_graph
Updated update: I was able to compile rrdtool 1.4.9 from source easily, just by following the instructions in the included doc/rrdbuild.pod file.
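For anyone following along, the build boils down to the usual autoconf flow. A rough sketch (the download URL, version, and install prefix are examples; doc/rrdbuild.pod covers dependencies and platform specifics):
wget http://oss.oetiker.ch/rrdtool/pub/rrdtool-1.4.9.tar.gz
tar xzf rrdtool-1.4.9.tar.gz
cd rrdtool-1.4.9
./configure --prefix=/opt/rrdtool-1.4.9
make
sudo make install
# then call the new binary explicitly:
/opt/rrdtool-1.4.9/bin/rrdtool graph out.png ... --left-axis-format "%.1lf" ...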

Related

Configure tensorboard to start with no smoothing -- for certain plots

Can I configure TensorBoard to start with no smoothing? The smoothing can hide overfitting. A command-line option, or having TensorBoard look in a location like ~/.tensorboard/config.ini, would be great.
As an example, without smoothing the overfitting is plain to see, but with the default smoothing you've got to look at the plot closely to see your overfitting. (Both example plots omitted.)
Or, what would be even better, configure this per plot, maybe even when creating the SummaryWriter in the code.

TensorFlow runs "Running per image evaluation" indefinitely

I am running my first TensorFlow job (object detection training) right now, using the TensorFlow Object Detection API. I am using the SSD MobileNet network from the model zoo, with ssd_mobilenet_v1_0.75_depth_quantized_300x300_coco14_sync.config as the config file and ssd_mobilenet_v1_0.75_depth_300x300_coco14_sync_2018_07_03 as the fine-tune checkpoint.
I started my training with the following command:
PIPELINE_CONFIG_PATH='/my_path_to_tensorflow/tensorflow/models/research/object_detection/models/model/ssd_mobilenet_v1_0.75_depth_quantized_300x300_coco14_sync.config'
MODEL_DIR='/my_path_to_tensorflow/tensorflow/models/research/object_detection/models/model/train'
NUM_TRAIN_STEPS=200000
SAMPLE_1_OF_N_EVAL_EXAMPLES=1
python object_detection/model_main.py \
--pipeline_config_path=${PIPELINE_CONFIG_PATH} \
--model_dir=${MODEL_DIR} \
--num_train_steps=${NUM_TRAIN_STEPS} \
--sample_1_of_n_eval_examples=$SAMPLE_1_OF_N_EVAL_EXAMPLES \
--alsologtostderr
Now coming to my problem, which I hope the community can help me with: I trained the network overnight. It trained for 1400 steps and then started evaluating per image, and the evaluation ran the entire night. The next morning I saw that the network had only been evaluating and training was still at 1400 steps.
[Console output from evaluation (image omitted)]
I tried to take control by using the eval_config parameters in the config file.
eval_config: {
metrics_set: "coco_detection_metrics"
use_moving_averages: false
num_examples: 5000
}
I added max_evals = 1, because the documentation says I can limit the evaluation this way. I also changed eval_interval_secs = 3600 because I only wanted one evaluation every hour. Neither option had any effect.
I also tried other config files from the model zoo, with no luck. I searched Google for hours, only to find answers telling me to change the parameters I had already changed. So I am coming to Stack Overflow to find help in this matter.
Can anybody help me? Has anybody had the same experience? Thanks in advance for all your help!
Environment information
$ pip freeze | grep tensor
tensorboard==1.11.0
tensorflow==1.11.0
tensorflow-gpu==1.11.0
$ python -V
Python 2.7.12
I figured out a solution to the problem. The problem with TensorFlow 1.10 and later is that you cannot set the checkpoint steps or checkpoint secs in the config file as before. By default, TensorFlow 1.10 and later saves a checkpoint every 10 minutes. If your hardware is not fast enough and you need more than 10 minutes for an evaluation, you are stuck in a loop.
So, to change the time or number of training steps until a new checkpoint is saved (which triggers the evaluation), navigate to model_main.py in the following folder:
tensorflow/models/research/object_detection/
Once you have opened model_main.py, navigate to line 62. There you will find
config = tf.estimator.RunConfig(model_dir=FLAGS.model_dir)
To trigger the checkpoint save after 2500 steps, for example, change the entry to this:
config = tf.estimator.RunConfig(model_dir=FLAGS.model_dir, save_checkpoints_steps=2500)
Now the model is saved every 2500 steps, and afterwards an evaluation is done.
There are multiple parameters you can pass through this option; a short sketch follows below. You can find the documentation here:
tensorflow/tensorflow/contrib/learn/python/learn/estimators/run_config.py
From line 231 to 294 you can see the parameters and their documentation.
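As an illustration (the argument values below are examples, not from this answer), a few of the commonly used RunConfig parameters look like this:
config = tf.estimator.RunConfig(
    model_dir=FLAGS.model_dir,
    save_checkpoints_steps=2500,  # checkpoint (and thereby trigger eval) every 2500 steps
    keep_checkpoint_max=5,        # keep only the 5 most recent checkpoints
    save_summary_steps=100)       # write summaries every 100 steps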
I hope this helps, and that you don't have to look for an answer as long as I did.
Could it be that evaluation takes more than 10 minutes in your case? Since 10 minutes is the default interval for running evaluation, it would then just keep evaluating.
Unfortunately, the current API doesn't easily support altering the time interval for evaluation.
By default, evaluation happens after every checkpoint save, which by default happens every 10 minutes.
Therefore, you can change the time for saving a checkpoint by specifying save_checkpoint_secs or save_checkpoint_steps as an input to the instance of MonitoredSession (or MonitoredTrainingSession), as sketched below. Unfortunately, to the best of my knowledge, these parameters cannot be set as flags to model_main.py or from the config file. Therefore, you can either change their value by hard-coding, or export them so that they become settable.
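For illustration only (this is not code from model_main.py; the checkpoint directory and train_op are assumed placeholders), this is where those parameters live if you drive training yourself:
import tensorflow as tf

with tf.train.MonitoredTrainingSession(
        checkpoint_dir="/tmp/train",         # example path
        save_checkpoint_secs=3600) as sess:  # checkpoint hourly instead of every 10 min
    while not sess.should_stop():
        sess.run(train_op)  # train_op assumed to be built beforehand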
An alternative way, without changing the frequency of checkpoint saving, is to modify the evaluation frequency, which is specified as throttle_secs to tf.estimator.EvalSpec; a minimal sketch follows below.
See my explanation here as to how to export this parameter to model_main.py.
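A minimal sketch of the throttle_secs approach, assuming an existing estimator and input functions (all the names here are placeholders, not from model_main.py):
import tensorflow as tf

train_spec = tf.estimator.TrainSpec(input_fn=train_input_fn, max_steps=200000)
eval_spec = tf.estimator.EvalSpec(
    input_fn=eval_input_fn,
    steps=None,          # evaluate on the full evaluation set
    throttle_secs=3600)  # wait at least an hour between evaluations

tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)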

Is tf.py_func allowed at online prediction time?

Is tf.py_func allowed at online prediction time?
If yes, are there any examples of how to use it?
Does the answer change if I need to install additional pip packages?
My use case: I work with text and need to do word stemming (using a Porter stemmer). I know how to do it in Python, but TensorFlow doesn't have ops for that. I would like to use the same text processing at training and prediction time, and would therefore like to encode it all into a TensorFlow graph.
https://www.tensorflow.org/api_docs/python/tf/py_func comes with known limitations, and I would like to know whether it will work during training and online prediction before I invest more time into it.
Thanks
Unfortunately, no. A py_func cannot be restored from a saved model. However, since your use case involves pre-processing, just invoke the py_func explicitly in all three (train, eval, serving) input functions, as sketched below. This won't work if the py_func is in the middle of your graph, but for stemming it should work just fine.
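A minimal sketch of that idea under stated assumptions: TF 1.x, NLTK's Porter stemmer (an extra pip package), and a toy in-memory dataset; none of these names come from the answer above.
import numpy as np
import tensorflow as tf
from nltk.stem.porter import PorterStemmer  # assumed stemmer; any pure-Python stemmer works

stemmer = PorterStemmer()

def _stem_batch(texts):
    # Plain Python, runs outside the graph: stem each whitespace-separated token.
    return np.array([b" ".join(stemmer.stem(w).encode("utf-8")
                               for w in t.decode("utf-8").split())
                     for t in texts], dtype=object)

def train_input_fn():
    # Toy dataset of raw strings; the same map() would go into the eval and serving input_fns.
    ds = tf.data.Dataset.from_tensor_slices([b"running runs ran", b"stemming stems"])
    ds = ds.batch(2)
    return ds.map(lambda t: tf.py_func(_stem_batch, [t], tf.string, stateful=False))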

rrdtool 1.5.4: Several options and operations from manpages do not work

I'm running rrdtool 1.5.4 (currently in the Debian Sid repos and the latest version to be found) on my Sid desktop and my Stable server. I would like to use some of the features that the manpages (which mention the same version) and the in-application help advertise, but they simply don't appear to work.
Specifically, I'm talking about the --source option to rrdtool create and the --step option to rrdtool tune; furthermore, I would like to modify RRAs with rrdtool tune.
The options, however, simply throw ERROR: unknown option, despite appearing no different from the others in the author's GitHub repository, to be found here: https://github.com/oetiker/rrdtool-1.x under src/rrd_create and rrd_tune, respectively.
If I issue one of the RRA operations with rrdtool tune, say rrdtool tune t.rrd RRA:MAX:0.5:10:10 on an empty RRD, I get exactly the same output as when just running rrdtool tune t.rrd.
Background: I have several hundred badly configured RRD files from when I was still learning the concept, and I'd like to either modify them with tune or migrate them to a new RRD with --source. I'm aware of rrdjig, by the way, but have so far been unsuccessful in its use, and the --source option appears to be its intended, more stable replacement.
Found the answer. Apparently, it depends on librrd4 in the same version as the binary to provide all functions, and since it's a Stable system and I don't think rrdtool explicitly specifies the library version it wants, apt-get thought the 1.4.8 from Stable was enough.
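For anyone hitting the same thing, a quick way to check for such a mismatch (a sketch; the last line assumes a Sid entry in sources.list):
ldd "$(which rrdtool)" | grep librrd   # which librrd the binary actually loads
apt-cache policy rrdtool librrd4       # compare installed package versions
sudo apt-get install -t sid librrd4    # pull the matching library version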

Optimizing parameters using CVParameterSelection in Weka Explorer

What I am trying to accomplish is to optimize one parameter at a time for one learning algorithm. Take, for example, Ridor, and let's say I want to optimize the number-of-folds (-F) parameter and run it from 2 to 10 or whatever. I then want output in a format that is easy to parse, from which I choose a final value myself. I think this should be possible with CVParameterSelection. Even if not, I would like help getting it to work on at least a basic level.
I have selected CVParameterSelection as my classifier, and as a parameter to CVParameterSelection I have chosen Ridor as the classifier to optimize. What I have trouble doing is telling CVParameterSelection that it is the -F parameter I want to optimize, going from 2 to 10 in increments of 1, in the format 2 10 9, as per the instructions here: http://weka.wikispaces.com/Optimizing+parameters. The choice of Ridor and parameter here is completely arbitrary: I want to run any algorithm with any parameter and have it vary the parameter over a range.
I cannot find the ArrayEditor that this tutorial speaks of; I have clicked literally everything everywhere. Nothing looks like an array editor, and nothing is named ArrayEditor. The default total command line is weka.classifiers.meta.CVParameterSelection -X 10 -S 1 -W weka.classifiers.rules.Ridor -- -F 3 -S 1 -N 2.0.
I have tried sending -F 2 10 9 on the command line to both CVParameterSelection and Ridor. I have also tried reading section 11.5 on optimizing performance in the Weka book, but I do not understand the instructions there either.
This feels like it should be really simple and obvious. Can someone point out what I am doing wrong and post a detailed description of exactly how to do this? Please assume I am a total idiot, because it really should not take many, many hours to do this.
During the configuration of CVParameterSelection, you will find a field named "CVParameters". Clicking it opens a new window named "weka.gui.GenericArrayEditor". Write your parameter and its range inside it as shown in the Weka tutorial (for your example, F 2 10 9), then close the window. A command-line equivalent is sketched below.
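If the GUI editor stays elusive, the same search can also be expressed on the command line through CVParameterSelection's -P option. A sketch (data.arff is a placeholder for your training file; -F is left out of the Ridor options because it is the parameter being optimized):
java weka.classifiers.meta.CVParameterSelection \
    -P "F 2 10 9" -X 10 -S 1 \
    -W weka.classifiers.rules.Ridor -- -S 1 -N 2.0 \
    -t data.arff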