When extracting data via the "website link trick" (see https://github.com/tensorflow/tensorboard/issues/3543#issuecomment-618527147), the data end at 50 epochs, regardless of the actual number of epochs.
For example:
This histogram has 200 epochs and ends at 200.
But when opened through the "website link trick" as a JSON file, the data contain only 50 epochs.
I have tried to diagnose it, and it seems that TensorBoard initially loads 50 epochs and then needs time to load the rest. The histogram graph does eventually get loaded. Nevertheless, the JSON never gets refreshed!
TensorBoard version: 2.11.2 (latest)
I don't think it's a bug, but it's tough to find the right answer on the Internet to understand what's happening. I created an RRD (1-minute step) database with 3 RRAs:
RRA:AVERAGE:0.5:1m:1d
RRA:AVERAGE:0.5:1h:6M
RRA:AVERAGE:0.5:1d:1y
So I assumed that, as I keep updating data points, I should be able to retain 1 year of data. However, I can only see 24 hours of data, no matter how long I feed data points into the RRD database.
This is the rrdtool info output from one RRD database I created: https://gist.github.com/meow-watermelon/206a10a83c937c771f6cfc5fa7a2e948
Is there anything I missed, or some corner case I hit, that causes only 24 hours of data to be shown?
Thanks.
The RRA consolidated data points (CDPs) are only written to the RRA when there are enough primary data points to make one. Thus, with a 1-minute interval and an xff of 0.5, you would need to be collecting data every minute for more than 12 hours (plus 1 minute!) to make up a full CDP for the 1-day RRA.
In addition, CDP updates happen on boundaries relative to UTC; this means that for your largest, 1-day RRA, you would need at least 12 hours of data collected in the 24 hours prior to 00:00 UTC, and then the next update would write the CDP.
This means that you should collect data at the standard interval (60 s) for more than 24 hours before you can be certain of seeing a CDP appear in the coarsest RRA; the simplest test is to collect data every minute for 48 hours and then check your 1d-granularity RRA.
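One way to verify this is to create a throwaway RRD with equivalent RRAs, back-fill it with 48 hours of 60-second updates using historical timestamps, and then fetch at daily resolution. A minimal sketch, assuming the python-rrdtool bindings are available; the file and DS names are made up, and the RRAs are written in the classic steps/rows form equivalent to the ones in the question:

import time
import rrdtool  # python-rrdtool bindings (assumed installed)

STEP = 60                              # 1-minute step, as in the question
end = int(time.time()) // STEP * STEP
start = end - 48 * 3600                # back-fill 48 hours of data

# Hypothetical file/DS names; RRAs: 1 min for 1 day, 1 h for ~6 months, 1 day for 1 year
rrdtool.create(
    "demo.rrd",
    "--step", str(STEP),
    "--start", str(start - STEP),
    "DS:cpu:GAUGE:120:0:U",
    "RRA:AVERAGE:0.5:1:1440",
    "RRA:AVERAGE:0.5:60:4320",
    "RRA:AVERAGE:0.5:1440:365")

# Feed one update per minute over the whole 48-hour window
for t in range(start, end + STEP, STEP):
    rrdtool.update("demo.rrd", f"{t}:{t % 100}")

# Fetch at daily resolution, with start/end aligned to that resolution.
# After 48 hours of updates the 1-day RRA should hold at least one non-NaN CDP.
DAY = 86400
(fstart, fend, fstep), names, rows = rrdtool.fetch(
    "demo.rrd", "AVERAGE",
    "--resolution", str(DAY),
    "--start", str(start // DAY * DAY),
    "--end", str(end // DAY * DAY))
print(fstep, [r for r in rows if r[0] is not None])

Because 48 hours of back-filled data always spans at least one complete UTC day, the fetch should return at least one non-NaN daily row.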
I am trying to pull data from the Check_MK server's RRD files for CPU and memory, and from that I want to find the MAX and AVERAGE values for a particular host over a 1-month period. For the maximum value, I fetched the file from the Check_MK server using the rrdtool fetch command, and the result matches the value shown in the Check_MK graph. But when I try to do the same for the average value, I get output that matches neither the Check_MK graph value nor the raw values in the RRD file. Kindly refer to the attached images, where I have verified the average manually from the fetched data, but it still shows the wrong output.
Hello @Steve Shipway,
Please find the requested data.
1) Structure of the RRD file: see the attached image.
2) We are not generating the graph from Check_MK. We are reading the RRD file directly, using rrdtool dump CPU_utilization.xml > /tmp/CPU_utilization1.xml and rrdtool fetch CPU_utilization_user.rrd MAX -r 6h -s <starting date> -e <ending date>.
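One thing worth checking is that rrdtool fetch selects whichever RRA best matches the requested resolution and time range, so the MAX and AVERAGE fetches can end up reading differently consolidated data unless both use the same, aligned resolution. A minimal sketch with the python-rrdtool bindings; the file name is taken from the command above, and the timestamps are placeholders:

import time
import rrdtool  # python-rrdtool bindings (assumed available)

RRD = "CPU_utilization_user.rrd"     # path as used in the fetch command above
RES = 6 * 3600                       # 6-hour resolution, matching -r 6h

# Align start/end to the requested resolution so both fetches hit the same RRA;
# unaligned ranges can make rrdtool fall back to a different consolidation step.
end = int(time.time()) // RES * RES
start = end - 30 * 24 * 3600         # roughly one month back

for cf in ("MAX", "AVERAGE"):
    (fstart, fend, step), names, rows = rrdtool.fetch(
        RRD, cf,
        "--resolution", str(RES),
        "--start", str(start), "--end", str(end))
    values = [r[0] for r in rows if r[0] is not None]
    print(cf, "step:", step, "rows:", len(values),
          "max:", max(values, default=None),
          "mean:", sum(values) / len(values) if values else None)

Note that averaging the fetched AVERAGE rows only reproduces the graph's monthly average when every row covers the same span and none are NaN; missing rows are a common reason a hand-computed average disagrees with the graph.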
I've annotated roughly 15 minutes of video with Intel's CVAT: https://github.com/opencv/cvat
When exporting to TFRecord, the file is only about 4 MB (it should be closer to 200 MB at least) and doesn't appear to actually contain any image data. How can I export a TFRecord with the image data along with the annotation data?
As of 12/1/2019, this is not supported in Intel's CVAT.
I was able to achieve my goal and create TFRecords containing both annotation data and image data by using a combination of ffmpeg (to split my original .mov into frames) and create_pascal_tf_record.py (to generate the TFRecord).
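For reference, here is a minimal sketch of what a record with embedded image bytes looks like; the feature keys follow the TF Object Detection API convention used by create_pascal_tf_record.py, and all file names, box coordinates and labels below are placeholders:

import tensorflow as tf  # tf.train.Example API is the same in TF 1.14+ and 2.x

def make_example(jpeg_path, xmins, xmaxs, ymins, ymaxs, labels, label_ids,
                 width, height):
    """Pack one frame plus its bounding boxes into a tf.train.Example.

    Box coordinates are normalised to [0, 1]; all arguments are placeholders.
    """
    with tf.io.gfile.GFile(jpeg_path, "rb") as f:
        encoded_jpeg = f.read()  # raw JPEG bytes stored inside the record

    def _bytes(v):  return tf.train.Feature(bytes_list=tf.train.BytesList(value=v))
    def _floats(v): return tf.train.Feature(float_list=tf.train.FloatList(value=v))
    def _ints(v):   return tf.train.Feature(int64_list=tf.train.Int64List(value=v))

    return tf.train.Example(features=tf.train.Features(feature={
        "image/encoded": _bytes([encoded_jpeg]),
        "image/format": _bytes([b"jpeg"]),
        "image/width": _ints([width]),
        "image/height": _ints([height]),
        "image/object/bbox/xmin": _floats(xmins),
        "image/object/bbox/xmax": _floats(xmaxs),
        "image/object/bbox/ymin": _floats(ymins),
        "image/object/bbox/ymax": _floats(ymaxs),
        "image/object/class/text": _bytes([l.encode() for l in labels]),
        "image/object/class/label": _ints(label_ids),
    }))

# Usage sketch: one record per frame extracted by ffmpeg
with tf.io.TFRecordWriter("frames_with_images.tfrecord") as writer:
    example = make_example("frame_0001.jpg", [0.1], [0.4], [0.2], [0.5],
                           ["person"], [1], width=1920, height=1080)
    writer.write(example.SerializeToString())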
I have created TFRecord files that are stored in a Google Cloud Storage bucket. I have code running on ML Engine to train a model using the data in these TFRecords.
Each TFRecord file contains a batch of 20 examples and is approximately 8 MB (megabytes) in size. There are several thousand files in the bucket.
My problem is that it literally takes forever to start the training. I have to wait about 40 minutes between the moment the package is loaded and the moment the training actually starts. I am guessing this is the time necessary to download the data and fill the queues?
The code is (slightly simplified for the sake of conciseness):
import tensorflow as tf  # TF 1.x queue-based input pipeline

# Create a queue which will produce TFRecord file names
filename_queue = tf.train.string_input_producer(files, num_epochs=num_epochs, capacity=100)

# Read one serialized record from the queue
reader = tf.TFRecordReader()
_, serialized_example = reader.read(filename_queue)

# Map for decoding the serialized example
features = tf.parse_single_example(
    serialized_example,
    features={
        'data': tf.FixedLenFeature([], tf.float32),
        'label': tf.FixedLenFeature([], tf.int64)
    })

train_tensors = tf.train.shuffle_batch(
    [features['data'], features['label']],
    batch_size=30,
    capacity=600,
    min_after_dequeue=400,
    allow_smaller_final_batch=True,
    enqueue_many=True)
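For context, a graph like this only starts pulling data once the TF 1.x queue runners are started; a minimal driver sketch under that assumption (the actual training step is omitted):

with tf.Session() as sess:
    # num_epochs in string_input_producer is backed by a local variable
    sess.run([tf.global_variables_initializer(), tf.local_variables_initializer()])
    coord = tf.train.Coordinator()
    threads = tf.train.start_queue_runners(sess=sess, coord=coord)
    try:
        while not coord.should_stop():
            data_batch, label_batch = sess.run(train_tensors)
            # ... run one training step on the batch here ...
    except tf.errors.OutOfRangeError:
        pass  # num_epochs exhausted
    finally:
        coord.request_stop()
        coord.join(threads)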
I have checked that my bucket and my job share the same region parameter.
I don't understand what is taking so long: it should just be a matter of downloading a few hundred MB (a few dozen TFRecord files should be enough to have more than min_after_dequeue elements in the queue).
Any idea what I am missing, or where the problem might be?
Thanks
Sorry, my bad. I was using a custom function to:
Verify that each file passed as a TFRecord actually exists.
Expand wildcard characters, if any.
It turns out this is a very bad idea when dealing with thousands of files on gs://.
I have removed this "sanity" check and it's working fine now.
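For anyone hitting the same wall: the slow part is making one remote round-trip per file. A rough sketch of the difference, assuming TF 1.x and its tf.gfile API; the bucket name and pattern are placeholders:

import tensorflow as tf  # TF 1.x file API

pattern = "gs://my-bucket/records/train-*.tfrecord"  # placeholder bucket/pattern
num_epochs = 10                                      # placeholder

# Slow: one GCS metadata round-trip per candidate file, thousands in total
# files = [f for f in candidate_files if tf.gfile.Exists(f)]

# Fast: expand the wildcard with a single listing instead of per-file checks
files = tf.gfile.Glob(pattern)

filename_queue = tf.train.string_input_producer(files, num_epochs=num_epochs, capacity=100)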
I have a module in GNU Radio that has a sampling rate of 50 samples per second. I am feeding that to a QT Time Sink to visualise it in real time. In a single window, I want 200 samples to be displayed but I want the update to be done every 50 samples. This means that at each instance, I need to display 150 past samples in addition to the 50 current samples.
Are there any options in the Time Sink block to achieve that?
No, there is no such option in the Qt Time Sink.
What you can do, however, is split your sample path into one delayed and one undelayed path, and then use a Patterned Interleaver block to repeat parts of your sample stream.
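A rough sketch of that idea in the GNU Radio Python API, under the assumption that four copies of the stream (delayed by 150, 100, 50 and 0 samples) feed a Patterned Interleaver whose pattern takes 50 items from each input in turn, so every 50 new input samples yield one 200-sample window. Here it is wired to vector source/sink blocks purely to inspect the output; in the real flowgraph the interleaver output would feed the Qt Time Sink (set to 200 points):

from gnuradio import gr, blocks

class overlap_windows(gr.top_block):
    """Emit 200-sample windows that advance by 50 samples (150 samples of overlap)."""
    def __init__(self, samples):
        gr.top_block.__init__(self)
        src = blocks.vector_source_f(samples, repeat=False)
        # Three delayed copies of the stream (the fourth path is undelayed)
        d150 = blocks.delay(gr.sizeof_float, 150)
        d100 = blocks.delay(gr.sizeof_float, 100)
        d050 = blocks.delay(gr.sizeof_float, 50)
        # Take 50 items from input 0, then 50 from inputs 1, 2 and 3, and repeat
        pattern = [0] * 50 + [1] * 50 + [2] * 50 + [3] * 50
        ilv = blocks.patterned_interleaver(gr.sizeof_float, pattern)
        self.sink = blocks.vector_sink_f()

        self.connect(src, d150, (ilv, 0))
        self.connect(src, d100, (ilv, 1))
        self.connect(src, d050, (ilv, 2))
        self.connect(src, (ilv, 3))
        self.connect(ilv, self.sink)

if __name__ == "__main__":
    tb = overlap_windows([float(i) for i in range(400)])
    tb.run()
    out = tb.sink.data()
    # Each consecutive 200-sample chunk of `out` is one display window; the
    # delayed paths pad the first windows with zeros until enough history exists.
    print(out[200:400])  # second window: 100 zeros followed by samples 0..99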
50 S/s is very low. You'll have a hard time working with this the way you probably expect it to work: GNU Radio is a buffer architecture with relatively large pseudo-circular buffers (I wrote about how these work in a blog post), and the takeaway is that GNU Radio will tend to accumulate 4096 or 8192 samples (depending on the size of the individual sample) and process them at once (see the blog post). This means you might get one "burst" of samples every 80 seconds, then nothing for 80 seconds, then another burst.