Good day everyone!
I have a problem performing batch inference in TensorRT. When the batch size is 1 it works like a charm, but with any other batch size I get plain garbage.
Step by step: I downloaded TensorRT (5.0) and installed it on my Ubuntu 18.04 laptop with a GTX 755M, built the samples that ship with it, and verified that sampleMNIST works. I then proceeded to change every occurrence of mParams.batchSize to 10. Of course I also changed the size of the allocated memory and modified the result printing accordingly. But after I recompiled the sample I got completely weird results - the output assigns roughly 80% to 1 and 20% to 7 for every given input:
grim@shigoto:~/tensorrt/bin$ ./sample_mnist
Building and running a GPU inference engine for MNIST
Input:
############################
############################
############################
############################
############################
################.*##########
################.=##########
############+###.=##########
###########% ###.=##########
###########% ###.=##########
###########+ *##:-##########
###########= *##= ##########
###########. ###= ##########
##########= =++.-##########
########## =##########
########## :*## =##########
##########:*###% =##########
###############% =##########
################ =##########
################ =##########
###############* *##########
###############= ###########
###############= ###########
###############=.###########
###############++###########
############################
############################
############################
Output:
0:
1: ********
2:
3:
4:
5:
6:
7: **
8:
9:
This output repeats 10 times. I've tried this with different networks, but the results were similar: most networks give 1 correct output and plain garbage the other 9 times. The complete sample can be found here. I've tried googling the documentation, but I can't understand what I'm doing wrong. Could you please tell me what I'm doing wrong, or how to perform batch inference in TensorRT?
Did you also modify the mnist.prototxt?
Especially this part:
input: "data"
input_shape {
dim: 1
dim: 1
dim: 28
dim: 28
}
I think that should be:
input: "data"
input_shape {
dim: 10
dim: 1
dim: 28
dim: 28
}
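In addition to the prototxt, the C++ side has to build the engine with a large enough maximum batch size and scale every buffer by the batch. Below is a minimal sketch of the relevant calls under the TensorRT 5 implicit-batch API; inputIndex/outputIndex, hostInput/hostOutput and the already-built context are placeholders, and error checking is omitted:

#include <NvInfer.h>
#include <cuda_runtime.h>

// Sketch: run one batched MNIST inference. Assumes the engine was built
// with builder->setMaxBatchSize(10) (or larger) and sampleMNIST-style bindings.
void inferBatch(nvinfer1::IExecutionContext* context,
                int inputIndex, int outputIndex,
                const float* hostInput, float* hostOutput)
{
    const int batchSize  = 10;
    const int inputSize  = 1 * 28 * 28; // C * H * W per image
    const int outputSize = 10;          // ten digit scores per image

    // every buffer holds batchSize images/results, not just one
    void* buffers[2];
    cudaMalloc(&buffers[inputIndex], batchSize * inputSize * sizeof(float));
    cudaMalloc(&buffers[outputIndex], batchSize * outputSize * sizeof(float));

    cudaMemcpy(buffers[inputIndex], hostInput,
               batchSize * inputSize * sizeof(float), cudaMemcpyHostToDevice);

    // the first argument is the batch size actually run;
    // it must not exceed the engine's max batch size
    context->execute(batchSize, buffers);

    cudaMemcpy(hostOutput, buffers[outputIndex],
               batchSize * outputSize * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(buffers[inputIndex]);
    cudaFree(buffers[outputIndex]);
}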
I have been experimenting a lot with writing unit tests for alerts as per this: https://prometheus.io/docs/prometheus/latest/configuration/unit_testing_rules/#alerts-yml
I have some simple cases working, but now I am tackling less trivial rules. For example, this:
abs(
avg_over_time(my_metrics{service_name="aService"}[1m])
-
avg_over_time(my_metrics{service_name="aService"}[3m])
)
/ stddev_over_time(my_metrics{service_name="aService"}[3m])
> 3
I have one file with the above rule and then this is in my test:
- interval: 1m
  # Series data.
  input_series:
    - series: 'my_metrics{service_name="aService"}'
      values: '0 0 0 0 1 0 0 0 0'
  alert_rule_test:
    - eval_time: 3m
      alertname: myalert
      exp_alerts:
        - exp_labels:
            severity: warning
            service_name: aService
          exp_annotations:
            summary: "some text"
            description: "some other text"
I am not sure what my series should look like in order to test deviation from the mean. Is it even possible to test such a rule?
Thank you
EDIT
I can have a successful test if I set it to > 0 as opposed to > 3. I have tried to set a series of this sort:
'10+10x2 30+1000x1000'
but I cannot understand what the correct setup would be to have it triggered.
This isn't a direct answer, rather a tip from someone who spent quite some time on these tests. Did you know that, apart from testing alert expressions, you can unit test PromQL expressions as well? See how it can be useful:
evaluation_interval: 1m
tests:
  - interval: 1m
    input_series:
      - series: test_metric
        values: 1 1 1 10 1 1 1
    promql_expr_test:
      - expr: avg_over_time(test_metric[1m])
        eval_time: 4m
        exp_samples:
          - value: #5.5
      - expr: avg_over_time(test_metric[3m])
        eval_time: 4m
        exp_samples:
          - value: #3.25
      - expr: stddev_over_time(test_metric[3m])
        eval_time: 4m
        exp_samples:
          - value: #3.897114317029974
I've split your alert expression into three separate, simple parts. If you run this unit test, you will see the commented-out values in the error message. From there it is not difficult to join the pieces together and see why the alert is not firing. You can use that to build a working sequence of values.
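For instance, plugging the three commented values into your original alert expression shows how far this series is from the threshold:

abs(avg_over_time[1m] - avg_over_time[3m]) / stddev_over_time[3m]
    = abs(5.5 - 3.25) / 3.897114317029974
    ≈ 0.577

which exceeds 0 but is nowhere near 3, matching your observation that > 0 fires while > 3 does not.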
Below is a simple dataset based on what I'm working with, followed by a program I wrote. It is just supposed to quickly tell me when one of my data collection teams was at a particular school, and I can ask for the school by name or code.
clear
input float(date group school_code) str15 school
1 1 23 "Lincoln HS"
2 1 21 "Washington HS"
3 1 32 "Clay HS"
1 2 54 "Adams HS"
2 2 11 "Jackson HS"
3 2 15 "Monroe HS"
1 3 27 "Rosevelt HS"
2 3 49 "Grant HS"
3 3 3 "Kennedy HS"
end
A small warning: this program uses the groups command, which can be found on SSC.
program define WhenWas
    syntax, Group(int) [ School(str) Code(int) ]
    version 16
    if "`school'" != "" groups date school school_code group if school == "`school'" & group == `group', sepby(date) missing show(freq)
    if `code' != . groups date school school_code group if school_code == `code' & group == `group', sepby(date) missing show(freq)
end
But when I run the command to use the program, I get an "Invalid Syntax" error at the syntax line of the program, seemingly before it even begins to go into the commands.
WhenWas, g(2) c(54)
I've tried capitalizing the words in the syntax line, using the full words in the WhenWas command, etc.
In your program that doesn't work, the problem lies within
syntax, Group(int) [ School(str) Code(int) ]
As code() is an optional option that expects an integer, it must have a specified default. If you don't want to specify a default, then
syntax, Group(int) [ School(str) Code(numlist int max=1) ]
is a way to avoid that, but you need to check that a code was specified:
if "`code'" == "" {
di as err "code() not specified"
exit 198
}
as otherwise your next command referring to code will fail. See the help for syntax for more details.
An alternative is to specify a nonsense code as default, which might be -1 if codes are all positive.
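Putting the pieces together, a corrected version of your program might look like this (untested sketch; it still relies on groups from SSC and rejects a call that specifies neither option):

program define WhenWas
    version 16
    syntax, Group(int) [ School(str) Code(numlist int max=1) ]
    if "`school'" == "" & "`code'" == "" {
        di as err "specify school() or code()"
        exit 198
    }
    if "`school'" != "" {
        groups date school school_code group if school == "`school'" & group == `group', sepby(date) missing show(freq)
    }
    if "`code'" != "" {
        groups date school school_code group if school_code == `code' & group == `group', sepby(date) missing show(freq)
    }
end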
I am new to profiling. I am trying to profile my PHP with Xdebug.
I have set:
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_name = cachegrind+%p+%H+%R.cg
I call my page with the additional GET parameter ?XDEBUG_PROFILE=1. The cachegrind file is generated, but it has no significant content.
Here is my output:
version: 1
creator: xdebug 2.7.0alpha1 (PHP 7.0.30-dev)
cmd: C:\WPNserver\www\DMResources\Classes\VendorClasses\PHPMySQLiDatabase\MysqliDb.php
part: 1
positions: line
events: Time Memory
fl=(1)
fn=(221) php::mysqli->close
1244 103 -14832
fl=(42)
fn=(222) MysqliDbExt->__destruct
1239 56 0
cfl=(1)
cfn=(221)
calls=1 0 0
1244 103 -14832
That's it - I must be missing something fundamental.
I think you hit this bug in xdebug.
As suggested by Derick in the issue tracker, you can work around this by adding %r to the profiler output name, e.g. xdebug.profiler_output_name = cachegrind+%p+%H+%R+%r.cg
(with %r adding a random number to the name)
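For reference, a minimal php.ini sketch with the workaround applied (the output directory is a placeholder; all setting names are from xdebug 2.x):

[xdebug]
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = /tmp/profiles
xdebug.profiler_output_name = cachegrind+%p+%H+%R+%r.cg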
I am trying to write a regex that will parse the following Cisco log messages correctly:
<191>45902: DC-SWITCH2: Aug 30 18:15:16.478: %SFF8472-3-THRESHOLD_VIOLATION: Te0/2: Rx power high warning; Operating value: -0.8 dBm, Threshold value: -1.0 dBm.
Desired output:
Te0/2: Rx power high warning; Operating value: -0.8 dBm, Threshold value: -1.0 dBm.
And:
<191>45902: DC-SWITCH2: Aug 31 19:17:30.147: sensor num : 10 sensor_value :33, high :110 low:85
Desired output:
sensor num : 10 sensor_value :33, high :110 low:85
I have developed the following regex for the first case, but I cannot fathom how to make the mnemonic %STRING section optional:
>\d+:\s.+?:\s.+?(?=:\s):\s%.+?(?=:\s):?\s(.+)
It returns the desired result for the first example, but for the second I get:
10 sensor_value :33, high :110 low:85
You want to make the part that checks for the %STRING optional, using a non-capturing group.
Something like this:
>\d+:\s.+?:\s.+?(?=:\s):\s(?:%.+?:\s)?(.+)
See https://regex101.com/r/F30ALK/1
Why not try something generic like
\d{2}:\d{2}:\d{2}\.\d{3}.*? (\b[A-Za-z].*)
where the required output will be in Group 1.
Example shown here
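A quick way to compare both suggestions is to run them over the two sample lines. Here is a sketch using Python's re module (variable names are mine; the first log line is wrapped for readability):

import re

# the two candidate patterns from the answers above
mnemonic_optional = re.compile(r'>\d+:\s.+?:\s.+?(?=:\s):\s(?:%.+?:\s)?(.+)')
generic = re.compile(r'\d{2}:\d{2}:\d{2}\.\d{3}.*? (\b[A-Za-z].*)')

lines = [
    '<191>45902: DC-SWITCH2: Aug 30 18:15:16.478: %SFF8472-3-THRESHOLD_VIOLATION: '
    'Te0/2: Rx power high warning; Operating value: -0.8 dBm, Threshold value: -1.0 dBm.',
    '<191>45902: DC-SWITCH2: Aug 31 19:17:30.147: sensor num : 10 sensor_value :33, high :110 low:85',
]

for line in lines:
    for pattern in (mnemonic_optional, generic):
        match = pattern.search(line)
        # both should print the message after the timestamp/mnemonic
        print(match.group(1) if match else 'no match')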
I have to crawl Wikipedia to get HTML pages of countries, which I have done successfully. Now, to build clusters, I have to run k-means. I am using Weka for that.
I used this code to convert my directory into ARFF format:
https://weka.wikispaces.com/file/view/TextDirectoryToArff.java
Here is its output (screenshot omitted).
Then I opened that file in Weka and performed a StringToWordVector conversion (screenshot of the filter parameters omitted). Then I performed k-means. The output I am getting is:
=== Run information ===
Scheme:weka.clusterers.SimpleKMeans -N 2 -A "weka.core.EuclideanDistance -R first-last" -I 5000 -S 10
Relation: text_files_in_files-weka.filters.unsupervised.attribute.StringToWordVector-R1,2-W1000-prune-rate-1.0-C-T-I-N1-L-S-stemmerweka.core.stemmers.SnowballStemmer-M0-O-tokenizerweka.core.tokenizers.WordTokenizer -delimiters " \r\n\t.,;:\'\"()?!"-weka.filters.unsupervised.attribute.StringToWordVector-R-W1000-prune-rate-1.0-C-T-I-N1-L-S-stemmerweka.core.stemmers.SnowballStemmer-M0-O-tokenizerweka.core.tokenizers.WordTokenizer -delimiters " \r\n\t.,;:\'\"()?!"
Instances: 28
Attributes: 1040
[list of attributes omitted]
Test mode:evaluate on training data
=== Model and evaluation on training set ===
kMeans
Number of iterations: 2
Within cluster sum of squared errors: 1915.0448503841326
Missing values globally replaced with mean/mode
Cluster centroids:
Cluster#
Attribute Full Data 0 1
(28) (22) (6)
====================================================================================
...
bolsheviks 0.3652 0.3044 0.5878
book 0.3229 0.3051 0.3883
border 0.4329 0.5509 0
border-left-style 0.4329 0.5509 0
border-left-width 0.3375 0.4295 0
border-spacing 0.3124 0.3304 0.2461
border-width 0.5128 0.2785 1.372
boundary 0.309 0.3007 0.3392
brazil 0.381 0.3744 0.4048
british 0.4387 0.2232 1.2288
brown 0.2645 0.2945 0.1545
cache-control=max-age=87840 0.4913 0.4866 0.5083
california 0.5383 0.5085 0.6478
called 0.4853 0.6177 0
camp 0.4591 0.5451 0.1437
canada 0.3176 0.3358 0.251
canadian 0.2976 0.1691 0.7688
capable 0.2475 0.315 0
capita 0.388 0.1188 1.375
carbon 0.3889 0.445 0.1834
caribbean 0.4275 0.5441 0
carlsbad 0.548 0.5339 0.5998
caspian 0.4737 0.5345 0.2507
category 0.2216 0.2821 0
censorship 0.2225 0.0761 0.7596
center 0.4829 0.4074 0.7598
central 0.211 0.0805 0.6898
century 0.2645 0.2041 0.4862
chad 0.3636 0.0979 1.3382
challenger 0.5008 0.6374 0
championship 0.6834 0.8697 0
championships 0.2891 0.1171 0.9197
characteristics 0.237 0 1.1062
charon 0.5643 0.4745 0.8934
china
...
Time taken to build model (full training data) : 0.05 seconds
=== Model and evaluation on training set ===
Clustered Instances
0 22 ( 79%)
1 6 ( 21%)
How can I check which DocId is in which cluster? I have searched a lot but didn't find anything.
Also, is there any other good Java library for k-means and agglomerative clustering?
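Regarding the first question: when driving Weka from Java rather than the Explorer, the cluster assignment of each instance (i.e. each document/row of the ARFF file) can be read directly. A minimal sketch using Weka's API; the file name is a placeholder, and the ARFF is assumed to have already been through StringToWordVector:

import weka.clusterers.SimpleKMeans;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;

public class ClusterAssignments {
    public static void main(String[] args) throws Exception {
        // hypothetical file: the vectorized output of TextDirectoryToArff
        Instances data = DataSource.read("vectorized.arff");

        SimpleKMeans kmeans = new SimpleKMeans();
        kmeans.setNumClusters(2);
        kmeans.buildClusterer(data);

        // row i of the ARFF is document i; print the cluster it landed in
        for (int i = 0; i < data.numInstances(); i++) {
            System.out.println("doc " + i + " -> cluster "
                    + kmeans.clusterInstance(data.instance(i)));
        }
    }
}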